September 18, 2017

Engineers have been teaching robots skills to make them act more like humans, and have also been giving them the tools to learn on their own. One skill engineers are still struggling with is programming robotic hands to grasp and hold objects the way human hands can.

Human hands are flexible and can assume many positions, and they also adapt to whatever object is placed in them. A robotic hand, by contrast, has to be programmed not only to pick up and grasp objects, but to hold them stably and efficiently depending on each object's shape. Researchers have created a framework for teaching robots such complex grasping and manipulation tasks through goal-directed grasp imitation, an advanced learning strategy that has been adopted to teach robots complex motor tasks.

Using probabilistic graphical models known as Bayesian networks, the researchers encoded the variation in all the relevant variables to help teach the robots how to grasp objects. They realized that the robots first needed to be taught manually by humans, using annotated, task-related grasps, and also had to learn to adapt their grasping to uncertain environments. These models can then be turned into a system that incorporates new observations, giving the robot an active learning loop that updates and adapts to new situations.
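To make the idea concrete, here is a minimal sketch, not the authors' framework, of how task-conditioned grasp selection and updating from new observations might look in Python. The tasks, object shapes, grasp types, and demonstration counts below are purely hypothetical.

```python
# Minimal sketch (not the authors' code): a toy Bayesian-style model of
# P(grasp | task, object_shape) estimated from hypothetical annotated
# demonstrations, with a simple update step to mimic active learning.
from collections import defaultdict

# Hypothetical annotated demonstrations: (task, object_shape, grasp) triples.
demonstrations = [
    ("pour", "cylinder", "wrap"),
    ("pour", "cylinder", "wrap"),
    ("pour", "cylinder", "pinch"),
    ("handover", "cylinder", "pinch"),
    ("handover", "box", "pinch"),
    ("handover", "box", "wrap"),
]

GRASPS = ("wrap", "pinch")

def grasp_posterior(demos, task, shape, prior=1.0):
    """Estimate P(grasp | task, shape) with add-`prior` (Dirichlet) smoothing."""
    counts = defaultdict(float)
    for t, s, g in demos:
        if t == task and s == shape:
            counts[g] += 1.0
    total = sum(counts[g] + prior for g in GRASPS)
    return {g: (counts[g] + prior) / total for g in GRASPS}

# Query: which grasp does the model prefer for pouring from a cylinder?
print(grasp_posterior(demonstrations, task="pour", shape="cylinder"))
# {'wrap': 0.6, 'pinch': 0.4}

# "Active learning" step: a new observation is folded back into the data,
# and the same query now reflects the updated distribution.
demonstrations.append(("pour", "cylinder", "wrap"))
print(grasp_posterior(demonstrations, task="pour", shape="cylinder"))
```

In the actual framework the network involves many more variables, such as object properties and grasp constraints, and queries are answered with standard graphical-model inference rather than simple counting.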

For More Information

Read more about Task-Based Robot Grasp Planning Using Probabilistic Inference

The transfer of information between a teacher (human/robot) and a student (robot) requires a common knowledge representation.
