November 12, 2024
Imagine working side by side with a robot that understands the way you move and senses exactly how much force you are using to lift a box, or how gently you are grasping a tool. That is the future we are building with human-robot teaming.
Robots are becoming our teammates, whether in manufacturing, in hospitals or in everyday tasks like assisting someone with mobility challenges. For this teaming to be successful, however, robots must be able to recognize both the strength and the subtle movements humans use on a daily basis.
For example, they must be able to recognize how gently you hold something delicate or how forcefully you press down when chopping vegetables. The challenge is that these hand forces are difficult to predict. Think about how different it feels to lift a coffee mug versus carrying a box of books. Your movements vary depending on the task, and it takes more than a hunch for a machine to recognize that variance. It is like teaching someone who has never cooked before how to knead dough: too little pressure and it crumbles, too much and the texture is ruined.
That is where a new approach that we developed, the hierarchical recurrent-inception residual transformer (or HRIRT, for short), comes into play. It is like giving robots their own sense of “muscle memory” and intuition. The approach uses advanced force myography (FMG) sensors, which are non-intrusive, attach to a person’s body and detect changes in muscle contraction. In recent research conducted with my team, the FMG sensors were attached to a person’s hand, allowing us to monitor the precise amount of pressure being applied in real time.
The proposed HRIRT then acts like a translator, instantly interpreting this data and predicting how firmly or how gently a person is pushing, lifting or grasping something.
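For readers who want a more concrete picture, the sketch below shows, in PyTorch, one way the ingredients named above (recurrent layers, inception-style convolutions, residual connections and a transformer encoder) could be wired together to estimate hand force from a short window of FMG readings. It is an illustrative assumption only, not the published HRIRT architecture: the layer sizes, the 16-channel sensor input and the three-axis force output are all invented for this example.

```python
# Hypothetical sketch only: NOT the published HRIRT implementation.
# It illustrates how recurrent layers, inception-style convolutions,
# residual connections and a transformer encoder could be combined to
# regress hand force from a window of FMG sensor readings.
# Sensor count (16) and force dimensions (3) are assumptions.

import torch
import torch.nn as nn


class InceptionBlock(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes, concatenated."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch = out_ch // 3
        self.b1 = nn.Conv1d(in_ch, branch, kernel_size=1)
        self.b3 = nn.Conv1d(in_ch, branch, kernel_size=3, padding=1)
        self.b5 = nn.Conv1d(in_ch, out_ch - 2 * branch, kernel_size=5, padding=2)

    def forward(self, x):                      # x: (batch, channels, time)
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)


class ForceRegressor(nn.Module):
    """Illustrative recurrent-inception-residual-transformer stack."""
    def __init__(self, n_sensors=16, hidden=64, n_forces=3):
        super().__init__()
        self.inception = InceptionBlock(n_sensors, hidden)
        self.proj = nn.Conv1d(n_sensors, hidden, kernel_size=1)  # residual path
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        encoder_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4,
                                                   batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(hidden, n_forces)

    def forward(self, x):                      # x: (batch, time, n_sensors)
        z = x.transpose(1, 2)                  # -> (batch, n_sensors, time)
        z = self.inception(z) + self.proj(z)   # residual connection
        z = z.transpose(1, 2)                  # -> (batch, time, hidden)
        z, _ = self.gru(z)                     # recurrent temporal modelling
        z = self.transformer(z)                # attention over the window
        return self.head(z[:, -1, :])          # force estimate at the last step


# Example: a batch of 8 windows, each 100 samples of 16 FMG channels.
model = ForceRegressor()
fmg_window = torch.randn(8, 100, 16)
predicted_force = model(fmg_window)            # shape (8, 3)
```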
Think of a robot helping an elderly person who struggles to lift heavy objects. The robot needs to know how much effort the person is putting into lifting something so it can assist without taking over or causing harm. The approach trains the robot to “feel” that effort and react appropriately, adjusting its help as needed. This makes the teaming interaction safe, efficient and intuitive, like working with a human co-worker who understands when to step in and when to let you take the lead.
The HRIRT approach not only makes robots better teammates, it also lays the foundation for their application in increasingly complex tasks. This technology has the potential to transform the ways in which humans and robots work together across a wide range of sectors, whether in precision surgery, in assisting people with impairments or in other demanding tasks. The truly fascinating part of this research is how we are tackling this challenge by combining the most recent advances in AI, pointing toward a time when working with robots will feel just as natural as interacting with people. We are heading towards a future in which robots understand how to carry out the tasks we assign them, resulting in teaming interactions that are safer, smarter and easier.
ABOUT OUR AUTHORS
Filippo Sanfilippo is an IEEE Senior Member and Head of the Artificial Intelligence, Biomechatronics, and Collaborative Robotics Group at the Top Research Centre Mechatronics (TRCM), University of Agder, Grimstad, Norway. Muhammad Hamza Zafar is an IEEE Student Member. He is a PhD Fellow at the TRCM.