The specter of robots replacing humans might not be a reality yet, but people are working toward making them efficient substitutes for humans in several fields. Until now, we mostly knew about one-armed robots used in factories. Now, robots are being planned for domestic purposes: as household help, for taking care of elderly people, or perhaps for assisting an injured person. All of these would require a robot to have at least two arms.
Sounds like a plan? But using two hands is harder than we make it look. The coordination, the actions, and the voluntary movements of the two hands are remarkably complicated. It is unrealistic to expect a robot to manage all of that autonomously. Therefore, this robotic control system learns from humans.
Why, you wonder?
Well, the primary idea behind this research is not to build a two-armed robot from scratch. Yes, you read that right. It aims at creating a system that understands and executes the same kinds of manipulations that we humans perform without actually thinking about them. The researchers first had people wearing motion-capture gear perform an assortment of simulated everyday tasks, such as stacking cups, opening containers and pouring out their contents, and picking up objects with other items balanced on top. This data (where the hands go, how they interact, and so on) was then chewed over by a machine-learning system. That system, in turn, helps the robot interpret what to do in a given situation.
According to researchers at the University of Wisconsin-Madison, when people use both hands, they tend to do one of four things:
Self-handover
This is where you pick up an item and pass it to the other hand, either so it's easier to put it where it's going or to free up the first hand to do something else.
One hand fixed
An object is held steady by one hand providing a strong, rigid grip, while the other performs an operation on it, like removing a lid or stirring the contents.
Fixed offset
Both hands work together to pick something up and rotate or move it.
One hand seeking
Not really a two-handed action, but the principle of deliberately keeping one hand out of the way while the other finds the required object or performs its own task.
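As a rough illustration (not the researchers' actual code), this four-category taxonomy could be sketched as a toy rule-based classifier over per-frame hand-motion features. The feature names, thresholds, and decision rules here are all hypothetical assumptions, chosen only to make the categories concrete:

```python
from enum import Enum, auto

class BimanualAction(Enum):
    SELF_HANDOVER = auto()    # object passed from one hand to the other
    ONE_HAND_FIXED = auto()   # one hand grips rigidly, the other acts
    FIXED_OFFSET = auto()     # both hands move an object together
    ONE_HAND_SEEKING = auto() # one hand idle, the other works alone

def classify(left_speed, right_speed, grip_force, object_transferred,
             still_threshold=0.01):
    """Toy classifier: decide which bimanual pattern a motion frame shows.

    left_speed / right_speed: hand speeds in m/s (hypothetical features).
    grip_force: normalized grip force of the stationary hand, 0..1.
    object_transferred: True if the object changed hands this frame.
    """
    if object_transferred:
        return BimanualAction.SELF_HANDOVER
    left_still = left_speed < still_threshold
    right_still = right_speed < still_threshold
    if (left_still != right_still) and grip_force > 0.5:
        # One hand holds the object rigidly while the other acts on it.
        return BimanualAction.ONE_HAND_FIXED
    if not left_still and not right_still:
        # Both hands move together, e.g. lifting and rotating an object.
        return BimanualAction.FIXED_OFFSET
    # Otherwise one hand stays deliberately out of the way.
    return BimanualAction.ONE_HAND_SEEKING
```

A real system would of course infer these labels from motion-capture trajectories rather than hand-picked thresholds, but the output space is the same: one of the four categories per moment of the demonstration.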
How does this data help the robot?
The robot is fed this data, but not so it can perform the actions on its own. As we said, these are complex actions that current AI systems are incapable of executing autonomously. Instead, the system interprets the actions as knowledge that greatly improves the success rate when remote operators attempt a set of tasks.
Imagine stirring something in a bowl. You know you have to hold one side steady with a stronger grip. If you tried to do this remotely with robotic arms, that intuition is no longer present, and the stirring hand will likely fail because the other hand is not gripping firmly enough. The researchers' system recognizes when one of the four actions is happening and takes measures to make sure it succeeds.
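A minimal sketch of how such assistance might look inside a teleoperation loop, under the assumption (mine, not the paper's) that the controller exposes a normalized grip command: when the detected pattern is "one hand fixed", the system boosts the stationary hand's grip beyond what the operator asked for:

```python
def assist_grip(detected_action, commanded_grip, min_fixed_grip=0.8):
    """Augment the operator's commanded grip (0..1) for the holding hand.

    detected_action: label from the action classifier, e.g. "one_hand_fixed".
    The min_fixed_grip threshold is a hypothetical tuning parameter.
    """
    if detected_action == "one_hand_fixed":
        # The operator's intuition ("hold the bowl firmly") is lost over
        # teleoperation, so enforce a strong grip on the fixed hand.
        return max(commanded_grip, min_fixed_grip)
    # For all other patterns, pass the operator's command through unchanged.
    return commanded_grip
```

The point is not the specific numbers but the structure: the human still drives, and the system quietly corrects the parts of the action that don't survive remote operation.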
Of course, this is all still human-controlled, more or less, but the human's actions are being augmented and reinterpreted rather than simply reproduced mechanically. You can follow the link to see this robot learn its two-handed moves from human dexterity.
A robot doing all of this autonomously is still a long way off, but this kind of research can establish a real baseline. What matters is making a robot understand "why" a human performs a certain gesture, rather than just "how".