Researchers from the KTH Royal Institute of Technology in Stockholm, Sweden, have completed a European Union project aimed at advancing collaborative artificial intelligence, in which robots were enabled to coordinate with one another through body language while performing complex tasks such as surgery. According to a university report, the project was completed in May 2016 in collaboration with partners at Aalto University in Finland, the National Technical University of Athens in Greece, and the École Centrale Paris in France.

In a series of recorded demonstrations, the scientists showed the capabilities of off-the-shelf autonomous machines, including NAO robots, a line of personal, programmable, interactive humanoid robots. One video shows a robot pointing out an object to another robot, conveying that it needs the other robot to lift the item.

Dimos Dimarogonas, an associate professor at KTH and an associate editor for Automatica, IET Control Theory & Applications and IEEE Transactions on Automation Science and Engineering, says that the researchers have developed algorithms that enable the robots to detect when a colleague is in trouble, when to abandon their current task to help out, and when to ask for help.
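
The report does not spell out those algorithms, but the decision logic it describes could look roughly like the following sketch. The TaskStatus fields, thresholds and helper functions here are illustrative assumptions, not the KTH implementation.

```python
# Hypothetical sketch of the "ask for help" / "offer help" decisions described
# above. All names and thresholds are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class TaskStatus:
    progress: float         # fraction of the task completed, 0.0 to 1.0
    stalled_seconds: float  # time with no measurable progress
    load_ratio: float       # measured load relative to the robot's capacity

def needs_help(status: TaskStatus,
               stall_limit: float = 5.0,
               overload: float = 0.9) -> bool:
    """Decide whether this robot should broadcast a call for assistance."""
    return status.stalled_seconds > stall_limit or status.load_ratio > overload

def should_assist(own_status: TaskStatus, peer_call: bool) -> bool:
    """Decide whether to pause the current task and help a peer.

    A simple rule: only abandon work that is not close to completion,
    and only when a peer has actually asked for help.
    """
    return peer_call and own_status.progress < 0.8

if __name__ == "__main__":
    me = TaskStatus(progress=0.3, stalled_seconds=0.0, load_ratio=0.4)
    peer = TaskStatus(progress=0.5, stalled_seconds=8.0, load_ratio=0.95)
    print(needs_help(peer))          # True: the peer is stalled and overloaded
    print(should_assist(me, True))   # True: my own task can wait
```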

Dimarogonas says, “Robots can stop what they’re doing and go over to assist another robot which has asked for help. This will mean flexible and dynamic robots that act much more like humans — robots capable of constantly facing new choices and that are competent enough to make decisions.”

As the technology progresses, robots are given responsibility for increasingly complex tasks, and the scientists are focusing on dividing those task responsibilities among robots more efficiently. Shared work could include lending an extra hand to lift and carry something or holding an object in place, but the concept can be scaled up to any number of functions in a home, a factory or other kinds of workplaces, according to Dimarogonas. In another demonstration, two robots carry an object together: one leads, and the other senses what the lead robot wants from the force it exerts on the object.
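
One common way to realize this kind of leader/follower carrying is a simple admittance rule, in which the follower moves in the direction of the force it senses through the shared object. The sketch below illustrates the idea; the gains, deadband and sensor interface are assumptions for illustration, not the controller used in the project.

```python
# Minimal admittance-style follower: map the force sensed at the gripper to a
# base velocity command, so the follower complies with the leader's intent.

import numpy as np

class FollowerCarryController:
    def __init__(self, gain: float = 0.02, deadband: float = 2.0):
        self.gain = gain          # metres per second per newton of sensed force
        self.deadband = deadband  # ignore forces below this threshold, in newtons

    def velocity_command(self, wrist_force: np.ndarray) -> np.ndarray:
        """Return a Cartesian velocity command from the sensed wrist force."""
        if np.linalg.norm(wrist_force) < self.deadband:
            return np.zeros(3)            # the leader is not pushing or pulling
        return self.gain * wrist_force    # move along the applied force

# Example: the leader pulls the object forward and slightly to the left.
controller = FollowerCarryController()
sensed = np.array([12.0, 4.0, 0.0])        # newtons, in the follower's frame
print(controller.velocity_command(sensed))  # follower drifts forward and left
```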

Using upgraded visual technology, the robots can perceive the same object from different angles, and the visual information each robot receives is translated into a shared symbol for that object. The algorithm then passes these symbols to the next tier, the decision-making stage, where the final round of processing takes place.
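
As a rough illustration of that two-tier pipeline, the sketch below maps viewpoint-dependent detection labels to shared symbols and hands only those symbols to a decision layer. The label names, symbol table and decision rule are hypothetical.

```python
# Tier 1: perception. Each robot's detector may label the same object
# differently depending on viewpoint; a shared symbol table reconciles them.
SYMBOL_TABLE = {
    "red_box_front": "OBJECT_A",
    "red_box_side": "OBJECT_A",
    "blue_tray": "OBJECT_B",
}

def ground_detections(detections: list[str]) -> set[str]:
    """Translate viewpoint-dependent labels into shared symbols."""
    return {SYMBOL_TABLE[d] for d in detections if d in SYMBOL_TABLE}

# Tier 2: decision-making. Operates only on symbols, never on raw images.
def decide_action(symbols: set[str]) -> str:
    if "OBJECT_A" in symbols:
        return "LIFT OBJECT_A"
    return "WAIT"

robot1_view = ["red_box_front"]
robot2_view = ["red_box_side", "blue_tray"]
# Both robots arrive at the same symbol, and therefore the same decision.
print(decide_action(ground_detections(robot1_view)))  # LIFT OBJECT_A
print(decide_action(ground_detections(robot2_view)))  # LIFT OBJECT_A
```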

Dimarogonas likens the process to two people carrying a table when only one of them knows where it is going: the other person can detect what to do from the first person’s movements and the way they turn, push or pull. The most significant point, however, is that all of these actions take place without human intervention or external assistance.

This decision-making is carried out seamlessly in real time, and the project’s communication approach distinguishes it from more orthodox collaborative-robotics efforts. The researchers have minimized explicit communication, which they feel is intrusive to the tasks being carried out. A symbolic communication protocol does exist, but it is not continuous: when assistance is required, a call for help is broadcast, and a helper robot passes the message on to another robot in a single seamless motion.
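
A minimal sketch of such event-triggered, symbolic messaging is shown below: robots stay silent during normal operation and exchange a single discrete message only when help is needed. The message format and Robot class are assumptions made for illustration.

```python
# Event-triggered help request: one discrete broadcast, no continuous streaming.

from dataclasses import dataclass

@dataclass(frozen=True)
class HelpRequest:
    sender: str
    task: str  # symbolic task name, e.g. "LIFT OBJECT_A"

class Robot:
    def __init__(self, name: str, peers: list["Robot"]):
        self.name = name
        self.peers = peers
        self.busy = False

    def request_help(self, task: str) -> None:
        """Broadcast a single discrete request instead of streaming state."""
        msg = HelpRequest(sender=self.name, task=task)
        for peer in self.peers:
            peer.on_help_request(msg)

    def on_help_request(self, msg: HelpRequest) -> None:
        """Take up the task if idle; otherwise ignore the call."""
        if not self.busy:
            print(f"{self.name}: assisting {msg.sender} with {msg.task}")
            self.busy = True

# Example: one robot calls for help and an idle peer takes it up.
a, b = Robot("nao_1", []), Robot("nao_2", [])
a.peers, b.peers = [b], [a]
a.request_help("LIFT OBJECT_A")
```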

The work could prove revolutionary for the field of medicine, where a growing number of robots are being used successfully in surgical procedures, with some even outperforming humans. Robotic technology is already part of many operations, and crucial procedures in eye surgery, hair transplants and knee operations are also performed by robots. Tissue surgery is even more complex, since it involves a large number of intricate, moving structures that are difficult to handle precisely. It is therefore important that this technology be optimized for use in surgical procedures.

For now, the researchers have a steep climb ahead of them, as their algorithms are currently used with “normal” non-surgical robots and for relatively simple tasks. The next step will be integrating this work at the surgeon’s table.

This final step is crucial and much needed, since a significant number of deaths are caused by medical negligence. Many of these deaths could be avoided if surgeons had external assistance from robots, since most medical errors result from lapses in judgment, skill or care coordination, misdiagnosis, or machine and system error.