The Possibilities of Augmenting Physical Capabilities Using Brain-Machine Interfaces

OPINION

October 2019

Christian I. Penaloza and Shuichi Nishio

Augmenting Human Capabilities

As humans, we have always tried to augment our physical and cognitive capabilities to enhance the strength or endurance of our bodies or minds. Although the concept of human augmentation is not new, it has accelerated in the last decade with the fast development of new robotic systems, AI and genetic engineering, and it will continue to accelerate in the upcoming decades. The expected outcome is a new kind of human being, the superhuman or cyborg, who will seamlessly integrate machines with the human body and will have unprecedented capabilities to achieve things that today's humans cannot.

From cochlear implants to robotic prosthetics, a wide range of technologies that assist people with disabilities have been developed. These technologies are becoming so sophisticated that, rather than merely recovering or replacing a lost function, they may allow users to enhance their capabilities and go beyond what healthy people can achieve. For instance, an advanced cochlear implant may allow a user to hear clearly from a very far distance; a prosthetic hand made from a material much more resilient to heat may allow the user to handle extremely hot objects with no risk of burns; and prosthetic legs (i.e., running blades) may augment a person's gait and speed with less effort because they weigh less than a healthy person's real legs. If people with certain disabilities can already benefit from these types of devices, it is natural to ask what healthy people would be capable of if they also had access to such devices to augment their capabilities. The answer is simple: human augmentation can make future humans more functional, faster, stronger and hence more productive.

In the last decade, human augmentation prototypes have started to be tested on healthy humans, with promising results in collaborative task scenarios in which the augmentation device assists the operator in performing a particular task [1]. For instance, augmentation devices in the form of supernumerary robotic limbs (SRL), such as wearable robotic arms [2] or fingers [3], can collaboratively play musical instruments [5], support heavy objects while the operator completes the task [4], or grasp multiple objects simultaneously [3]. The methodologies to control these devices range from manual operation through a joystick to using electromyogram (EMG) signals from the muscle impulses of other limbs [3], as sketched below. These control approaches make it difficult for users to freely perform a parallel task while controlling the augmentation device. Therefore, the next step towards true human augmentation would be to control such devices in a way that does not require other body limbs, that is, by controlling them directly with the brain.
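As a rough illustration of this non-brain baseline, the sketch below shows a threshold-based EMG trigger of the kind commonly used to drive a wearable robotic finger from muscle impulses. It is not the controller from [3]; the sampling rate, window length and threshold are illustrative assumptions.

```python
# Hedged sketch: threshold-based EMG trigger (illustrative parameters only).
import numpy as np

FS = 1000          # sampling rate in Hz (assumed)
WINDOW = 200       # envelope smoothing window: 200 ms at 1 kHz (assumed)
THRESHOLD = 0.3    # activation threshold on the normalized envelope (assumed)

def emg_envelope(raw: np.ndarray) -> np.ndarray:
    """Rectify the raw EMG and smooth it with a moving-average filter."""
    rectified = np.abs(raw)
    kernel = np.ones(WINDOW) / WINDOW
    return np.convolve(rectified, kernel, mode="same")

def muscle_active(raw: np.ndarray) -> bool:
    """True if the most recent envelope sample exceeds the threshold."""
    env = emg_envelope(raw)
    env = env / (env.max() + 1e-9)     # normalize to [0, 1]
    return bool(env[-1] > THRESHOLD)

# Simulated 2-second recording: baseline noise, then a contraction burst.
rng = np.random.default_rng(0)
signal = 0.05 * rng.standard_normal(2 * FS)
signal[-300:] += rng.standard_normal(300)
print("close robotic finger:", muscle_active(signal))
```

The limitation is visible in the code itself: whichever muscle supplies the EMG is committed to the interface, so the corresponding limb is no longer fully free for a parallel task.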

Controlling Devices with the Brain

Recent advances in invasive brain-machine interfaces (BMI) have allowed neural activity to be monitored through electrodes surgically implanted in a patient's brain and used to deliver control commands to an external assistive device. Invasive BMI systems have been used for a wide range of applications, mostly for assistive purposes, to provide communication and control capabilities to people with severe disabilities, e.g., people who are totally paralyzed or 'locked in' [6]. However, the fact that a surgical procedure is needed to implant the electrodes in the person's brain makes the approach inconvenient and slows down the acceptance of the technology among healthy people.

On the other hand, non-invasive BMI systems have shown that by monitoring brain signals from electrodes located on the scalp, it is possible for the user to control devices such as a virtual keyboard [7], a wheelchair [8] or a robotic arm [9]. However, these systems usually require the user to achieve high levels of concentration and to avoid abrupt body movements during BMI operation. These limitations make BMI systems inconvenient for healthy users, who may ultimately prefer to use their own limbs to achieve the intended task. It is likely that healthy users would only be motivated to use a BMI system if it provided a benefit they cannot currently obtain with their own bodies, that is, if the BMI can be used to enhance their physical and cognitive capabilities.

Brain Controlled Augmentation Devices

Until now, BMI systems have been used for recovery or replacement of a lost ability, but not to enhance the capabilities of the operator. As previously mentioned, one way to achieve this is by designing augmentation devices that allow operators to engage in more than one task at the same time. To our knowledge, no BMI study had explored the control of augmentation devices such as an SRL to achieve multitasking. In our previous work [10], we explored the possibility that humans could multitask by controlling a body augmentation robotic arm with a BMI while simultaneously performing a parallel task with their own limbs.

In the experiment, a robotic arm was placed next to the participants, who wore an EEG headset. The system was then calibrated to distinguish the brain patterns produced when participants imagined the arm grasping or releasing a bottle. To test the skill of multitasking, participants had to perform two tasks simultaneously: the first was to hold and release a bottle using the robotic arm, and the second was to use their two real arms to balance a ball on a tray in a specified motion pattern, as shown in Fig. 1.
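For readers unfamiliar with such calibration, the sketch below shows a generic two-class motor-imagery pipeline (common spatial patterns followed by linear discriminant analysis), a standard approach for this kind of grasp/release decoding. It is not the decoder from [10]; the data are synthetic and every shape and parameter is an illustrative assumption.

```python
# Hedged sketch: generic CSP + LDA motor-imagery decoder on synthetic data.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

# Synthetic calibration set: 80 epochs, 16 EEG channels, 2 s at 250 Hz.
n_epochs, n_channels, n_times = 80, 16, 500
X = rng.standard_normal((n_epochs, n_channels, n_times))
y = rng.integers(0, 2, n_epochs)   # 0 = imagined grasp, 1 = imagined release

# CSP learns spatial filters that maximize the variance difference between
# the two imagery classes; LDA then classifies the log-variance features.
clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),
    ("lda", LinearDiscriminantAnalysis()),
])

# Cross-validated accuracy (chance level here, since the data are random).
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

On real calibration recordings, the cross-validated accuracy of a pipeline like this determines whether a participant can reliably issue grasp and release commands before the multitasking phase begins.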

Figure 1. Experimental setup.

Interestingly, the resulting BMI performance during multitasking showed that participants fell into two groups: those who were quite successful in carrying out the requested tasks simultaneously, and those who were not. Those in the successful group were able to keep the ball balanced on the tray while mentally commanding the robot to grab and move the bottle an average of 85 percent of the time; those in the less successful group accomplished this only 52 percent of the time. On the other hand, when controlling the robot arm for a single task, performance did not differ significantly between the two groups. This result may suggest that some people are simply better at multitasking than others, regardless of their BMI performance; this hypothesis should be tested in multitask paradigms that do not involve a BMI. An alternative hypothesis is that the successful participants used motor strategies for balancing the ball that did not interfere with the BMI decoder.

Another interesting observation is that multitasking performance seems to be better than in traditional BMI studies (85% in good performers compared to 60-70% in motor imagery experiments [11], although in general it is difficult to make direct comparisons across studies due to many uncontrolled variables). One hypothesis is that this outcome was made possible by the positive effect of the visual feedback produced by the appearance of the robot arm; in our case, a human-looking robot arm was used during the experiments. In this regard, BMI researchers have explored visual feedback in the form of virtual human-like hands [12], and the results suggest that such feedback is more engaging and motivating and that it influences the effectiveness and usability of a BMI system. In fact, in our previous work we also showed that visual feedback, in particular physical human-like robotic hands controlled by a BMI, induces a sense of ownership in operators [13-14], and that such operators modulate their sensorimotor rhythms better than operators trained with the arrow-based visual feedback used in the classical BMI training approach [15].

So the question remains: why use the brain to control human augmentation devices? First, because it is natural to do so. We as humans do not consciously plan every muscle movement needed to achieve an action; we simply think about it and our body responds accordingly. In this sense, robotic augmentation devices will need artificial intelligence and context awareness capabilities to sense the environment and act upon it according to the high-level command generated by the brain. For instance, consider the system we proposed in [16], in which we integrated a camera into the robot arm and provided object recognition, so that the system could reconfigure the grasping configuration autonomously depending on the object the user intends to grasp (a simplified sketch of this idea follows Fig. 2).

Figure 2. Robot arm with AI capabilities to recognize objects and choose a grasping configuration autonomously.
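The sketch below illustrates the division of labor this implies: the BMI supplies only a coarse intent, and the vision module fills in the details. It is not the implementation from [16]; the object classes, grasp parameters and function names are hypothetical placeholders.

```python
# Hedged sketch: mapping a coarse BMI intent plus a recognized object label
# to a grasp preshape. All classes and parameters are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GraspConfig:
    aperture_mm: float   # how wide the gripper opens
    approach: str        # approach direction of the end effector

# Hypothetical library of preshapes indexed by recognized object class.
GRASP_LIBRARY = {
    "bottle": GraspConfig(aperture_mm=70, approach="side"),
    "cup":    GraspConfig(aperture_mm=85, approach="top"),
    "phone":  GraspConfig(aperture_mm=20, approach="top"),
}

def select_grasp(bmi_intent: str, recognized_object: str) -> Optional[GraspConfig]:
    """Combine a high-level BMI command ("grasp"/"release") with the
    camera's object label to choose a concrete grasp configuration."""
    if bmi_intent != "grasp":
        return None                    # e.g. "release" needs no preshape
    # Unknown objects fall back to a generic wide top grasp.
    return GRASP_LIBRARY.get(recognized_object,
                             GraspConfig(aperture_mm=100, approach="top"))

# Example: the decoder outputs "grasp" while the camera reports "bottle".
print(select_grasp("grasp", "bottle"))
```

The design point is that the more perception and planning the robot handles autonomously, the coarser, and therefore the more reliably decodable, the brain commands can be.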

And second, because multitasking with a BMI might itself produce cognitive enhancement. Multitasking depends on the optimal allocation of cognitive resources (i.e., switching of attention, decision making, working memory, motor coordination) across simultaneous tasks, so if participants become better at controlling the third arm, their multitasking ability, and hence their cognitive capabilities, might also improve, with benefits that carry over into everyday life. Although this vision is promising, there are still many challenges to overcome besides the current technical limitations of the robot arm. For instance, it is necessary to investigate whether the multitasking skill acquired with the third arm is transferable to daily-life multitasking. Moreover, it is important to find out how long the effect of the multitasking skill lasts and whether it can be relearned at a faster pace the second or third time it is practiced. Lastly, it would be interesting to analyze the effect of this multitasking in the human brain (perhaps with an fMRI study) and compare it with other types of multitasking skill. If these effects prove to be positive (transferable, long-lasting and distinct in the brain), BMI-based cognitive training could become a revolutionary way to enhance the cognitive capabilities of humans.

References

[1] di Pino G., Maravita A., Zollo L., Guglielmelli E., di Lazzaro V. Augmentation-related brain plasticity. Frontiers in Systems Neuroscience. 2014;8, article 109 doi: 10.3389/fnsys.2014.00109

[2] Parietti F. and Asada H., Dynamic Analysis and State Estimation for Wearable Robotic Limbs Subject to Human-Induced Disturbances, IEEE International Conference on Robotics and Automation (ICRA 2013), Karlsruhe, Germany, May 2013.

[3] Wu F. and Asada H., “Bio-artificial synergies for grasp posture control of supernumerary robotic fingers,” in Proceedings of Robotics: Science and Systems, (Berkeley, USA), July 2014.

[4] Parietti F. and Asada H., Bracing the Human Body with Supernumerary Robotic Limbs for Physical Assistance and Load Reduction, IEEE International Conference on Robotics and Automation (ICRA 2014), Hong Kong, China, May 2014.

[5] Bretan M., Gopinath D., Mullins P., and Weinberg G., “A Robotic Prosthesis for an Amputee Drummer,” arXiv preprint arXiv:1612.04391 [cs.RO], 2016.

[6] Lebedev M. A. and Nicolelis M. A., “Brain-Machine Interfaces: Past, Present and Future,” Trends in Neurosciences, Vol. 29, No. 9, 2006, pp. 536-546. doi:10.1016/j.tins.2006.07.004

[7] Donchin E., Spencer K.M. and Wijesinghe R. (2000), The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface. IEEE Transactions on Rehabilitation Engineering, 8(2), June 2000.

[8] Ferreira A., Silva R.L., Celeste W.C., Bastos T.F. and Sarcinelli M. (2007), Human-machine interface based on muscular and brain signals applied to a robotic wheelchair, J. Phys.: Conf. Ser., 90, 012094, 2007.

[9] Carmena, J, Lebedev M, Crist R, O’Doherty J, Santucci D, Dimitrov, DF, Patil PG, Henriquez C, Nicolelis M (2003). Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biology 1 (2): E42. doi:10.1371/journal.pbio.0000042

[10] Christian I. Penaloza and Shuichi Nishio, “Controlling a Third Arm with a BMI,” Science Robotics, Vol. 3, Issue 20, eaat1228 (2018). DOI: 10.1126/scirobotics.aat1228

[11] Bashashati H., Ward R.K., Birch G.E. and Bashashati A. (2015) Comparing Different Classifiers in Sensory Motor Brain Computer Interfaces. PLoS ONE 10(6): e0129435.

[12] Evans N., Gale S., Schurger A. and Blanke O. (2015) Visual Feedback Dominates the Sense of Agency for Brain-Machine Actions. PLoS ONE 10(6): e0130019. https://doi.org/10.1371/journal.pone.0130019

[13] Alimardani M., Nishio S. and Ishiguro H. (2013), “Humanlike robot hands controlled by brain activity arouse illusion of ownership in operators”, Scientific Reports, vol. 3, article 2396, August 2013.

[14] Alimardani M., Nishio S. and Ishiguro H. (2014). Effect of biased feedback on motor imagery learning in BCI-teleoperation system. Frontiers in Systems Neuroscience, 8, 52.

[15] C. I. Penaloza, M. Alimardani and S. Nishio, “Android Feedback-based Training modulates Sensorimotor Rhythms during Motor Imagery,” in IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. PP, no. 99, pp. 1-1.

[16] Christian I. Penaloza, David Hernandez-Carmona, and Shuichi Nishio. 2018. Towards Intelligent Brain-Controlled Robotic Limbs. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC2018), Miyazaki, Japan, October 7-10, 2018.

 

Authors biographies:

Dr. Christian Penaloza holds a Master's and Ph.D. in Engineering Science from Osaka University. Currently, he is the CEO and Director of Mirai Innovation Research Institute in Osaka and a researcher at the Advanced Telecommunications Research Institute (ATR) in Kyoto, Japan. Dr. Penaloza has authored international publications in the areas of robotics, artificial intelligence and brain-machine interface systems. In 2016, MIT Technology Review recognized Dr. Penaloza as one of the top 10 innovators under 35 years old and awarded him the “Innovator of the Year 2016 Mexico” award, a world-class award for young innovators who are at the forefront of technology.

 

Shuichi Nishio received his M.Sc. in computer science from Kyoto University in 1994 and his D.Eng. from Osaka University in 2010, and is currently a specially appointed professor at Osaka University, Japan. His research interests include self-recognition systems, bodily extension and elderly support with robots. He has also engaged in studies on networked robots, pattern recognition and standardization of robotic technologies.