Myoelectric control holds considerable potential to change human-robot interaction, owing to its ability to measure human motion intent non-invasively. However, current control schemes have struggled to achieve the robust performance that is necessary for use in commercial applications. As demands in myoelectric control trend toward simultaneous multifunctional control, multi-muscle coordinations, or synergies, play larger roles in the success of the control scheme. Detecting and refining patterns in muscle activations that are robust to the high variance and transient changes associated with surface electromyography is essential for efficient, user-friendly control. This article reviews the role of muscle synergies in myoelectric control schemes by dissecting each component of the scheme with respect to the associated challenges for achieving robust simultaneous control of myoelectric interfaces. Electromyography recording details, signal feature extraction, pattern recognition, and motor-learning-based control schemes are considered, and future directions are proposed as steps toward fulfilling the potential of myoelectric control in clinically and commercially viable applications.
One of the hottest topics in rehabilitation robotics is that of proper control of prosthetic devices. Despite decades of research, the state of the art is dramatically behind expectations. To shed light on this issue, in June 2013 the first international workshop on the Present and future of non-invasive peripheral nervous system (PNS)-Machine Interfaces (MI; PMI) was convened, hosted by the International Conference on Rehabilitation Robotics. The keyword PMI has been selected to denote human-machine interfaces targeted at the limb-deficient, mainly upper-limb amputees, dealing with signals gathered from the PNS in a non-invasive way, that is, from the surface of the residuum. The workshop was intended to provide an overview of the state of the art and future perspectives of such interfaces; this paper is a collection of opinions expressed by each researcher/group involved in it.
Humans have the inherent ability to perform highly dexterous tasks with their arms, involving maintenance of posture, movement, and interaction with the environment. The latter requires the human to control the dynamic characteristics of the upper limb musculoskeletal system. These characteristics are quantitatively represented by inertia, damping, and stiffness, which are measures of mechanical impedance. Many previous studies have shown that arm posture is a dominant factor in determining the end-point impedance on a horizontal plane. This paper presents the characterization of the end-point impedance of the human arm in 3-D space. Moreover, it models the regulation of the arm impedance with muscle cocontraction. The characterization is made by means of experimental trials where human subjects maintained arm posture while their arms were perturbed by a robot arm. Furthermore, the subjects were asked to control the level of their arm muscles' cocontraction, using visual feedback, in order to investigate the effect of muscle cocontraction on the arm impedance. The results of this study show an anisotropic increase of arm stiffness due to muscle cocontraction. These results could improve our understanding of human arm biomechanics, as well as provide implications for human motor control, specifically the control of arm impedance through muscle cocontraction.
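As a rough illustration of the kind of characterization described above (not the authors' actual protocol), end-point impedance can be identified by fitting the standard mass-damper-spring model F = M·a + B·v + K·x to perturbation data by least squares. The sketch below is a hypothetical 1-D version with simulated signals; all parameter values are invented for the example.

```python
import numpy as np

# Hypothetical 1-D impedance identification: fit F = M*a + B*v + K*x
# to simulated perturbation data by linear least squares.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
# Two excitation frequencies, so acceleration is not collinear with position
# (a single sinusoid would make M and K unidentifiable, since a = -w^2 * x).
x = 0.01 * np.sin(2 * np.pi * 3 * t) + 0.005 * np.sin(2 * np.pi * 9 * t)
v = np.gradient(x, t)                     # velocity (m/s)
a = np.gradient(v, t)                     # acceleration (m/s^2)

M_true, B_true, K_true = 1.5, 8.0, 300.0  # assumed "true" impedance parameters
F = M_true * a + B_true * v + K_true * x
F += rng.normal(scale=0.01, size=F.shape) # measurement noise

A = np.column_stack([a, v, x])            # regressor matrix
M_hat, B_hat, K_hat = np.linalg.lstsq(A, F, rcond=None)[0]
print(M_hat, B_hat, K_hat)                # close to the assumed true values
```

In practice the same fit is done per direction (or with full matrices in 3-D), which is what reveals the anisotropy of the stiffness ellipsoid mentioned in the abstract.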
Myoelectric controlled interfaces have become a research interest for use in advanced prostheses, exoskeletons, and robot teleoperation. Current research focuses on improving a user's initial performance, either by training a decoding function for a specific user or by implementing "intuitive" mapping functions as decoders. However, both approaches are limiting, with the former being subject specific and the latter task specific. This paper proposes a paradigm shift in myoelectric interfaces by embedding the human as the controller of the system to be operated. Using abstract mapping functions between myoelectric activity and control actions for a task, this study shows that human subjects are able to control an artificial system with increasing efficiency by simply learning how to control it. The method's efficacy is tested by using two different control tasks and four different abstract mappings relating upper limb muscle activity to control actions for those tasks. The results show that all subjects were able to learn the mappings and improve their performance over time. More interestingly, a chronological evaluation across trials reveals that the learning curves transfer across subsequent trials having the same mapping, independent of the tasks to be executed. This implies that new muscle synergies are developed and refined relative to the mapping used by the control task, suggesting that maximal performance may be achieved by learning a constant, arbitrary mapping function rather than dynamic subject- or task-specific functions. Moreover, the results indicate that the method may extend to the neural control of any device or robot, without limitations of anthropomorphism or human-related counterparts.
Robots are increasingly used in tasks that include physical interaction with humans. Examples can be found in the area of rehabilitation robotics and power augmentation robots, as well as assistive and orthotic devices. However, current methods of physically coupling humans with robots fail to provide intrinsic safety, adaptation, and efficiency, limiting the application of wearable robotics to laboratory and controlled environments. In this paper we present the design and verification of a novel mechanism for physically coupling humans and robots. The device is intrinsically safe, since it is based on passive, non-electric features that are not prone to malfunctions. The device is capable of transmitting forces and torques in all directions between the human user and the robot. Moreover, its re-configurable nature allows for easy and consistent adjustment of the decoupling force. The latter makes the mechanism applicable to a wide range of human-robot coupling applications, ranging from low-force rehabilitation-therapy scenarios to high-force augmentation cases.
A learning scheme based on Random Forests is used to discriminate the task to be executed using only myoelectric activity from the upper limb. Three different task features can be discriminated: the subspace to move towards, the object to be grasped, and the task to be executed (with the object). The discrimination between the different reach-to-grasp movements is accomplished with a random forests classifier, which is able to perform efficient feature selection, helping us to reduce the number of EMG channels required for task discrimination. The proposed scheme can take advantage of both a classifier and a regressor that cooperate advantageously to split the task space, providing better estimation accuracy with task-specific EMG-based motion decoding models, as reported in  and . The whole learning scheme can be used by a series of EMG-based interfaces that can be found in rehabilitation settings and neural prostheses.
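The channel-reduction idea described above can be sketched with scikit-learn's random forest and its impurity-based feature importances. This is only an illustration on synthetic data, not the authors' pipeline: the array shapes, labels, and the number of retained channels are all invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical sketch: classify a task label from (synthetic) EMG features,
# then rank channels by forest feature importance to prune uninformative ones.
rng = np.random.default_rng(1)
n_trials, n_channels = 300, 16
X = rng.normal(size=(n_trials, n_channels))   # stand-in for EMG features
y = (X[:, 0] + X[:, 3] > 0).astype(int)       # label depends on channels 0 and 3

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]
selected = ranking[:4]                        # keep the 4 most informative channels
print("selected channels:", sorted(selected.tolist()))
```

Retraining the classifier on only the selected channels is what would reduce the number of EMG electrodes needed at run time.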
During locomotion, motor strategies can rapidly compensate for any obstruction or perturbation that could interfere with forward progression. Here we studied the contribution of interlimb pathways for evoking muscle activation patterns in the case where body weight is externally supported and vestibular feedback is limited. The experiments were conducted using a novel device intended for gait therapy: the MIT-Skywalker. The subjects' body weight was supported by an underneath saddle-like seat, and a chest harness was used to provide stabilization of the torso. Eight neurologically healthy individuals were asked to walk on the MIT-Skywalker, while one side of its split-belt treadmill was unexpectedly dropped either before heel-strike or during mid-stance. Leg kinematics are reported. We found that unilateral perturbations evoked responses at the contralateral limb, which were observed at both the kinematic and neuromuscular levels. The latency of most responses exceeded 100 msec, which suggests a supraspinal (i.e., not local) pathway.
In this paper we present the alpha-prototype of a novel pediatric ankle robot. This lower-extremity robotic therapy module was developed at MIT to aid recovery of ankle function in children with cerebral palsy aged 5 to 8 years. This lower-extremity robotic module will commence pilot testing with children with cerebral palsy at Blythedale Children's Hospital (Valhalla, NY), Bambino Gesu Children's Hospital (Rome, Italy), and Riley Children's Hospital (Indianapolis, IN). Its design follows the same guidelines as our upper-extremity robots and adult anklebot designs, i.e., it is a low-friction, backdrivable device with intrinsically low mechanical impedance. We show the ankle robot's characteristics and stability range. We also present pilot data with healthy children to demonstrate the potential of this device.
Human locomotion is based on the finely tuned coordination of the two legs. For this research, we studied the contribution of interlimb pathways for coordinating and synchronizing the legs' motion in the case where body weight is externally supported and vestibular feedback is limited. The experiments were conducted using a novel device intended for gait therapy: the MIT-Skywalker. The subjects' body weight was supported by an underneath saddle-like seat, and a chest harness was used to provide stabilization of the torso. Two neurologically healthy individuals were asked to walk on the MIT-Skywalker, while one side of its split-belt treadmill was unexpectedly dropped during the stance phase of the perturbed leg. Leg kinematics are reported, as well as the effect of the timing of the perturbation on the unperturbed leg. Presented here are the phase-response curves (PRCs) for both legs. We found that unilateral perturbations evoked responses at the contralateral limb, while the timing of the activation played a significant role in those responses.
Walking impairments are a common sequela of neurological injury, severely affecting the quality of life of both adults and children. Gait therapy is the traditional approach to ameliorate the problem by re-training the nervous system, and there have been some attempts to mechanize this approach. In this paper, we present a novel device to deliver gait therapy which, in contrast to previous approaches, takes advantage of the concept of passive walkers and the natural dynamics of the lower extremity in order to deliver more "ecological" therapy. We also discuss the closed-loop control scheme, which enables safe and efficient operation of the device, and present initial feasibility tests with unimpaired subjects.
Human-robot control interfaces have received increased attention during the last decades. These interfaces increasingly use signals coming directly from humans, since there is a strong necessity for simple and natural control interfaces. In this paper, electromyographic (EMG) signals from the muscles of the human upper limb are used as the control interface between the user and a robot arm. A switching regime model is used to decode the EMG activity of 11 muscles into a continuous representation of arm motion in 3-D space. The switching regime model is used to overcome the main difficulties of EMG-based control systems, i.e., the nonlinearity of the relationship between the EMG recordings and the arm motion, as well as the nonstationarity of EMG signals with respect to time. The proposed interface allows the user to control an anthropomorphic robot arm in 3-D space in real time. The efficiency of the method is assessed through real-time experiments in which four persons performed random arm motions.
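The switching-regime idea can be sketched as a gate that assigns each EMG feature vector to a regime, with a separate linear decoder fitted per regime. The minimal version below uses synthetic data and a trivially observable gate; the actual model in the paper is more sophisticated, and every variable name and dimension here is an assumption made for the illustration.

```python
import numpy as np

# Minimal hypothetical switching-regime decoder: a gate assigns each sample
# to one of two regimes, and a per-regime linear map decodes the motion output.
rng = np.random.default_rng(2)
n, d = 400, 11                                 # samples, EMG channels
X = rng.normal(size=(n, d))                    # stand-in for EMG features
regime = (X[:, 0] > 0).astype(int)             # regime label (observable here)
W = np.stack([rng.normal(size=d), rng.normal(size=d)])
y = np.einsum('nd,nd->n', X, W[regime])        # regime-dependent linear output

# "Training": fit one least-squares map per regime on that regime's samples.
W_hat = np.stack([
    np.linalg.lstsq(X[regime == k], y[regime == k], rcond=None)[0]
    for k in (0, 1)
])

# Decoding: gate each sample, then apply the selected regime's map.
k = (X[:, 0] > 0).astype(int)
y_hat = np.einsum('nd,nd->n', X, W_hat[k])
print(float(np.max(np.abs(y_hat - y))))        # near-zero residual
```

Splitting the input space this way lets each regime use a simple linear map, which is one common way to approximate a nonlinear, nonstationary EMG-to-motion relationship piecewise.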
Human-robot control interfaces have received increased attention during the past decades. With the introduction of robots into everyday life, especially in providing services to people with special needs (i.e., elderly people or people with impairments or disabilities), there is a strong necessity for simple and natural control interfaces. In this paper, electromyographic (EMG) signals from muscles of the human upper limb are used as the control interface between the user and a robot arm. EMG signals are recorded using surface EMG electrodes placed on the user's skin, leaving the user's upper limb free of the bulky interface sensors or machinery usually found in conventional human-controlled systems. The proposed interface allows the user to control an anthropomorphic robot arm in 3-D space in real time, using upper limb motion estimates based only on EMG recordings. Moreover, the proposed interface is robust to EMG changes with respect to time, mainly caused by muscle fatigue or adjustments of contraction level. The efficiency of the method is assessed through real-time experiments, including random arm motions in 3-D space with variable hand-speed profiles.