This paper presents a novel robotic interface to investigate the neuromechanical control of redundant planar arm movements. A unique aspect of this device is the third axis, by which the wrist, and hence the pose of the arm, can be fully constrained. The topology is based on a 5R closed-loop pantograph with a decoupled, cable-actuated wrist flexion/extension mechanism. The design and characterization (in terms of range of motion, impedance, friction and dynamics) are described. The device is lightweight and safe, with high force capability and low impedance. Simple experiments illustrate its advantages for the investigation of redundant motor control in humans.
There is a pressing need for new techniques capable of providing accurate information about sensorimotor function during the first 2 years of childhood. Here, we review current clinical methods and challenges for assessing motor function in early infancy, and discuss the potential benefits of applying technology-assisted methods. We also describe how the use of these tools with neuroimaging, and in particular functional magnetic resonance imaging (fMRI), can shed new light on the intra-cerebral processes underlying neurodevelopmental impairment. This knowledge is of particular relevance in the early infant brain, which has an increased capacity for compensatory neural plasticity. Such tools could bring a wealth of knowledge about the underlying pathophysiological processes of diseases such as cerebral palsy; act as biomarkers to monitor the effects of possible therapeutic interventions; and provide clinicians with much needed early diagnostic information.
Increasing the level of transparency in rehabilitation devices has been one of the main goals in robot-aided neurorehabilitation for the past two decades. This issue is particularly important for robotic structures that mimic the morphology of their human counterparts and attach directly to the limb. Problems arise for complex joints such as the human wrist, which cannot be accurately matched by a traditional mechanical joint. In such cases, mechanical differences between the human and robotic joints cause hyperstaticity (i.e. overconstraint) which, coupled with kinematic misalignments, leads to uncontrolled force/torque at the joint. This paper focuses on the prono-supination (PS) degree of freedom of the forearm. The overall force and torque in the wrist PS rotation is quantified by means of a wrist robot. A practical solution to avoid hyperstaticity and reduce the level of undesired force/torque in the wrist is presented, which is shown to reduce the force by 75% and the torque by 68%.
This paper validates a novel instrumented object, the iBox, dedicated to the analysis of grasping and manipulation. This instrumented box can be grasped and manipulated; it is fitted with an Inertial Measurement Unit (IMU), senses the force applied on each side, and transmits measured force, acceleration and orientation data wirelessly in real time. The iBox also provides simple access to data for analysing human motor control features such as the coordination between grasping and lifting forces and complex manipulation patterns. A set of grasping and manipulation experiments was conducted with 6 hemiparetic patients and 5 healthy control subjects. Measures of force, kinematics and dynamics are developed, which can be used to analyse grasping and contribute to patient assessment. Quantitative measurements provided by the iBox reveal numerous characteristics of grasping strategy and function in patients: variations in completion time, changes in the force distribution on the object and in grasping force levels, difficulty adjusting the applied force to the task and maintaining it, along with decreased movement smoothness and pathological tremor.
This work introduces a coordinate-independent method to analyse movement variability of tasks performed with hand-held tools, such as a pen or a surgical scalpel. We extend the classical uncontrolled manifold (UCM) approach by exploiting the geometry of rigid body motions, used to describe tool configurations. In particular, we analyse variability during a static pointing task with a hand-held tool, where subjects are asked to keep the tool tip in steady contact with another object. In this case the tool is redundant with respect to the task, as subjects control the position/orientation of the tool, i.e. 6 degrees-of-freedom (dof), to maintain the tool tip position (3 dof) steady. To test the new method, subjects performed a pointing task with and without arm support. The additional dof introduced in the unsupported condition, injecting more variability into the system, represented a resource to minimise variability in the task space via coordinated motion. The results show that all seven subjects channeled more variability along directions not directly affecting the task (the UCM), consistent with previous literature but now shown in a coordinate-independent way. Variability in the unsupported condition was only slightly larger at the endpoint but much larger in the UCM.
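The decomposition underlying this analysis can be illustrated with a minimal linearized sketch. The actual method works coordinate-independently on rigid-body motions; the Jacobian and configuration samples below are simulated for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearized task Jacobian: maps small changes in the 6-dof
# tool configuration to tool-tip displacement (3-dof).
J = rng.standard_normal((3, 6))

# Simulated configuration samples around a mean posture.
q = rng.standard_normal((200, 6)) * 0.01
dq = q - q.mean(axis=0)

# Orthonormal bases for the UCM (null space of J) and its complement.
U, s, Vt = np.linalg.svd(J)
null_basis = Vt[3:].T    # 6x3: directions that leave the tip unchanged
range_basis = Vt[:3].T   # 6x3: directions that move the tip

# Variance per dof within each subspace.
v_ucm = np.mean((dq @ null_basis) ** 2)
v_orth = np.mean((dq @ range_basis) ** 2)
```

The finding that subjects channel variability into the UCM corresponds to `v_ucm` exceeding `v_orth` in real data; for the isotropic noise simulated here the two are comparable.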
Efficient control of reciprocal activation and cocontraction of the muscles is critical to perform skillful actions with suitable force and impedance. However, it remains unclear how the brain controls force and impedance while recruiting the same set of muscles as actuators. Does control take place at the single muscle level leading to force and impedance, or are there higher-order centers dedicated to controlling force and impedance? We addressed this question using functional MRI during voluntary isometric wrist contractions with online electromyogram feedback. Comparison of the brain activity between the conditions requiring control of either wrist torque or cocontraction demonstrates that blood oxygen level-dependent activity in the caudo-dorsal premotor cortex (PMd) correlates well with torque, whereas the activity in the ventral premotor cortex (PMv) correlates well with the level of cocontraction. This suggests distinct roles of the PMd and PMv in the voluntary control of reciprocal activation and cocontraction of muscles, respectively.
Humans are able to learn tool-handling tasks, such as carving, demonstrating their ability to make movements in unstable environments with instability in varied directions. When faced with a single direction of instability, humans learn to selectively co-contract their arm muscles, tuning the mechanical stiffness of the limb end-point to stabilize movements. This study examines, for the first time, subjects simultaneously adapting to two distinct directions of instability, a situation that may typically occur when using tools. Subjects learned to perform reaching movements in two directions, each of which had lateral instability requiring control of impedance. The subjects were able to adapt to these unstable interactions and switch between movements in the two directions; they did so by learning to selectively control the end-point stiffness counteracting the environmental instability without superfluous stiffness in other directions. This finding demonstrates that the central nervous system can simultaneously tune the mechanical impedance of the limbs to multiple movements by learning movement-specific solutions. Furthermore, it suggests that the impedance controller learns as a function of the state of the arm rather than a general strategy.
It has been shown that people can learn to perform a variety of motor tasks in novel dynamic environments without visual feedback, highlighting the importance of proprioceptive feedback in motor learning. However, our results show that it is possible to learn a viscous curl force field without proprioceptive error to drive adaptation, by providing visual information about the position error. Subjects performed reaching movements in a constraining channel created by a robotic interface. The force that subjects applied against the haptic channel was used to predict the unconstrained hand trajectory under a viscous curl force field. This trajectory was provided as visual feedback to the subjects during movement (virtual dynamics). Subjects were able to use this visual information (discrepant with proprioception) and gradually learned to compensate for the virtual dynamics. Unconstrained catch trials, performed without the haptic channel after learning the virtual dynamics, exhibited similar trajectories to those of subjects who learned to move in the force field in the unconstrained environment. Our results demonstrate that the internal model of the external dynamics that was formed through learning without proprioceptive error was accurate enough to allow compensation for the force field in the unconstrained environment. They suggest a method to overcome limitations in learning resulting from mechanical constraints of robotic trainers by providing suitable visual feedback, potentially enabling efficient physical training and rehabilitation using simple robotic devices with few degrees-of-freedom.
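For reference, a viscous curl field maps hand velocity to a force perpendicular to it. The sketch below uses an illustrative gain typical of curl-field studies, not the paper's exact value:

```python
import numpy as np

# Viscous curl field: force rotated 90 degrees from hand velocity.
b = 15.0  # N·s/m, an illustrative gain
B = b * np.array([[0.0, 1.0], [-1.0, 0.0]])

def curl_force(v):
    """Force exerted by the curl field for hand velocity v (m/s)."""
    return B @ v

# A straight-ahead velocity produces a purely lateral force,
# pushing the hand sideways without opposing the movement itself.
f = curl_force(np.array([0.0, 0.3]))
```

In the study's paradigm, this force model applied to the measured channel forces yields the predicted unconstrained trajectory shown as visual feedback.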
The central nervous system uses stereotypical combinations of the three wrist/forearm joint angles to point in a given (2D) direction in space. In this paper, we first confirm and analyze this Donders' law for the wrist as well as the distributions of the joint angles. We find that the quadratic surfaces fitting the experimental wrist configurations during pointing tasks are characterized by a subject-specific Koenderink shape index and by a bias due to the prono-supination angle distribution. We then introduce a simple postural model using only four parameters to explain these characteristics in a pointing task. The model specifies the redundancy of the pointing task by determining the one-dimensional task-equivalent manifold (TEM), parameterized via wrist torsion. For every pointing direction, the torsion is obtained by the concurrent minimization of an extrinsic cost, which guarantees minimal angle rotations (similar to Listing's law for eye movements), and of an intrinsic cost, which penalizes wrist configurations away from comfortable postures. This allows simulating the sequence of wrist orientations to point at eight peripheral targets, from a central one, passing through intermediate points. The simulation first shows that, in contrast to eye movements, which can be predicted by only considering the extrinsic cost (i.e., Listing's law), both costs are necessary to account for the wrist/forearm experimental data. Second, fitting the synthetic Donders' law from the simulated task with a quadratic surface yields similar fitting errors compared to experimental data.
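The surface-fitting step can be sketched as an ordinary least-squares quadratic regression. This is a generic illustration, not the authors' exact parameterization or the Koenderink shape-index computation:

```python
import numpy as np

def fit_donders_surface(yaw, pitch, torsion):
    """Least-squares fit of a quadratic surface torsion = f(yaw, pitch).

    Regresses the torsional angle on a quadratic in the two pointing
    angles and reports the root-mean-square fitting error.
    """
    A = np.column_stack([
        np.ones_like(yaw), yaw, pitch,
        yaw**2, yaw * pitch, pitch**2,
    ])
    coef, *_ = np.linalg.lstsq(A, torsion, rcond=None)
    rmse = float(np.sqrt(np.mean((torsion - A @ coef) ** 2)))
    return coef, rmse

# Synthetic check: data generated from a known quadratic is fit exactly.
rng = np.random.default_rng(1)
yaw = rng.uniform(-0.5, 0.5, 100)
pitch = rng.uniform(-0.5, 0.5, 100)
torsion = 0.1 + 0.2 * yaw - 0.3 * pitch + 0.05 * yaw * pitch
coef, rmse = fit_donders_surface(yaw, pitch, torsion)
```

The fitted quadratic coefficients are what determine surface curvature measures such as the shape index.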
Traditionally, motor studies have assumed that motor tasks are executed according to a single plan characterized by regular patterns, which corresponds to the minimum of a cost function in extrinsic or intrinsic coordinates. However, the novel via-point task examined in this paper shows distinct planning and execution stages in motion production and demonstrates that subjects randomly select from several available motor plans to perform a task. Examination of the effect of pre-training and via-point orientation on subject behavior reveals that the selection of a plan depends on previous movements and is affected by constraints both intrinsic and extrinsic to the body. These results provide new insights into the hierarchical structure of motion planning in humans, which can only be explained if the current models of motor control integrate an explicit plan selection stage.
Rehabilitation of hand function is challenging, and only a few studies have investigated robot-assisted rehabilitation focusing on distal joints of the upper limb. This paper investigates the feasibility of using the HapticKnob, a table-top end-effector device, for robot-assisted rehabilitation of grasping and forearm pronation/supination, two important functions for activities of daily living involving the hand, which are often impaired in chronic stroke patients. It evaluates the effectiveness of this device for improving hand function and the transfer of improvement to arm function.
Initial work on robot-assisted neurorehabilitation for the upper extremity aimed primarily at training reaching movements with the proximal sections of the upper extremity. However, recent years have seen a surge in devices dedicated to hand function. This review describes the state of the art and the promises of this novel therapeutic approach.
While brain-computer interfaces (BCIs) can provide communication to people who are locked-in, they suffer from a very low information transfer rate. Further, using a BCI requires a concentration effort and using it continuously can be tiring. The brain controlled wheelchair (BCW) described in this paper aims at providing mobility to BCI users despite these limitations, in a safe and efficient way. Using a slow but reliable P300 based BCI, the user selects a destination amongst a list of predefined locations. While the wheelchair moves on virtual guiding paths ensuring smooth, safe, and predictable trajectories, the user can stop the wheelchair by using a faster BCI. Experiments with nondisabled subjects demonstrated the efficiency of this strategy. Brain control was not affected when the wheelchair was in motion, and the BCW enabled the users to move to various locations in less time and with significantly less control effort than other control strategies proposed in the literature.
Finger coordination and independence are often impaired in stroke survivors, preventing them from performing activities of daily living. We have developed a technique using a robotic interface, the HandCARE, to train these functions.
Advantages of virtual-reality simulators for surgical skill assessment and training include more training time, no risk to patients, repeatable difficulty levels and reliable feedback, without the resource demands and ethical issues of animal-based training. We tested this for a key subtask and showed a strong link between skill in the simulator and in reality. Suturing performance was assessed for four groups of participants, including experienced surgeons and naive subjects, on a custom-made virtual-reality simulator. Each subject performed 30 trials of a standardized suture placement task using five different types of needles. Traditional metrics of performance, as well as new metrics enabled by our system, were proposed, and the data indicate differences between trained and untrained performance. In all traditional parameters, such as time, number of attempts and motion quantity, the surgeons outperformed the other three groups, though the differences were not significant. However, motion smoothness, penetration and exit angles, tear size areas and orientation change differed significantly between the trained and untrained groups. This suggests that these parameters can be used in virtual microsurgery training.
We investigated the neuronal processing of the physiologically particularly important precision grip (opposition of index finger and thumb) by the combination of functional magnetic resonance imaging (fMRI) and an MR-compatible haptic interface. Ten healthy subjects performed isometric precision grip force generation with visual task instruction and real-time visual feedback in a block design. In a 2 x 2 factorial design, both the timing and force could be either constant or varying (identical average timing and force). As we expected only small changes in the fMRI response for the different fine-graded motor control conditions, we maximized the sensitivity of the data analysis and implemented a volumes of interest (VOI) restricted general linear model analysis including non-explanatory force regressors to eliminate directly force-related low-level activations. The VOIs were defined based on previous studies. We found significant associations: timing variation (variable vs. constant) with primary motor area (M1) and dorsal premotor area (PMd); force variation (variable vs. constant) with primary somatosensory area (S1), anterior intraparietal area (AIP) and PMd; and the interaction of timing and force with supplementary motor area (SMA) and AIP. We conclude that SMA and AIP integrate fine-graded higher-level timing and force control during precision grip. M1, S1 and PMd process lower-level timing and force control, yet not their integration. These results are the basis for a detailed assessment of manual motor control in a variety of motor diseases. The detailed behavioural assessment by our MR-compatible haptic interface is particularly valuable in patients due to the expected larger inter-individual variation in motor performance.
This study examines micromanipulation accuracy in pointing and in tracing a circle, using a novel contact-free measurement system. Three groups of subjects enable us to investigate the influence of age and micromanipulation expertise. The results show that, for all groups of subjects, a 10x magnification increases accuracy, but larger magnification does not improve it further. Expertise leads to reduced error, and grip force does not affect accuracy in the magnified condition.
This article describes the evaluation of the Collaborative Wheelchair Assistant (CWA), a robotic wheelchair that lets the user control the speed and provides guiding assistance along virtual paths programmed in software.
This article examines the validity of a model to explain how humans learn to perform movements in environments with novel dynamics, including unstable dynamics typical of tool use. In this model, a simple rule specifies how the activation of each muscle is adapted from one movement to the next. Simulations of multijoint arm movements with a neuromuscular plant that incorporates neural delays, reflexes, and signal-dependent noise demonstrate that the controller is able to compensate for changing internal or environmental dynamics and noise properties. The computational model adapts by learning both the appropriate forces and the required limb impedance to compensate precisely for forces and instabilities in arbitrary directions, with patterns similar to those observed in motor learning experiments. It learns to regulate reciprocal activation and co-activation in a redundant muscle system during repeated movements without requiring any explicit transformation from hand to muscle space. Independent error-driven change in the activation of each muscle results in a coordinated control of the redundant muscle system and in a behavior that reduces instability, systematic error, and energy.
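The flavor of such a per-muscle, error-driven rule can be caricatured with a single scalar muscle. The parameters and the simple form below are illustrative, not the paper's full neuromuscular model:

```python
def update_activation(u, eps, alpha=0.5, beta=0.2, gamma=0.05):
    """One trial-to-trial update of a single muscle's feedforward activation.

    Sketch of an error-driven rule (all parameters illustrative):
    - errors that stretch the muscle (eps > 0) increase its activation,
    - errors that shorten it also increase activation, but less,
      which yields co-activation across antagonists,
    - a decay term reduces activation when error is small (energy saving).
    """
    stretch = max(eps, 0.0)
    shorten = max(-eps, 0.0)
    return max(u + alpha * stretch + beta * shorten - gamma * u, 0.0)

# Repeated trials with a persistent stretch error drive activation up;
# once the error vanishes, the decay term brings it back down.
u = 0.0
for _ in range(20):
    u = update_activation(u, eps=1.0)
```

Applied independently to every muscle of a redundant arm, rules of this kind produce the coordinated force and impedance adaptation the simulations demonstrate.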
When coordinating movements, the nervous system often has to decide how to distribute work across a number of redundant effectors. Here, we show that humans solve this problem by trying to minimize both the variability of motor output and the effort involved. In previous studies that investigated the temporal shape of movements, these two selective pressures, despite having very different theoretical implications, could not be distinguished; because noise in the motor system increases with the motor commands, minimization of effort or variability leads to very similar predictions. When multiple effectors with different noise and effort characteristics have to be combined, however, these two cost terms can be dissociated. Here, we measure the importance of variability and effort in coordination by studying how humans share force production between two fingers. To capture variability, we identified the coefficient of variation of the index and little fingers. For effort, we used the sum of squared forces and the sum of squared forces normalized by the maximum strength of each effector. These terms were then used to predict the optimal force distribution for a task in which participants had to produce a target total force of 4-16 N by pressing onto two isometric transducers using different combinations of fingers. By comparing the predicted distribution across fingers to the actual distribution chosen by participants, we were able to estimate the relative importance of variability and effort at 1:7, with the unnormalized effort being most important. Our results indicate that the nervous system uses multi-effector redundancy to minimize both the variability of the produced output and effort, although effort costs clearly outweighed variability costs.
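The optimal split follows in closed form from minimizing a weighted sum of two quadratic costs under the total-force constraint. The sketch below uses illustrative CV values and loosely echoes the estimated 1:7 variability-to-effort weighting; it is not the study's fitted model:

```python
def share_force(F, cv=(0.2, 0.4), w=1.0 / 7.0):
    """Optimal split of a target force F between two fingers.

    Minimizes  w * sum((cv_i * f_i)^2) + sum(f_i^2)
    subject to f_1 + f_2 = F. Both terms are quadratic in f_i, so the
    Lagrange solution assigns f_i proportional to 1 / (w * cv_i**2 + 1):
    noisier effectors receive less force. cv values and the weight w
    are illustrative.
    """
    a = [w * c**2 + 1.0 for c in cv]
    inv = [1.0 / ai for ai in a]
    s = sum(inv)
    return [F * iv / s for iv in inv]

f1, f2 = share_force(4.0)  # the noisier (higher-cv) finger gets less force
```

Comparing splits like these against the splits participants actually chose is what allows the relative weight `w` to be estimated.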
While tremor has been studied extensively, the investigations thus far do not give detailed information on how the accuracy necessary for micromanipulations is affected while performing tasks in microsurgery and the life sciences. This paper systematically studies the effects of visual feedback, posture and grip force on the trial error and tremor intensity of subjects holding a forceps-like object to perform a pointing task. Results indicate that: (i) Arm support improves accuracy in tasks requiring fine manipulation and reduces tremor intensity in the 2-8 Hz region, but hand support does not provide the same effect; hence freedom of wrist movement can be retained without a significant increase in trial error. (ii) Magnification of up to x10 is critical to carry out accurate micromanipulations, but beyond that level, magnification is not the most important factor. (iii) While an appropriate grip force must be learned in order to grasp micro-objects, such as a needle, without damaging them, the level of grip force applied does not affect the endpoint accuracy.
The aging population and the wish to improve quality of life, as well as the economic pressure to work longer, call for intuitive and efficient assistive and rehabilitation technologies. We have therefore developed a project-based education paradigm for the design of assistive and rehabilitation devices. Using a miniature wireless sensing and feedback platform, the multimodal interactive motor assessment and training environment (MIMATE), students from different engineering backgrounds were able to develop innovative devices implementing rehabilitative games within the short span of a one-term course. We describe here this novel H-CARD course on the human-centered design of assistive and rehabilitative devices.
While motor interaction between a robot and a human, or between humans, has important implications for society as well as promising applications, little research has been devoted to its investigation. In particular, it is important to understand the different ways two agents can interact and generate suitable interactive behaviors. Towards this end, this paper introduces a framework for the description and implementation of interactive behaviors of two agents performing a joint motor task. A taxonomy of interactive behaviors is introduced, which can classify tasks and cost functions that represent the way each agent interacts. The role of an agent interacting during a motor task can be directly explained from the cost function this agent is minimizing and the task constraints. The novel framework is used to interpret and classify previous works on human-robot motor interaction. Its implementation power is demonstrated by simulating representative interactions of two humans. It also enables us to interpret and explain the role distribution and switching between roles when performing joint motor tasks.
Traditional assessment of a stroke subject's motor ability, carried out by a therapist who observes and rates the subject's motor behavior using ordinal measurement scales, is subjective, time consuming and lacks sensitivity. Rehabilitation robots, which have been the subject of intense inquiry over the last decade, are equipped with sensors that can be used to develop objective measures of motor behavior in a semiautomated way during therapy. This article reviews the current contributions of robot-assisted motor assessment of the upper limb. It summarizes the various measures related to movement performance, the models of motor recovery in stroke subjects, and the relationship of robotic measures to standard clinical measures. It analyses the possibilities offered by current robotic assessment techniques and the aspects that must be addressed to make robotic assessment a mainstream motor assessment method.
Humans skillfully manipulate objects and tools despite the inherent instability. In order to succeed at these tasks, the sensorimotor control system must build an internal representation of both the force and mechanical impedance. As it is not practical to either learn or store motor commands for every possible future action, the sensorimotor control system generalizes a control strategy for a range of movements based on learning performed over a set of movements. Here, we introduce a computational model for this learning and generalization, which specifies how feedforward muscle activity is learned as a function of the state space. Specifically, by incorporating co-activation as a function of error into the feedback command, we are able to derive an algorithm from a gradient descent minimization of motion error and effort, subject to maintaining a stability margin. This algorithm can be used to learn to coordinate any of a variety of motor primitives such as force fields, muscle synergies, physical models or artificial neural networks. This model of human learning and generalization is able to adapt to both stable and unstable dynamics, and provides a controller for generating efficient adaptive motor behavior in robots. Simulation results exhibit predictions consistent with all experiments on learning of novel dynamics requiring adaptation of force and impedance, and enable us to re-examine some of the previous interpretations of experiments on generalization.
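The qualitative behavior of such an error-and-effort rule can be sketched with two scalar states, a feedforward command and a co-activation (stiffness) level. The parameters and form are illustrative, not the paper's algorithm:

```python
def adapt(u, k, error, a=0.6, b=0.4, decay=0.05):
    """One iteration of a simplified force/impedance adaptation sketch.

    Gradient-descent-like rule (all parameters illustrative): the
    feedforward command u tracks the signed error, compensating for
    consistent forces, while the co-activation level k grows with error
    magnitude and decays otherwise, trading effort against stability.
    """
    u_new = u + a * error - decay * u
    k_new = max(k + b * abs(error) - decay * k, 0.0)
    return u_new, k_new

# In an unstable environment the error flips sign from trial to trial:
# the force command stays near zero, but stiffness grows to stabilize.
u = k = 0.0
for t in range(50):
    u, k = adapt(u, k, error=(-1.0) ** t)
```

With a consistent (stable) force field the same rule does the opposite: the signed errors accumulate into `u` while `k` settles low, matching the adaptation patterns the model reproduces.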
In the rodent brain, the hemodynamic response to a brief external stimulus changes significantly during development. Analogous changes in human infants would complicate the determination and use of the hemodynamic response function (HRF) for functional magnetic resonance imaging (fMRI) in developing populations. We aimed to characterize the HRF in human infants before and after the normal time of birth using rapid sampling of the blood oxygen level dependent (BOLD) signal. A somatosensory stimulus and an event related experimental design were used to collect data from 10 healthy adults, 15 sedated infants at term corrected post menstrual age (PMA) (median 41+1 weeks), and 10 preterm infants (median PMA 34+4 weeks). A positive amplitude HRF waveform was identified across all subject groups, with a systematic maturational trend in terms of decreasing time-to-peak and increasing positive peak amplitude associated with increasing age. Application of the age-appropriate HRF models to fMRI data significantly improved the precision of the fMRI analysis. These findings support the notion of structured development in the brain's response to stimuli across the last trimester of gestation and beyond.
A computational model is proposed in this paper to capture the learning capacity of a human subject adapting his or her movements to novel dynamics. The model uses an iterative learning control algorithm to represent human learning through repetitive processes. The control law performs adaptation using a model built from experimental data captured from the natural behavior of the individual of interest. The control signals are used by a model of the body to produce motion without the need for inverse kinematics. The resulting motion behavior is validated against experimental data. This new technique enables subject-specific modeling of motor function, with the potential to explain individual behavior in physical rehabilitation.
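A generic iterative-learning-control update of this kind can be sketched as follows; the plant, learning gain and reference below are toy illustrations, not the subject-specific model:

```python
import numpy as np

def ilc_trial(u, plant, reference, L=0.5):
    """One iterative-learning-control update (illustrative gain).

    The feedforward input for the next repetition is the current input
    plus a learning gain times this repetition's tracking error:
        u_{k+1} = u_k + L * (reference - y_k)
    """
    y = plant(u)
    e = reference - y
    return u + L * e, float(np.max(np.abs(e)))

# Toy static plant with an unknown gain; the reference is a movement profile.
plant = lambda u: 0.8 * u
ref = np.sin(np.linspace(0.0, np.pi, 50))

u = np.zeros(50)
errs = []
for _ in range(30):
    u, e = ilc_trial(u, plant, ref)
    errs.append(e)
# Peak tracking error shrinks across repetitions, mimicking
# trial-by-trial improvement in repetitive motor learning.
```

Here the error contracts by a fixed factor each repetition; fitting the learning gain and plant to an individual's data is what makes the modeling subject-specific.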