Our brain often needs to estimate unknown variables from imperfect information. Our knowledge about the statistical distributions of quantities in our environment (called priors) and currently available information from sensory inputs (called likelihood) are the basis of all Bayesian models of perception and action. While we know that priors are learned, most studies of prior-likelihood integration simply assume that subjects know about the likelihood. However, as the quality of sensory inputs changes over time, we also need to learn about new likelihoods. Here, we show that human subjects readily learn the distribution of visual cues (likelihood function) in a way that can be predicted by models of statistically optimal learning. Using a likelihood that depended on color context, we found that a learned likelihood generalized to new priors. Thus, we conclude that subjects learn about likelihoods.
A fundamental challenge for the nervous system is to encode signals spanning many orders of magnitude with neurons of limited bandwidth. To meet this challenge, perceptual systems use gain control. However, whether the motor system uses an analogous mechanism is essentially unknown. Neuromodulators, such as serotonin, are prime candidates for gain control signals during force production. Serotonergic neurons project diffusely to motor pools, and, therefore, force production by one muscle should change the gain of others. Here we present behavioral and pharmacological evidence that serotonin modulates the input-output gain of motoneurons in humans. By selectively changing the efficacy of serotonin with drugs, we systematically modulated the amplitude of spinal reflexes. More importantly, force production in different limbs interacts systematically, as predicted by a spinal gain control mechanism. Psychophysics and pharmacology suggest that the motor system adopts gain control mechanisms, and serotonin is a primary driver for their implementation in force production.
To generate new movements, we have to generalize what we have learned from previously practiced movements. An important question, therefore, is how the breadth of training affects generalization: does practicing a broad or narrow range of movements lead to better generalization? We address this question with a force field learning experiment. One group adapted while making many reaches in a small region (narrow group), and another group adapted while making reaches in a large region (broad group). Subsequently, both groups were tested for their ability to generalize without visual feedback. Not surprisingly, the narrow group exhibited smaller adaptation errors, yet they did not generalize any better than the broad group. Path errors during generalization were indistinguishable across the two groups, while the broad group exhibited reduced terminal errors. These findings indicate that overall, practicing a variety of movements is advantageous for performance during generalization; movement paths are not hindered, and terminal errors are superior. Moreover, the evidence suggests a dissociation between the ability to generalize information about a novel dynamic disturbance, which generalizes narrowly, and the ability to accurately locate the limb in space, which generalizes broadly.
Bayesian statistics defines how new information, given by a likelihood, should be combined with previously acquired information, given by a prior distribution. Many experiments have shown that humans make use of such priors in cognitive, perceptual, and motor tasks, but where do priors come from? As people never experience the same situation twice, they can only construct priors by generalizing from similar past experiences. Here we examine the generalization of priors over stochastic visuomotor perturbations in reaching experiments. In particular, we look into how the first two moments of the prior, the mean and the variance (uncertainty), generalize. We find that uncertainty appears to generalize differently from the mean of the prior, and an interesting asymmetry arises when the mean and the uncertainty are manipulated simultaneously.
Sequence production tasks are a standard tool to analyze motor learning, consolidation, and habituation. As sequences are learned, movements are typically grouped into subsets or chunks. For example, most Americans memorize telephone numbers in two chunks of 3 digits, and one chunk of 4. Studies generally use response times or error rates to estimate how subjects chunk, and these estimates are often related to physiological data. Here we show that chunking is simultaneously reflected in reaction times, errors, and their correlations. This multimodal structure enables us to propose a Bayesian algorithm that better estimates chunks while avoiding over-fitting. Our algorithm reveals previously unknown behavioral structure, such as increased error correlations with training, and promises to be a useful tool for the characterization of many forms of sequential motor behavior.
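To illustrate the basic intuition that reaction times spike at chunk starts, here is a minimal threshold heuristic. This is only a sketch of the underlying idea, not the Bayesian algorithm described in the abstract; the function name and z-score threshold are illustrative assumptions.

```python
import numpy as np

def chunk_boundaries(rts, z_thresh=1.0):
    """Flag likely chunk starts: positions whose reaction time lies more
    than z_thresh standard deviations above the mean (RTs typically
    spike at chunk boundaries). Position 0 always starts a chunk."""
    rts = np.asarray(rts, dtype=float)
    z = (rts - rts.mean()) / rts.std()
    return [0] + [i for i in range(1, len(rts)) if z[i] > z_thresh]
```

For instance, a 10-element sequence practiced as three chunks of lengths 3, 4, and 3 would show RT peaks at positions 0, 3, and 7.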
Analyzing data from experiments involves variables that we neuroscientists are uncertain about. Efficiently calculating with such variables usually requires Bayesian statistics. As it is crucial when analyzing complex data, it seems natural that the brain would "use" such statistics to analyze data from the world. And indeed, recent studies in the areas of perception, action, and cognition suggest that Bayesian behavior is widespread, in many modalities and species. Consequently, many models have suggested that the brain is built on simple Bayesian principles. While the brain's code is probably not actually simple, I believe that Bayesian principles will facilitate the construction of faithful models of the brain.
Powered prostheses are controlled using electromyographic (EMG) signals, which may introduce high levels of uncertainty even for simple tasks. According to Bayesian theories, higher uncertainty should influence how the brain adapts motor commands in response to perceived errors. Such adaptation may critically influence how patients interact with their prosthetic devices; however, we do not yet understand adaptation behavior with EMG control. Models of adaptation can offer insights on movement planning and feedback correction, but we first need to establish their validity for EMG control interfaces. Here we created a simplified comparison of prosthesis and able-bodied control by studying adaptation with three control interfaces: joint angle, joint torque, and EMG. Subjects used each of the control interfaces to perform a target-directed task with random visual perturbations. We investigated how control interface and visual uncertainty affected trial-by-trial adaptation. As predicted by Bayesian models, increased errors and decreased visual uncertainty led to faster adaptation. The control interface had no significant effect beyond influencing error sizes. This result suggests that Bayesian models are useful for describing prosthesis control and could facilitate further investigation to characterize the uncertainty faced by prosthesis users. A better understanding of factors affecting movement uncertainty will guide sensory feedback strategies for powered prostheses and clarify what feedback information best improves control.
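The qualitative prediction tested above, that larger errors and more reliable visual feedback both speed adaptation, falls out of a Kalman-style update. The following is a minimal sketch of that idea under standard Gaussian assumptions, not the specific model fitted in the study; the function names and parameters are illustrative.

```python
def bayes_adaptation_rate(sigma_prior, sigma_visual):
    """Kalman-style trial-by-trial learning rate: weight the observed
    error by its reliability relative to the prior estimate. Smaller
    visual noise (sigma_visual) yields a larger learning rate."""
    return sigma_prior**2 / (sigma_prior**2 + sigma_visual**2)

def adapt(estimate, observed_error, sigma_prior, sigma_visual):
    """One trial of error-driven adaptation of a perturbation estimate:
    the correction scales with both the error and the learning rate."""
    k = bayes_adaptation_rate(sigma_prior, sigma_visual)
    return estimate + k * observed_error
```

Under this scheme, blurring the visual feedback (increasing sigma_visual) shrinks the per-trial correction, matching the slower adaptation observed under higher visual uncertainty.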
Cancer and healthy cells have distinct distributions of molecular properties and thus respond differently to drugs. Cancer drugs ideally kill cancer cells while limiting harm to healthy cells. However, the inherent variance among cells in both cancer and healthy cell populations increases the difficulty of selective drug action. Here we formalize a classification framework based on the idea that an ideal cancer drug should maximally discriminate between cancer and healthy cells. More specifically, this discrimination should be performed on the basis of measurable cell markers. We divide the problem into three parts which we explore with examples. First, molecular markers should discriminate cancer cells from healthy cells at the single-cell level. Second, the effects of drugs should be statistically predicted by these molecular markers. Third, drugs should be optimized for classification performance. We find that expression levels of a handful of genes suffice to discriminate well between individual cells in cancer and healthy tissue. We also find that gene expression predicts the efficacy of some cancer drugs, suggesting that these cancer drugs act as suboptimal classifiers using gene profiles. Finally, we formulate a framework that defines an optimal drug, and predicts drug cocktails that may target cancer more accurately than the individual drugs alone. Conceptualizing cancer drugs as solving a discrimination problem in the high-dimensional space of molecular markers promises to inform the design of new cancer drugs and drug cocktails.
Cervical spinal cord injury (SCI) paralyzes muscles of the hand and arm, making it difficult to perform activities of daily living. Restoring the ability to reach can dramatically improve quality of life for people with cervical SCI. Any reaching system requires a user interface to decode parameters of an intended reach, such as trajectory and target. A challenge in developing such decoders is that often few physiological signals related to the intended reach remain under voluntary control, especially in patients with high cervical injuries. Furthermore, the decoding problem changes when the user is controlling the motion of their limb, as opposed to an external device. The purpose of this study was to investigate the benefits of combining disparate signal sources to control reach in people with a range of impairments, and to consider the effect of two feedback approaches. Subjects with cervical SCI performed robot-assisted reaching, controlling trajectories with either shoulder electromyograms (EMGs) or EMGs combined with gaze. We then evaluated how reaching performance was influenced by task-related sensory feedback, testing the EMG-only decoder in two conditions. The first involved moving the arm with the robot, providing congruent sensory feedback through their remaining sense of proprioception. In the second, the subjects moved the robot without the arm attached, as in applications that control external devices. We found that the multimodal-decoding algorithm worked well for all subjects, enabling them to perform straight, accurate reaches. The inclusion of gaze information, used to estimate target location, was especially important for the most impaired subjects. In the absence of gaze information, congruent sensory feedback improved performance. These results highlight the importance of proprioceptive feedback, and suggest that multi-modal decoders are likely to be most beneficial for highly impaired subjects and in tasks where such feedback is unavailable.
Prosthetic devices need to be controlled by their users, typically using physiological signals. People tend to look at objects before reaching for them and we have shown that combining eye movements with other continuous physiological signal sources enhances control. This approach suffers when subjects also look at non-targets, a problem we addressed with a probabilistic mixture over targets where subject gaze information is used to identify target candidates. However, this approach would be ineffective if a user wanted to move towards targets that have not been foveated. Here we evaluated how the accuracy of prior target information influenced decoding accuracy, as the availability of neural control signals was varied. We also considered a mixture model where we assumed that the target may be foveated or, alternatively, that the target may not be foveated. We tested the accuracy of the models at decoding natural reaching data, and also in a closed-loop robot-assisted reaching task. The mixture model worked well in the face of high target uncertainty. Furthermore, errors due to inaccurate target information were reduced by including a generic model that relied on neural signals only.
Successful motor performance requires the ability to adapt motor commands to task dynamics. A central question in movement neuroscience is how these dynamics are represented. Although it is widely assumed that dynamics (e.g. force-fields) are represented in intrinsic, joint-based coordinates (Shadmehr and Mussa-Ivaldi 1994), recent evidence has questioned this proposal. Here we re-examine the representation of dynamics in two experiments. By testing generalization following changes in shoulder, elbow, or wrist configurations, the first experiment tested for extrinsic, intrinsic or object-centered representations. No single coordinate frame accounted for the pattern of generalization. Rather, generalization patterns were better accounted for both by a mixture of representations or by models that assumed local learning and graded, decaying generalization. A second experiment, in which we replicated the design of an influential study that had suggested encoding in intrinsic coordinates (Shadmehr & Mussa-Ivaldi, 1994), yielded similar results. That is, we could not find evidence that dynamics are represented in a single coordinate system. Taken together, our experiments suggest that internal models do not employ a single coordinate system when generalizing and may well be represented as a mixture of coordinate systems, as a single system with local learning, or both.
Over the past few decades, one of the most salient lifestyle changes for us has been the use of computers. For many of us, manual interaction with a computer occupies a large portion of our working time. Through neural plasticity, this extensive movement training should change our representation of movements (e.g., [1-3]), just like search engines affect memory. However, how computer use affects motor learning is largely understudied. Additionally, as virtually all participants in studies of perception and action are computer users, a legitimate question is whether insights from these studies bear the signature of computer-use experience. We compared non-computer users with age- and education-matched computer users in standard motor learning experiments. We found that people learned equally fast but that non-computer users generalized significantly less across space, a difference negated by two weeks of intensive computer training. Our findings suggest that computer-use experience shaped our basic sensorimotor behaviors, and this influence should be considered whenever computer users are recruited as study participants.
The frontal eye field (FEF) plays a central role in saccade selection and execution. Using artificial stimuli, many studies have shown that the activity of neurons in the FEF is affected by both visually salient stimuli in a neuron's receptive field and upcoming saccades in a certain direction. However, the extent to which visual and motor information is represented in the FEF in the context of the cluttered natural scenes we encounter during everyday life has not been explored. Here, we model the activities of neurons in the FEF, recorded while monkeys were searching natural scenes, using both visual and saccade information. We compare the contribution of bottom-up visual saliency (based on low-level features such as brightness, orientation, and color) and saccade direction. We find that, while saliency is correlated with the activities of some neurons, this relationship is ultimately driven by activities related to movement. Although bottom-up visual saliency contributes to the choice of saccade targets, it does not appear that FEF neurons actively encode the kind of saliency posited by popular saliency map theories. Instead, our results emphasize the FEF's role in the stages of saccade planning directly related to movement generation.
A molecular device that records time-varying signals would enable new approaches in neuroscience. We have recently proposed such a device, termed a "molecular ticker tape", in which an engineered DNA polymerase (DNAP) writes time-varying signals into DNA in the form of nucleotide misincorporation patterns. Here, we define a theoretical framework quantifying the expected capabilities of molecular ticker tapes as a function of experimental parameters. We present a decoding algorithm for estimating time-dependent input signals, and DNAP kinetic parameters, directly from misincorporation rates as determined by sequencing. We explore the requirements for accurate signal decoding, particularly the constraints on (1) the polymerase biochemical parameters, and (2) the amplitude, temporal resolution, and duration of the time-varying input signals. Our results suggest that molecular recording devices with kinetic properties similar to natural polymerases could be used to perform experiments in which neural activity is compared across several experimental conditions, and that devices engineered by combining favorable biochemical properties from multiple known polymerases could potentially measure faster phenomena such as slow synchronization of neuronal oscillations. Sophisticated engineering of DNAPs is likely required to achieve molecular recording of neuronal activity with single-spike temporal resolution over experimentally relevant timescales.
For rehabilitation and diagnoses, an understanding of patient activities and movements is important. Modern smartphones have built-in accelerometers which promise to enable quantifying minute-by-minute what patients do (e.g. walk or sit). Such a capability could inform recommendations of physical activities and improve medical diagnostics. However, a major problem is that during everyday life, we carry our phone in different ways, e.g. on our belt, in our pocket, in our hand, or in a bag. The recorded accelerations are not only affected by our activities but also by the phone's location. Here we develop a method to solve this kind of problem, based on the intuition that activities change rarely, and phone locations change even less often. A hidden Markov model (HMM) tracks changes across both activities and locations, enabled by a static support vector machine (SVM) classifier that probabilistically identifies activity-location pairs. We find that this approach improves tracking accuracy on healthy subjects as compared to a static classifier alone. The obtained method can be readily applied to patient populations. Our research enables the use of phones as activity tracking devices, without needing, as previous approaches did, to instruct subjects to always carry the phone in the same location.
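The core of such an approach can be sketched as Viterbi decoding over a "sticky" HMM whose emission probabilities come from a static classifier. This is a simplified illustration under assumed parameters (uniform initial state, a single stay probability), not the study's actual implementation; in practice the emission matrix would be the SVM's per-frame probabilities over activity-location pairs.

```python
import numpy as np

def viterbi_track(emission_p, p_stay=0.95):
    """Most likely state sequence under a 'sticky' HMM. States are
    (activity, location) pairs; emission_p[t, s] is a static
    classifier's probability of state s at frame t. Transitions keep
    the current state with probability p_stay, else switch uniformly."""
    T, S = emission_p.shape
    log_a = np.full((S, S), np.log((1.0 - p_stay) / (S - 1)))
    np.fill_diagonal(log_a, np.log(p_stay))
    log_e = np.log(emission_p + 1e-12)
    delta = log_e[0] - np.log(S)          # uniform initial state
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_a   # scores[i, j]: move i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_e[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):         # trace back the best path
        path[t - 1] = back[t, path[t]]
    return path
```

Because transitions are rare, a single noisy frame that a static classifier would mislabel gets smoothed away by the temporal model.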
Injuries of the cervical spinal cord can interrupt the neural pathways controlling the muscles of the arm, resulting in complete or partial paralysis. For individuals unable to reach due to high-level injuries, neuroprostheses can restore some of the lost function. Natural, multidimensional control of neuroprosthetic devices for reaching remains a challenge. Electromyograms (EMGs) from muscles that remain under voluntary control can be used to communicate intended reach trajectories, but when the number of available muscles is limited control can be difficult and unintuitive. We combined shoulder EMGs with target estimates obtained from gaze. Natural gaze data were integrated with EMG during closed-loop robotic control of the arm, using a probabilistic mixture model. We tested the approach with two different sets of EMGs, as might be available to subjects with C4- and C5-level spinal cord injuries. Incorporating gaze greatly improved control of reaching, particularly when there were few EMG signals. We found that subjects naturally adapted their eye-movement precision as we varied the set of available EMGs, attaining accurate performance in both tested conditions. The system performs a near-optimal combination of both physiological signals, making control more intuitive and allowing a natural trajectory that reduces the burden on the user.
We often need to learn how to move based on a single performance measure that reflects the overall success of our movements. However, movements have many properties, such as their trajectories, speeds and timing of end-points, thus the brain needs to decide which properties of movements should be improved; it needs to solve the credit assignment problem. Currently, little is known about how humans solve credit assignment problems in the context of reinforcement learning. Here we tested how human participants solve such problems during a trajectory-learning task. Without an explicitly-defined target movement, participants made hand reaches and received monetary rewards as feedback on a trial-by-trial basis. The curvature and direction of the attempted reach trajectories determined the monetary rewards received in a manner that can be manipulated experimentally. Based on the history of action-reward pairs, participants quickly solved the credit assignment problem and learned the implicit payoff function. A Bayesian credit-assignment model with built-in forgetting accurately predicts their trial-by-trial learning.
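One simple way to picture learning an implicit payoff function from action-reward pairs with forgetting is a recency-weighted quadratic fit. This is an illustrative sketch, not the Bayesian credit-assignment model of the abstract; the 1-D action variable, the quadratic form, and the forgetting factor are all assumptions for demonstration.

```python
import numpy as np

def estimate_payoff_peak(actions, rewards, forget=0.9):
    """Estimate the reward-maximizing action: fit a quadratic payoff
    r ~ a*x**2 + b*x + c by weighted least squares with exponential
    forgetting (recent trials weigh more), then return the vertex."""
    x = np.asarray(actions, dtype=float)
    r = np.asarray(rewards, dtype=float)
    w = forget ** np.arange(len(x) - 1, -1, -1)   # recency weights
    X = np.column_stack([x**2, x, np.ones_like(x)])
    a, b, _ = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * r))
    return -b / (2.0 * a)
```

With a stationary payoff, the forgetting factor has no cost; if the experimenter shifts the payoff mid-session, the same estimator tracks the new peak within a few trials.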
Most approaches to understanding human motor control assume that people maximize their rewards while minimizing their motor efforts. This tradeoff between potential rewards and a sense of effort is quantified with a cost function. While the rewards can change across tasks, our sense of effort is assumed to remain constant and characterize how the nervous system organizes motor control. As such, when a proposed cost function compares well with data it is argued to be the underlying cause of a motor behavior, and not simply a fit to the data. Implicit in this proposition is the assumption that this cost function can then predict new motor behaviors. Here we examined this idea and asked whether an inferred cost function in one setting could explain subjects' behavior in settings that differed dynamically but had identical rewards. We found that the pattern of behavior observed across settings was similar to our predictions of optimal behavior. However, we could not conclude that this behavior was consistent with a conserved sense of effort. These results suggest that the standard forms for quantifying cost may not be sufficient to accurately examine whether or not human motor behavior abides by optimality principles.
Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relates to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis.
Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power-bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. The use of embedded local recording and wireless data transmission would only be viable, however, given major improvements to the power efficiency of microelectronic devices.
To be effective, a prescribed prosthetic device must match the functional requirements and capabilities of each patient. These capabilities are usually assessed by a clinician and reported by the Medicare K-level designation of mobility. However, it is not clear how the K-level designation objectively relates to the use of prostheses outside of a clinical environment. Here, we quantify participant activity using mobile phones and relate activity measured in the real world to the assigned K-levels. We observe a correlation between K-level and the proportion of moderate to high activity over the course of a week. This relationship suggests that accelerometry-based technologies such as mobile phones can be used to evaluate real world activity for mobility assessment. Quantifying everyday activity promises to improve assessment of real world prosthesis use, leading to a better matching of prostheses to individuals and enabling better evaluations of future prosthetic devices.
Experiments in systems neuroscience can be seen as consisting of three steps: (1) selecting the signals we are interested in, (2) probing the system with carefully chosen stimuli, and (3) getting data out of the brain. Here I discuss how emerging techniques in molecular biology are starting to improve these three steps. To estimate its future impact on experimental neuroscience, I will stress the analogy of ongoing progress with that of microprocessor production techniques. These techniques have allowed computers to simplify countless problems; because they are easier to use than mechanical timers, they are even built into toasters. Molecular biology may advance even faster than computer speeds and has made immense progress in understanding and designing molecules. These advancements may in turn produce impressive improvements to each of the three steps, ultimately shifting the bottleneck from obtaining data to interpreting it.
The measurement of body and limb posture is important to many clinical and research studies. Current approaches either directly measure posture (e.g., using optical or magnetic methods) or more indirectly measure it by integrating changes over time (e.g., using gyroscopes and/or accelerometers). Here, we introduce a way of estimating posture from movements without requiring integration over time and the resulting complications. We show how the almost imperceptible tremor of the hand is affected by posture in an intuitive way and therefore can be used to estimate the posture of the arm. We recorded postures and tremor of the arms of volunteers. By using only the minor axis in the covariance of hand tremor, we could estimate the angle of the forearm with a standard deviation of about 4° when the subject's elbow is resting on a table and about 10° when it is off the table. This technique can also be applied as a post hoc analysis on other hand-position data sets to extract posture. This new method allows the estimation of body posture from tremor, is complementary to other techniques, and so can become a useful tool for future research and clinical applications.
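The minor-axis computation described above can be sketched in a few lines: form the covariance of the 2-D tremor signal and take the eigenvector with the smallest eigenvalue. This is a minimal illustration of the geometric idea, not the study's full estimation pipeline; the function name and degree convention are assumptions.

```python
import numpy as np

def minor_axis_angle(tremor_xy):
    """Angle (degrees, mod 180) of the minor axis of 2-D hand tremor:
    the eigenvector of the tremor covariance with the smallest variance."""
    cov = np.cov(np.asarray(tremor_xy, dtype=float).T)
    _, evecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    minor = evecs[:, 0]               # direction of least tremor
    return np.degrees(np.arctan2(minor[1], minor[0])) % 180.0
```

If tremor is elongated along the forearm direction, the minor axis lies perpendicular to it, so the forearm angle can be read off directly (up to a known 90° offset).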
We tend to look at targets prior to moving our hand towards them. This means that our eye movements contain information about the movements we are planning to make. This information has been shown to be useful in the context of decoding of movement intent from neural signals. However, this is complicated by the fact that occasionally, subjects may want to move towards targets that have not been foveated, or may be distracted and temporarily look away from the intended target. We have previously accounted for this uncertainty using a probabilistic mixture over targets, where the gaze information is used to identify target candidates. Here we evaluate how the accuracy of prior target information influences decoding accuracy. We also consider a mixture model where we assume that the target may be foveated or, alternatively, that the target may not be foveated. We found that errors due to inaccurate target information were reduced by including a generic model representing movements to all targets into the mixture.
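The mixture idea can be sketched as a posterior over candidate targets: a gaze-informed prior is blended with a uniform "generic" component that covers non-foveated targets, then weighted by the likelihood of the control signals. This is a schematic illustration under assumed discrete targets, not the decoder used in the study; names and the mixing weight are illustrative.

```python
import numpy as np

def mixture_decode(neural_lik, gaze_prior, p_generic=0.2):
    """Posterior over candidate targets: mix a gaze-informed prior with
    a uniform 'generic' component (covering non-foveated targets), then
    weight by the likelihood of the neural control signals."""
    gaze_prior = np.asarray(gaze_prior, dtype=float)
    n = len(gaze_prior)
    prior = (1.0 - p_generic) * gaze_prior + p_generic / n
    post = prior * np.asarray(neural_lik, dtype=float)
    return post / post.sum()
```

When gaze points at the wrong target, a pure gaze prior can override strong neural evidence; the generic component keeps the posterior recoverable from neural signals alone.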
Recent studies suggest that motor adaptation is the result of multiple, perhaps linear processes each with distinct time scales. While these models are consistent with some motor phenomena, they can neither explain the relatively fast re-adaptation after a long washout period, nor savings on a subsequent day. Here we examined if these effects can be explained if we assume that the CNS stores and retrieves movement parameters based on their possible relevance. We formalize this idea with a model that infers not only the sources of potential motor errors, but also their relevance to the current motor circumstances. In our model adaptation is the process of re-estimating parameters that represent the body and the world. The likelihood of a world parameter being relevant is then based on the mismatch between an observed movement and that predicted when not compensating for the estimated world disturbance. As such, adapting to large motor errors in a laboratory setting should alert subjects that disturbances are being imposed on them, even after motor performance has returned to baseline. Estimates of this external disturbance should be relevant both now and in future laboratory settings. Estimated properties of our bodies on the other hand should always be relevant. Our model demonstrates savings, interference, spontaneous rebound and differences between adaptation to sudden and gradual disturbances. We suggest that many issues concerning savings and interference can be understood when adaptation is conditioned on the relevance of parameters.
In systems neuroscience, neural activity that represents movements or sensory stimuli is often characterized by spatial tuning curves that may change in response to training, attention, altered mechanics, or the passage of time. A vital step in determining whether tuning curves change is accounting for estimation uncertainty due to measurement noise. In this study, we address the issue of tuning curve stability using methods that take uncertainty directly into account. We analyze data recorded from neurons in primary motor cortex using chronically implanted, multielectrode arrays in four monkeys performing center-out reaching. With the use of simulations, we demonstrate that under typical experimental conditions, the effect of neuronal noise on estimated preferred direction can be quite large and is affected by both the amount of data and the modulation depth of the neurons. In experimental data, we find that after taking uncertainty into account using bootstrapping techniques, the majority of neurons appears to be very stable on a timescale of minutes to hours. Lastly, we introduce adaptive filtering methods to explicitly model dynamic tuning curves. In contrast to several previous findings suggesting that tuning curves may be in constant flux, we conclude that the neural representation of limb movement is, on average, quite stable and that impressions to the contrary may be largely the result of measurement noise.
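A minimal version of the bootstrap analysis described above estimates a preferred direction (PD) from cosine tuning and resamples trials to quantify its uncertainty. This sketch assumes a rate-weighted circular mean as the PD estimator and is not the study's exact procedure; function names and defaults are illustrative.

```python
import numpy as np

def preferred_direction(dirs, rates):
    """PD of a cosine-tuned cell: angle of the rate-weighted circular
    mean of the movement directions (radians)."""
    return np.angle(np.sum(rates * np.exp(1j * dirs)))

def pd_bootstrap(dirs, rates, n_boot=1000, seed=0):
    """Bootstrap the PD estimate: returns the point estimate and the
    circular std of the bootstrap deviations around it."""
    rng = np.random.default_rng(seed)
    ref = preferred_direction(dirs, rates)
    idx = rng.integers(0, len(dirs), size=(n_boot, len(dirs)))
    pds = np.array([preferred_direction(dirs[i], rates[i]) for i in idx])
    dev = np.angle(np.exp(1j * (pds - ref)))   # wrap to (-pi, pi]
    return ref, dev.std()
```

An apparent PD shift between sessions is only meaningful if it exceeds the bootstrap spread, which grows with neuronal noise and shrinks with trial count and modulation depth.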
Experiments on humans and other animals have shown that uncertainty due to unreliable or incomplete information affects behavior. Recent studies have formalized uncertainty and asked which behaviors would minimize its effect. This formalization results in a wide range of Bayesian models that derive from assumptions about the world, and it often seems unclear how these models relate to one another. In this review, we use the concept of graphical models to analyze differences and commonalities across Bayesian approaches to the modeling of behavioral and neural data. We review behavioral and neural data associated with each type of Bayesian model and explain how these models can be related. We finish with an overview of different theories that propose possible ways in which the brain can represent uncertainty.
Many perceptual cue combination studies have shown that humans can integrate sensory information across modalities as well as within a modality in a manner that is close to optimal. While the limits of sensory cue integration have been extensively studied in the context of perceptual decision tasks, the evidence obtained in the context of motor decisions provides a less consistent picture. Here, we studied the combination of visual and haptic information in the context of human arm movement control. We implemented a pointing task in which human subjects pointed at an invisible unknown target position whose vertical position varied randomly across trials. In each trial, we presented a haptic and a visual cue that provided noisy information about the target position half-way through the reach. We measured pointing accuracy as a function of haptic and visual cue onset and compared pointing performance to the predictions of a multisensory decision model. Our model accounts for pointing performance by computing the maximum a posteriori estimate, assuming minimum-variance combination of uncertain sensory cues. Synchronicity of cue onset has previously been demonstrated to facilitate the integration of sensory information. We tested this in trials in which visual and haptic information was presented with temporal disparity. We found that for our sensorimotor task, temporal disparity between the visual and haptic cues had no effect. Sensorimotor learning appears to use all available information and to apply the same near-optimal rules for cue combination that are used by perception.
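The minimum-variance combination rule at the heart of such models is simple to state. In this minimal sketch with made-up numbers, a flat prior makes the maximum a posteriori estimate reduce to inverse-variance weighting of the two cues.

```python
# Minimal sketch with hypothetical numbers: minimum-variance (flat-prior MAP)
# combination of a visual and a haptic cue about target position.
def combine(mu_v, var_v, mu_h, var_h):
    """Weight each cue by its reliability (inverse variance)."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_h)
    mu = w_v * mu_v + (1 - w_v) * mu_h
    var = 1 / (1 / var_v + 1 / var_h)   # always below either single-cue variance
    return mu, var

# a reliable visual cue at 10 and a noisier haptic cue at 14 (arbitrary units)
mu, var = combine(mu_v=10.0, var_v=1.0, mu_h=14.0, var_h=3.0)
```

With these numbers the combined estimate is 11.0, pulled three times as strongly toward the more reliable visual cue, and the combined variance (0.75) is lower than that of either cue alone.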
Over the last five decades, progress in neural recording techniques has allowed the number of simultaneously recorded neurons to double approximately every 7 years, mimicking Moore's law. Such exponential growth motivates us to ask how data analysis techniques are affected by progressively larger numbers of recorded neurons. Traditionally, neurons are analyzed independently on the basis of their tuning to stimuli or movement. Although tuning curve approaches are unaffected by growing numbers of simultaneously recorded neurons, newly developed techniques that analyze interactions between neurons become more accurate and more complex as the number of recorded neurons increases. Emerging data analysis techniques should consider both the computational costs and the potential for more accurate models associated with this exponential growth of the number of recorded neurons.
Trust and reciprocity facilitate cooperation and are relevant to virtually all human interactions. They are typically studied using trust games: one subject gives (entrusts) money to another subject, who may return some of the proceeds (reciprocate). Currently, however, it is unclear whether trust and reciprocity in monetary transactions are similar in other settings, such as physical effort. Trust and reciprocity of physical effort are important, as many everyday decisions imply an exchange of physical effort, and such exchange is central to labor relations. Here we studied a trust game based on physical effort and compared the results with those of a computationally equivalent monetary trust game. We found no significant difference between the effort and money conditions in either the amount trusted or the quantity reciprocated. Moreover, there is a high positive correlation in subjects' behavior across conditions. This suggests that trust and reciprocity may be character traits: subjects who are trustful/trustworthy in monetary settings behave similarly during exchanges of physical effort. Our results validate the use of trust games to study exchanges in physical effort and to characterize inter-subject differences in trust and reciprocity, and also suggest a new behavioral paradigm to study these differences.
Recent studies in motor control have shown that visuomotor rotations for reaching have narrow generalization functions: what we learn during movements in one direction only affects subsequent movements in nearby directions. Here we wanted to measure the generalization functions for wrist movements. To do so, we had 7 subjects perform an experiment holding a mobile phone in their dominant hand. The mobile phone's built-in acceleration sensor provided a convenient way to measure wrist movements and to run the behavioral protocol. Subjects moved a cursor on the screen by tilting the phone. Movements on the screen toward the training target were rotated, and we then measured how learning of the rotation in the training direction affected subsequent movements in other directions. We find that generalization is local and similar to generalization patterns of visuomotor rotation for reaching.
The authors argue that "true" models that aim at faithfully mimicking or reproducing every property of the sensorimotor system cannot be compact as they need many free parameters. Consequently, most scientists in motor control use what are called "false" models--models that derive from well-defined approximations. The authors conceptualize these models as a priori limited in scope and approximate. As such, they argue that a quantitative characterization of the deviations between the system and the model, more than the mere act of falsifying, allows scientists to make progress in understanding the sensorimotor system. Ultimately, this process should result in models that explain as much data variance as possible. The authors conclude by arguing that progress in that direction could strongly benefit from databases of experimental results and collections of models.
We constantly make small errors during movement and use them to adapt our future movements. Movement experiments often probe this error-driven learning by perturbing movements and analyzing the after-effects. Past studies have applied perturbations of varying nature such as visual disturbances, position- or velocity-dependent forces and modified inertia properties of the limb. However, little is known about how the specific nature of a perturbation influences subsequent movements. For a single perturbation trial, the nature of a perturbation may be highly uncertain to the nervous system, given that it receives only noisy information. One hypothesis is that the nervous system can use this rough estimate to partially correct for the perturbation on the next trial. Alternatively, the nervous system could ignore uncertain information about the nature of the perturbation and resort to a nonspecific adaptation. To study how the brain estimates and responds to incomplete sensory information, we test these two hypotheses using a trial-by-trial adaptation experiment. On each trial, the nature of the perturbation was chosen from six distinct types, including a visuomotor rotation and different force fields. We observed that corrective forces aiming to oppose the perturbation in the following trial were independent of the nature of the perturbation. Our results suggest that the nervous system uses a nonspecific strategy when it has high uncertainty about the nature of perturbations during trial-by-trial learning.
Neurons in the sensory system exhibit changes in excitability that unfold over many time scales. These fluctuations produce noise and could potentially lead to perceptual errors. However, to prevent such errors, postsynaptic neurons and synapses can adapt and counteract changes in the excitability of presynaptic neurons. Here we ask how neurons could optimally adapt to minimize the influence of changing presynaptic neural properties on their outputs. The resulting model, based on Bayesian inference, explains a range of physiological results from experiments that have measured the overall properties and detailed time course of sensory tuning curve adaptation in the early visual cortex. We show how several experimentally measured short-term plasticity phenomena can be understood as near-optimal solutions to this adaptation problem. This framework provides a link between high-level computational problems, the properties of cortical neurons, and synaptic physiology.
Our nervous system continuously combines new information from our senses with information it has acquired throughout life. Numerous studies have found that human subjects manage this by integrating their observations with their previous experience (priors) in a way that is close to the statistical optimum. However, little is known about the way the nervous system acquires or learns priors. Here we present results from experiments in which the underlying distribution of target locations in an estimation task was switched, manipulating the prior subjects should use. Our experimental design allowed us to measure a subject's evolving prior while they learned. We confirm that through extensive practice subjects learn the correct prior for the task. We found that subjects can rapidly learn the mean of a new prior while the variance is learned more slowly and with a variable learning rate. In addition, we found that a Bayesian inference model could predict the time course of the observed learning while offering an intuitive explanation for the findings. The evidence suggests the nervous system continuously updates its priors to enable efficient behavior.
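One statistical intuition behind the mean-before-variance result (a toy sketch, not the paper's model): with the hypothetical prior parameters below, a running estimate of the mean settles to a much smaller relative error than a running estimate of the variance at every sample size.

```python
import random, statistics

random.seed(1)

# Hypothetical new prior over target locations; all numbers are illustrative.
true_mu, true_sd = 2.0, 0.5
samples = [random.gauss(true_mu, true_sd) for _ in range(500)]

# Track the relative error of running mean and variance estimates as the
# learner observes more and more targets drawn from the new prior.
mean_err, var_err = [], []
for n in range(2, len(samples) + 1):
    seen = samples[:n]
    mean_err.append(abs(statistics.fmean(seen) - true_mu) / true_mu)
    var_err.append(abs(statistics.variance(seen) - true_sd ** 2) / true_sd ** 2)
```

An ideal observer tracking both quantities would therefore also appear to "learn the mean faster," even before any neural constraints are invoked.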
Humans can adapt their motor behaviors to deal with ongoing changes. To achieve this, the nervous system needs to estimate central variables for our movement based on past knowledge and new feedback, both of which are uncertain. In the Bayesian framework, rates of adaptation reflect how noisy feedback is in comparison to the uncertainty of the state estimate. The predictions of Bayesian models are intuitive: the nervous system should adapt more slowly when sensory feedback is noisier and faster when its state estimate is more uncertain. Here we want to quantitatively understand how uncertainty in these two factors affects motor adaptation. In a hand reaching experiment, we measured trial-by-trial adaptation to a randomly changing visual perturbation to characterize the way the nervous system handles uncertainty in state estimation and feedback. We found both qualitative predictions of Bayesian models confirmed. Our study provides evidence that the nervous system represents and uses uncertainty in its state estimate and feedback during motor adaptation.
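The Kalman-filter reading of trial-by-trial adaptation can be sketched for a scalar state (all variances below are hypothetical): the learning rate is the Kalman gain, which is large when the state estimate is uncertain and small when feedback is noisy.

```python
# Scalar Kalman sketch of trial-by-trial adaptation; parameters are made up.
def adapt(perturbations, state_var=1.0, feedback_var=0.5, drift_var=0.05):
    estimate, P = 0.0, state_var
    estimates = []
    for p in perturbations:
        P += drift_var                    # prediction: uncertainty grows
        K = P / (P + feedback_var)        # Kalman gain = adaptation rate
        estimate += K * (p - estimate)    # correct toward the observed error
        P *= (1 - K)
        estimates.append(estimate)
    return estimates

noisy = adapt([1.0] * 20, feedback_var=2.0)    # noisy feedback -> slow learning
clean = adapt([1.0] * 20, feedback_var=0.1)    # reliable feedback -> fast learning
```

With reliable feedback the estimate jumps most of the way to the perturbation on the first trial; with noisy feedback the same perturbation is absorbed over many trials.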
Plasticity is a crucial component of normal brain function and a critical mechanism for recovery from injury. In vitro, associative pairing of presynaptic spiking and stimulus-induced postsynaptic depolarization causes changes in the synaptic efficacy of the presynaptic neuron, when activated by extrinsic stimulation. In vivo, such paradigms can alter the responses of whole groups of neurons to stimulation. Here, we used in vivo spike-triggered stimulation to drive plastic changes in rat forelimb sensorimotor cortex, which we monitored using a statistical measure of functional connectivity inferred from the spiking statistics of the neurons during normal, spontaneous behavior. These induced plastic changes in inferred functional connectivity depended on the latency between trigger spike and stimulation, and appear to reflect a robust reorganization of the network. Such targeted connectivity changes might provide a tool for rerouting the flow of information through a network, with implications for both rehabilitation and brain-machine interface applications.
A central objective in neuroscience is to understand how neurons interact. Such functional interactions have been estimated using signals recorded with different techniques and, consequently, different temporal resolutions. For example, spike data often have sub-millisecond resolution, while some imaging techniques may have a resolution of many seconds. Here we use multi-electrode spike recordings to ask how similar functional connectivity inferred from slower timescale signals is to that inferred from fast timescale signals. We find that functional connectivity is relatively robust to low-pass filtering--dropping by about 10% when low-pass filtering at 10 Hz and by about 50% when low-pass filtering down to about 1 Hz--and that estimates are robust to high levels of additive noise. Moreover, there is a weak correlation for physiological filters such as hemodynamic or Ca2+ impulse responses and filters based on local field potentials. We address the origin of these correlations using simulation techniques and find evidence that the similarity between functional connectivity estimated across timescales is due to processes that do not depend on fast pair-wise interactions alone. Rather, it appears that connectivity on multiple timescales, or common input related to stimuli or movement, drives the observed correlations. Despite this qualification, our results suggest that techniques with intermediate temporal resolution may yield good estimates of the functional connections between individual neurons.
To stabilize our position in space we use visual information as well as non-visual physical motion cues. However, visual cues can be ambiguous: visually perceived motion may be caused by self-movement, movement of the environment, or both. The nervous system must combine the ambiguous visual cues with noisy physical motion cues to resolve this ambiguity and control our body posture. Here we have developed a Bayesian model that formalizes how the nervous system could solve this problem. In this model, the nervous system combines the sensory cues to estimate the movement of the body. We analytically demonstrate that, as long as visual stimulation is fast in comparison to the uncertainty in our perception of body movement, the optimal strategy is to weight visually perceived movement velocities proportional to a power law. We find that this model accounts for the nonlinear influence of experimentally induced visual motion on human postural behavior both in our data and in previously published results.
When we stand upright, we integrate cues from multiple senses, such as vision and proprioception, to maintain and regulate our vertical posture. How these cues are combined has been the focus of a range of studies. These studies generally measured how subjects deviate from standing upright when confronted with a moving visual stimulus displayed in a virtual environment. Previous research had shown that uncertainty is central in such cue combination problems. Here we wanted to understand, quantitatively, how visual flow fields and uncertainty about them affect human posture. To do so, we combined experimental methods from perceptual psychophysics with methods from motor control studies. We used a two-alternative forced-choice paradigm to measure uncertainty as a function of the magnitude of a random-dot flow field and stimulus coherence. We subsequently measured movement amplitude as a function of visual stimulus parameters. In line with previous research, we find that sensorimotor behavior depends nonlinearly on the stimulus amplitude and, importantly, is affected by uncertainty. We find that this nonlinearity and uncertainty dependence are accurately predicted by standard Bayesian cue combination. Notably, a Weber's law in which visual uncertainty depends on stimulus amplitude is enough to explain the nonlinear behavior.
A central theme of systems neuroscience is to characterize the tuning of neural responses to sensory stimuli or the production of movement. Statistically, we often want to estimate the parameters of the tuning curve, such as preferred direction, as well as the associated degree of uncertainty, characterized by error bars. Here we present a new sampling-based, Bayesian method that allows the estimation of tuning-curve parameters, the estimation of error bars, and hypothesis testing. This method also provides a useful way of visualizing which tuning curves are compatible with the recorded data. We demonstrate the utility of this approach using recordings of orientation and direction tuning in primary visual cortex, direction of motion tuning in primary motor cortex, and simulated data.
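A minimal sketch of the sampling-based idea, using a cosine tuning curve, a Gaussian likelihood, and a Metropolis sampler as illustrative stand-ins (the model, priors, starting point, and step sizes below are assumptions of this sketch, not the paper's exact method): the spread of the posterior samples directly yields error bars on the preferred direction.

```python
import math, random

random.seed(0)

# Synthetic data: cosine-tuned responses with true preferred direction 135 deg.
angles = [i * 45.0 for i in range(8)] * 10           # 8 directions, 10 repeats
rates = [8 + 4 * math.cos(math.radians(a - 135)) + random.gauss(0, 2)
         for a in angles]

def log_likelihood(params):
    b, m, pd = params        # baseline, modulation depth, preferred direction
    return -sum((r - (b + m * math.cos(math.radians(a - pd)))) ** 2
                for a, r in zip(angles, rates)) / (2 * 2.0 ** 2)

def metropolis(n_samples, step=(0.3, 0.3, 5.0)):
    theta = [8.0, 4.0, 90.0]
    ll = log_likelihood(theta)
    out = []
    for _ in range(n_samples):
        prop = [t + random.gauss(0, s) for t, s in zip(theta, step)]
        ll_prop = log_likelihood(prop)
        if math.log(random.random()) < ll_prop - ll:   # accept/reject
            theta, ll = prop, ll_prop
        out.append(list(theta))
    return out

samples = metropolis(5000)[1000:]                    # drop burn-in
pd_samples = sorted(s[2] for s in samples)
pd_lo, pd_hi = pd_samples[100], pd_samples[-100]     # ~95% credible interval
```

Plotting tuning curves drawn from `samples` on top of the data gives the visualization described above: the family of curves compatible with the recording.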
A large number of experiments have asked to what degree human reaching movements can be understood as being close to optimal in a statistical sense. However, little is known about whether these principles are relevant for other classes of movements. Here we analyzed movement in a task that is similar to surfing or snowboarding. Human subjects stand on a force plate that measures their center of pressure. This center of pressure affects the acceleration of a cursor that is displayed in a noisy fashion (as a cloud of dots) on a projection screen while the subject is incentivized to keep the cursor close to a fixed position. We find that salient aspects of observed behavior are well-described by optimal control models where a Bayesian estimation model (Kalman filter) is combined with an optimal controller (either a Linear-Quadratic-Regulator or Bang-bang controller). We find evidence that subjects integrate information over time taking into account uncertainty. However, behavior in this continuous steering task appears to be a highly non-linear function of the visual feedback. While the nervous system appears to implement Bayes-like mechanisms for a full-body, dynamic task, it may additionally take into account the specific costs and constraints of the task.
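The estimator-plus-controller structure can be sketched for a scalar system (a toy LQG loop with hypothetical parameters; the paper's task and fitted models are richer): a Kalman filter estimates the cursor state from noisy visual feedback, and a linear-quadratic regulator acts on that estimate.

```python
import random

random.seed(0)

# Toy LQG loop for a scalar "cursor"; all parameters below are hypothetical.
A, B = 1.0, 1.0                 # scalar dynamics: x' = A x + B u + noise
Q, R = 1.0, 0.5                 # state and control cost weights
proc_var, obs_var = 0.01, 0.25  # process and observation noise variances

# solve the scalar discrete-time Riccati equation by fixed-point iteration
P = Q
for _ in range(1000):
    P = Q + A * P * A - (A * P * B) ** 2 / (R + B * P * B)
L = (B * P * A) / (R + B * P * B)      # optimal feedback gain: u = -L * x_hat

x, x_hat, S = 2.0, 0.0, 1.0            # true state, estimate, estimate variance
for _ in range(50):
    u = -L * x_hat                     # controller acts on the estimate
    x = A * x + B * u + random.gauss(0, proc_var ** 0.5)
    y = x + random.gauss(0, obs_var ** 0.5)              # noisy visual feedback
    x_hat, S = A * x_hat + B * u, A * S * A + proc_var   # Kalman predict
    K = S / (S + obs_var)                                # Kalman gain
    x_hat, S = x_hat + K * (y - x_hat), (1 - K) * S      # Kalman correct
```

By the separation principle, estimation and control can be designed independently, which is why fitting a Kalman filter and an LQR (or swapping in a bang-bang controller) are natural modeling choices for such tasks.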
Humans use their arms to engage in a wide variety of motor tasks during everyday life. However, little is known about the statistics of these natural arm movements. Studies of the sensory system have shown that the statistics of sensory inputs are key to determining sensory processing. We hypothesized that the statistics of natural everyday movements may, in a similar way, influence motor performance as measured in laboratory-based tasks. We developed a portable motion-tracking system that could be worn by subjects as they went about their daily routine outside of a laboratory setting. We found that the well-documented symmetry bias is reflected in the relative incidence of movements made during everyday tasks. Specifically, symmetric and antisymmetric movements are predominant at low frequencies, whereas only symmetric movements are predominant at high frequencies. Moreover, the statistics of natural movements, that is, their relative incidence, correlated with subjects performance on a laboratory-based phase-tracking task. These results provide a link between natural movement statistics and motor performance and confirm that the symmetry bias documented in laboratory studies is a natural feature of human movement.
When we learn how to throw darts we adjust how we throw based on where the darts stick. Much of skill learning is computationally similar in that we learn using feedback obtained after the completion of individual actions. We can formalize such tasks as a search problem; among the set of all possible actions, find the action that leads to the highest reward. In such cases our actions have two objectives: we want to best utilize what we already know (exploitation), but we also want to learn to be more successful in the future (exploration). Here we tested how participants learn movement trajectories where feedback is provided as a monetary reward that depends on the chosen trajectory. We mathematically derived the optimal search policy for our experiment using decision theory. The search behavior of participants is well predicted by an ideal searcher model that optimally combines exploration and exploitation.
During motor adaptation the nervous system constantly uses error information to improve future movements. Today's mainstream models simply assume that the nervous system adapts linearly and proportionally to errors. However, not all movement errors are relevant to our own action. The environment may transiently disturb the movement production--for example, a gust of wind blows the tennis ball away from its intended trajectory. Clearly, the nervous system should not adapt its motor plan in the subsequent tennis strokes based on this irrelevant movement error. We hypothesize that the nervous system estimates the relevance of each observed error and adapts strongly only to relevant errors. Here we present a Bayesian treatment of this problem. The model calculates how likely it is that an error is relevant to the motor plant and derives an ideal adaptation strategy that leads to the most precise movements. This model predicts that adaptation should be a nonlinear function of the size of an error. In reaching experiments we found strong evidence for the predicted nonlinear strategy. The model also explains published data on saccadic gain adaptation, adaptation to visuomotor rotations, and force perturbations. Our study suggests that the nervous system constantly and effortlessly estimates the relevance of observed movement errors for successful motor adaptation.
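The relevance-gating computation can be sketched with a two-cause mixture (the Gaussian widths and prior probability below are illustrative, not fitted values): an observed error is attributed either to the motor plant or to an irrelevant external disturbance, and adaptation scales with the posterior probability of relevance.

```python
import math

def gauss(e, sd):
    return math.exp(-e * e / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

# Illustrative two-cause model: errors come from the plant (narrow Gaussian)
# or from an irrelevant disturbance (broad Gaussian).
def adaptation(error, sd_rel=1.0, sd_irr=8.0, p_rel=0.5):
    """Adapt in proportion to the posterior probability the error is relevant."""
    rel = p_rel * gauss(error, sd_rel)
    irr = (1 - p_rel) * gauss(error, sd_irr)
    return error * rel / (rel + irr)

# small errors are corrected almost fully; large errors are mostly ignored,
# producing the predicted nonlinear adaptation curve
responses = [adaptation(e) for e in (0.5, 1.0, 4.0, 10.0)]
```

The fractional correction declines as errors grow, which is the signature nonlinearity the abstract describes.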
Current multielectrode techniques enable the simultaneous recording of spikes from hundreds of neurons. To study neural plasticity and network structure it is desirable to infer the underlying functional connectivity between the recorded neurons. Functional connectivity is defined by a large number of parameters, which characterize how each neuron influences the other neurons. A Bayesian approach that combines information from the recorded spikes (likelihood) with prior beliefs about functional connectivity (prior) can improve inference of these parameters and reduce overfitting. Recent studies have used likelihood functions based on the statistics of point-processes and a prior that captures the sparseness of neural connections. Here we include a prior that captures the empirical finding that interactions tend to vary smoothly in time. We show that this method can successfully infer connectivity patterns in simulated data and apply the algorithm to spike data recorded from primary motor (M1) and premotor (PMd) cortices of a monkey. Finally, we present a new approach to studying structure in inferred connections based on a Bayesian clustering algorithm. Groups of neurons in M1 and PMd show common patterns of input and output that may correspond to functional assemblies.
For rehabilitative devices to restore functional movement to paralyzed individuals, user intent must be determined from signals that remain under voluntary control. Tracking eye movements is a natural way to learn about an intended reach target and, when combined with just a small set of electromyograms (EMGs) in a probabilistic mixture model, can reliably generate accurate trajectories even when the target information is uncertain. To experimentally assess the effectiveness of our algorithm in closed-loop control, we developed a robotic system to simulate a reaching neuroprosthetic. Incorporating target information by tracking subjects' gaze greatly improved performance when the set of EMGs was most limited. In addition, we found that online performance was better than predicted by the offline accuracy of the training data. By enhancing the trajectory model with target information, the decoder relied less on neural control signals, reducing the burden on the user.
Due to multiple factors such as fatigue, muscle strengthening, and neural plasticity, the responsiveness of the motor apparatus to neural commands changes over time. To enable precise movements the nervous system must adapt to compensate for these changes. Recent models of motor adaptation derive from assumptions about the way the motor apparatus changes. Characterizing these changes is difficult because motor adaptation happens at the same time, masking most of the effects of ongoing changes. Here, we analyze eye movements of monkeys with lesions to the posterior cerebellar vermis that impair adaptation. Their fluctuations better reveal the underlying changes of the motor system over time. When these measured, unadapted changes are used to derive optimal motor adaptation rules the prediction precision significantly improves. Among three models that similarly fit single-day adaptation results, the model that also matches the temporal correlations of the non-adapting saccades most accurately predicts multiple day adaptation. Saccadic gain adaptation is well matched to the natural statistics of fluctuations of the oculomotor plant.
How interactions between neurons relate to tuned neural responses is a longstanding question in systems neuroscience. Here we use statistical modeling and simultaneous multi-electrode recordings to explore the relationship between these interactions and tuning curves in six different brain areas. We find that, in most cases, functional interactions between neurons provide an explanation of spiking that complements and, in some cases, surpasses the influence of canonical tuning curves. Modeling functional interactions improves both encoding and decoding accuracy by accounting for noise correlations and features of the external world that tuning curves fail to capture. In cortex, modeling coupling alone allows spikes to be predicted more accurately than tuning curve models based on external variables. These results suggest that statistical models of functional interactions between even relatively small numbers of neurons may provide a useful framework for examining neural coding.
Mobile phones with built-in accelerometers promise a convenient, objective way to quantify everyday movements and classify those movements into activities. Using accelerometer data, we estimate the following activities of 18 healthy subjects and eight patients with Parkinson's disease: walking, standing, sitting, holding, or not wearing the phone. We use standard machine learning classifiers (support vector machines, regularized logistic regression) to automatically select, weigh, and combine a large set of standard features for time series analysis. Using cross validation across all samples, we are able to correctly identify 96.1% of the activities of healthy subjects and 92.2% of the activities of Parkinson's patients. However, when applying the classification parameters derived from the set of healthy subjects to Parkinson's patients, the percentage correct drops to 60.3%, due to different characteristics of movement. For a fairer comparison across populations, we also applied subject-wise cross validation, identifying the activities of healthy subjects with 86.0% accuracy and those of patients with 75.1% accuracy. We discuss the key differences between these populations and why algorithms designed for and trained with healthy subject data are not reliable for activity recognition in populations with motor disabilities.
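A toy version of the pipeline: hand-crafted time-series features plus a simple classifier. The nearest-centroid rule here is a stand-in for the SVM / regularized logistic regression actually used, and all accelerometer traces are synthetic.

```python
import math, random, statistics

random.seed(0)

# Simple time-series features: mean, standard deviation, zero-crossing rate.
def features(trace):
    zero_crossings = sum(1 for a, b in zip(trace, trace[1:]) if a * b < 0)
    return (statistics.fmean(trace), statistics.stdev(trace),
            zero_crossings / len(trace))

def make_trace(freq, amp, n=200):       # synthetic acceleration signal
    return [amp * math.sin(2 * math.pi * freq * t / n) + random.gauss(0, 0.05)
            for t in range(n)]

# "Training": one feature centroid per activity (illustrative classes only).
centroids = {"walking": features(make_trace(freq=10, amp=1.0)),
             "standing": features(make_trace(freq=10, amp=0.02))}

def classify(trace):
    f = features(trace)
    return min(centroids,
               key=lambda k: sum((a - b) ** 2 for a, b in zip(f, centroids[k])))
```

The population gap reported above corresponds to training `centroids` on one group and applying `classify` to another: if the feature distributions shift, as they do with parkinsonian movement, accuracy drops.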
Studies of motor generalization usually perturb hand reaches by distorting visual feedback with virtual reality or by applying forces with a robotic manipulandum. Whereas such perturbations are useful for studying how the central nervous system adapts and generalizes to novel dynamics, they are rarely encountered in daily life. The most common perturbations that we experience are changes in the weights of objects that we hold. Here, we use a center-out, free-reaching task, in which we can manipulate the weight of a participant's hand, to examine adaptation and generalization following naturalistic perturbations. In both trial-by-trial paradigms and block-based paradigms, we find that learning converges rapidly (on a timescale of approximately two trials) and that this learning generalizes mostly to movements in nearby directions with a unimodal pattern. However, contrary to studies using more artificial perturbations, we find that the generalization has a strong global component. Furthermore, the generalization is enhanced with repeated exposure to the same perturbation. These results suggest that the familiarity of a perturbation is a major factor in movement generalization and that several theories of the neural control of movement, based on perturbations applied by robots or in virtual reality, may need to be extended by incorporating prior influence that is characterized by the familiarity of the perturbation.
Given a noisy sensory world, the nervous system integrates perceptual evidence over time to optimize decision-making. Neurophysiological accumulation of sensory information is well-documented in the animal visual system, but how such mechanisms are instantiated in the human brain remains poorly understood. Here we combined psychophysical techniques, drift-diffusion modeling, and functional magnetic resonance imaging (fMRI) to establish that odor evidence integration in the human olfactory system enhances discrimination on a two-alternative forced-choice task. Model-based measures of fMRI brain activity highlighted a ramp-like increase in orbitofrontal cortex (OFC) that peaked at the time of decision, conforming to predictions derived from an integrator model. Combined behavioral and fMRI data further suggest that decision bounds are not fixed but collapse over time, facilitating choice behavior in the presence of low-quality evidence. These data highlight a key role for the orbitofrontal cortex in resolving sensory uncertainty and provide substantiation for accumulator models of human perceptual decision-making.
High-throughput recording of signals embedded within inaccessible micro-environments is a technological challenge. The ideal recording device would be a nanoscale machine capable of quantitatively transducing a wide range of variables into a molecular recording medium suitable for long-term storage and facile readout in the form of digital data. We have recently proposed such a device, in which cation concentrations modulate the misincorporation rate of a DNA polymerase (DNAP) on a known template, allowing DNA sequences to encode information about the local cation concentration. In this work we quantify the cation sensitivity of DNAP misincorporation rates, making possible the indirect readout of cation concentration by DNA sequencing. Using multiplexed deep sequencing, we quantify the misincorporation properties of two DNA polymerases--Dpo4 and Klenow exo(-)--obtaining the probability and base selectivity of misincorporation at all positions within the template. We find that Dpo4 acts as a DNA recording device for Mn(2+) with a misincorporation rate gain of ~2%/mM. This modulation of misincorporation rate is selective to the template base: the probability of misincorporation on template T by Dpo4 increases >50-fold over the range tested, while the other template bases are affected less strongly. Furthermore, cation concentrations act as scaling factors for misincorporation: on a given template base, Mn(2+) and Mg(2+) change the overall misincorporation rate but do not alter the relative frequencies of incoming misincorporated nucleotides. Characterization of the ion dependence of DNAP misincorporation serves as the first step towards repurposing it as a molecular recording device.
Generalization studies examine the influence of perturbations imposed on one movement onto other movements. The strength of generalization is traditionally interpreted as a reflection of the similarity of the underlying neural representations. Uncertainty fundamentally affects both sensory integration and learning and is at the heart of many theories of neural representation. However, little is known about how uncertainty, resulting from variability in the environment, affects generalization curves. Here we extend standard movement generalization experiments to ask how uncertainty affects the generalization of visuomotor rotations. We find that although uncertainty affects how fast subjects learn, the perturbation generalizes independently of uncertainty.
Uncertainty shapes our perception of the world and the decisions we make. Two aspects of uncertainty are commonly distinguished: uncertainty in previously acquired knowledge (prior) and uncertainty in current sensory information (likelihood). Previous studies have established that humans can take both types of uncertainty into account, often in a way predicted by Bayesian statistics. However, the neural representations underlying these parameters remain poorly understood.
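The prior-likelihood integration referred to above has a standard closed form when both are Gaussian: the posterior mean is a precision-weighted average, so whichever source is less uncertain dominates the estimate. The sketch below illustrates that computation; the specific numbers are illustrative only, not taken from any study.

```python
# Minimal sketch of Gaussian prior-likelihood integration: the posterior
# mean is a precision-weighted average of the two means, and the posterior
# variance is smaller than either input variance. Numbers are illustrative.

def fuse(prior_mean, prior_var, like_mean, like_var):
    """Combine a Gaussian prior and likelihood; return posterior mean, var."""
    w_prior = 1.0 / prior_var   # precision of prior knowledge
    w_like = 1.0 / like_var     # precision of current sensory evidence
    post_var = 1.0 / (w_prior + w_like)
    post_mean = post_var * (w_prior * prior_mean + w_like * like_mean)
    return post_mean, post_var

# A sharp likelihood (low variance) pulls the estimate toward the sensory cue.
mean, var = fuse(prior_mean=0.0, prior_var=4.0, like_mean=2.0, like_var=1.0)
```

Here the likelihood is four times more precise than the prior, so the posterior mean lands much closer to the sensory cue than to the prior mean.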
Fall prevention is a critical component of health care; falls are a common source of injury in the elderly and are associated with significant levels of mortality and morbidity. Automatically detecting falls can allow rapid response to potential emergencies; in addition, knowing the cause or manner of a fall can be beneficial for prevention studies or a more tailored emergency response. The purpose of this study is to demonstrate techniques to not only reliably detect a fall but also to automatically classify the type. We asked 15 subjects to simulate four different types of falls--left and right lateral, forward trips, and backward slips--while wearing mobile phones and previously validated, dedicated accelerometers. Nine subjects also wore the devices for ten days, to provide data for comparison with the simulated falls. We applied five machine learning classifiers to a large time-series feature set to detect falls. Support vector machines and regularized logistic regression were able to identify a fall with 98% accuracy and classify the type of fall with 99% accuracy. This work demonstrates how current machine learning approaches can simplify data collection for prevention in fall-related research as well as improve rapid response to potential injuries due to falls.
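A pipeline like the one above starts by reducing each accelerometer window to a feature vector before any classifier sees it. The sketch below shows that feature-extraction step with a crude magnitude threshold standing in for the learned detector; the study used SVMs and regularized logistic regression on a much larger feature set, and every number here is an illustrative assumption.

```python
import math

# Hedged sketch of a fall-detection pipeline: reduce a tri-axial
# accelerometer window to simple time-series features, then flag a fall.
# A fixed magnitude threshold stands in for the learned classifiers
# (SVM / regularized logistic regression); all numbers are illustrative.

def features(window):
    """window: list of (x, y, z) samples in g. Returns summary features."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return {"peak": max(mags), "mean": mean, "var": var}

def is_fall(window, peak_threshold=2.5):
    """Crude stand-in for a trained classifier: detect a large impact spike."""
    return features(window)["peak"] > peak_threshold

# Quiet standing hovers near 1 g; an impact produces a sharp spike.
standing = [(0.0, 0.0, 1.0)] * 50
impact = standing[:25] + [(0.0, 0.0, 3.5)] + standing[:24]
```

Classifying the *type* of fall (lateral vs. trip vs. slip) would additionally need directional features, e.g. the axis along which the impact spike occurs.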
Neuroprosthetic devices promise to allow paralyzed patients to perform the necessary functions of everyday life. However, to allow patients to use such tools it is necessary to decode their intent from neural signals such as electromyograms (EMGs). Because these signals are noisy, state-of-the-art decoders integrate information over time. One systematic way of doing this is by taking into account the natural evolution of the state of the body--by using a so-called trajectory model. Here we use two insights about movements to enhance our trajectory model: (1) at any given time, there is a small set of likely movement targets, potentially identified by gaze; (2) reaches are produced at varying speeds. We decoded natural reaching movements using EMGs of muscles that might be available from an individual with spinal cord injury. Target estimates found from tracking eye movements were incorporated into the trajectory model, while a mixture model accounted for the inherent uncertainty in these estimates. Warping the trajectory model in time using a continuous estimate of the reach speed enabled accurate decoding of faster reaches. We found that the choice of richer trajectory models, such as those incorporating target or speed, improves decoding particularly when only a small number of EMGs is available.
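The core idea of a target-informed trajectory model can be illustrated with a scalar filter: the state prediction is pulled toward a presumed (gaze-identified) target, then corrected by a noisy observation in proportion to their relative precisions. This is only a one-dimensional sketch under assumed parameters, not the paper's mixture-model decoder.

```python
# Hedged sketch of decoding with a target-informed trajectory model:
# predict by drawing the state toward the target, then correct with a
# noisy observation (scalar Kalman-style update). All parameters are
# illustrative; the actual decoder used a mixture over candidate targets.

def decode(observations, target, x0=0.0, pull=0.2,
           process_var=0.01, obs_var=0.25):
    """Filter noisy 1-D position observations with a target-seeking model."""
    x, p = x0, 1.0
    estimates = []
    for z in observations:
        # Predict: the trajectory model draws the state toward the target.
        x = x + pull * (target - x)
        p = (1.0 - pull) ** 2 * p + process_var
        # Correct: blend prediction and observation by their precisions.
        k = p / (p + obs_var)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Noisy observations of a reach toward a target at 1.0.
est = decode([0.2, 0.5, 0.8, 1.1, 0.9, 1.0], target=1.0)
```

Extending this to a mixture over several candidate targets amounts to running one such filter per target and weighting each by how well it explains the observations; time-warping the model corresponds to scaling `pull` with an online speed estimate.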