Robot-mediated interviews--how effective is a humanoid robot as a tool for interviewing young children?
PUBLISHED: 02-14-2013
Robots have been used in a variety of education, therapy and entertainment contexts. This paper introduces the novel application of using humanoid robots for robot-mediated interviews. An experimental study examines how children's responses towards the humanoid robot KASPAR in an interview context differ from their interaction with a human in a similar setting. Twenty-one children aged between 7 and 9 took part in this study. Each child participated in two interviews, one with an adult and one with a humanoid robot. Measures include behavioural coding of the children's behaviour during the interviews and questionnaire data. The questions in these interviews focused on a special event that had recently taken place in the school. The results reveal that the children interacted with KASPAR very similarly to how they interacted with a human interviewer. The quantitative behaviour analysis reveals that the most notable differences between the interviews with KASPAR and the human were the duration of the interviews, the eye gaze directed towards the different interviewers, and the response time of the interviewers. These results are discussed in light of future work towards developing KASPAR as an interviewer for young children in application areas where a robot may have advantages over a human interviewer, e.g. in police, social services, or healthcare applications.
Authors: Dianfan Li, Coilín Boland, Kilian Walsh, Martin Caffrey.
Published: 09-01-2012
Structure-function studies of membrane proteins greatly benefit from having available high-resolution 3-D structures of the type provided through macromolecular X-ray crystallography (MX). An essential ingredient of MX is a steady supply of, ideally, diffraction-quality crystals. The in meso or lipidic cubic phase (LCP) method is one of several approaches available for crystallizing membrane proteins. It makes use of a bicontinuous mesophase in which to grow crystals. As a method, it has had some spectacular successes of late and has attracted much attention, with many research groups now interested in using it. One of the challenges associated with the method is that the hosting mesophase is extremely viscous and sticky, reminiscent of a thick toothpaste. Thus, dispensing it manually in a reproducible manner in small volumes into crystallization wells requires skill, patience and a steady hand. A protocol for doing just that was developed in the Membrane Structural & Functional Biology (MS&FB) Group1-3. JoVE video articles describing the method are available1,4. The manual approach for setting up in meso trials has distinct advantages with specialty applications, such as crystal optimization and derivatization. It does however suffer from being a low throughput method. Here, we demonstrate a protocol for performing in meso crystallization trials robotically. A robot offers the advantages of speed, accuracy, precision, miniaturization and being able to work continuously for extended periods under what could be regarded as hostile conditions such as in the dark, in a reducing atmosphere or at low or high temperatures. An in meso robot, when used properly, can greatly improve the productivity of membrane protein structure and function research by facilitating crystallization, which is one of the slow steps in the overall structure determination pipeline.
In this video article, we demonstrate the use of three commercially available robots that can dispense the viscous and sticky mesophase integral to in meso crystallogenesis. The first robot was developed in the MS&FB Group5,6. The other two have recently become available and are included here for completeness. An overview of the protocol covered in this article is presented in Figure 1. All manipulations were performed at room temperature (~20 °C) under ambient conditions.
22 Related JoVE Articles
One Dimensional Turing-Like Handshake Test for Motor Intelligence
Authors: Amir Karniel, Guy Avraham, Bat-Chen Peles, Shelly Levy-Tzedek, Ilana Nisky.
Institutions: Ben-Gurion University.
In the Turing test, a computer model is deemed to "think intelligently" if it can generate answers that are not distinguishable from those of a human. However, this test is limited to the linguistic aspects of machine intelligence. A salient function of the brain is the control of movement, and the movement of the human hand is a sophisticated demonstration of this function. Therefore, we propose a Turing-like handshake test for machine motor intelligence. We administer the test through a telerobotic system in which the interrogator is engaged in a task of holding a robotic stylus and interacting with another party (human or artificial). Instead of asking the interrogator whether the other party is a person or a computer program, we employ a two-alternative forced choice method and ask which of two systems is more human-like. We extract a quantitative grade for each model according to its resemblance to the human handshake motion and name it "Model Human-Likeness Grade" (MHLG). We present three methods to estimate the MHLG. (i) By calculating the proportion of subjects' answers that the model is more human-like than the human; (ii) By comparing two weighted sums of human and model handshakes we fit a psychometric curve and extract the point of subjective equality (PSE); (iii) By comparing a given model with a weighted sum of human and random signal, we fit a psychometric curve to the answers of the interrogator and extract the PSE for the weight of the human in the weighted sum. Altogether, we provide a protocol to test computational models of the human handshake. We believe that building a model is a necessary step in understanding any phenomenon and, in this case, in understanding the neural mechanisms responsible for the generation of the human handshake.
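The three MHLG estimates described above lend themselves to a short computational sketch. The snippet below is illustrative only: the response data are made up, and the psychometric curve is a fixed-slope logistic fitted by a crude grid search rather than the authors' actual fitting procedure.

```python
import math

def mhlg_proportion(answers):
    # Method (i): MHLG as the fraction of two-alternative forced-choice
    # trials on which the model was judged more human-like than the human.
    return sum(answers) / len(answers)

def logistic(w, pse, slope):
    # Psychometric function of the human weight w in the weighted sum.
    return 1.0 / (1.0 + math.exp(-(w - pse) / slope))

def fit_pse(weights, p_humanlike, slope=0.1):
    # Methods (ii)/(iii): grid search for the point of subjective
    # equality (PSE), the weight at which responses are at chance (0.5).
    candidates = [i / 1000 for i in range(1001)]
    def sse(pse):
        return sum((logistic(w, pse, slope) - p) ** 2
                   for w, p in zip(weights, p_humanlike))
    return min(candidates, key=sse)

# Hypothetical proportions of 'more human-like' answers at six human weights.
weights = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
p = [0.05, 0.15, 0.40, 0.65, 0.90, 0.97]
print(round(fit_pse(weights, p), 2))
```

Under method (iii), the fitted PSE for the weight of the human in the weighted sum serves directly as the model's grade.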
Neuroscience, Issue 46, Turing test, Human Machine Interface, Haptics, Teleoperation, Motor Control, Motor Behavior, Diagnostics, Perception, handshake, telepresence
Measuring Attentional Biases for Threat in Children and Adults
Authors: Vanessa LoBue.
Institutions: Rutgers University.
Investigators have long been interested in the human propensity for the rapid detection of threatening stimuli. However, until recently, research in this domain has focused almost exclusively on adult participants, completely ignoring the topic of threat detection over the course of development. One of the biggest reasons for the lack of developmental work in this area is likely the absence of a reliable paradigm that can measure perceptual biases for threat in children. To address this issue, we recently designed a modified visual search paradigm similar to the standard adult paradigm that is appropriate for studying threat detection in preschool-aged participants. Here we describe this new procedure. In the general paradigm, we present participants with matrices of color photographs, and ask them to find and touch a target on the screen. Latency to touch the target is recorded. Using a touch-screen monitor makes the procedure simple and easy, allowing us to collect data in participants ranging from 3 years of age to adults. Thus far, the paradigm has consistently shown that both adults and children detect threatening stimuli (e.g., snakes, spiders, angry/fearful faces) more quickly than neutral stimuli (e.g., flowers, mushrooms, happy/neutral faces). Altogether, this procedure provides an important new tool for researchers interested in studying the development of attentional biases for threat.
Behavior, Issue 92, Detection, threat, attention, attentional bias, anxiety, visual search
High-throughput Functional Screening using a Homemade Dual-glow Luciferase Assay
Authors: Jessica M. Baker, Frederick M. Boyce.
Institutions: Massachusetts General Hospital.
We present a rapid and inexpensive high-throughput screening protocol to identify transcriptional regulators of alpha-synuclein, a gene associated with Parkinson's disease. 293T cells are transiently transfected with plasmids from an arrayed ORF expression library, together with luciferase reporter plasmids, in a one-gene-per-well microplate format. Firefly luciferase activity is assayed after 48 hr to determine the effects of each library gene upon alpha-synuclein transcription, normalized to expression from an internal control construct (a hCMV promoter directing Renilla luciferase). This protocol is facilitated by a bench-top robot enclosed in a biosafety cabinet, which performs aseptic liquid handling in 96-well format. Our automated transfection protocol is readily adaptable to high-throughput lentiviral library production or other functional screening protocols requiring triple-transfections of large numbers of unique library plasmids in conjunction with a common set of helper plasmids. We also present an inexpensive and validated alternative to commercially-available, dual luciferase reagents which employs PTC124, EDTA, and pyrophosphate to suppress firefly luciferase activity prior to measurement of Renilla luciferase. Using these methods, we screened 7,670 human genes and identified 68 regulators of alpha-synuclein. This protocol is easily modifiable to target other genes of interest.
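The normalization step can be illustrated in a few lines of Python. The well values below are hypothetical, and the use of control wells as the fold-change baseline is our assumption; the abstract specifies only that firefly activity is normalized to the Renilla internal control.

```python
def normalized_activity(firefly, renilla):
    # Per-well ratio of the alpha-synuclein reporter (firefly) to the
    # hCMV-driven Renilla internal control, correcting for transfection
    # efficiency and cell number.
    return [f / r for f, r in zip(firefly, renilla)]

def fold_change(ratios, control_ratios):
    # Express each library gene relative to the mean normalized activity
    # of control wells (assumed here) on the same plate.
    baseline = sum(control_ratios) / len(control_ratios)
    return [r / baseline for r in ratios]

# Hypothetical luminescence readings from four library wells.
ff = [12000.0, 4500.0, 9000.0, 30000.0]
ren = [6000.0, 5000.0, 6000.0, 6000.0]
ratios = normalized_activity(ff, ren)   # [2.0, 0.9, 1.5, 5.0]
print(fold_change(ratios, control_ratios=[1.5, 1.5]))
```

Wells whose fold change deviates strongly from 1 would then be flagged as candidate regulators for follow-up.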
Cellular Biology, Issue 88, Luciferases, Gene Transfer Techniques, Transfection, High-Throughput Screening Assays, Transfections, Robotics
Improving the Success Rate of Protein Crystallization by Random Microseed Matrix Screening
Authors: Marisa Till, Alice Robson, Matthew J. Byrne, Asha V. Nair, Stefan A. Kolek, Patrick D. Shaw Stewart, Paul R. Race.
Institutions: University of Bristol, Douglas Instruments.
Random microseed matrix screening (rMMS) is a protein crystallization technique in which seed crystals are added to random screens. By increasing the likelihood that crystals will grow in the metastable zone of a protein's phase diagram, extra crystallization leads are often obtained, the quality of crystals produced may be increased, and a good supply of crystals for data collection and soaking experiments is provided. Here we describe a general method for rMMS that may be applied to either sitting drop or hanging drop vapor diffusion experiments, established either by hand or using liquid handling robotics, in 96-well or 24-well tray format.
Structural Biology, Issue 78, Crystallography, X-Ray, Biochemical Phenomena, Molecular Structure, Molecular Conformation, protein crystallization, seeding, protein structure
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
A High Throughput MHC II Binding Assay for Quantitative Analysis of Peptide Epitopes
Authors: Regina Salvat, Leonard Moise, Chris Bailey-Kellogg, Karl E. Griswold.
Institutions: Dartmouth College, University of Rhode Island, Dartmouth College.
Biochemical assays with recombinant human MHC II molecules can provide rapid, quantitative insights into immunogenic epitope identification, deletion, or design1,2. Here, a peptide-MHC II binding assay is scaled to 384-well format. The scaled down protocol reduces reagent costs by 75% and is higher throughput than previously described 96-well protocols1,3-5. Specifically, the experimental design permits robust and reproducible analysis of up to 15 peptides against one MHC II allele per 384-well ELISA plate. Using a single liquid handling robot, this method allows one researcher to analyze approximately ninety test peptides in triplicate over a range of eight concentrations and four MHC II allele types in less than 48 hr. Others working in the fields of protein deimmunization or vaccine design and development may find the protocol to be useful in facilitating their own work. In particular, the step-by-step instructions and the visual format of JoVE should allow other users to quickly and easily establish this methodology in their own labs.
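The plate arithmetic behind the stated capacity checks out in a few lines; note that earmarking the leftover wells for controls and standards is our assumption, not something the abstract states.

```python
def wells_required(n_peptides, n_concentrations=8, replicates=3):
    # Wells consumed per MHC II allele when each test peptide is run
    # in triplicate over an eight-point concentration range.
    return n_peptides * n_concentrations * replicates

PLATE_WELLS = 384
used = wells_required(15)        # the protocol's 15 peptides per plate
print(used, PLATE_WELLS - used)  # 360 24
```

So 15 peptides fill 360 of the 384 wells, leaving 24 wells free, consistent with the abstract's "up to 15 peptides against one MHC II allele per 384-well ELISA plate".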
Biochemistry, Issue 85, Immunoassay, Protein Immunogenicity, MHC II, T cell epitope, High Throughput Screen, Deimmunization, Vaccine Design
High Throughput Quantitative Expression Screening and Purification Applied to Recombinant Disulfide-rich Venom Proteins Produced in E. coli
Authors: Natalie J. Saez, Hervé Nozach, Marilyne Blemont, Renaud Vincentelli.
Institutions: Aix-Marseille Université, Commissariat à l'énergie atomique et aux énergies alternatives (CEA) Saclay, France.
Escherichia coli (E. coli) is the most widely used expression system for the production of recombinant proteins for structural and functional studies. However, purifying proteins is sometimes challenging since many proteins are expressed in an insoluble form. When working with difficult or multiple targets it is therefore recommended to use high throughput (HTP) protein expression screening on a small scale (1-4 ml cultures) to quickly identify conditions for soluble expression. To cope with the various structural genomics programs of the lab, a quantitative (within a range of 0.1-100 mg/L culture of recombinant protein) and HTP protein expression screening protocol was implemented and validated on thousands of proteins. The protocols were automated with the use of a liquid handling robot but can also be performed manually without specialized equipment. Disulfide-rich venom proteins are gaining increasing recognition for their potential as therapeutic drug leads. They can be highly potent and selective, but their complex disulfide bond networks make them challenging to produce. As a member of the FP7 European Venomics project, our challenge is to develop successful production strategies with the aim of producing thousands of novel venom proteins for functional characterization. Aided by the redox properties of disulfide bond isomerase DsbC, we adapted our HTP production pipeline for the expression of oxidized, functional venom peptides in the E. coli cytoplasm. The protocols are also applicable to the production of diverse disulfide-rich proteins. Here we demonstrate our pipeline applied to the production of animal venom proteins. With the protocols described herein it is likely that soluble disulfide-rich proteins will be obtained in as little as a week. Even from a small scale, there is the potential to use the purified proteins for validating the oxidation state by mass spectrometry, for characterization in pilot studies, or for sensitive micro-assays.
Bioengineering, Issue 89, E. coli, expression, recombinant, high throughput (HTP), purification, auto-induction, immobilized metal affinity chromatography (IMAC), tobacco etch virus protease (TEV) cleavage, disulfide bond isomerase C (DsbC) fusion, disulfide bonds, animal venom proteins/peptides
Developing Neuroimaging Phenotypes of the Default Mode Network in PTSD: Integrating the Resting State, Working Memory, and Structural Connectivity
Authors: Noah S. Philip, S. Louisa Carpenter, Lawrence H. Sweet.
Institutions: Alpert Medical School, Brown University, University of Georgia.
Complementary structural and functional neuroimaging techniques used to examine the Default Mode Network (DMN) could potentially improve assessments of psychiatric illness severity and provide added validity to the clinical diagnostic process. Recent neuroimaging research suggests that DMN processes may be disrupted in a number of stress-related psychiatric illnesses, such as posttraumatic stress disorder (PTSD). Although specific DMN functions remain under investigation, it is generally thought to be involved in introspection and self-processing. In healthy individuals it exhibits greatest activity during periods of rest, with less activity, observed as deactivation, during cognitive tasks, e.g., working memory. This network consists of the medial prefrontal cortex, posterior cingulate cortex/precuneus, lateral parietal cortices and medial temporal regions. Multiple functional and structural imaging approaches have been developed to study the DMN. These have unprecedented potential to further the understanding of the function and dysfunction of this network. Functional approaches, such as the evaluation of resting state connectivity and task-induced deactivation, have excellent potential to identify targeted neurocognitive and neuroaffective (functional) diagnostic markers and may indicate illness severity and prognosis with increased accuracy or specificity. Structural approaches, such as evaluation of morphometry and connectivity, may provide unique markers of etiology and long-term outcomes. Combined, functional and structural methods provide strong multimodal, complementary and synergistic approaches to develop valid DMN-based imaging phenotypes in stress-related psychiatric conditions. This protocol aims to integrate these methods to investigate DMN structure and function in PTSD, relating findings to illness severity and relevant clinical factors.
Medicine, Issue 89, default mode network, neuroimaging, functional magnetic resonance imaging, diffusion tensor imaging, structural connectivity, functional connectivity, posttraumatic stress disorder
Using Insect Electroantennogram Sensors on Autonomous Robots for Olfactory Searches
Authors: Dominique Martinez, Lotfi Arhidi, Elodie Demondion, Jean-Baptiste Masson, Philippe Lucas.
Institutions: Centre National de la Recherche Scientifique (CNRS), Institut d'Ecologie et des Sciences de l'Environnement de Paris, Institut Pasteur.
Robots designed to track chemical leaks in hazardous industrial facilities1 or explosive traces in landmine fields2 face the same problem as insects foraging for food or searching for mates3: the olfactory search is constrained by the physics of turbulent transport4. The concentration landscape of wind-borne odors is discontinuous and consists of sporadically located patches. A pre-requisite to olfactory search is that intermittent odor patches are detected. Because of its high speed and sensitivity5-6, the olfactory organ of insects provides a unique opportunity for detection. Insect antennae have been used in the past to detect not only sex pheromones7 but also chemicals that are relevant to humans, e.g., volatile compounds emanating from cancer cells8 or toxic and illicit substances9-11. We describe here a protocol for using insect antennae on autonomous robots and present a proof of concept for tracking odor plumes to their source. The global response of olfactory neurons is recorded in situ in the form of electroantennograms (EAGs). Our experimental design, based on a whole insect preparation, allows stable recordings within a working day. In comparison, EAGs on excised antennae have a lifetime of 2 hr. A custom hardware/software interface was developed between the EAG electrodes and a robot. The measurement system resolves individual odor patches up to 10 Hz, which exceeds the time scale of artificial chemical sensors12. The efficiency of EAG sensors for olfactory searches is further demonstrated in driving the robot toward a source of pheromone. By using identical olfactory stimuli and sensors as in real animals, our robotic platform provides a direct means for testing biological hypotheses about olfactory coding and search strategies13. It may also prove beneficial for detecting other odorants of interest by combining EAGs from different insect species in a bioelectronic nose configuration14 or using nanostructured gas sensors that mimic insect antennae15.
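The "surge and casting" strategy named in the keyword list can be summarized as a tiny decision rule. The sketch below is a generic moth-inspired policy with illustrative parameter values and state names, not the controller actually implemented on the robot.

```python
def plume_tracking_action(patch_detected, time_since_patch, cast_dir,
                          surge_timeout=1.0):
    # Minimal surge-and-cast policy (parameters are illustrative):
    # - on an EAG-detected odor patch, surge straight upwind;
    # - keep surging briefly after contact is lost;
    # - past the timeout, cast crosswind, flipping direction each call
    #   so the robot zigzags back across the plume.
    if patch_detected:
        return "surge_upwind", cast_dir
    if time_since_patch < surge_timeout:
        return "surge_upwind", cast_dir
    return "cast_crosswind", -cast_dir

# Example: 2 s after the last patch, a robot casting right flips to left.
print(plume_tracking_action(False, 2.0, cast_dir=1))
```

The 10 Hz patch-resolution figure quoted above is what makes such event-driven switching feasible with EAG sensors, where slower artificial sensors would blur consecutive patches together.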
Neuroscience, Issue 90, robotics, electroantennogram, EAG, gas sensor, electronic nose, olfactory search, surge and casting, moth, insect, olfaction, neuron
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
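The source reconstruction step described above typically relies on minimum-norm estimation (as the keyword list notes). For reference, the standard L2 minimum-norm inverse takes the form below; the notation here is ours, not necessarily the paper's:

```latex
\hat{\mathbf{j}} = \mathbf{L}^{\top}\left(\mathbf{L}\mathbf{L}^{\top} + \lambda \mathbf{I}\right)^{-1}\mathbf{v}
```

where L is the lead-field matrix computed from the individual or age-specific head model, v is the vector of EEG channel measurements at one time point, λ is a regularization parameter, and the estimate ĵ gives the cortical source amplitudes. Using pediatric rather than adult head models changes L, which is why the choice of head model matters for source localization accuracy.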
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Designing a Bio-responsive Robot from DNA Origami
Authors: Eldad Ben-Ishay, Almogit Abu-Horowitz, Ido Bachelet.
Institutions: Bar-Ilan University.
Nucleic acids are astonishingly versatile. In addition to their natural role as storage medium for biological information1, they can be utilized in parallel computing2,3, recognize and bind molecular or cellular targets4,5, catalyze chemical reactions6,7, and generate calculated responses in a biological system8,9. Importantly, nucleic acids can be programmed to self-assemble into 2D and 3D structures10-12, enabling the integration of all these remarkable features in a single robot linking the sensing of biological cues to a preset response in order to exert a desired effect. Creating shapes from nucleic acids was first proposed by Seeman13, and several variations on this theme have since been realized using various techniques11,12,14,15. However, the most significant is perhaps the one proposed by Rothemund, termed scaffolded DNA origami16. In this technique, the folding of a long (>7,000 bases) single-stranded DNA 'scaffold' is directed to a desired shape by hundreds of short complementary strands termed 'staples'. Folding is carried out by a temperature annealing ramp. This technique was successfully demonstrated in the creation of a diverse array of 2D shapes with remarkable precision and robustness. DNA origami was later extended to 3D as well17,18. The current paper will focus on the caDNAno 2.0 software19 developed by Douglas and colleagues. caDNAno is a robust, user-friendly CAD tool enabling the design of 2D and 3D DNA origami shapes with versatile features. The design process relies on a systematic and accurate abstraction scheme for DNA structures, making it relatively straightforward and efficient. In this paper we demonstrate the design of a DNA origami nanorobot that has been recently described20. This robot is 'robotic' in the sense that it links sensing to actuation in order to perform a task. We explain how various sensing schemes can be integrated into the structure, and how this can be relayed to a desired effect.
Finally we use Cando21 to simulate the mechanical properties of the designed shape. The concept we discuss can be adapted to multiple tasks and settings.
Bioengineering, Issue 77, Genetics, Biomedical Engineering, Molecular Biology, Medicine, Genomics, Nanotechnology, Nanomedicine, DNA origami, nanorobot, caDNAno, DNA, DNA Origami, nucleic acids, DNA structures, CAD, sequencing
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3,4,5,6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
An Experimental Platform to Study the Closed-loop Performance of Brain-machine Interfaces
Authors: Naveed Ejaz, Kris D. Peterson, Holger G. Krapp.
Institutions: Imperial College London.
The non-stationary nature and variability of neuronal signals is a fundamental problem in brain-machine interfacing. We developed a brain-machine interface to assess the robustness of different control-laws applied to a closed-loop image stabilization task. Taking advantage of the well-characterized fly visuomotor pathway we record the electrical activity from an identified, motion-sensitive neuron, H1, to control the yaw rotation of a two-wheeled robot. The robot is equipped with 2 high-speed video cameras providing visual motion input to a fly placed in front of 2 CRT computer monitors. The activity of the H1 neuron indicates the direction and relative speed of the robot's rotation. The neural activity is filtered and fed back into the steering system of the robot by means of proportional and proportional/adaptive control. Our goal is to test and optimize the performance of various control laws under closed-loop conditions for a broader application also in other brain machine interfaces.
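The proportional control law mentioned above can be sketched minimally. The sign convention, the baseline subtraction, and the gain-adaptation rule below are illustrative assumptions on our part, not the authors' implementation.

```python
def h1_error(h1_rate, baseline_rate):
    # H1 fires above its spontaneous rate for motion in its preferred
    # direction; treat the excess firing rate as the yaw-error signal.
    return h1_rate - baseline_rate

def proportional_command(error, k_p):
    # Proportional control: steer the robot against the perceived image
    # rotation to stabilize the fly's visual input (sign is a convention).
    return -k_p * error

def adapt_gain(k_p, error, mu=0.001):
    # One simple form of 'proportional/adaptive' control: stiffen the
    # loop while large errors persist. The actual adaptation rule used
    # in the study is not specified in this abstract.
    return k_p + mu * error * error

# Example: 30 spikes/s above baseline with gain 0.5 yields a -15 yaw command.
cmd = proportional_command(h1_error(50.0, 20.0), k_p=0.5)
print(cmd)  # -15.0
```

Closed-loop robustness is then assessed by how well each control law keeps the retinal slip (and hence the H1 error signal) near zero despite the non-stationarity of the neural response.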
Neuroscience, Issue 49, Stabilization reflexes, Sensorimotor control, Adaptive control, Insect vision
Measurement Of Neuromagnetic Brain Function In Pre-school Children With Custom Sized MEG
Authors: Graciela Tesan, Blake W. Johnson, Melanie Reid, Rosalind Thornton, Stephen Crain.
Institutions: Macquarie University.
Magnetoencephalography is a technique that detects magnetic fields associated with cortical activity [1]. The electrophysiological activity of the brain generates electric fields - that can be recorded using electroencephalography (EEG) - and their concomitant magnetic fields - detected by MEG. MEG signals are detected by specialized sensors known as superconducting quantum interference devices (SQUIDs). Superconducting sensors require cooling with liquid helium at -270 °C. They are contained inside a vacuum-insulated helmet called a dewar, which is filled with liquid helium. SQUIDs are placed in fixed positions inside the helmet dewar in the helium coolant, and a subject's head is placed inside the helmet dewar for MEG measurements. The helmet dewar must be sized to satisfy opposing constraints. Clearly, it must be large enough to fit most or all of the heads in the population that will be studied. However, the helmet must also be small enough to keep most of the SQUID sensors within range of the tiny cerebral fields that they are to measure. Conventional whole-head MEG systems are designed to accommodate more than 90% of adult heads. However, adult systems are not well suited for measuring brain function in pre-school children, whose heads have a radius several cm smaller than adults'. The KIT-Macquarie Brain Research Laboratory at Macquarie University uses a MEG system custom sized to fit the heads of pre-school children. This child system has 64 first-order axial gradiometers with a 50 mm baseline[2] and is contained inside a magnetically-shielded room (MSR) together with a conventional adult-sized MEG system [3,4]. There are three main advantages of the customized helmet dewar for studying children. First, the smaller radius of the sensor configuration brings the SQUID sensors into range of the neuromagnetic signals of children's heads. Second, the smaller helmet allows full insertion of a child's head into the dewar.
Full insertion is prevented in adult dewar helmets by children's smaller crown-to-shoulder distance. These two factors are fundamental in recording brain activity using MEG because neuromagnetic signals attenuate rapidly with distance. Third, the customized child helmet aids in the symmetric positioning of the head and limits the freedom of movement of the child's head within the dewar. When used with a protocol that aligns the requirements of data collection with the motivational and behavioral capacities of children, these features significantly facilitate setup, positioning, and measurement of MEG signals.
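The point that neuromagnetic signals attenuate rapidly with distance can be illustrated with a back-of-the-envelope sketch (not from the article; it assumes a simplified inverse-square falloff for a cortical current source, and the actual falloff depends on source geometry):

```python
def relative_dipole_field(r_cm, r_ref_cm=2.0):
    """Field strength at sensor distance r_cm relative to a reference
    distance, under a simplified ~1/r^2 falloff for a current source.
    Illustrative only: shows why moving SQUID sensors even a few cm
    closer to the cortex substantially improves the measured signal."""
    return (r_ref_cm / r_cm) ** 2

# A sensor 4 cm from the source sees only a quarter of the field
# available at 2 cm under this approximation.
print(relative_dipole_field(4.0))
```

This is why a helmet sized for adult heads leaves a child's smaller head too far from many of the sensors.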
Neuroscience, Issue 36, Magnetoencephalography, Pediatrics, Brain Mapping, Language, Brain Development, Cognitive Neuroscience, Language Acquisition, Linguistics
Combining Computer Game-Based Behavioural Experiments With High-Density EEG and Infrared Gaze Tracking
Authors: Keith J. Yoder, Matthew K. Belmonte.
Institutions: Cornell University, University of Chicago, Manesar, India.
Experimental paradigms are valuable insofar as the timing and other parameters of their stimuli are well specified and controlled, and insofar as they yield data relevant to the cognitive processing that occurs under ecologically valid conditions. These two goals often are at odds, since well controlled stimuli often are too repetitive to sustain subjects' motivation. Studies employing electroencephalography (EEG) are often especially sensitive to this dilemma between ecological validity and experimental control: attaining sufficient signal-to-noise in physiological averages demands large numbers of repeated trials within lengthy recording sessions, limiting the subject pool to individuals with the ability and patience to perform a set task over and over again. This constraint severely limits researchers' ability to investigate younger populations as well as clinical populations associated with heightened anxiety or attentional abnormalities. Even adult, non-clinical subjects may not be able to achieve their typical levels of performance or cognitive engagement: an unmotivated subject for whom an experimental task is little more than a chore is not the same, behaviourally, cognitively, or neurally, as a subject who is intrinsically motivated and engaged with the task. A growing body of literature demonstrates that embedding experiments within video games may provide a way between the horns of this dilemma between experimental control and ecological validity. The narrative of a game provides a more realistic context in which tasks occur, enhancing their ecological validity (Chaytor & Schmitter-Edgecombe, 2003). Moreover, this context provides motivation to complete tasks. In our game, subjects perform various missions to collect resources, fend off pirates, intercept communications or facilitate diplomatic relations. 
In so doing, they also perform an array of cognitive tasks, including a Posner attention-shifting paradigm (Posner, 1980), a go/no-go test of motor inhibition, a psychophysical motion coherence threshold task, the Embedded Figures Test (Witkin, 1950, 1954) and a theory-of-mind (Wimmer & Perner, 1983) task. The game software automatically registers game stimuli and subjects' actions and responses in a log file, and sends event codes to synchronise with physiological data recorders. Thus the game can be combined with physiological measures such as EEG or fMRI, and with moment-to-moment tracking of gaze. Gaze tracking can verify subjects' compliance with behavioural tasks (e.g. fixation) and overt attention to experimental stimuli, and also physiological arousal as reflected in pupil dilation (Bradley et al., 2008). At great enough sampling frequencies, gaze tracking may also help assess covert attention as reflected in microsaccades - eye movements that are too small to foveate a new object, but are as rapid in onset and have the same relationship between angular distance and peak velocity as do saccades that traverse greater distances. The distribution of directions of microsaccades correlates with the (otherwise) covert direction of attention (Hafed & Clark, 2002).
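The microsaccade analysis described above typically starts from a velocity criterion applied to the gaze trace. A minimal, hypothetical sketch of that idea (a simplification of published velocity-threshold methods such as Engbert & Kliegl's; the threshold and run-length values here are illustrative assumptions, not the authors' parameters):

```python
import numpy as np

def detect_microsaccades(x, y, fs, vel_thresh_deg_s=30.0, min_samples=3):
    """Flag candidate (micro)saccade onsets with a simple velocity
    threshold. x, y are gaze positions in degrees; fs is the sampling
    rate in Hz. Returns the sample index where each run of at least
    min_samples consecutive above-threshold samples begins."""
    vx = np.gradient(x) * fs          # horizontal velocity, deg/s
    vy = np.gradient(y) * fs          # vertical velocity, deg/s
    speed = np.hypot(vx, vy)
    fast = speed > vel_thresh_deg_s
    events, run = [], 0
    for i, f in enumerate(fast):
        run = run + 1 if f else 0
        if run == min_samples:        # record onset once per run
            events.append(i - min_samples + 1)
    return events
```

Real implementations additionally use adaptive, noise-based thresholds and binocular agreement, which matters at the high sampling frequencies the text mentions.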
Neuroscience, Issue 46, High-density EEG, ERP, ICA, gaze tracking, computer game, ecological validity
Haptic/Graphic Rehabilitation: Integrating a Robot into a Virtual Environment Library and Applying it to Stroke Therapy
Authors: Ian Sharp, James Patton, Molly Listenberger, Emily Case.
Institutions: University of Illinois at Chicago, Rehabilitation Institute of Chicago.
Recent research testing interactive devices for prolonged therapy practice has revealed new prospects for robotics combined with graphical and other forms of biofeedback. Previous human-robot interactive systems have required different software commands to be implemented for each robot, adding unnecessary development overhead each time a new system becomes available. For example, when a haptic/graphic virtual reality environment has been coded for one specific robot to provide haptic feedback, that robot cannot be swapped for another without recoding the program. However, recent efforts in the open source community have proposed a wrapper class approach that can elicit nearly identical responses regardless of the robot used. As a result, researchers across the globe can perform similar experiments using shared code, and modular "switching out" of one robot for another does not affect development time. In this paper, we outline the successful creation and implementation of a wrapper class for one robot into the open-source H3DAPI, which integrates the software commands most commonly used by all robots.
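The wrapper-class idea can be sketched in a few lines: experiment code talks to one abstract device interface, and each robot supplies its own implementation behind it. This is a hypothetical illustration of the pattern, not the H3DAPI code itself (the class and method names here are invented for the example):

```python
from abc import ABC, abstractmethod

class HapticDevice(ABC):
    """Abstract interface the experiment code depends on."""
    @abstractmethod
    def get_position(self):
        """Return the cursor position as an (x, y, z) tuple in meters."""
    @abstractmethod
    def set_force(self, fx, fy, fz):
        """Command a force vector in newtons."""

class SimulatedRobot(HapticDevice):
    """Stand-in device; a real wrapper would call the vendor's API here."""
    def __init__(self):
        self._pos = (0.0, 0.0, 0.0)
        self.last_force = None
    def get_position(self):
        return self._pos
    def set_force(self, fx, fy, fz):
        self.last_force = (fx, fy, fz)

def render_spring(device, stiffness=50.0):
    # Pull the cursor back toward the origin (a haptic "spring").
    # Works unchanged for any HapticDevice implementation.
    x, y, z = device.get_position()
    device.set_force(-stiffness * x, -stiffness * y, -stiffness * z)
```

Swapping robots then means writing one new subclass; `render_spring` and the rest of the experiment logic never change.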
Bioengineering, Issue 54, robotics, haptics, virtual reality, wrapper class, rehabilitation robotics, neural engineering, H3DAPI, C++
The Trier Social Stress Test Protocol for Inducing Psychological Stress
Authors: Melissa A. Birkett.
Institutions: Northern Arizona University.
This article demonstrates a psychological stress protocol for use in a laboratory setting. Protocols that allow researchers to study the biological pathways of the stress response in health and disease are fundamental to the progress of research in stress and anxiety.1 Although numerous protocols exist for inducing a stress response in the laboratory, many neglect to provide a naturalistic context or to incorporate aspects of social and psychological stress. Of psychological stress protocols, meta-analysis suggests that the Trier Social Stress Test (TSST) is the most useful and appropriate standardized protocol for studies of stress hormone reactivity.2 In the original description of the TSST, researchers sought to design and evaluate a procedure capable of inducing a reliable stress response in the majority of healthy volunteers.3 These researchers found elevations in heart rate, blood pressure and several endocrine stress markers in response to the TSST (a psychological stressor) compared to a saline injection (a physical stressor).3 Although the TSST has been modified to meet the needs of various research groups, it generally consists of a waiting period upon arrival, anticipatory speech preparation, speech performance, and verbal arithmetic performance periods, followed by one or more recovery periods. The TSST requires participants to prepare and deliver a speech, and verbally respond to a challenging arithmetic problem, in the presence of a socially evaluative audience.3 Social evaluation and uncontrollability have been identified as key components of stress induction by the TSST.4 In use for over a decade, the TSST aims to systematically induce a stress response in order to measure differences in reactivity, anxiety and activation of the hypothalamic-pituitary-adrenal (HPA) or sympathetic-adrenal-medullary (SAM) axis during the task.1 Researchers generally assess changes in self-reported anxiety, physiological measures (e.g. 
heart rate), and/or neuroendocrine indices (e.g. the stress hormone cortisol) in response to the TSST. Many investigators have adopted salivary sampling for stress markers such as cortisol and alpha-amylase (a marker of autonomic nervous system activation) as an alternative to blood sampling to reduce the confounding stress of blood-collection techniques. In addition to changes experienced by an individual completing the TSST, researchers can compare changes between different treatment groups (e.g. clinical versus healthy control samples) or the effectiveness of stress-reducing interventions.1
Medicine, Issue 56, Stress, anxiety, laboratory stressor, cortisol, physiological response, psychological stressor
Adaptation of a Haptic Robot in a 3T fMRI
Authors: Joseph Snider, Markus Plank, Larry May, Thomas T. Liu, Howard Poizner.
Institutions: University of California.
Functional magnetic resonance imaging (fMRI) provides excellent functional brain imaging via the BOLD signal 1, with advantages including non-ionizing radiation, millimeter spatial accuracy of anatomical and functional data 2, and nearly real-time analyses 3. Haptic robots provide precise measurement and control of the position and force of a cursor in a reasonably confined space. Here we combine these two technologies to allow precision experiments involving motor control with haptic/tactile environment interaction, such as reaching or grasping. The basic idea is to attach an 8-foot end effector, supported in the center, to the robot 4, allowing the subject to use the robot while shielding it and keeping it out of the most extreme part of the magnetic field from the fMRI machine (Figure 1). The Phantom Premium 3.0, 6DoF, high-force robot (SensAble Technologies, Inc.) is an excellent choice for providing force feedback in virtual reality experiments 5,6, but it is inherently non-MR safe, introduces significant noise to the sensitive fMRI equipment, and its electric motors may be affected by the fMRI's strongly varying magnetic field. We have constructed a table and shielding system that allows the robot to be safely introduced into the fMRI environment and limits both the degradation of the fMRI signal by the electrically noisy motors and the degradation of the electric motor performance by the strongly varying magnetic field of the fMRI. With the shield, the signal-to-noise ratio (SNR: mean signal/noise standard deviation) of the fMRI goes from a baseline of ~380 to ~330, compared with ~250 without the shielding. The remaining noise appears to be uncorrelated and does not add artifacts to the fMRI of a test sphere (Figure 2). The long, stiff handle allows placement of the robot out of range of the most strongly varying parts of the magnetic field, so there is no significant effect of the fMRI on the robot. 
The effect of the handle on the robot's kinematics is minimal since it is lightweight (~2.6 lbs) but extremely stiff 3/4" graphite, well balanced on the 3DoF joint in the middle. The end result is an fMRI-compatible haptic system with about 1 cubic foot of working space; when combined with virtual reality, it allows a new set of experiments to be performed in the fMRI environment, including naturalistic reaching, passive displacement of the limb and haptic perception, adaptation learning in varying force fields, or texture identification 5,6.
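The SNR definition in the text (mean signal divided by the standard deviation of a background noise region) is straightforward to compute; a minimal sketch, purely illustrative of the quoted ~380/~330/~250 comparison:

```python
import numpy as np

def fmri_snr(signal_roi, noise_roi):
    """SNR as defined in the abstract: mean intensity in a signal region
    of interest divided by the standard deviation of a background
    (noise-only) region. Inputs are arrays of voxel intensities."""
    return np.mean(signal_roi) / np.std(noise_roi)
```

By this measure, the shield recovers most of the SNR lost to the robot's electrically noisy motors (~330 with the shield versus ~250 without, against a ~380 baseline).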
Bioengineering, Issue 56, neuroscience, haptic robot, fMRI, MRI, pointing
Eye Tracking Young Children with Autism
Authors: Noah J. Sasson, Jed T. Elison.
Institutions: University of Texas at Dallas, University of North Carolina at Chapel Hill.
The rise of accessible commercial eye-tracking systems has fueled a rapid increase in their use in psychological and psychiatric research. By providing a direct, detailed and objective measure of gaze behavior, eye-tracking has become a valuable tool for examining abnormal perceptual strategies in clinical populations and has been used to identify disorder-specific characteristics1, promote early identification2, and inform treatment3. In particular, investigators of autism spectrum disorders (ASD) have benefited from integrating eye-tracking into their research paradigms4-7. Eye-tracking has largely been used in these studies to reveal mechanisms underlying impaired task performance8 and abnormal brain functioning9, particularly during the processing of social information1,10-11. While older children and adults with ASD comprise the preponderance of research in this area, eye-tracking may be especially useful for studying young children with the disorder as it offers a non-invasive tool for assessing and quantifying early-emerging developmental abnormalities2,12-13. Implementing eye-tracking with young children with ASD, however, is associated with a number of unique challenges, including issues with compliant behavior resulting from specific task demands and disorder-related psychosocial considerations. In this protocol, we detail methodological considerations for optimizing research design, data acquisition and psychometric analysis while eye-tracking young children with ASD. The provided recommendations are also designed to be more broadly applicable for eye-tracking children with other developmental disabilities. By offering guidelines for best practices in these areas based upon lessons derived from our own work, we hope to help other investigators make sound research design and analysis choices while avoiding common pitfalls that can compromise data acquisition while eye-tracking young children with ASD or other developmental difficulties.
Medicine, Issue 61, eye tracking, autism, neurodevelopmental disorders, toddlers, perception, attention, social cognition
Portable Intermodal Preferential Looking (IPL): Investigating Language Comprehension in Typically Developing Toddlers and Young Children with Autism
Authors: Letitia R. Naigles, Andrea T. Tovar.
Institutions: University of Connecticut.
One of the defining characteristics of autism spectrum disorder (ASD) is difficulty with language and communication.1 The onset of speaking is usually delayed in children with ASD, and many children with ASD consistently produce language less frequently and of lower lexical and grammatical complexity than their typically developing (TD) peers.6,8,12,23 However, children with ASD also exhibit a significant social deficit, and researchers and clinicians continue to debate the extent to which the deficits in social interaction account for or contribute to the deficits in language production.5,14,19,25 Standardized assessments of language in children with ASD usually do include a comprehension component; however, many such comprehension tasks assess just one aspect of language (e.g., vocabulary),5 or include a significant motor component (e.g., pointing, act-out), and/or require children to deliberately choose between a number of alternatives. These last two task demands are known to be challenging for children with ASD.7,12,13,16 We present a method that can assess the language comprehension of young typically developing children (9-36 months) and children with autism.2,4,9,11,22 This method, Portable Intermodal Preferential Looking (P-IPL), projects side-by-side video images from a laptop onto a portable screen. The video images are paired first with a 'baseline' (nondirecting) audio, and then presented again paired with a 'test' linguistic audio that matches only one of the video images. Children's eye movements while watching the video are filmed and later coded. Children who understand the linguistic audio will look more quickly to, and longer at, the video that matches it.2,4,11,18,22,26 This paradigm includes a number of components that have recently been miniaturized (projector, camcorder, digitizer) to enable portability and easy setup in children's homes. 
This is a crucial point for assessing young children with ASD, who are frequently uncomfortable in new (e.g., laboratory) settings. Videos can be created to assess a wide range of specific components of linguistic knowledge, such as Subject-Verb-Object word order, wh-questions, and tense/aspect suffixes on verbs; videos can also assess principles of word learning such as a noun bias, a shape bias, and syntactic bootstrapping.10,14,17,21,24 Videos include characters and speech that are visually and acoustically salient and well tolerated by children with ASD.
Medicine, Issue 70, Neuroscience, Psychology, Behavior, Intermodal preferential looking, language comprehension, children with autism, child development, autism
Designing and Implementing Nervous System Simulations on LEGO Robots
Authors: Daniel Blustein, Nikolai Rosenthal, Joseph Ayers.
Institutions: Northeastern University, Bremen University of Applied Sciences.
We present a method to use the commercially available LEGO Mindstorms NXT robotics platform to test systems level neuroscience hypotheses. The first step of the method is to develop a nervous system simulation of specific reflexive behaviors of an appropriate model organism; here we use the American Lobster. Exteroceptive reflexes mediated by decussating (crossing) neural connections can explain an animal's taxis towards or away from a stimulus as described by Braitenberg and are particularly well suited for investigation using the NXT platform.1 The nervous system simulation is programmed using LabVIEW software on the LEGO Mindstorms platform. Once the nervous system is tuned properly, behavioral experiments are run on the robot and on the animal under identical environmental conditions. By controlling the sensory milieu experienced by the specimens, differences in behavioral outputs can be observed. These differences may point to specific deficiencies in the nervous system model and serve to inform the iteration of the model for the particular behavior under study. This method allows for the experimental manipulation of electronic nervous systems and serves as a way to explore neuroscience hypotheses specifically regarding the neurophysiological basis of simple innate reflexive behaviors. The LEGO Mindstorms NXT kit provides an affordable and efficient platform on which to test preliminary biomimetic robot control schemes. The approach is also well suited for the high school classroom to serve as the foundation for a hands-on inquiry-based biorobotics curriculum.
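The decussating (crossed) wiring Braitenberg described can be captured in a few lines. This is a hypothetical minimal controller illustrating the principle, not the LabVIEW nervous-system simulation the authors implemented:

```python
def motor_speeds(left_sensor, right_sensor, crossed=True, gain=1.0):
    """Map two sensor readings to (left_motor, right_motor) speeds.
    With crossed (decussating) excitatory connections, the stronger
    sensor drives the opposite-side motor, so the vehicle turns toward
    the stimulus (positive taxis); uncrossed wiring turns it away."""
    if crossed:
        return (gain * right_sensor, gain * left_sensor)
    return (gain * left_sensor, gain * right_sensor)
```

For example, a stimulus on the robot's right excites the right sensor, which (via the crossed connection) speeds up the left motor, steering the robot rightward toward the source - exactly the taxis behavior compared between robot and lobster.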
Neuroscience, Issue 75, Neurobiology, Bioengineering, Behavior, Mechanical Engineering, Computer Science, Marine Biology, Biomimetics, Marine Science, Neurosciences, Synthetic Biology, Robotics, robots, Modeling, models, Sensory Fusion, nervous system, Educational Tools, programming, software, lobster, Homarus americanus, animal model
Making MR Imaging Child's Play - Pediatric Neuroimaging Protocol, Guidelines and Procedure
Authors: Nora M. Raschle, Michelle Lee, Roman Buechler, Joanna A. Christodoulou, Maria Chang, Monica Vakil, Patrice L. Stering, Nadine Gaab.
Institutions: Children’s Hospital Boston, University of Zurich, Harvard, Harvard Medical School.
Within the last decade there has been an increase in the use of structural and functional magnetic resonance imaging (fMRI) to investigate the neural basis of human perception, cognition and behavior 1,2. Moreover, this non-invasive imaging method has grown into a tool for clinicians and researchers to explore typical and atypical brain development. Although advances in neuroimaging tools and techniques are apparent, (f)MRI in young pediatric populations remains relatively infrequent 2. Practical as well as technical challenges when imaging children present clinicians and research teams with a unique set of problems 2,3. To name just a few, the child participants are challenged by a need for motivation, alertness and cooperation. Anxiety may be an additional factor to be addressed. Researchers or clinicians need to consider time constraints, movement restriction, scanner background noise and unfamiliarity with the MR scanner environment 2,4-10. A progressive use of functional and structural neuroimaging in younger age groups, however, could further add to our understanding of brain development. As an example, several research groups are currently working towards early detection of developmental disorders, potentially even before children present associated behavioral characteristics, e.g. 11. Various strategies and techniques have been reported as a means to ensure the comfort and cooperation of young children during neuroimaging sessions. Play therapy 12, behavioral approaches 13-18, simulation 19, the use of mock scanner areas 20,21, basic relaxation 22 and a combination of these techniques 23 have all been shown to improve participants' compliance and thus MRI data quality. Even more importantly, these strategies have been shown to increase the comfort of families and children involved 12. 
One of the main advantages of such techniques for clinical practice is the possibility of avoiding sedation or general anesthesia (GA) as a way to manage children's compliance during MR imaging sessions 19,20. In the current video report, we present a pediatric neuroimaging protocol with guidelines and procedures that have proven successful to date in young children.
Neuroscience, Issue 29, fMRI, imaging, development, children, pediatric neuroimaging, cognitive development, magnetic resonance imaging, pediatric imaging protocol, patient preparation, mock scanner
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.
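The matching pipeline itself is not described in detail. One standard building block for ranking related documents is bag-of-words cosine similarity; the sketch below is purely illustrative of that general idea and is not JoVE's actual algorithm:

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between the word-count vectors of two texts.
    1.0 means identical word distributions; 0.0 means no shared words.
    A toy stand-in for the kind of scoring used to rank related
    abstracts and videos (real systems add TF-IDF weighting, stemming,
    and stop-word removal)."""
    a = Counter(doc_a.lower().split())
    b = Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    norm = norm_a * norm_b
    return dot / norm if norm else 0.0
```

Low scores from a function like this correspond to the "only a slight relation" matches described above: when no video shares much vocabulary with an abstract, the best available match is still a weak one.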