

Design and Use of an Apparatus for Presenting Graspable Objects in 3D Workspace

Published: August 8, 2019 doi: 10.3791/59932

Summary

Presented here is a protocol to build an automatic apparatus that guides a monkey to perform a flexible reach-to-grasp task. The apparatus combines a 3D translational device and a turning table to present multiple objects at arbitrary positions in 3D space.

Abstract

Reaching and grasping are highly coupled movements, and their underlying neural dynamics have been widely studied in the last decade. To distinguish the encoding of reaching and grasping, it is essential to present different object identities independently of their positions. Presented here is the design of an automatic apparatus assembled from a turning table and a three-dimensional (3D) translational device to achieve this goal. The turning table switches between different objects corresponding to different grip types, while the 3D translational device transports the turning table in 3D space. Both are driven independently by motors so that the target position and object can be combined arbitrarily. Meanwhile, the wrist trajectory and grip types are recorded via a motion capture system and touch sensors, respectively. Furthermore, representative results from a monkey successfully trained to use this system are described. It is expected that this apparatus will help researchers study the kinematics, neural principles, and brain-machine interfaces related to upper limb function.

Introduction

Various apparatuses have been developed to study the neural principles underlying reaching and grasping movements in non-human primates. In reaching tasks, touch screens1,2, screen cursors controlled by a joystick3,4,5,6,7, and virtual reality technology8,9,10 have all been employed to present targets in 2D and 3D. To introduce different grip types, differently shaped objects fixed in one position or rotating around an axis have been widely used in grasping tasks11,12,13. An alternative is to use visual cues to instruct subjects to grasp the same object with different grip types14,15,16,17. More recently, reaching and grasping movements have been studied together (i.e., subjects reach to multiple positions and grasp with different grip types in a single experimental session)18,19,20,21,22,23,24,25,26,27,28,29. Early experiments presented objects manually, which inevitably led to low temporal and spatial precision20,21. To improve experimental precision and save manpower, automatic presentation devices controlled by programs have been widely used. To vary the target position and grip type, experimenters have exposed multiple objects simultaneously; however, the relative (or absolute) positions of the targets and the grip types are then bound together, which causes rigid firing patterns through long-term training22,27,28. Moreover, objects are usually presented in a 2D plane, which limits the diversity of reaching movements and neural activity19,25,26. Recently, virtual reality24 and robotic arms23,29 have been introduced to present objects in 3D space.

Presented here are detailed protocols for building and using an automated apparatus30 that can achieve any combination of multiple target positions and grip types in 3D space. We designed a turning table to switch objects and a 3D translational device to transport the turning table in 3D space. Both the turning table and the translational device are driven by independent motors. Meanwhile, the 3D trajectory of the subject's wrist and neural signals are recorded simultaneously throughout the experiment. The apparatus provides a valuable platform for the study of upper limb function in the rhesus monkey.


Protocol

All behavioral and surgical procedures conformed to the Guide for the Care and Use of Laboratory Animals (China Ministry of Health) and were approved by the Animal Care Committee at Zhejiang University, China.

1. Assembling the 3D translational device

  1. Build a frame of size 920 mm x 690 mm x 530 mm with aluminum construction rails (cross section: 40 mm x 40 mm).
  2. Secure four pedestals to the two ends of the Y-rails with screws (M4) (Figure 1B).
  3. Fix two Y-rails onto the top surface of the frame in parallel by securing the four pedestals to the four corners of the top surface with screws (M6) (Figure 1B).
  4. Connect two Y-rails with a connecting shaft and two diaphragm couplings. Tighten the lock screws of couplings to synchronize the shafts of two rails (Figure 1B).
  5. Put six nuts (M4) into the back grooves of the Z-rail. Attach one side of the right triangle frame to the back of the Z-rail with screws.
  6. Pull the triangle frame to the end that is distal to the shaft and tighten the screws. Attach the other right triangle frame to the other Z-rail in the same way (Figure 1C).
  7. Secure the other right-angled sides of two triangle frames to the sliders of two Y-rails with screws (M6) (Figure 1C).
  8. Connect two Z-rails with a connecting shaft and diaphragm couplings and tighten the lock screws of coupling (Figure 1C).
  9. Attach the two T-shaped connecting boards to the back of the X-rail with nuts and screws (M4). Then pull the two T-shaped boards to the two ends of X-rail and tighten the screws (Figure 1D).
  10. Secure the two T-shaped connecting boards onto the sliders of two Z-rails with screws (M6), respectively (Figure 1D).
  11. Insert the stepping motor into the shaft hole of the gear reducer and screw their flanges together (Figure 1E).
  12. Secure the connecting ring to the shaft end of the active X-rail with screws (M4).
  13. Insert the shaft of the X-rail into the coupling and fix the gear reducer to the connecting ring with screws (M4). Tighten the lock screws of the coupling (Figure 1E).
  14. Fix the other two stepping motors and gear reducers to the active Y-rail and Z-rail using the methods described in steps 1.11–1.13.
  15. Insert the power and control cables of the three stepping motors into the power and control ports of their drivers, respectively, and secure the cables with screws on the driver side.
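The drivers wired in step 1.15 accept pulse/direction signals, so moving a slider along a rail reduces to emitting the right number of pulses at the right direction level. The sketch below shows that conversion; the step angle, microstepping setting, and screw lead are illustrative assumptions (only the 1:5 gear ratio appears in the Table of Materials).

```python
# Sketch: convert a desired rail displacement into step pulses for the
# pulse/direction drivers wired in step 1.15. STEPS_PER_REV, MICROSTEP, and
# SCREW_LEAD_MM are assumed values, not taken from the paper.

STEPS_PER_REV = 200        # assumed 1.8 deg/step motor
MICROSTEP = 8              # assumed driver microstepping setting
GEAR_RATIO = 5             # 1:5 planetary gearhead (Table of Materials)
SCREW_LEAD_MM = 10.0       # assumed travel per output-shaft revolution

def pulses_for_travel(distance_mm: float) -> int:
    """Number of step pulses needed to move the slider |distance_mm|."""
    pulses_per_mm = STEPS_PER_REV * MICROSTEP * GEAR_RATIO / SCREW_LEAD_MM
    return round(abs(distance_mm) * pulses_per_mm)

def direction_level(distance_mm: float) -> bool:
    """Logic level for the driver's direction pin (True = positive travel)."""
    return distance_mm >= 0
```

With these assumed constants, one output revolution is 8,000 pulses, so a 90 mm move on an axis emits 72,000 pulses on that driver's counter line.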

2. Assembling the turning table

  1. Download the .DWG design files from the Supplemental Files of this paper. Prepare the objects, metal shaft, locating bar, rotator, and case by 3D printing or mechanical processing.
  2. Put the touch sensors into the groove of the object body and stick them onto the predefined touch areas with double sided tape (Figure 2B).
    NOTE: Each object consists of four subcomponents: a backboard, object body with groove inside, cover board, and touch sensors.
  3. Pass the wires through the hole of the object backboard and secure the cover board onto the object body with screws (Figure 2B).
  4. Pass the wires of the touch sensors through the holes on the sides of the rotator and fix the objects onto the rotator with screws (Figure 2C).
  5. Solder the wire ends of touch sensors to the rotating wire ends of the electric slip ring and wrap the joints with electrical tape (Figure 2D).
  6. Secure the case to the slider of the X-rail with screws. Place the bearing in the bottom hole of the case and secure the locating bar to the top surface of the case with screws (Figure 2E).
  7. Place the rotator into the case from the side, aligning the axes of the rotator, bearing, and case. Pass the wires of the electric slip ring through the top hole of the case (Figure 2F).
  8. Insert the metal shaft into the bearing from the top hole of the case and fit the shaft key to the keyway of the rotator (Figure 2G).
  9. Set the electric slip ring around the metal shaft. Place the end of the locating bar into the notch of the electric slip ring to prevent the outer ring from rotating (Figure 2G).
  10. Insert the shaft of the stepping motor into the hole of the metal shaft and secure the motor on the top of the case with screws (Figure 2H).
  11. Insert the power and control cables of the motor into the power and control ports of its driver and secure them with screws.
  12. Stick a tricolor LED (RGB) onto the front side of the case with tape and fix the right side board onto the case.
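Because the turning table is driven by the same kind of pulse/direction stepper driver, presenting object i is a matter of rotating the rotator by a multiple of the inter-object angle. A minimal sketch follows, assuming six evenly spaced objects (60° apart) and illustrative motor-resolution constants.

```python
# Sketch: signed pulse count to rotate the turning table from the currently
# presented object to a target object, taking the shorter direction.
# STEPS_PER_REV and MICROSTEP are assumptions; N_OBJECTS = 6 matches the
# 1-6 object indices described in the representative results.

STEPS_PER_REV = 200
MICROSTEP = 8
N_OBJECTS = 6

def rotation_pulses(current: int, target: int) -> int:
    """Signed pulses from object `current` to `target` (indices 0..5).

    Positive means one driver direction level, negative the other.
    Pulses per object slot (1600/6) is not an integer, so the result is
    rounded; a real controller would track the residual error.
    """
    delta = (target - current) % N_OBJECTS
    if delta > N_OBJECTS / 2:          # go the short way around
        delta -= N_OBJECTS
    return round(delta * STEPS_PER_REV * MICROSTEP / N_OBJECTS)
```

For example, switching from object 0 to the diametrically opposite object 3 is half a revolution, i.e., 800 pulses with these constants.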

3. Setup of the control system

  1. Insert the direction and pulse control wires of the four motor drivers into the digital I/O ports (pins 81, 83, 85, 87) and digital counter ports (pins 89, 91, 93, 95) of the data acquisition (DAQ) board, respectively. Secure the wires with screws.
  2. Insert the control wires of the LED (green used for the “go” cue, blue used for the “error” cue, and red representing idle) into the digital I/O ports (pins 65 and 66) of the DAQ board and secure them with screws.
  3. Insert the output wires of the touch sensors and switch button into the digital I/O ports (pins 67–77) of the DAQ board and secure the wires with screws.
  4. Insert the start-stop and direction control wires of the peristaltic pump into the digital I/O pins 1 and 80, respectively. Insert the flow velocity control wire into the analog I/O port AO2. Secure the wires with screws.
  5. Set up a motion capture system as described by the manufacturer to record the hand trajectory in 3D space.
    NOTE: A commercial motion capture system (see Table of Materials) was used, which consists of eight cameras, a power hub, an Ethernet switch, and supporting software (e.g., Cortex). Please refer to the manual for more details about setting up the system.
  6. Set up a neural signal acquisition system as described by the manufacturer to record electrophysiological signals from the subject.
    NOTE: A commercial data acquisition system (Table of Materials) was used, which consists of a neural signal processor (NSP), front-end amplifier (FEA), amplifier power supply (ASP), head stages, and its supporting software (e.g., Central). Refer to the manual for more details about the setup of the system.
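The wiring of steps 3.1-3.4 can be collected into one named pin map so that control code refers to signals by role rather than raw pin numbers. The pin numbers below are the ones given in the protocol; the dictionary layout itself is a suggested convention, not part of the published software.

```python
# Sketch: DAQ pin assignments from steps 3.1-3.4, gathered in one place.
# Pin numbers come from the protocol; the naming scheme is an assumption.

PIN_MAP = {
    "motor_direction": [81, 83, 85, 87],      # digital I/O, one per driver
    "motor_pulse":     [89, 91, 93, 95],      # digital counters, one per driver
    "led_cue":         [65, 66],              # green "go" / blue "error" LED
    "touch_and_button": list(range(67, 78)),  # touch sensors + switch button
    "pump_start_stop": 1,                     # peristaltic pump on/off
    "pump_direction":  80,                    # pump direction
    "pump_flow":       "AO2",                 # analog output, flow velocity
}

def pin_of(signal: str):
    """Look up the DAQ pin(s) assigned to a named signal."""
    return PIN_MAP[signal]
```

Keeping the map in one structure makes rewiring a single-line change and lets the paradigm software validate at startup that every expected signal has a pin.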

4. Preparation of the experimental session

  1. Initialize the 3D translational device and the turning table. Specifically, pull the sliders of all linear slide rails to the starting point (lower left corner) and turn the first object (i.e., the vertically placed handle) of the turning table to face the front side of the turning table.
  2. Power on the experimental devices, including motion capture system, neural signal acquisition, DAQ board, peristaltic pump, and four motors.
  3. Set up the paradigm software (Figure 3A).
    1. Double click Paradigm.exe to open the paradigm software (available on request).
    2. Define the number of the reaching positions and their 3D coordinates (x, y, and z, in millimeters) relative to initial positions (step 4.2).
    3. Write the coordinates of all positions as a matrix in a .txt document. Make sure that each row includes the x-, y-, and z-coordinates of one position separated by spaces. Save the .txt document.
    4. Click Open File in the Pool panel of paradigm software and select the .txt document saved before to load the presentation positions into the paradigm software.
      NOTE: In this study, eight target positions were set according to animal’s reaching range, which are located at vertices of a cuboid workspace9,10 (90 mm x 60 mm x 90 mm).
    5. Check the objects to be presented in the experiment in the Object Pool of paradigm software.
    6. Adjust experimental parameters in the Time Parameters panel of paradigm software. Set Baseline = 400 ms, Motor Run = 2,000 ms, Planning = 1,000 ms, max Reaction Time = 500 ms, max Reach Time = 1,000 ms, min Hold Time = 500 ms, Reward = 60 ms, and Error Cue = 1,000 ms.
  4. Seat the rhesus monkey (with a micro-electrode array implanted in motor cortex) on the monkey chair by inserting its collar into the groove of chair and fixing its head.
  5. Fix the monkey chair to the aluminum construction frame. Keep the head 250 mm away from the front side of the cuboid and keep the eyes 50 mm above the top side of the cuboid workspace (horizontal visual angle: 20°; vertical visual angle: 18°).
  6. Construct a tracking template for the motion capture system.
    1. Attach three reflective markers at the end of the arm (close to wrist) with double-sided tape. Make sure that the three markers form a scalene triangle.
    2. Click the Run button of the paradigm software to start the task.
    3. Click the Record button on the Motion Capture panel of Cortex software to record trajectories of three markers for 60 s when the monkey is doing the task. Click the Stop button to suspend the experiment.
    4. Build a tracking template of three markers on the Cortex software using the recorded trajectories and save the template.
      NOTE: Please refer to the manual of Cortex to get more details about how to build a model.
  7. Connect the GND ports of FEA and micro-electrode array implanted in the monkey’s motor cortex with a wire and pinch cocks. Then insert the head stages into the connector of the micro-electrode array31.
  8. Open the Central software of neural signal acquisition system and set recording parameters including storage path, line noise cancellation, spike filter, spike threshold, etc.
    NOTE: Please refer to the manual of neural signal acquisition system for more details of software setting.
  9. Open the synchronization software (Figure 3B, available on request). Click the three Connect buttons in the Cerebus, Motion Capture, and Paradigm panels to connect the synchronization software with the neural signal acquisition system, motion capture system and paradigm software, respectively.
  10. Click the Run button of the paradigm software to continue the experiment.
  11. Click the Record button on the File Storage panel of Central software to start recording the neural signals.
  12. Check the saved tracking template and click the Record button on the Motion Capture panel of Cortex software to start recording the trajectory of monkey’s wrist.
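The position file loaded in step 4.3.4 can be generated programmatically. The sketch below produces the eight vertices of the 90 mm x 60 mm x 90 mm cuboid workspace from step 4.3.4's NOTE, in the one-row-per-position, space-separated format that step 4.3.3 specifies; the origin offset is an assumption (coordinates are relative to the initial position).

```python
# Sketch: generate the target-position .txt file described in steps
# 4.3.3-4.3.4. The cuboid dimensions are from the protocol; the origin
# offset is an illustrative assumption.

from itertools import product

def cuboid_vertices(dx=90, dy=60, dz=90, origin=(0, 0, 0)):
    """Eight vertices of a dx x dy x dz cuboid, offset by `origin` (mm)."""
    ox, oy, oz = origin
    return [(ox + x, oy + y, oz + z)
            for x, y, z in product((0, dx), (0, dy), (0, dz))]

def write_position_file(path, vertices):
    """One row per position: x y z, separated by spaces (step 4.3.3)."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"{x} {y} {z}\n")
```

Generating the file rather than typing it by hand avoids transcription errors and makes it easy to rescale the workspace to another animal's reaching range.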


Representative Results

The complete workspace of the apparatus measures 600 mm, 300 mm, and 500 mm along the x-, y-, and z-axes, respectively. The maximum load of the 3D translational device is 25 kg, while the turning table (including the stepping motor) weighs 15 kg and can be transported at speeds of up to 500 mm/s. The kinematic precision of the 3D translational device is less than 0.1 mm, and the noise of the apparatus is less than 60 dB.

To demonstrate the utility of the system, a monkey previously trained in a reaching task was trained to perform a delayed reach-to-grasp task with the system30. Using the procedure presented above, the paradigm software automatically runs the behavioral experiment trial by trial (~500 trials per session). Specifically, the monkey must start a trial (Figure 4) by pushing the button and holding it until the “go” cue. As a first step (the “motor run” phase), the 3D translational device transports the turning table to a pseudorandomly chosen position, and at the same time, the turning table rotates to present a pseudorandomly chosen object. This motor run phase lasts 2 s, and all four motors (three in the 3D translational device and one in the turning table) start and stop at the same time. The motor run phase is followed by a “planning” phase (1 s), during which the monkey plans the upcoming movement. Once the green LED (“go” cue) turns on, the monkey should release the button, reach into the turning table, and grasp the object with the corresponding grip type as quickly as possible (maximum reaction time = 0.5 s; maximum movement time = 1 s). The monkey receives a water reward after a minimum hold time of 0.5 s. A trial is aborted, and the blue LED turns on, if the monkey releases the button before the “go” cue or does not release the button within the maximum reaction time after the cue.
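The trial logic described above can be reduced to a pure function that classifies a trial from its event times. The timing constants are the values set in step 4.3.6; the event-time interface is an assumption for illustration, not the authors' published code.

```python
# Sketch: outcome of one trial given its event timestamps (ms). Constants
# are from step 4.3.6; the function interface is an illustrative assumption.

MAX_REACTION_MS = 500   # go cue -> button release
MAX_REACH_MS = 1000     # button release -> object touch
MIN_HOLD_MS = 500       # hold duration required for reward

def classify_trial(go_cue, button_off, touch_on, touch_off):
    """Return 'reward' or an abort reason for one trial."""
    if button_off < go_cue:
        return "abort: released before go cue"
    if button_off - go_cue > MAX_REACTION_MS:
        return "abort: reaction too slow"
    if touch_on - button_off > MAX_REACH_MS:
        return "abort: reach too slow"
    if touch_off - touch_on < MIN_HOLD_MS:
        return "abort: hold too short"
    return "reward"
```

For example, with the go cue at 1,000 ms, a release at 1,200 ms, touch at 1,800 ms, and release of the object at 2,500 ms, every window is satisfied and the trial is rewarded.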

The synchronization software receives event labels (e.g., Button On, Go Cue, Button Off; Figure 4) from the paradigm software and a “start-record” label from the motion capture system, then sends them to the neural signal acquisition system in real time during the experiment. All labels are saved with the neural signals, while the wrist trajectory is stored in a separate file. To align the neural signals and trajectory in time, the timestamp of the “start-record” label was taken as that of the first trajectory sample, and incremental timestamps were assigned to the remaining samples according to the frame rate of the motion capture system. Figure 4 shows the time-aligned event labels, wrist trajectory, and example neuronal activity.
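The alignment rule above is simple enough to state as a few lines of code: stamp the first motion-capture sample with the "start-record" label's timestamp and space subsequent samples by the frame interval. The frame rate below is an assumed example value.

```python
# Sketch of the alignment described above: the "start-record" label gives
# the neural-clock timestamp of the first trajectory sample; the rest are
# spaced by the frame interval. frame_rate_hz is an assumed example value.

def trajectory_timestamps(start_record_ts, n_samples, frame_rate_hz=100.0):
    """Timestamps (s, neural-signal clock) for each trajectory sample."""
    dt = 1.0 / frame_rate_hz
    return [start_record_ts + i * dt for i in range(n_samples)]
```

Once every trajectory sample carries a neural-clock timestamp, spike trains and wrist position can be plotted on the same time axis, as in Figure 4.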

The wrist trajectories during the reaching phase of all successful trials were extracted and divided into eight groups based on target position (Figure 5). For each group of trajectories, the average values and 95% confidence intervals at each time point were calculated. The trajectory plot in Figure 5 shows that the endpoints of the eight groups of trajectories form a cuboid, which has the same size as the predefined cuboid workspace (step 4.3.4). The peristimulus time histogram (PSTH) for each single neuron was plotted with respect to reaching position and object, respectively. The spike trains in successful trials were binned with a sliding window of 50 ms and smoothed with a Gaussian kernel (σ = 100 ms). The average values and 95% confidence intervals for each group were calculated by the bootstrap method (n = 2,000). Figure 6 shows the PSTHs of two example neurons tuned to both reaching position and object. The neuron in Figure 6A shows significant selectivity during the reaching and holding phases, while the neuron in Figure 6B starts to tune to positions and objects from the middle of the “motor run” phase.
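The PSTH pipeline above (50 ms bins, Gaussian smoothing with σ = 100 ms, bootstrap confidence intervals with n = 2,000) can be sketched in plain Python. This is a stand-in for the authors' unpublished analysis code, using only the parameters stated in the text.

```python
# Sketch of the PSTH analysis: bin spikes into 50 ms bins, smooth with a
# Gaussian kernel (sigma = 100 ms), and bootstrap a 95% CI across trials.
# Parameters are from the text; the implementation is an illustration.

import math
import random

BIN_MS, SIGMA_MS = 50, 100

def binned_rate(spike_times_ms, t_start, t_stop):
    """Firing rate (spikes/s) in consecutive 50 ms bins."""
    n_bins = int((t_stop - t_start) // BIN_MS)
    rates = [0.0] * n_bins
    for t in spike_times_ms:
        i = int((t - t_start) // BIN_MS)
        if 0 <= i < n_bins:
            rates[i] += 1000.0 / BIN_MS  # one spike adds 20 spikes/s per bin
    return rates

def gaussian_smooth(rates):
    """Smooth a binned rate with a Gaussian kernel, clamping at the edges."""
    half = int(3 * SIGMA_MS / BIN_MS)
    kernel = [math.exp(-0.5 * (k * BIN_MS / SIGMA_MS) ** 2)
              for k in range(-half, half + 1)]
    total = sum(kernel)
    kernel = [w / total for w in kernel]
    out = []
    for i in range(len(rates)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - half, 0), len(rates) - 1)  # edge padding
            acc += w * rates[idx]
        out.append(acc)
    return out

def bootstrap_mean_ci(trial_rates, n_boot=2000, alpha=0.05):
    """Mean and bootstrap 95% CI of one bin's rate across trials."""
    means = []
    for _ in range(n_boot):
        sample = [random.choice(trial_rates) for _ in trial_rates]
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot)]
    return sum(trial_rates) / len(trial_rates), (lo, hi)
```

Applying `binned_rate` per trial, smoothing, and then running `bootstrap_mean_ci` bin by bin yields the solid line and shaded interval plotted for each trial group in Figure 6.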

Figure 1: Step-by-step instructions for the 3D translational device assembly.
I-I X-rail, I-III Y-rail, I-II Z-rail, II connecting shafts, III stepping motors, IV planetary gear reducers, V connecting rings, VI diaphragm couplings, VII pedestals, VIII T-shaped connecting boards, IX right triangle frames. (A) The materials for the translational device assembly. (B) Building the frame and installing the Y-rails (steps 1.1–1.4). (C) Fixing two Z-rails onto the Y-rails (steps 1.5–1.7). (D) Fixing the X-rail onto the Z-rails (steps 1.8 and 1.9). (E) Installing the stepping motor and gear reducer (steps 1.10 and 1.11). (F) The completely assembled 3D translational device (steps 1.12 and 1.13).

Figure 2: Step-by-step instructions for the turning table assembly.
(A) Materials for the turning table assembly. (B) Assembling the objects and installing the touch sensors (step 2.2). (C) Securing the objects onto the rotator (step 2.3). (D) Connecting the sensor wires to the electric slip ring (step 2.4). (E) Installing the case onto the 3D translational device and placing the locating bar and bearing (step 2.5). (F) Putting the rotator into the case (step 2.6). (G) Installing the shaft and electric slip ring (steps 2.7 and 2.8). (H) Installing the stepping motor (step 2.9).

Figure 3: The graphical user interface of the paradigm and synchronization software.
(A) A custom-made LabVIEW program to control the behavioral task. (B) A custom-made C++ program to communicate with the paradigm software, neural signal acquisition system, and motion capture system.

Figure 4: Time-aligned data in a successful trial.
All event timings, wrist trajectories (X, Y, and Z), and neuronal activity (example units 1–3) were recorded simultaneously. The short black lines in the top row are the event labels. “Button On” indicates the time when the monkey pressed the button down; “Position Index” is a number from 1–8 indicating which reaching position is presented; “Object Index” is a number from 1–6 indicating which object is presented; “Motor On” indicates the start time of the four motors; “Motor Off” indicates their stop time; “Go Cue” indicates the moment when the green LED turns on; “Button Off” indicates the moment when the monkey releases the button; “Touch On” indicates the moment when the touch sensors in the object detect the hand; “Reward On” indicates the moment when the pump begins to deliver the water reward and represents the end of a trial. The “Button On”, “Position Index”, and “Object Index” labels are saved in quick succession at the beginning of a trial. Rows 2–4 (labeled X, Y, and Z) plot the trajectory of the wrist in 3D recorded by the motion capture system. Rows 5–7 (labeled Unit 1, 2, and 3) show the spike trains of three example neurons recorded by the neural signal acquisition system. The bottom row shows the timeline of a complete trial, which is divided into six phases based on the event labels.

Figure 5: Wrist trajectories recorded by the motion capture system.
All successful trials are divided into eight groups according to target position (labeled with letters A to H). Each solid line is the average trajectory of one group, and the shading represents the variance of the trajectories. This figure has been modified from a previous study30.

Figure 6: PSTHs of two example neurons (A and B).
The vertical dashed lines mark, in order, Motor On, Motor Off, Go Cue On, Button Off, and Touch On. Each solid line (in different colors) in a PSTH represents the average firing rate across trials toward one target position, and the shading represents the 95% confidence interval (bootstrap; 2,000 times). For both A and B, the upper and lower panels show the PSTHs with respect to different positions and objects, respectively.

Supplementary Files: the .DWG design files referenced in step 2.1.


Discussion

The behavioral apparatus described here enables a trial-wise combination of different reaching and grasping movements (i.e., the monkey can grasp differently shaped objects at arbitrary 3D locations in each trial). This is accomplished by combining a custom turning table that switches between different objects with a linear translational device that transports the turning table to multiple positions in 3D space. In addition, the neural signals from the monkey, the wrist trajectory, and the hand shapes can be recorded and synchronized for neurophysiological research.

The apparatus, which includes a separately driven 3D translational device and turning table, presents multiple target positions and objects independently. That is, all predefined positions and objects can be combined arbitrarily, which is important in studying multivariable encoding14,25,28. In contrast, if the object to be grasped is linked to a position (for instance, if the object is fixed on a panel), it is difficult to determine whether a single neuron is tuned to the object or the position18,27,32. Moreover, the apparatus presents objects in 3D space instead of on a 2D plane19,27, which activates more neurons with spatial modulation.

Bolted connections are widely used between subcomponents of the apparatus, which provides high expansibility and flexibility. By designing the shape of the objects and the placement of the touch sensors, a large number of grip types can be precisely induced and identified. The 3D translational device can move any subcomponent weighing less than 25 kg in 3D space and is suitable for most tasks involving spatial displacement. Moreover, although the apparatus was designed to train rhesus monkeys (Macaca mulatta), the adjustable range of the 3D translational device makes it suitable for other primates with similar or larger body sizes, and even humans.

One major concern with a behavioral task combining reaching and grasping movements is whether the hand posture differs across reaching positions even when the monkey grasps the object with the same grip type. Although reaching and grasping are generally regarded as two different movements, their effectors (arm and hand) are connected. Thus, it is inevitable that the reaching movement interacts with grasping. According to the observations in this experiment, the monkey's wrist angle changed slightly when grasping the same object at different positions, but no significant differences in hand posture were observed.

One potential limitation of the apparatus is that the experimental room is not completely dark because of infrared light from the motion capture system. The monkey may see the target object throughout the whole trial, which leads to undesired tuning before the planning period. To control visual access to the object, switchable glass controlled by the paradigm software can be placed between the head and the apparatus. The switchable glass would be opaque during the baseline and planning phases and turn transparent after the “go” cue. In this way, the visual information is precisely controlled. Similarly, white noise can be employed to mask the motor running sound, which prevents the monkey from identifying the object's location by the sound of the motors. Another limitation of the apparatus is that the motion of the fingers cannot be tracked. This is because the monkey must reach its hand into the turning table to grasp the object, which blocks the cameras from capturing markers on the hand.


Disclosures

The authors have nothing to disclose.

Acknowledgments

We thank Mr. Shijiang Shen for his advice on apparatus design and Ms. Guihua Wang for her assistance with animal care and training. This work was supported by National Key Research and Development Program of China (2017YFC1308501), the National Natural Science Foundation of China (31627802), the Public Projects of Zhejiang Province (2016C33059), and the Fundamental Research Funds for the Central Universities.

Materials

Name Company Catalog Number Comments
Active X-rail CCM Automation technology Inc., China W50-25 Effective travel, 600 mm; Load, 25 kg
Active Y-rail CCM Automation technology Inc., China W60-35 Effective travel, 300 mm, Load 35 kg
Active Z-rail CCM Automation technology Inc., China W50-25 Effective travel, 500 mm; Load 25 kg
Bearing Taobao.com 6004-2RSH Acrylic
Case Custom mechanical processing TT-C Acrylic
Connecting ring CCM Automation technology Inc., China 57/60-W50
Connecting shaft CCM Automation technology Inc., China D12-700 Diam., 12 mm;Length, 700 mm
Diaphragm coupling CCM Automation technology Inc., China CCM 12-12 Inner diam., 12-12mm
Diaphragm coupling CCM Automation technology Inc., China CCM 12-14 Inner diam., 14-12mm
Electric slip ring Semring Inc., China SNH020a-12 Acrylic
Locating bar Custom mechanical processing TT-L Acrylic
Motion capture system Motion Analysis Corp. US Eagle-2.36
Neural signal acquisition system Blackrock Microsystems Corp. US Cerebus
NI DAQ device National Instruments, US USB-6341
Object Custom mechanical processing TT-O Acrylic
Passive Y-rail CCM Automation technology Inc., China W60-35 Effective travel, 300 mm; Load 35 kg
Passive Z-rail CCM Automation technology Inc., China W50-25 Effective travel, 500 mm; Load 25 kg
Pedestal CCM Automation technology Inc., China 80-W60
Peristaltic pump Longer Inc., China BT100-1L
Planetary gearhead CCM Automation technology Inc., China PLF60-5 Flange, 60×60 mm; Reduction ratio, 1:5
Right triangle frame CCM Automation technology Inc., China 290-300
Rotator Custom mechanical processing TT-R Acrylic
Servo motor Yifeng Inc., China 60ST-M01930 Flange, 60×60 mm; Torque, 1.91 N·m; for Y- and Z-rail
Servo motor Yifeng Inc., China 60ST-M01330 Flange, 60×60 mm; Torque, 1.27 N·m; for X-rail
Shaft Custom mechanical processing TT-S Acrylic
Stepping motor Taobao.com 86HBS120 Flange, 86×86 mm; Torque, 1.27 N·m; Driving turning table
Touch sensor Taobao.com CM-12X-5V
Tricolor LED Taobao.com CK017, RGB
T-shaped connecting board CCM Automation technology Inc., China 110-120


References

  1. Leone, F. T., Monaco, S., Henriques, D. Y., Toni, I., Medendorp, W. P. Flexible Reference Frames for Grasp Planning in Human Parietofrontal Cortex. eNeuro. 2 (3), (2015).
  2. Caminiti, R., et al. Early coding of reaching: frontal and parietal association connections of parieto-occipital cortex. European Journal of Neuroscience. 11 (9), 3339-3345 (1999).
  3. Georgopoulos, A. P., Schwartz, A. B., Kettner, R. E. Neuronal population coding of movement direction. Science. 233 (4771), 1416-1419 (1986).
  4. Fu, Q. G., Flament, D., Coltz, J. D., Ebner, T. J. Temporal encoding of movement kinematics in the discharge of primate primary motor and premotor neurons. Journal of Neurophysiology. 73 (2), 836-854 (1995).
  5. Moran, D. W., Schwartz, A. B. Motor cortical representation of speed and direction during reaching. Journal of Neurophysiology. 82 (5), 2676-2692 (1999).
  6. Carmena, J. M., et al. Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biology. 1 (2), E42 (2003).
  7. Li, H., et al. Prior Knowledge of Target Direction and Intended Movement Selection Improves Indirect Reaching Movement Decoding. Behavioral Neurology. , 2182843 (2017).
  8. Reina, G. A., Moran, D. W., Schwartz, A. B. On the relationship between joint angular velocity and motor cortical discharge during reaching. Journal of Neurophysiology. 85 (6), 2576-2589 (2001).
  9. Taylor, D. M., Tillery, S. I., Schwartz, A. B. Direct cortical control of 3D neuroprosthetic devices. Science. 296 (5574), 1829-1832 (2002).
  10. Wang, W., Chan, S. S., Heldman, D. A., Moran, D. W. Motor cortical representation of hand translation and rotation during reaching. Journal of Neuroscience. 30 (3), 958-962 (2010).
  11. Murata, A., Gallese, V., Luppino, G., Kaseda, M., Sakata, H. Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP. Journal of Neurophysiology. 83 (5), 2580-2601 (2000).
  12. Raos, V., Umiltá, M. A., Murata, A., Fogassi, L., Gallese, V. Functional Properties of Grasping-Related Neurons in the Ventral Premotor Area F5 of the Macaque Monkey. Journal of Neurophysiology. 95 (2), 709 (2006).
  13. Schaffelhofer, S., Scherberger, H. Object vision to hand action in macaque parietal, premotor, and motor cortices. eLife. 5, (2016).
  14. Baumann, M. A., Fluet, M. C., Scherberger, H. Context-specific grasp movement representation in the macaque anterior intraparietal area. Journal of Neuroscience. 29 (20), 6436-6448 (2009).
  15. Riehle, A., Wirtssohn, S., Grun, S., Brochier, T. Mapping the spatio-temporal structure of motor cortical LFP and spiking activities during reach-to-grasp movements. Frontiers in Neural Circuits. 7, 48 (2013).
  16. Michaels, J. A., Scherberger, H. Population coding of grasp and laterality-related information in the macaque fronto-parietal network. Scientific Reports. 8 (1), 1710 (2018).
  17. Fattori, P., et al. Hand orientation during reach-to-grasp movements modulates neuronal activity in the medial posterior parietal area V6A. Journal of Neuroscience. 29 (6), 1928-1936 (2009).
  18. Asher, I., Stark, E., Abeles, M., Prut, Y. Comparison of direction and object selectivity of local field potentials and single units in macaque posterior parietal cortex during prehension. Journal of Neurophysiology. 97 (5), 3684-3695 (2007).
  19. Stark, E., Asher, I., Abeles, M. Encoding of reach and grasp by single neurons in premotor cortex is independent of recording site. Journal of Neurophysiology. 97 (5), 3351-3364 (2007).
  20. Velliste, M., Perel, S., Spalding, M. C., Whitford, A. S., Schwartz, A. B. Cortical control of a prosthetic arm for self-feeding. Nature. 453 (7198), 1098-1101 (2008).
  21. Vargas-Irwin, C. E., et al. Decoding complete reach and grasp actions from local primary motor cortex populations. Journal of Neuroscience. 30 (29), 9659-9669 (2010).
  22. Mollazadeh, M., et al. Spatiotemporal variation of multiple neurophysiological signals in the primary motor cortex during dexterous reach-to-grasp movements. Journal of Neuroscience. 31 (43), 15531-15543 (2011).
  23. Saleh, M., Takahashi, K., Hatsopoulos, N. G. Encoding of coordinated reach and grasp trajectories in primary motor cortex. Journal of Neuroscience. 32 (4), 1220-1232 (2012).
  24. Collinger, J. L., et al. High-performance neuroprosthetic control by an individual with tetraplegia. The Lancet. 381 (9866), 557-564 (2013).
  25. Lehmann, S. J., Scherberger, H. Reach and gaze representations in macaque parietal and premotor grasp areas. Journal of Neuroscience. 33 (16), 7038-7049 (2013).
  26. Rouse, A. G., Schieber, M. H. Spatiotemporal distribution of location and object effects in reach-to-grasp kinematics. Journal of Neuroscience. 114 (6), 3268-3282 (2015).
  27. Rouse, A. G., Schieber, M. H. Spatiotemporal Distribution of Location and Object effects in Primary Motor Cortex Neurons during Reach-to-Grasp. Journal of Neuroscience. 36 (41), 10640-10653 (2016).
  28. Hao, Y., et al. Neural synergies for controlling reach and grasp movement in macaques. Neuroscience. 357, 372-383 (2017).
  29. Takahashi, K., et al. Encoding of Both Reaching and Grasping Kinematics in Dorsal and Ventral Premotor Cortices. Journal of Neuroscience. 37 (7), 1733-1746 (2017).
  30. Chen, J., et al. An automated behavioral apparatus to combine parameterized reaching and grasping movements in 3D space. Journal of Neuroscience Methods. 312, 139-147 (2019).
  31. Zhang, Q., et al. Development of an invasive brain-machine interface with a monkey model. Chinese Science Bulletin. 57 (16), 2036 (2012).
  32. Hao, Y., et al. Distinct neural patterns enable grasp types decoding in monkey dorsal premotor cortex. Journal of Neural Engineering. 11 (6), 066011 (2014).

Cite this Article


Xu, K., Chen, J., Sun, G., Hao, Y., Zhang, S., Ran, X., Chen, W., Zheng, X. Design and Use of an Apparatus for Presenting Graspable Objects in 3D Workspace. J. Vis. Exp. (150), e59932, doi:10.3791/59932 (2019).
