An Open-Source Virtual Reality System for the Measurement of Spatial Learning in Head-Restrained Mice
JoVE Journal
Neuroscience

08:59 min

March 03, 2023

Transcript

Automatically generated

This open-source virtual reality system is an important tool for studying spatial learning in the brain because it allows researchers to present a consistent set of spatial stimuli to a head-restrained mouse using a simple, modular electronic setup. The advantage of this system is that it is inexpensive, easy to set up, compact, and modular, which allows for building multiple behavioral setups for training and integration with existing head-restrained behavioral setups. This system is ideal for measuring spatial learning in head-restrained mice; however, it is equally able to deliver visual virtual reality environments for experiments in other species and preparations, including human psychophysics and neuroimaging.

Demonstrating this procedure will be Carla Diaz and Hannah Chung, research assistants in our laboratory. To begin, connect the wires between the rotary encoder component and the rotary ESP32. Rotary encoders generally have four wires: positive, GND, A, and B. Connect these via jumper wires to the ESP32's 3.3 volt, GND, 25, and 26 pins.
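The A and B wires form a quadrature pair whose Gray-code transitions encode both speed and direction of wheel rotation. The decoding itself runs in the rotary ESP32 firmware; the following is only an illustrative Python sketch of the underlying logic:

```python
# Quadrature decoding of rotary encoder A/B signals (illustrative sketch;
# the real decoding runs on the rotary ESP32 firmware, not in Python).

# Transition table: (previous AB state, current AB state) -> tick delta.
# Valid Gray-code transitions move +1 or -1; invalid jumps count as 0.
_TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(states):
    """Accumulate a signed tick count from a sequence of (A, B) samples."""
    position = 0
    prev = states[0][0] << 1 | states[0][1]
    for a, b in states[1:]:
        curr = a << 1 | b
        position += _TRANSITIONS.get((prev, curr), 0)
        prev = curr
    return position

# One full forward Gray-code cycle advances the count by 4 ticks.
forward = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
position = decode(forward)  # +4
```

Reversing the sample order yields the same magnitude with the opposite sign, which is how the system distinguishes forward from backward running on the wheel.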

Connect the serial RX/TX wires between the rotary ESP32 and the behavior ESP32. Make a simple two-wire connection between the rotary ESP32's Serial0 RX/TX and the Serial2 port of the behavior ESP32. Then connect the serial RX/TX wires between the rotary ESP32 and the single board computer, either via GPIO or a direct USB connection.

Make a two-wire connection between the single board computer's GPIO pins 14 and 15 (RX/TX) and the rotary ESP32's Serial2 TX/RX pins 17 and 16. Next, plug the rotary ESP32's USB into the single board computer's USB port to upload the initial rotary encoder code. Connect the 12 volt liquid solenoid valve to the ULN2803 IC output on the far left of the OMwSmall PCB, and connect the lick port to the ESP32 touch input.
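On the single-board-computer side, this serial link carries position updates from the rotary ESP32. A minimal Python sketch of parsing such messages follows; note that the "pos:&lt;ticks&gt;" line format is this sketch's invention for illustration, not the actual protocol used by the firmware:

```python
# Parsing hypothetical position updates arriving over the serial link from
# the rotary ESP32. The "pos:<ticks>" message format is an assumption made
# for illustration, not the firmware's real protocol.

def parse_position(line):
    """Return the tick count from a 'pos:<int>' serial line, or None."""
    line = line.strip()
    if line.startswith("pos:"):
        try:
            return int(line[4:])
        except ValueError:
            return None  # malformed payload
    return None  # unrelated message
```

In practice such a parser would be fed line by line from the serial port (e.g., with pySerial's `readline` on the single board computer).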

Plug the USB into the single board computer's USB port to upload new programs to the behavior ESP32 for different experimental paradigms and to capture behavior data using the included Processing sketch. Then plug the 12 volt DC wall adapter into the 2.1 millimeter barrel jack connector on the behavior ESP32 OMwSmall PCB to provide power for the reward solenoid valve. Plug the single board computer's second HDMI output into the projector's HDMI port.

This will carry the graphical software environment rendered by the single board computer's GPU to the projection screen. Open a terminal window on the single board computer and navigate to the HallPassVR folder. Run the indicated virtual reality (VR) graphical user interface (GUI) to open the GUI window.

Select and add four elements from the list box for each of the three patterns along the track, and then click on Generate. Select Floor and Ceiling Images from the dropdown menus and set the length of the track as two meters for this example code. Name this pattern, if desired.
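The track length set in the GUI determines how wheel rotation maps onto virtual position, with the position wrapping at the end of each lap. A hedged sketch of that mapping, where the encoder resolution and wheel diameter are assumed values for illustration and not figures from this article:

```python
# Mapping cumulative wheel rotation to position on the virtual linear track.
# TICKS_PER_REV and WHEEL_DIAMETER_M are hypothetical values; only the
# 2 meter track length comes from the example in the protocol.
import math

TICKS_PER_REV = 1024        # hypothetical encoder resolution
WHEEL_DIAMETER_M = 0.15     # hypothetical wheel diameter (meters)
TRACK_LENGTH_M = 2.0        # track length set in the GUI for this example

def ticks_to_position(ticks):
    """Return (position_on_track_m, completed_laps) for a cumulative tick count."""
    distance = ticks / TICKS_PER_REV * math.pi * WHEEL_DIAMETER_M
    laps, position = divmod(distance, TRACK_LENGTH_M)
    return position, int(laps)

# Five full wheel revolutions cover ~2.36 m: one lap plus ~0.36 m.
pos, laps = ticks_to_position(5 * TICKS_PER_REV)
```

The wrap-around (`divmod`) is what makes the linear track effectively circular, so the mouse re-enters the start of the hallway after each lap.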

Click the Start button and wait until the VR window starts before clicking elsewhere. The graphical software environment will appear on screen two. Run the Processing sketch to acquire and plot the behavioral movement data.

Open the indicated sketch in the Processing IDE. Change the animal variable to your mouse number and set the session minutes variable equal to the length of the behavioral session in minutes. Click the Run button in the Processing IDE.

Check the Processing plot window, which should show the current mouse position on the virtual linear track as the wheel rotates, along with the reward zones and running histograms of the licks, laps, and rewards, updated every 30 seconds. Advance the running wheel by hand to simulate the mouse running for testing, or use a test mouse for the initial setup. Click on the plot window and press the Q key on the keyboard to stop acquiring behavioral data.

A text file of the behavioral events and times, and an image of the final plot window in PNG format, are saved when the session minutes have elapsed or the user presses the Q key to quit. For random foraging with non-operant rewards, run the graphical software GUI program with a path of arbitrary visual elements. Then upload the behavior program with multiple non-operant rewards to the behavior ESP32 to condition the mouse to run and lick.

Gently place the mouse in the head fixation apparatus, adjust the lick spout to a location just anterior to the mouse's mouth, and position the running wheel at the center of the projection screen zone. Set the animal's name in the Processing sketch, and then press Run in the Processing IDE to start acquiring and plotting the behavioral data. Run the mouse in 20 to 30 minute sessions until the mouse runs for at least 20 laps per session and licks for rewards presented in random locations.

For random foraging with operant rewards on alternate laps, upload the behavior program with alternating operant equal to one and train the mouse until it licks for both non-operant and operant reward zones. For fully operant random foraging, upload the behavior program with four operant random reward zones and train the mouse until it licks for rewards consistently along the track. Next, for spatial learning, run the graphical software program with a path of a dark hallway with a single visual cue in the center.
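The behavior programs across these training stages differ mainly in their reward rule. The actual logic runs on the behavior ESP32; the following Python sketch of the decision is illustrative only, and the mode names and lap parity are this sketch's assumptions:

```python
# Simplified sketch of the reward rule across training stages. The real
# implementation runs in the behavior ESP32 firmware; mode names and the
# choice of lap parity for alternating laps are assumptions of this sketch.

def should_reward(in_zone, licked, lap, mode):
    """Decide whether to open the reward solenoid for the current zone visit.

    mode: "nonoperant"   - reward on zone entry regardless of licking
          "alternating"  - non-operant on even laps, operant on odd laps
          "operant"      - reward only if the mouse licks inside the zone
    """
    if not in_zone:
        return False
    if mode == "nonoperant":
        return True
    if mode == "alternating":
        return True if lap % 2 == 0 else licked
    return licked  # fully operant
```

Progressing from non-operant through alternating to fully operant rewards is what shapes the mouse from passive consumption to actively licking along the track.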

Then upload the behavior program with a single hidden reward zone to the behavior ESP32. Let the mouse run for 30 minute sessions with a single hidden reward zone and single visual cue VR hallway, and capture data during the session as described before. Download the txt data file from the Processing sketch folder and analyze the behavioral data to observe the emergence of spatially selective licking as an indicator of spatial learning.
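Spatially selective licking can be quantified by binning licks along the track and asking what fraction fall near the reward zone. A minimal Python sketch of such an analysis, assuming a simplified (position, event) record format rather than the Processing sketch's exact file layout:

```python
# Quantifying spatially selective licking from logged behavioral data.
# The (position_m, event_name) tuple format is a simplification for
# illustration, not the exact layout of the Processing sketch's text file.

TRACK_LENGTH_M = 2.0
N_BINS = 20

def lick_histogram(events):
    """Count licks per spatial bin from (position_m, event_name) records."""
    counts = [0] * N_BINS
    for pos, event in events:
        if event == "lick":
            idx = min(int(pos / TRACK_LENGTH_M * N_BINS), N_BINS - 1)
            counts[idx] += 1
    return counts

def reward_zone_selectivity(counts, reward_bin):
    """Fraction of all licks that fall in the reward-zone bin (0 if none)."""
    total = sum(counts)
    return counts[reward_bin] / total if total else 0.0

events = [(1.05, "lick"), (1.02, "lick"), (0.30, "lick"),
          (1.08, "lick"), (0.50, "reward")]
hist = lick_histogram(events)
```

Rising selectivity across sessions, with licks concentrating in and just before the reward-zone bin, is the behavioral signature of spatial learning described in the results below.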

Spatial learning using the graphical software environment is shown here. Through progressive stages of training on random foraging, mice learned to run on the wheel and lick consistently along the track at low levels before being switched to a single hidden reward location to show spatial learning. In this study, four of the seven mice learned the hidden reward task with a single visual cue in two to four sessions, as shown by their licking near the reward zone with increasing selectivity.

Furthermore, the mice exhibited both substantial within session and between session learning. Spatial licks per lap on day two showed increased licking prior to the reward zone and decreased licking elsewhere, indicating the development of spatially specific anticipatory licking. The main thing to remember when using the system is that mice will only perform well if reinforced and comfortable on the running wheel.

Therefore, water restrict the animals appropriately, handle them gently, and ensure their head-restraint position is optimal for running while viewing the projection screen. A neuroscience researcher can combine this open-source VR system with in vivo imaging or electrophysiology to investigate the neural circuits underlying spatial learning in the brain. We think that the simplicity of this open-source VR system will allow researchers to integrate it into diverse neural recording setups.

The precise control over spatial stimuli in the VR environment will allow researchers to examine the contributions of specific neural circuits to spatial learning.

Summary

Automatically generated

Here, we present a simplified open-source hardware and software setup for investigating mouse spatial learning using virtual reality (VR). This system displays a virtual linear track to a head-restrained mouse running on a wheel by utilizing a network of microcontrollers and a single-board computer running an easy-to-use Python graphical software package.
