
Behavior

Integrating Visual Psychophysical Assays within a Y-Maze to Isolate the Role that Visual Features Play in Navigational Decisions

Published: May 2, 2019 doi: 10.3791/59281

Summary

Here, we present a protocol to demonstrate a behavioral assay that quantifies how alternative visual features, such as motion cues, influence directional decisions in fish. Representative data are presented on the speed and accuracy with which Golden Shiners (Notemigonus crysoleucas) follow virtual fish movements.

Abstract

Collective animal behavior arises from individual motivations and social interactions that are critical for individual fitness. Fish have long inspired investigations into collective motion, specifically, their ability to integrate environmental and social information across ecological contexts. This demonstration illustrates techniques used for quantifying behavioral responses of fish, in this case, Golden Shiner (Notemigonus crysoleucas), to visual stimuli using computer visualization and digital image analysis. Recent advancements in computer visualization allow for empirical testing in the lab where visual features can be controlled and finely manipulated to isolate the mechanisms of social interactions. The purpose of this method is to isolate visual features that can influence the directional decisions of the individual, whether solitary or with groups. This protocol provides specifics on the physical Y-maze domain, recording equipment, settings and calibrations of the projector and animation, experimental steps and data analyses. These techniques demonstrate that computer animation can elicit biologically meaningful responses. Moreover, the techniques are easily adaptable to test alternative hypotheses, domains, and species for a broad range of experimental applications. The use of virtual stimuli allows for the reduction and replacement of live animals, consequently reducing laboratory overhead.

This demonstration tests the hypothesis that small relative differences in the movement speeds (2 body lengths per second) of virtual conspecifics will improve the speed and accuracy with which shiners follow the directional cues provided by the virtual silhouettes. Results show that shiners' directional decisions are significantly affected by increases in the speed of the visual cues, even in the presence of background noise (67% image coherency). In the absence of any motion cues, subjects chose their directions at random. The relationship between decision speed and cue speed was variable, and increases in cue speed had a modestly disproportionate influence on directional accuracy.

Introduction

Animals sense and interpret their habitat continuously to make informed decisions when interacting with others and navigating noisy surroundings. Individuals can enhance their situational awareness and decision making by integrating social information into their actions. Social information, however, largely stems from inference through unintended cues (i.e., sudden maneuvers to avoid a predator), which can be unreliable, rather than through direct signals that have evolved to communicate specific messages (e.g., the waggle dance in honey bees)1. Identifying how individuals rapidly assess the value of social cues, or any sensory information, can be a challenging task for investigators, particularly when individuals are traveling in groups. Vision plays an important role in governing social interactions2,3,4 and studies have inferred the interaction networks that may arise in fish schools based on each individual’s field of view5,6. Fish schools are dynamic systems, however, making it difficult to isolate individual responses to particular features, or neighbor behaviors, due to the inherent collinearities and confounding factors that arise from the interactions among group members. The purpose of this protocol is to complement current work by isolating how alternative visual features can influence the directional decisions of individuals traveling alone or within groups.

The benefit of the current protocol is to combine a manipulative experiment with computer visualization techniques to isolate the elementary visual features an individual may experience in nature. Specifically, the Y-maze (Figure 1) is used to collapse directional choice to a binary response and introduce computer animated images designed to mimic the swimming behaviors of virtual neighbors. These images are projected up from below the maze to mimic the silhouettes of conspecifics swimming beneath one or more subjects. The visual characteristics of these silhouettes, such as their morphology, speed, coherency, and swimming behavior are easily tailored to test alternative hypotheses7.

This paper demonstrates the utility of this approach by isolating how individuals of a model social fish species, the Golden Shiner (Notemigonus crysoleucas), respond to the relative speed of virtual neighbors. The protocol focus, here, is on whether the directional influence of virtual neighbors changes with their speed and, if so, quantifying the form of the observed relationship. In particular, the directional cue is generated by having a fixed proportion of the silhouettes act as leaders and move ballistically towards one arm or another. The remaining silhouettes act as distractors by moving about at random to provide background noise; the leader/distractor ratio thus captures the coherency of the directional cues and can be tuned accordingly. Distractor silhouettes remain confined to the decision area (“DA”, Figure 1A) by having the silhouettes reflect off of the boundary. Leader silhouettes, however, are allowed to leave the DA region and enter their designated arm before slowly fading away once they have traversed one-third of the arm's length. As leaders leave the DA, new leader silhouettes take their place and retrace their exact path to ensure that the leader/distractor ratio remains constant in the DA throughout the experiment.
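These update rules lend themselves to a compact implementation. The following Processing (v. 3) sketch is a minimal illustration of the two silhouette roles, not the authors' Vfish.pde program: distractors reverse direction at the decision-area boundary, while leaders move ballistically toward a randomly chosen goal direction and fade and respawn once outside the DA. All names, sizes, and speeds are illustrative placeholders.

    // Minimal sketch of the leader/distractor update rules (illustrative only).
    int nFish = 15;                        // total silhouettes
    int nLeaders = 10;                     // 10:5 ratio = 67% coherency
    float daRadius = 200;                  // decision-area radius (pixels)
    float[] x = new float[nFish], y = new float[nFish];
    float[] vx = new float[nFish], vy = new float[nFish];
    float[] alpha = new float[nFish];      // leader opacity for fade-out

    void setup() {
      size(900, 600);
      float goal = random(TWO_PI);         // stand-in for the destination arm
      for (int i = 0; i < nFish; i++) {
        x[i] = width/2 + random(-100, 100);
        y[i] = height/2 + random(-100, 100);
        float a = (i < nLeaders) ? goal : random(TWO_PI);
        vx[i] = 2*cos(a);
        vy[i] = 2*sin(a);
        alpha[i] = 255;
      }
    }

    void draw() {
      background(200);
      noStroke();
      for (int i = 0; i < nFish; i++) {
        x[i] += vx[i];
        y[i] += vy[i];
        float d = dist(x[i], y[i], width/2, height/2);
        if (i >= nLeaders && d > daRadius) {   // distractor: bounce back into DA
          vx[i] = -vx[i];
          vy[i] = -vy[i];
        } else if (i < nLeaders && d > daRadius) {
          alpha[i] = max(0, alpha[i] - 5);     // leader: fade once past the DA
          if (alpha[i] == 0) {                 // respawn to hold the ratio fixed
            x[i] = width/2;
            y[i] = height/2;
            alpha[i] = 255;
          }
        }
        fill(0, alpha[i]);                     // dark silhouette on light field
        ellipse(x[i], y[i], 24, 8);
      }
    }

In the actual protocol, leaders fade only after traversing one-third of the arm and replacements retrace the departed leader's exact path; both refinements amount to bookkeeping on top of this skeleton.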

The use of virtual fish allows for the control of the visual sensory information, while monitoring the directional response of the subject, which may reveal novel features of social navigation, movement, or decision making in groups. The approach used here can be applied to a broad range of questions, such as effects of sublethal stress or predation on social interactions, by manipulating the computer animation to produce behavioral patterns of varying complexity.

Protocol

All experimental protocols were approved by the Institutional Animal Care and Use Committee of the Environmental Laboratory, US Army Engineer Research and Development Center, Vicksburg, MS, USA (IACUC# 2013-3284-01).

1. Sensory maze design

  1. Conduct the experiment in a watertight poly methyl methacrylate Y-maze platform (made in-house) set atop a transparent support platform in a dedicated room. Here, the 1.9 cm thick platform measures 1.3 m in width, 1.3 m in length, and 0.19 m in height and is supported by four 7.62 cm beams of extruded aluminum.
  2. Construct the holding and decision areas to be identical in construction (Figure 1A). Here, the Y-maze arms are 46 cm in length, 23 cm in width, and 20 cm in depth with a central decision area approximately 46 cm in diameter.
  3. Adhere a white project-through theater screen to the bottom of the Y-maze for projecting visual stimuli into the domain.
  4. Coat the sides of the Y-maze with white vinyl to limit external visual stimuli.
  5. Install a clear gate, remotely controlled via clear monofilament, to partition the holding area from the central decision area and to release subjects into the maze after acclimation.
  6. Place additional blinds to prevent the fish from viewing lights, housing, and equipment; for example, light-blocking blinds that reach the floor in door frames minimize light effects and shadow movements from the external room or hallway.

2. Recording equipment

  1. Select an overhead camera (black and white) based on the contrast needed between the background imagery, virtual fish and subject fish.
  2. Install an overhead camera to record the maze from above and to record the behaviors of the fish and the visual projections.
    1. For this demonstration, use black-and-white Gigabit Ethernet (GigE) cameras connected via 9 m IP cables to a computer with a 1 Gb Ethernet card in a control room.
  3. Connect the camera to a computer in an adjoining room where the observer can remotely control the gate, visual stimuli program, and camera recording software.
  4. Ensure that camera settings are at sampling and frequency rates that prevent any flickering effects, which occur when the camera and software are out of phase with the room lights.
    1. Check the electrical frequency of the location; offset the camera sampling rate (frames per second, fps) so that it is a whole-number multiple or divisor of the AC frequency (e.g., on a 60 Hz supply, sample at 30, 60, or 120 fps) to prevent flickering.
  5. Adjust the camera settings, using the software and computer, so that image clarity is optimized for visualizing the relevant behaviors.
    1. For this demonstration, perform sampling at 30 fps with a spatial resolution of 1280 pixels x 1024 pixels.

3. Calibrate lighting, projector, and camera settings

  1. Install four overhead track lighting systems along the walls of the experimental room.
  2. Install adjustable control switches for the lights to provide greater flexibility in achieving the correct room ambient light.
  3. Position the lights to avoid reflections on the maze (Figure 1B).
  4. Secure a short throw (ST) projector to the bottom edge of the maze’s support structure (Figure 1C).
    1. Select the projection resolution (set to 1440 pixels x 900 pixels for this demonstration).
  5. Adjust ambient light levels, created by the overhead lights and projector, to match the lighting conditions found in the subjects’ housing room (here set to 134 ± 5 lux during the demonstration experiment, which is equivalent to natural lighting on an overcast day).
    1. Lock or mark the location of the dimmer switch for ease and consistency during experimental trials.
  6. Use a camera viewer program to configure the camera(s) to control exposure mode, gain, and white balance control.
    1. In this demonstration, set the Pylon Viewer to “continuous shot”, 8000 μs exposure time, 0 gain, and 96 white balance, which provides control of the video recording.

4. Calibrate visual projection program: background

  1. Project a homogeneous background up onto the bottom of the maze and measure any light distortion from the projector. Here, the background was created using Processing (v. 3), which is a tractable and well-documented platform for creating customized visualizations for scientific projects (https://processing.org/examples/).
    1. Create a program that will run a Processing window to be projected onto the bottom of the maze. Customize the background color of the window with the background command, which accepts an RGB color code (see the minimal sketch after step 4.1.2). Several small example programs are found in the Processing tutorials (https://processing.org/tutorials/).
    2. Use the background color program to calibrate the projector and external lighting conditions.
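    NOTE: A minimal Processing sketch for steps 4.1.1-4.1.2 is shown below; the window size matches the projection resolution from step 3.4.1, and the RGB value is a placeholder to be tuned against the room lighting.

      // Uniform background window for projector calibration (steps 4.1.1-4.1.2).
      void setup() {
        size(1440, 900);             // projection resolution from step 3.4.1
      }

      void draw() {
        background(200, 200, 200);   // uniform gray; adjust RGB during calibration
      }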
  2. Measure any light distortion created by the projector using an image processing program to identify any deviations from the expected homogeneous background created. The following steps apply to using ImageJ (v. 1.52h; https://imagej.nih.gov/ij/).
    1. Capture a still frame image of the illuminated Y-maze with a uniform background color and open in ImageJ.
    2. Using the straight, segmented, or freehand line tool, draw a straight vertical line from the brightest location in the center of the hotspot to the top of the Y-maze (Figure 2A).
    3. From the Analyze menu, select Plot Profile to create a graph of gray scale values versus distance in pixels.
    4. Save pixel data as a comma separated file (.csv file extension) consisting of an index column and a pixel value column.
  3. Align the projection area with the maze (Figure 2B) and model any unwanted light distortion so that the color distortion created by the projector can be countered (Figure 2C). The following steps outline the procedure used in the current demonstration.
    1. Import the ImageJ pixel intensity data file using the appropriate delimited-file read function (e.g., read_csv from the tidyverse package to read in comma-separated files).
    2. Calculate the variability in light intensity along the sample transect, such as with a coefficient of variation, to provide a baseline reference for the level of distortion created in the background.
    3. Transform the raw pixel values to reflect a relative change in intensity from brightest to dimmest, where the smallest pixel intensity will approach the desired background color value selected in the image program.
    4. Plot the transformed pixel intensity values beginning at the brightest part of the anomaly; this generally yields a decaying trend in intensity as a function of distance from the source. Use nonlinear least squares (function nls) to estimate the parameter values that best fit the data (here, a Gaussian decay function; see the model form below).
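    NOTE: One plausible form of the Gaussian decay model fitted in step 4.3.4, consistent with the parameter descriptions in step 4.4.1, is

      g(d) = a·e^(−d²/b) + c

    where d is the distance in pixels from the hotspot center, a scales the peak intensity excess, b controls how broadly the gradient extends, and c is the asymptotic background level. nls estimates (a, b, c) by minimizing the sum of squared residuals between g(d) and the transect data.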
  4. Create the counter-gradient image with the same program used to generate the background (Processing v. 3), using the model parameters estimated in R (v. 3.5.1), to reduce the color distortion created by the projector.
    NOTE: The gradient function will generate a series of concentric circles centered on the brightest spot in the image that change in pixel intensity as a function of the distance from the center. The color of each ring is defined by subtracting the change in pixel intensity predicted by the model from the background color; correspondingly, ring radius increases with distance from the source. The best-fit model should reduce, if not eliminate, pixel intensity variation across the gradient to provide a uniform background.
    1. Create a Gaussian gradient, e.g., of the form g(r) = a·e^(−r²/b) + c, using the visual stimulus program by adjusting the required parameters (a ring-drawing sketch follows step 4.4.1.3).
      1. Parameter a affects the brightness/darkness of the Gaussian distribution gradient. The higher the value, the darker the gradient.
      2. Parameter b affects the variance of the gradient. The larger the value, the broader the gradient will extend before leveling out to the desired background pixel intensity, c.
      3. Parameter c sets the desired background pixel intensity. The larger the value, the darker the background.
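    NOTE: The ring-drawing logic of the counter gradient can be sketched in Processing (v. 3) as follows. The model form and the parameter values follow steps 4.4.1 and 4.5; the hotspot coordinates and ring spacing are illustrative placeholders, not the authors' program.

      // Counter-gradient sketch: concentric rings darken toward the hotspot.
      float a = 215, b = 800, c = 4;   // parameters from step 4.5
      float hx, hy;                    // hotspot center, found in step 4.2.2

      void setup() {
        size(1440, 900);
        hx = width/2.0;                // placeholder; set to the measured hotspot
        hy = height/2.0;
        noStroke();
      }

      void draw() {
        background(255);
        // Draw rings outside-in so darker inner rings overlay lighter outer ones.
        for (float r = 600; r > 0; r -= 2) {
          float g = a * exp(-r*r/b) + c;       // predicted intensity excess
          fill(constrain(255 - g, 0, 255));    // subtract the excess from white
          ellipse(hx, hy, 2*r, 2*r);
        }
        saveFrame("counter-gradient.png");     // fixed image for step 4.4.2
        noLoop();
      }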
    2. Save the image to a folder using the saveFrame function, so that a fixed background image can be loaded during the experiments to minimize memory load when rendering the stimuli during an experimental trial.
    3. Rerun the background generating program and visually inspect the results, as shown in Figure 2C. Repeat step 4.3 to quantify any observed improvements in reducing the degree of variability in light intensity across the sample transect.
  5. Empirically adjust the lighting levels, model parameters, or the distance covered in the transect (e.g., the outer radius of the counter gradient) until the RGB values of the acclimation zone are similar to those of the decision area. Model parameters in this test were: a = 215, b = 800, and c = 4.
  6. Add the final filter to the experimental visual stimuli program.

5. Calibrate visual projection program: visual stimuli

NOTE: Rendering and animating the visual stimuli can also be done in Processing using the steps below as guides along with the platform’s tutorials. A schematic of the current program’s logic is provided in Figure 3, and additional details can be found in Lemasson et al. (2018)7. The following steps provide examples of the calibration steps taken in the current experiment.

  1. Open the visual projection program Vfish.pde to center the projection within the maze’s decision area (Figure 1A) and calibrate the visual projections based on the hypotheses being tested (e.g., calibrate the size and speeds of the silhouettes to match those of the test subjects). Calibrations are hand-tuned in the header of the main program (Vfish.pde) using pre-selected debugging flags; an illustrative header sketch follows step 5.1.3. In debugging mode (DEBUG = TRUE), sequentially step through each DEBUGGING_LEVEL_# flag (numbers 0-2) to make the necessary adjustments.
    1. Set the DEBUGGING_LEVEL_0 flag to ‘true’ and run the program by pressing the play icon in the sketch window. Change the x and y position values (Domain parameters dx and dy, respectively) until the projection is centered.
    2. Set the DEBUGGING_LEVEL_1 to ‘true’ to scale the size of the fish silhouette (rendered as an ellipse). Run the program and iteratively adjust the width (eW) and length (eL) of the ellipse until it matches the average size of the test subjects. Afterwards, set the DEBUGGING_LEVEL_2 to ‘true’ to adjust the baseline speed of the silhouettes (ss).
    3. Set DEBUG = FALSE to exit debugging mode.
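  NOTE: The debugging header referenced in steps 5.1-5.1.3 might be organized along the following lines. This is an illustrative Processing sketch using the flag and parameter names from the protocol text, not the authors’ actual Vfish.pde header; all values are placeholders.

    // Illustrative calibration header (names follow steps 5.1.1-5.1.3).
    boolean DEBUG = true;               // set false to exit debugging (5.1.3)
    boolean DEBUGGING_LEVEL_0 = true;   // center the projection (5.1.1)
    boolean DEBUGGING_LEVEL_1 = false;  // scale silhouette size (5.1.2)
    boolean DEBUGGING_LEVEL_2 = false;  // tune baseline silhouette speed (5.1.2)

    float dx = 0, dy = 0;   // domain offsets for centering the projection
    float eW = 8, eL = 24;  // silhouette ellipse width and length (pixels)
    float ss = 1.0;         // baseline silhouette speed (BL/s)

    void setup() {
      size(1440, 900);
    }

    void draw() {
      background(200);
      if (DEBUG && DEBUGGING_LEVEL_0) {
        noFill();
        stroke(0);
        ellipse(width/2 + dx, height/2 + dy, 400, 400);  // calibration ring
      }
      if (DEBUG && DEBUGGING_LEVEL_1) {
        fill(0);
        noStroke();
        ellipse(width/2 + dx, height/2 + dy, eL, eW);    // size-check silhouette
      }
    }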
  2. Check that distractor silhouettes remain bounded to the Decision Area (DA, Figure 1A), that leader silhouette trajectories are properly aligned with either arm, and that the leader/distractor ratio within the DA remains constant.
  3. Step through the program’s GUI to ensure functionality of the options.
  4. Check that data are being properly written out to file.
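  NOTE: In Processing, trial records can be written out with a PrintWriter. The minimal sketch below is illustrative: the column names are taken from the variables listed in step 8.1, while the file path and example record are placeholders.

    // Minimal data write-out example (step 5.4); path and values illustrative.
    PrintWriter out;

    void setup() {
      out = createWriter("DataOut/trial_data.csv");
      out.println("date,trial,subject_id,program_arm,cue_speed,choice,t_start,t_stop");
      out.println("2018-10-01,1,7,1,4,1,0.0,12.3");  // one example record
      out.flush();
      out.close();
      exit();
    }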
  5. Ensure that the recording software can track the subject fish with visual projections in place. Steps to track fish have previously been described in Kaidanovich-Beilin et al. (2011)8, Holcombe et al. (2014)9, Way et al. (2016)10 and Zhang et al. (2018)11.

6. Animal preparation

  1. Choose the subject species based on the research question and application, including sex, age, and genotype. Assign subjects to the experimental holding tanks and record baseline biometric statistics (e.g., body length and mass).
  2. Set the environmental conditions in the maze to those of the holding system. For baseline behavioral experiments, water quality conditions are typically held at levels optimal for the species and the experimental domain setup.
    1. In this demonstration, use the following conditions: 12 h light/12 h dark cycle, overhead flicker-free halogen lights set to 134 ± 5 lux, 22 ± 0.3°C, 97.4 ± 1.3% dissolved oxygen, and pH of 7.8 ± 0.1.
  3. Habituate the animals by transferring them to the domain for up to 30 min per day for 5 days without the computer-generated visual stimuli (e.g., fish silhouettes) before the start of the experimental trials.
  4. Ensure that, by this time, the subject fish have been selected, assigned, weighed, measured, and transferred to the experimental tanks.
    NOTE: Here, Golden Shiners standard length and wet weight were 63.4 ± 3.5 mm SL and 1.8 ± 0.3 g WW, respectively.
  5. Use a water-to-water transfer when moving fish between tanks and the maze to reduce stress from handling and air exposure.
  6. Conduct experiments during a regular, fixed light cycle reflecting the subjects’ natural biological rhythm, and feed the subjects at the end of each day’s experimental trials to limit digestion effects on behavior.

7. Experimental procedure

  1. Turn on the room projector and LED light track systems to the predetermined level of brightness (in this demonstration, 134 ± 5 lux), allowing the bulbs to warm up (approximately 10 min).
  2. Open the camera viewer program and load the settings for aperture, color, and recording saved during setup to ensure the best possible video quality.
    1. Open Pylon Viewer and activate the camera to be used for recording.
    2. Select Load Features from the camera dropdown menu and navigate to the saved camera settings folder.
    3. Open the saved settings (here labeled as camerasettings_20181001) to ensure video quality and click on continuous shot.
    4. Close Pylon Viewer.
  3. Open the visual projection program Vfish.pde and check that the projection remains centered in the maze, that the DataOut folder is empty, and that the program is operating as expected.
    1. Check that the calibration ring is centered in the DA using step 5.1.1.
    2. Open the DataOut folder to ensure that it is empty for the day.
    3. Run the visual stimuli program by pressing play in the sketch window of Vfish.pde and use dummy variables to ensure program functionality (an illustrative key-handling sketch follows step 7.3.4).
      1. Enter fish id number (1-16), press Enter, and then confirm the selection by pressing Y or N for yes or no.
      2. Enter group size (fixed here at 1) and confirm selection.
      3. Enter desired silhouette speed (0-10 BL/s) and confirm selection.
      4. Press Enter to move past the acclimatization period and check the projection of the virtual fish in the decision area.
      5. Press Pause to pause the program and enter the dummy outcome choice, i.e., left (1) or right (2).
      6. Press Stop to terminate the program and write the data out to file.
    4. Check that data were properly written to file in the DataOut folder and log the file as a test run in the lab notes before fish are placed into the domain for acclimation.
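  NOTE: The key-driven entry sequence of steps 7.3.3.1-7.3.3.6 can be mirrored by a small state machine in Processing’s keyPressed() handler. The sketch below is illustrative only; the states and keys follow the protocol text, but the actual program may use on-screen prompts or buttons.

    // Illustrative entry state machine (steps 7.3.3.1-7.3.3.6).
    int state = 0;        // 0 = fish id, 1 = group size, 2 = speed, 3 = running
    String entry = "";

    void setup() {
      size(400, 200);
    }

    void draw() {
      background(0);
      fill(255);
      text("state " + state + ": " + entry, 20, 100);
    }

    void keyPressed() {
      if (key >= '0' && key <= '9') {
        entry += key;                       // accumulate the typed value
      } else if (key == ENTER || key == RETURN) {
        println("confirm (Y/N): " + entry); // Y/N confirmation in the protocol
        entry = "";
        state = min(state + 1, 3);          // id -> group size -> speed -> run
      } else if (key == 'p' || key == 'P') {
        println("paused: enter outcome (left=1, right=2, none=0)");
      } else if (key == 's' || key == 'S') {
        println("stopped: session data written to DataOut");
        exit();
      }
    }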
  4. Use clock time and a stopwatch to log the start and stop times of each trial in the lab notebook; because some replicate trials are short, these complement the elapsed times that can later be extracted from video playback.
  5. Conduct a water change (e.g., 30%) using the holding system sump water before transferring a subject to the maze.
  6. Confirm that water quality is similar between the maze and holding system, and check gate functioning to ensure that it slides smoothly to just above water height.
  7. Using the predetermined experimental schedule, which has randomized subject-treatment exposures over the course of the experiment, enter the values selected for the current trial (stopping at the acclimatization screen, steps 7.3.3.1 - 7.3.3.3).
    1. Record treatment combination data into the lab notebook.
  8. Transfer the subject into the Y-maze holding area for a 10-minute acclimation period.
  9. Start the video recording, then hit the Return key in the Vfish.pde window at the end of the acclimation period. This will start the visual projections.
  10. When the virtual fish appear in the domain, log the clock time, and lift the holding gate (Figure 4A).
  11. End the trial when 50% of the subject’s body moves into a choice arm (Figure 4B) or when the designated period of time elapses (e.g., 5 min).
    1. Log the clock time, the start and stop times from the stopwatch, and the subject’s choice (i.e., left (1), right (2), or no choice (0)).
    2. Stop the video recording and press Pause in the visual stimuli program, which will prompt the user for trial outcome data (the arm number selected or a 0 to indicate that no choice was made). Upon confirming the selection, the program will return to the first screen and await the values expected for the next experimental trial.
  12. Collect the subject and return it to the respective holding tank. Repeat Steps 7.7-7.13 for each trial.
  13. At the conclusion of a session (AM or PM) press Stop in the program once the last fish of the session has made a decision. Pressing Stop will write the session’s data out to file.
  14. Repeat the water exchange at the conclusion of the morning session to ensure water quality stability.
  15. After the last trial of the day, review the lab notebook and make any needed notes.
    1. Press Stop in the visual stimuli program to output the collected data to the DataOut folder, after the last trial of the day.
  16. Verify the number, name, and location of the data files saved by the visualization program.
  17. Log water quality and light levels in the maze room for comparison with the morning settings. Place the aeration system and heaters into the Y-maze.
  18. Turn off the projector and the experimental room track lighting.
  19. Feed fish the predetermined daily ration.

8. Data analysis

  1. Ensure that the experimental data contain the necessary variables (e.g., date, trial, subject id, arm selected by program, visual factors tested, subject choice, start and stop times, and comments).
  2. Check for any recording errors (human or program induced).
  3. Tabulate responses and check for signs of any directional biases on the part of the subjects (e.g., binomial test on arm choice in the control condition)7.
  4. When the experiment is designed using repeated measurements on the same individuals, as is the case here, the use of mixed effects models is suggested.
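  NOTE: With repeated measures per subject, one plausible specification, consistent with the analysis reported in the representative results, is a random-intercept model of the form

    logit P(correct_ij) = β0 + β1·speed_ij + u_i,   u_i ~ N(0, σu²),

  where i indexes subjects, j indexes trials, and u_i is the subject-level random intercept; decision speed can be modeled analogously with a Gaussian error term (LMM). This is a sketch of the model class named above, not the authors’ exact specification.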

Representative Results

Hypothesis and design

To demonstrate the utility of this experimental system, we tested the hypothesis that the accuracy with which Golden Shiners follow a visual cue will improve with the speed of that cue. Wild-type Golden Shiners were used (N = 16; body lengths, BL, and wet weights, WW, were 63.4 ± 3.5 mm and 1.8 ± 0.3 g, respectively). The coherency of the visual stimuli (leader/distractor ratio) was fixed at 0.67, while we manipulated the speed at which our motion cues (i.e., the leaders) moved with respect to their distractors. Speed levels of the leader silhouettes that provide the directional cues ranged from 0-10 BL/s (in increments of 2), which spans the range of speeds typically considered to reflect sustained, prolonged, or burst swimming modes of activity in fish12. At the control level, 0, the leader silhouettes were oriented towards a destination arm among the randomly oriented distractors, but none of the silhouettes moved. The destination arm was chosen at random for each trial by the program. Distance units are in body lengths, defined by the mean standard length of our subjects, and time is in seconds. The current representative analysis focuses on measuring the primary response variables (decision speed and accuracy), yet the design of the experiment also enables investigators to extract added information by tracking subject movements and analyzing their kinematics.

Our fish subjects were housed following section 6 of the protocol. Each subject was exposed to one level of the treatment per day. We randomized both the within-subject treatment level (cue speed) across days and the order in which subjects were tested each day. Linear and generalized linear mixed effects models (LMM and GLMM, respectively) were used to test the effects of leader silhouette speed on the speed and accuracy with which subjects followed the visual stimuli. Subject id was included as the random effect in both models.

Data and findings

In the absence of any motion cues, Golden Shiners acted as expected and chose their direction at random (stimulus speed = 0; binomial test, nLeft = 33, nRight = 40, proportion = 0.45, P = 0.483). While most subjects showed no signs of stressful behavior within the domain and made a decision within the allotted time (5 min), 22% of the subjects showed a reluctance to leave the holding area or enter the decision area. Data from these indecisive fish were not included in the analysis. The remaining 78% of our subjects showed a significant improvement in the accuracy with which they followed the directional stimuli as the speed of those stimuli increased (GLMM, z = 1.937, P = 0.053). Figure 5A shows the nature of this relationship, where we find a 1.2-fold increase in directional accuracy for each increase in stimulus speed level. This relationship is only modestly disproportionate and is not, by itself, suggestive of a threshold response to changes in cue speed. Increases in stimulus speed also led to a significant increase in decision speed (LMM, F1,56 = 4.774, P = 0.033). However, as evident in Figure 5B, the trend in decision speed was inconsistent and highly variable across stimulus speed levels. What is apparent in these data is that it took subjects, on average, anywhere from 5-20x longer to make their decision when the stimuli were moving than when they were not (decision times of 4.6 ± 2.3 s and 81.4 ± 74.7 s for stimulus speeds of 0 and 8, respectively; ± standard deviation, SD). Indeed, without the control level, we found no significant change in decision speed as a function of stimulus speed.

Figure 1. Y-Maze domain. A. Image of the Y-maze apparatus for the decision-making test. Annotations represent the following: Holding Area (HA, green), Decision Area (DA, blue), Left Decision Arm (LDA), and Right Decision Arm (RDA). B. Image of the Y-maze and room with overhead adjustable track lighting and GigE camera placement (only one of the four overhead light strips is visible). C. Image of the Y-maze (side view), including the projector placement, which is locked by the sliding carriage to eliminate movements during, or between, trials.

Figure 2. Background and stimulus calibration. A. Image of the illuminated Y-maze with a uniform background color and a pixel intensity transect (green line) between the holding area and the Decision Area, DA (mean pixel intensity 112 ± 1278). The light gradient generated by the projector’s bulb (hotspot) is clearly visible. B. Image showing the alignment of the projections with the DA. C. Image of the maze with the filtered background and a solitary silhouette projected in the center of the DA for calibration (size, speed). The addition of the counter-gradient background in (C) results in a darker background (mean pixel intensity 143.1 ± 5.5) and far less spatial variability (the coefficient of variation drops from 11.4 in (A) to 0.03 in (C)).

Figure 3. Schematic of the general flow of operations in the visualization program used in the experiments. For additional procedural details, see Lemasson et al.7.

Figure 4. Experimental trial with both real and virtual fish silhouettes. A. Image of a (live) Golden Shiner leaving the holding area (green circle). B. Image of a (live) Golden Shiner in the decision area (green circle) among the virtual fish silhouettes.

Figure 5. Accuracy and speed of directional responses to changes in the relative speed of motion cues. A. Graph of the accuracy with which Golden Shiners followed the ‘leader’ silhouettes, plotted against stimulus speed (BL/s). B. Graph of decision speed plotted against stimulus speed (BL/s). Data are means ± standard errors, SE. Groups of 15 virtual silhouettes were randomly distributed throughout the decision zone with a 67% coherency level (10 of the 15 silhouettes acted as leaders, the remaining 5 acted as distractors) and the speed of the leaders varied from 0-10 BL/s. Distractor speeds remained fixed at 1 BL/s at all speed levels, except the control, in which none of the silhouettes moved.

Discussion

Visual cues are known to trigger an optomotor response in fish exposed to black and white gratings13 and there is increasing theoretical and empirical evidence that neighbor speed plays an influential role in governing the dynamical interactions observed in fish schools7,14,15,16,17. Contrasting hypotheses exist to explain how individuals in groups integrate neighbor movements, such as reacting proportionally to all discernible cues14, adopting a motion-threshold response17, or monitoring collision times18. A first step in testing these alternative hypotheses is validating their underlying assumptions. Here we demonstrated the utility of our protocol in identifying the role that a particular sensory feature can have on guiding directional decisions.

We isolated how individuals of a social fish species, the Golden Shiner, responded to changes in the relative speed of visual stimuli designed to mimic conspecifics in a school. Golden Shiner directional accuracy did improve with increases in the relative speed of the visual stimuli, but the functional relationship between these variables was only marginally disproportionate. The relationship between decision speed and stimulus speed, while significant, was highly variable and inconsistent. The results do demonstrate, however, that a speed difference found in images scattered across the field of view of these fish does play an important role in triggering a response and guiding their overt attention. How individuals select among the actions of specific neighbors could also be probed with the current design by introducing conflicting directions in the stimuli.

In a recent experiment with zebrafish, Danio rerio, we found no evidence of indecisiveness in solitary trials7, yet Golden Shiners in this demonstration displayed a greater reluctance to leave the holding area. The differences between these two species may be explained by their life history strategies and the relative strength of their social tendencies (or reliance). Zebrafish appear to display more variable social coherency than Golden Shiners (e.g., facultative vs. obligate schoolers3). The stronger social coherency of Golden Shiners may have contributed to subjects showing higher levels of shyness, or hesitancy, within the domain than their zebrafish counterparts.

The order of the steps in the protocol is subtle yet critical. Balancing the lights, the projector, and the program filter can take more time than anticipated for new domains. In this protocol, lessons learned have been included to reduce setup and light-balancing time, such as the use of track lights that reflect off the wall (not onto the domain), adjustable light controllers, and program-generated filters for the projector. Consider also that what appears visually acceptable to the human eye will not be viewed by the camera and software the same way; thus, the lighting conditions may require additional adjustments. Even slight changes in monitor angles will result in background gradient changes. Detailed note taking and saved file settings will therefore greatly reduce the likelihood of changes occurring during the experiment. Moving through the process from the physical setup to filtering, as presented here, is the fastest path to success.

The use of a ST projector enables greater spatial flexibility than a monitor, but this approach creates an unwanted visual anomaly called a “hotspot”. A hotspot is a bright spot on the projection surface created by the proximity of the projector’s bulb. In the protocol, Section 4 was dedicated to creating background filters and checking for homogeneous lighting across the domain. The steps provided there will help users avoid, or minimize, the unwanted effects of the hotspot by modeling the unwanted gradient and using the model to produce an inverse gradient that counters its effects. Lastly, although ST projector models vary, image adjustments (rotate, flip, front or rear screen projection) and keystone correction (± 3-5 degrees) are useful features for ensuring that the desired image fits the domain and can be corrected for distortion.

Over time, the experimental rooms were updated with new hardware (i.e., cameras, cabling, video cards, monitors). It is noteworthy that hardware changes will likely add start-up time to rebalance lighting and work through any potential program issues. Therefore, it is recommended that hardware be dedicated to a system until completion of the desired experiments. Most challenges have been tied to performance differences among monitors, video cards, and cameras, sometimes requiring alterations to the programming code. Since the time of this work, new domains have been developed in which the inner test domain can be removed and switched for other test domains. We recommend that this flexibility be considered when designing experimental domains and support structures.

The current protocol allows investigators to isolate and manipulate visual features in a manner that reflects the visual environment expected within a school, while also controlling for confounding factors that accompany exposure to real conspecifics (e.g., hunger, familiarity, aggression)7. In general, computer animation (CA) of virtual fish (i.e., silhouettes) is becoming more commonplace due to its distinct advantages in stimulating behavioral responses19,20,21. CA allows one to customize visual cues (direction, speed, coherency, or morphology), while introducing a level of standardization and repeatability in the desired stimulus that exceeds what can be achieved when using live animals as the stimulus. The use of virtual reality in behavioral studies, on both animals22 and humans23, is also steadily increasing and promises to become a powerful empirical tool as the technology becomes more available and tractable. Taken together, these virtual approaches also replace and reduce the live animals required under animal ethics oversight (e.g., IACUC, AAALAC, and ACURO)24, while concomitantly lowering laboratory costs and burdens.

Disclosures

All authors contributed to the experimental design, analyses, and writing of the paper. A.C.U. and C.M.W. set up the experiments and collected the data. The authors have nothing to disclose.

Acknowledgments

We thank Bryton Hixson for setup assistance. This program was supported by the Basic Research Program, Environmental Quality and Installations (EQI; Dr. Elizabeth Ferguson, Technical Director), US Army Engineer Research and Development Center.

Materials

Name Company Catalog Number Comments
Black and white IP camera Noldus, Leesburg, VA, USA https://www.noldus.com/
Extruded aluminum 80/20 Inc., Columbia City, IN, USA 3030-S https://www.8020.net 3.00" X 3.00" Smooth T-Slotted Profile, Eight Open T-Slots
Finfish Starter with Vpak, 1.5 mm extruded pellets Zeigler Bros. Inc., Gardners, PA, USA http://www.zeiglerfeed.com/
Golden shiners Saul Minnow Farm, AR, USA http://saulminnow.com/
ImageJ (v 1.52h) freeware National Institute for Health (NIH), USA https://imagej.nih.gov/ij/
LED track lighting Lithonia Lighting, Conyers, GA, USA BR20MW-M4 https://lithonia.acuitybrands.com/residential-track
Oracle 651 white cut vinyl 651Vinyl, Louisville, KY, USA 651-010M-12:5ft http://www.651vinyl.com. Can order various sizes.
PowerLite 570 overhead projector Epson, Long Beach CA, USA V11H605020 https://epson.com/For-Work/Projectors/Classroom/PowerLite-570-XGA-3LCD-Projector/p/V11H605020
Processing (v 3) freeware Processing Foundation https://processing.org/
R (3.5.1) freeware The R Project for Statistical Computing https://www.r-project.org/
Ultra-white 360 theater screen Alternative Screen Solutions, Clinton, MI, USA 1950 https://www.gooscreen.com. Must call for special cut size
Z-Hab system Pentair Aquatic Ecosystems, Apopka, FL, USA https://pentairaes.com/. Call for details and sizing.

References

  1. Dall, S. R. X., Olsson, O., McNamara, J. M., Stephens, D. W., Giraldeau, L. A. Information and its use by animals in evolutionary ecology. Trends in Ecology and Evolution. 20 (4), 187-193 (2005).
  2. Pitcher, T. Sensory information and the organization of behaviour in a shoaling cyprinid fish. Animal Behaviour. 27, 126-149 (1979).
  3. Partridge, B. The structure and function of fish schools. Scientific American. 246 (6), 114-123 (1982).
  4. Fernández-Juricic, E., Erichsen, J. T., Kacelnik, A. Visual perception and social foraging in birds. Trends in Ecology and Evolution. 19 (1), 25-31 (2004).
  5. Strandburg-Peshkin, A., et al. Visual sensory networks and effective information transfer in animal groups. Current Biology. 23 (17), R709-R711 (2013).
  6. Rosenthal, S. B., Twomey, C. R., Hartnett, A. T., Wu, S. H., Couzin, I. D. Behavioral contagion in mobile animal groups. Proceedings of the National Academy of Sciences (U.S.A.). 112 (15), 4690-4695 (2015).
  7. Lemasson, B. H., et al. Motion cues tune social influence in shoaling fish. Scientific Reports. 8 (1), e9785 (2018).
  8. Kaidanovich-Beilin, O., Lipina, T., Vukobradovic, I., Roder, J., Woodgett, J. R. Assessment of social interaction behaviors. Journal of Visualized Experiments. (48), e2473 (2011).
  9. Holcombe, A., Schalomon, M., Hamilton, T. J. A novel method of drug administration to multiple zebrafish (Danio rerio) and the quantification of withdrawal. Journal of Visualized Experiments. (93), e51851 (2014).
  10. Way, G. P., Southwell, M., McRobert, S. P. Boldness, aggression, and shoaling assays for zebrafish behavioral syndromes. Journal of Visualized Experiments. (114), e54049 (2016).
  11. Zhang, Q., Kobayashi, Y., Goto, H., Itohara, S. An automated T-maze based apparatus and protocol for analyzing delay- and effort-based decision making in free moving rodents. Journal of Visualized Experiments. (138), e57895 (2018).
  12. Videler, J. J. Fish Swimming. Springer, Netherlands. 260 pp., ISBN-13 9789401115803 (1993).
  13. Orger, M. B., Smear, M. C., Anstis, S. M., Baier, H. Perception of Fourier and non-Fourier motion by larval zebrafish. Nature Neuroscience. 3 (11), 1128-1133 (2000).
  14. Romey, W. L. Individual differences make a difference in the trajectories of simulated schools of fish. Ecological Modelling. 92 (1), 65-77 (1996).
  15. Katz, Y., Tunstrom, K., Ioannou, C. C., Huepe, C., Couzin, I. D. Inferring the structure and dynamics of interactions in schooling fish. Proceedings of the National Academy of Sciences (U.S.A.). 108 (46), 18720-18725 (2011).
  16. Herbert-Read, J. E., Buhl, J., Hu, F., Ward, A. J. W., Sumpter, D. J. T. Initiation and spread of escape waves within animal groups. Royal Society Open Science. 2 (4), 140355 (2015).
  17. Lemasson, B. H., Anderson, J. J., Goodwin, R. A. Motion-guided attention promotes adaptive communications during social navigation. Proceedings of the Royal Society B. 280 (1754), 20122003 (2013).
  18. Moussaïd, M., Helbing, D., Theraulaz, G. How simple rules determine pedestrian behavior and crowd disasters. Proceedings of the National Academy of Sciences (U.S.A.). 108 (17), 6884-6888 (2011).
  19. Bianco, I. H., Engert, F. Visuomotor transformations underlying hunting behavior in zebrafish. Current Biology. 25 (7), 831-846 (2015).
  20. Chouinard-Thuly, L., et al. Technical and conceptual considerations for using animated stimuli in studies of animal behavior. Current Zoology. 63 (1), 5-19 (2017).
  21. Nakayasu, T., Yasugi, M., Shiraishi, S., Uchida, S., Watanabe, E. Three-dimensional computer graphic animations for studying social approach behaviour in medaka fish: Effects of systematic manipulation of morphological and motion cues. PLoS One. 12 (4), e0175059 (2017).
  22. Stowers, J. R., et al. Virtual reality for freely moving animals. Nature Methods. 14 (10), 995-1002 (2017).
  23. Warren, W. H., Kay, B., Zosh, W. D., Duchon, A. P., Sahuc, S. Optic flow is used to control human walking. Nature Neuroscience. 4 (2), 213-216 (2001).
  24. Silverman, J., Suckow, M. A., Murthy, S. (eds). The IACUC Handbook, 3rd Edition. CRC Press, Taylor and Francis. 827 pp., ISBN-13 9781466555648 (2014).

Cite this Article

Woodley, C. M., Urbanczyk, A. C., Smith, D. L., Lemasson, B. H. Integrating Visual Psychophysical Assays within a Y-Maze to Isolate the Role that Visual Features Play in Navigational Decisions. J. Vis. Exp. (147), e59281, doi:10.3791/59281 (2019).
