Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real-world navigation skills in the blind. Using only audio-based cues set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale, three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building, as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop off tasks. We find that the immersive and highly interactive nature of the AbES software greatly engages the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
Finding one's way in an unfamiliar environment presents a significant challenge for the blind. Navigating successfully requires an understanding of the spatial relationships that exist between one's self and objects in the environment [1,2]. The mental representation that describes surrounding space is referred to as a spatial cognitive map [3]. Blind individuals can gather relevant spatial information regarding their surrounding environment through other sensory channels (such as hearing), allowing for the generation of an accurate spatial cognitive map for the purposes of real-world navigation tasks [4,5].
Considerable interest has arisen regarding the educative potential of virtual environments and action video games as a means to learn and master skills [6-9]. Indeed, many strategies and approaches have been developed for the blind for this purpose (see [4,10-12]). We have developed the Audio-based Environment Simulator (AbES), a user-centered audio-based virtual environment that allows for simulated navigation and exploration of an existing physical building. Drawing from original architectural floor plans, a virtual rendering of a modern two-story building (located at the Carroll Center for the Blind; Newton, MA) was generated with the AbES software (Figures 1A and B). AbES incorporates an action game metaphor with a premise designed to promote full exploration of the building space. Using simple key strokes and spatialized sound cues, users navigate and explore the entire building to collect the maximum number of jewels hidden in various rooms. Users must avoid roving monsters that can take the jewels away and hide them elsewhere in the building (Figure 1C).
We demonstrate that interacting with AbES allows a blind user to generate an accurate spatial cognitive map of a target building based on auditory information acquired within the context of an action game metaphor. This is confirmed by a series of post-training behavioral performance tests designed to assess the transfer of acquired spatial information from a virtual environment to a real-world, large-scale indoor navigation task (see Figure 2 for overall study design). Our results show that blind users are able to successfully navigate throughout a building with which they were previously unfamiliar, despite the fact that at no time were they informed of the overall purpose of the study, nor were they instructed to recall the spatial layout of the building while playing the game.
1. Participant Demographics
This is an ongoing study recruiting blind male and female participants aged 18-45 years. All participants are legally blind of early onset (documented prior to the age of 3) and of varying ocular etiologies. None of the study participants were previously familiar with the spatial layout of the target physical building.
2. Preparation and Familiarization with AbES
- Provide the participant with a blindfold and headphones to be worn throughout the training and assessment process. Ensure that the blindfold is comfortably placed over the eyes and the headphones are properly oriented and positioned over the ears (i.e. left speaker over left ear).
- Train the participant to use the assigned keys and to interpret the information represented by the audio cues in AbES. Using specific key strokes (Figure 3), the user navigates through and explores the building virtually (moving forward and turning right or left). Each virtual step approximates one step in the real physical building.
- Familiarize the participant with the rules and premise of the game.
- Familiarize the participant with the audio cues specific to game play (e.g. the sound of locating a jewel and the sound of a nearby monster). As the user navigates through the building, auditory and contextual spatial information is acquired sequentially and dynamically updated. Spatial and situational information is based on iconic and spatialized sound cues provided after each step taken. Orientation is based on cardinal compass headings (e.g. "north" or "east"), and text-to-speech (TTS) is used to provide further information regarding the user's current location, orientation and heading (e.g. "you are in the corridor on the first floor, facing west") as well as the identity of objects and obstacles in their path (e.g. "this is a door"). Distance cues are provided by modulating sound intensity. The spatial localization of the sounds is updated to match the user's egocentric heading. Essentially, the software plays the appropriate audio file as a function of the user's location and egocentric heading, and keeps track of the user's position as they move through the environment. For example, if a door is located on the person's right side, a knocking sound is heard in the user's right ear (i.e. the software plays the audio file in the right channel). If the person then turns 180 degrees so that the same door is on their left side, the same knocking sound is heard in the left channel. Finally, if the user is facing the door, the knocking sound is heard in both ears equally. By keeping track of the user's egocentric heading, the software can play spatially localized sounds that identify the presence and location of objects and update them as the user moves through the virtual environment. See Figure 4.
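As an illustrative sketch (not the actual AbES implementation), the channel-selection and distance-intensity scheme described above can be expressed as follows; the function names and the 1/(1+d) intensity falloff are assumptions for illustration:

```python
def cue_channel(user_heading_deg, bearing_to_object_deg):
    """Pick the playback channel(s) for an object's sound cue, following
    the scheme described above: object to the user's right -> right ear,
    to the left -> left ear, straight ahead -> both ears equally."""
    # Relative bearing in (-180, 180]: 0 = straight ahead, positive = right.
    rel = (bearing_to_object_deg - user_heading_deg + 180) % 360 - 180
    if rel == 0:
        return "both"
    return "right" if rel > 0 else "left"

def cue_gain(distance_steps):
    """Illustrative distance cue: intensity falls off with the number of
    virtual steps to the object (the exact falloff law is an assumption)."""
    return 1.0 / (1.0 + distance_steps)
```

With the knocking-door example from the text: a door due east of a user facing north plays in the right channel (`cue_channel(0, 90)`), in the left channel after the user turns 180 degrees (`cue_channel(180, 90)`), and in both channels once the user faces it (`cue_channel(90, 90)`).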
3. Training and Game Play with AbES (3 Sessions Each Lasting 30 min for a Total of 1.5 hr)
- Allow for free game play and note any difficulties and challenges (i.e. use of key strokes, audio cues, areas of difficult navigation). Positive reinforcement and clarifications are provided at the end of each training session.
- Record game performance (e.g. number, time and location where a participant finds a jewel).
4. Assess Virtual Navigation Task Performance
- Explain to the participant the details of the testing and provide instructions on how to complete the virtual navigation tasks. The participant will complete 10 predetermined navigation tasks presented sequentially using the AbES software (i.e. once the participant successfully completes the first task, the software automatically relocates them to the starting point of the following task).
- Inform the participant that they will have a maximum of 6 min to complete each navigation task.
- 10 virtual navigation paths of comparable difficulty (i.e. distance traveled and number of turns) are chosen based on predetermined pairings of 10 start and stop locations (i.e. rooms). Specifically, the target routes required between 25 and 35 steps (in the virtual environment) and incorporated 3-4 turns of 90 degrees.
- Load the 10 navigation pairs into AbES for automated presentation and data capture of performance.
- Outcome measures are automatically recorded by the AbES software. Outcome measures include: successful completion of the navigation task and time taken to reach target. See Figure 5A.
- Instructions describing the start location and the target destination are provided automatically by the AbES software at the start of each task. Timing begins immediately once the subject takes their first virtual step from the starting location and ends upon arrival at the target location (unless the time exceeds 6 min, in which case the run is scored as incomplete and the next path is presented). Captured data is automatically written to a text file and subsequently opened in database/statistical software for further analysis.
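The automated timing and logging behavior described in this step can be sketched as below; `run_trial`, the tab-separated record format, and the output file name are illustrative assumptions, not the actual AbES internals:

```python
import time

TIME_LIMIT_S = 6 * 60  # 6 min cap per navigation task

def run_trial(task_id, wait_for_arrival):
    """Time one virtual navigation task. `wait_for_arrival` stands in for
    the game loop: it returns once the user reaches the target location,
    or raises TimeoutError when the time limit is hit.
    Returns (task_id, completed, elapsed_seconds)."""
    start = time.monotonic()  # clock starts at the first virtual step
    try:
        wait_for_arrival(timeout=TIME_LIMIT_S)
        completed = True
    except TimeoutError:
        completed = False  # scored as incomplete; the next path is presented
    elapsed = min(time.monotonic() - start, TIME_LIMIT_S)
    return task_id, completed, elapsed

def log_trial(record, path="virtual_navigation_results.txt"):
    """Append one (task_id, completed, elapsed) record to a plain-text
    file so it can later be opened in database/statistical software."""
    with open(path, "a") as f:
        f.write("{}\t{}\t{:.1f}\n".format(*record))
```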
5. Assess Physical Navigation Task Performance
- Explain to the participant the details of the testing and provide instructions on how to complete the physical navigation tasks. The participant will complete 10 predetermined navigation tasks (the same pairings as in the virtual assessment, presented in scrambled order) under the supervision of an experienced investigator.
- Inform the participant they will have a maximum of 6 min to complete each navigation task. For the purposes of the physical navigation task, the participant is allowed to use their white cane for mobility support.
- 10 physical navigation paths are chosen based on predetermined pairings of 10 start and stop locations (i.e. rooms) of comparable difficulty (i.e. distance traveled and number of turns).
- Investigator prepares stopwatch and clipboard with list of navigation tasks for manual scoring of performance.
- Outcome measures are manually recorded by the investigator. Outcome measures include: successful completion of the navigation task and time taken to reach target.
- "Square-off" the participant (i.e. position the participant with the door of the starting location behind them). Instructions describing the start location and the target destination are provided by the investigator at the start of each task. Timing begins immediately once the subject takes their first physical step from the starting location and ends when the participant verbally reports arriving at the destination (unless time takes longer than 6 min, for which the run is scored as an incomplete and the next path is presented). Captured data is recorded manually and subsequently transferred to database/statistical software for further analysis. See Figure 5B.
6. Assess Physical Drop off Task Performance
- Explain to the participant the details of the testing and provide instructions on how to complete the physical drop off navigation tasks. The participant will complete 5 navigation tasks, each with the goal of exiting the building using the shortest route possible, under the supervision of an experienced investigator.
- Inform the participant that they will have a maximum of 6 min to complete each navigation task. For the purposes of the physical drop off navigation task, the participant is allowed to use their white cane for mobility support.
- 5 predetermined physical starting locations are used such that 3 exit paths of different lengths are possible.
- Investigator prepares stopwatch and clipboard with list of navigation tasks for manual scoring of performance.
- Outcome measures are manually recorded by the investigator. Outcome measures include: successful completion of the navigation task and time taken to reach target. Furthermore, paths are scored such that the shortest path taken is given maximum points (i.e. 3 for shortest path, 2 for the second, 1 for the longest, and 0 for not being able to complete the task). See Figure 5C.
- "Square-off" the participant at the first starting location. Instructions describing the start location are provided by the investigator at the start of each task. Timing begins immediately once the subject takes their first physical step from the starting location and ends when the participant verbally reports arriving to an exit door of the building (unless time takes longer than 6 min, for which the run is scored as incomplete and the next start location is presented). Captured data is recorded manually and subsequently transferred to database/statistical software for further analysis.
Results from three early blind participants (aged between 19 and 22 years) are shown (see Table 1 for participant characteristics). In summary, all three participants showed a high level of success on all three navigation tasks following game play with the AbES software. This was confirmed by the performance scores (group mean and individual) on all three behavioral tasks (see Figure 6). The percentage correct performance for the virtual (mean: 90%) and physical (mean: 88.7%) navigation tasks illustrates a high level of success and comparable performance on both tasks (Figure 6A). Performance on the drop off experiments shows that participants consistently selected the shortest route possible to exit the building (mean score: 3.0) (Figure 6B). Finally, the average time taken to navigate to target for all three navigation tasks is shown in Figure 6C. Virtual navigation time (assessed first) was typically longer (mean: 137.3 sec) than physical navigation time (mean: 73.8 sec). The shorter mean navigation times observed in the drop off task (mean: 37.3 sec) are consistent with the fact that participants were likely to choose the shortest possible path to exit the building.
Assessing individual results from one representative study participant on all three tasks revealed that virtual navigation from a start to an end point located on the first floor took 79 sec (Figure 7A; path shown in yellow). Assessment of performance on the same path in the physical building took 46 sec (Figure 7B). Assessment of drop off task performance shows that the participant took the shortest path possible (scoring 3 points, with a navigation time of 48 sec) (Figure 7C).
| subject | age (years) | etiology of blindness | level of visual function |
| --- | --- | --- | --- |
| 1 | 22 | retinopathy of prematurity | residual (light perception) |
| 2 | 19 | Peters anomaly; bilateral retinal detachment; end stage glaucoma | profound (no light perception) |
| 3 | 19 | retinopathy of prematurity | residual (light perception) |
Table 1. Participant characteristics.
Figure 1. Virtual environment rendered in AbES. A) original two-story building floor plan. The building includes 23 rooms and a series of connecting corridors as well as 3 separate entrances and 2 stairwells. Given the existing spatial layout, there are multiple route possibilities to enter and exit the building, B) virtual rendering of target building in AbES, C) objects encountered while playing AbES in game mode.
Figure 2. Overall Study Design. All participants undergo a fixed training and game play period with AbES followed by a series of navigation assessments (always in sequential order). Assessments of performance include virtual, physical, and drop off navigation tasks.
Figure 3. AbES keystrokes.
Figure 4. Training and game play with AbES. A) Participants sit at a computer terminal wearing a blindfold and stereo headphones. B) Photo of an investigator with a study participant.
Figure 5. Summary of navigation task assessments. A) Data capture from virtual navigation path assessment. The start and end points are read to the participant and the next path is loaded automatically after completion. The path taken (shown in yellow) and time to target are collected automatically by the software. B) Investigator assesses performance in a physical navigation task. Timing (using a stopwatch) commences with the participant's first step and ends when the participant reports arriving to the target end point. C) Sample route and scoring strategy for drop off navigation task. There are three exit doors and thus multiple possible routes to exit the building. Based on the starting point, the path taken (shown in yellow) is scored. Three (3) points are given for using the shortest exit, followed by 2 and 1 point (a score of zero indicates unable to find an exit).
Figure 6. Summary results from navigation task assessments. Results (group means and individual results from 10 tested navigation routes) from 3 representative participants in the study are shown. A) Percentage correct performance for the virtual followed by the physical navigation tasks. B) Performance results (average number of points) on drop off tasks. C) Average time taken to navigate to target is shown for all three navigation assessments.
Figure 7. Individual results from navigation task assessments. Representative results are shown from one study participant on all three navigation tasks assessed. A) virtual navigation (path shown in yellow). B) assessment of performance on the same path in the physical building. C) assessment of a drop off task illustrates that the participant took the shortest path possible. The alternative potential paths (yellow dotted lines) and score value relative to the given starting point are also shown.
Supplemental Movie 1. Supplementary video of annotated video game play. Video sequence showing a player (yellow moving icon) entering a room located on the first floor where a jewel is hidden. Spatialized sounds (left and right channel) allow the player to orient and identify the location of objects (e.g. doors and obstacles) in their surrounding environment. Once a jewel is found, the player exits the building and must avoid roving monsters (red moving icons). The player then continues to explore the building (first and second floors) to find more hidden jewels.
We describe an interactive audio-based virtual environment simulator designed to improve general spatial awareness and navigation skills in the blind. We demonstrate that interacting with AbES provides accurate cues that describe the spatial relationships between objects and the overall layout of the target environment. Blind users can generate accurate spatial cognitive maps based on this auditory information and by interacting with the immersive virtual environment. Furthermore, interacting with AbES within the context of a game metaphor demonstrates that spatial cognitive constructs can be learned implicitly and rather simply through casual interaction with the software. As demonstrated in this initial phase of the study, the interactive and immersive nature of the game can improve the individual's spatial awareness of a new environment, provide a platform for creating an accurate spatial cognitive map, and may reduce the insecurity associated with independent navigation prior to arriving at an unfamiliar building.
Typically, individuals with visual impairment can gain functional independence through orientation and mobility (O&M) training. It is important however that training strategies remain flexible and adaptable so that they can be applied to novel and unfamiliar situations and tailored to a person's own strengths and weaknesses so as to address their particular challenges, needs and learning strategies. The creative use of interactive virtual navigation environments such as AbES may provide for this flexibility and supplement current O&M training curriculum. This software represents an adjunctive strategy that not only draws upon the benefits of high motivational drive, but also provides for a testing platform to carry out more controlled and quantifiable studies to test and validate the effectiveness of these training approaches.
Current and future investigations will include a large-scale study where participants are randomized to differing methods of training (e.g. gaming compared to direct serial route learning) and navigation (i.e. route finding) performance will be compared. We will also investigate differences between early and late blind as well as the relationship between additional factors of interest including age and gender.
Finally, given the apparently engaging nature of this combined virtual environment and gaming approach, it would also be of interest to investigate the potential benefit of AbES on navigation skill development in blind individuals beyond the profile described here. For example, the largest (and fastest growing) segment of visual impairment is in the aging population, and current trends are expected to increase [13]. Thus, it would seem highly relevant to explore the effectiveness of this approach for the non-visual acquisition of spatial information supporting navigation skills in this demographic group. Given that AbES is a computer-based approach, it is difficult to speculate at this time on its effectiveness in non-digital natives. Along similar lines, developing AbES in a manner that would be amenable to individuals with residual vision (i.e. low vision) could also be worthwhile. Given that the majority of individuals who are legally blind fall under this category [13], training in virtual environments prior to actual physical travel may also be of benefit to plan routes and avoid difficulties associated with trying to access information in an unfamiliar environment. In this direction, current work is aimed at developing AbES features such as zooming (i.e. high magnification) and a high-contrast display to support individuals with low vision.
The authors declare no conflicts of interests.
The authors would like to thank Rabih Dow, Padma Rajagopal, Molly Connors and the staff of the Carroll Center for the Blind (Newton MA, USA) for their support in carrying out this research. This work was supported by the NIH/NEI grant: RO1 EY019924.
| Material | Comments |
| --- | --- |
| Laptop computer | Laptop used exclusively for training participants and collecting data |
| Stereo headphones (fully enclosed circumaural design) | Worn by all participants during training |
| Blindfold | Worn by all participants during training and testing |
- Loomis, J. M., Klatzky, R. L., Golledge, R. G. Navigating without vision: basic and applied research. Optom. Vis. Sci. 78, 282-289 (2001).
- Siegel, A. W., White, S. H. The development of spatial representations of large-scale environments. Adv. Child Dev. Behav. 10, 9-55 (1975).
- Strelow, E. R. What is needed for a theory of mobility: direct perception and cognitive maps--lessons from the blind. Psychol. Rev. 92, 226-248 (1985).
- Giudice, N. A., Bakdash, J. Z., Legge, G. E. Wayfinding with words: spatial learning and navigation using dynamically updated verbal descriptions. Psychol. Res. 71, 347-358 (2007).
- Ashmead, D. H., Hill, E. W., Talor, C. R. Obstacle perception by congenitally blind children. Percept. Psychophys. 46, 425-433 (1989).
- Dede, C. Immersive interfaces for engagement and learning. Science. 323, 66-69 (2009).
- Bavelier, D., et al. Brains on video games. Nat. Rev. Neurosci. 12, 763-768 (2011).
- Bavelier, D., Green, C. S., Dye, M. W. Children, wired: for better and for worse. Neuron. 67, 692-701 (2010).
- Lange, B., et al. Designing informed game-based rehabilitation tasks leveraging advances in virtual reality. Disabil. Rehabil. (2012).
- Merabet, L., Sánchez, J. Audio-based Navigation Using Virtual Environments: Combining Technology and Neuroscience. AER Journal: Research and Practice in Visual Impairment and Blindness. 2, 128-137 (2009).
- Kalia, A. A., Legge, G. E., Roy, R., Ogale, A. Assessment of Indoor Route-finding Technology for People with Visual Impairment. J. Vis. Impair. Blind. 104, 135-147 (2010).
- Lahav, O., Schloerb, D. W., Srinivasan, M. A. Newly blind persons using virtual environment system in a traditional orientation and mobility rehabilitation program: a case study. Disabil. Rehabil. Assist Technol. (2011).
- WHO | Global trends in the magnitude of blindness and visual impairment [Internet]. World Health Organization (WHO). Available from: http://www.who.int/blindness/causes/trends/en/index.html (2012).