A Networked Desktop Virtual Reality Setup for Decision Science and Navigation Experiments with Multiple Participants

Behavior


Summary

This paper describes a method for conducting multi-user experiments on decision-making and navigation using a networked computer laboratory.

Cite this Article


Zhao, H., Thrash, T., Wehrli, S., Hölscher, C., Kapadia, M., Grübel, J., Weibel, R. P., Schinazi, V. R. A Networked Desktop Virtual Reality Setup for Decision Science and Navigation Experiments with Multiple Participants. J. Vis. Exp. (138), e58155, doi:10.3791/58155 (2018).

Abstract

Investigating the interactions among multiple participants is a challenge for researchers from various disciplines, including the decision sciences and spatial cognition. With a local area network and dedicated software platform, experimenters can efficiently monitor the behavior of the participants that are simultaneously immersed in a desktop virtual environment and digitalize the collected data. These capabilities allow for experimental designs in spatial cognition and navigation research that would be difficult (if not impossible) to conduct in the real world. Possible experimental variations include stress during an evacuation, cooperative and competitive search tasks, and other contextual factors that may influence emergent crowd behavior. However, such a laboratory requires maintenance and strict protocols for data collection in a controlled setting. While the external validity of laboratory studies with human participants is sometimes questioned, a number of recent papers suggest that the correspondence between real and virtual environments may be sufficient for studying social behavior in terms of trajectories, hesitations, and spatial decisions. In this article, we describe a method for conducting experiments on decision-making and navigation with up to 36 participants in a networked desktop virtual reality setup (i.e., the Decision Science Laboratory or DeSciL). This experiment protocol can be adapted and applied by other researchers in order to set up a networked desktop virtual reality laboratory.

Introduction

Research on spatial cognition and navigation typically studies the spatial decision-making (e.g., turning left or right at an intersection) and mental representation of individuals in real and virtual environments1,2. The advantages of virtual reality (VR) include the prevention of ethical and safety issues (e.g., during a dangerous evacuation3), the automatic measurement and analysis of spatial data4, and a balanced combination of internal and external validity5,6,7. For example, Weisberg and colleagues extended previous research on individual differences in spatial knowledge acquisition by demonstrating that spatial tasks in VR can provide an objective behavioral measure of spatial ability8. This study also suggested that the navigation behavior in VR approximates real-world navigation because the virtual environment was modeled after the university campus used by Schinazi and colleagues9 (see also the study of Ruddle and colleagues10). VR has also been applied to psychotherapy11, clinical assessment12, consumer behavior13, and surgery14,15. However, most VR systems lack proprioceptive and audio feedback that may improve presence and immersion16,17,18,19, require training with the control interface20,21,22, and lack social cues. Indeed, people in the real world often move in groups23, avoid or follow other people3,24, and make decisions based on social context25,26.

At the same time, research on crowd behavior often focuses on emergent characteristics of crowds (e.g., lane formation, congestion at bottlenecks) that are simulated on a computer or observed in the real world. For example, Helbing and colleagues used a combination of real-world observations and computer simulations in order to suggest improvements to traffic flow in an intersection by separating inflow and outflow with physical barriers and placing an obstacle in the center27. Moussaïd and colleagues used a heuristics-based model to study high-density situations during a crowd disaster28. This approach suggested improvements to an environmental setting for mass events in order to avoid crowd disasters. Implementing such simulations can be relatively straightforward with the aid of an existing open-source framework. For example, SteerSuite provides tools for developing, benchmarking, and testing steering algorithms and crowd behavior29. This framework can provide the core of an agent's navigation rationale, which is critical for successful crowd simulation. In addition, Singh and colleagues demonstrated a single platform that combines a variety of steering techniques30. While researchers can propose design interventions using such simulations, these interventions are rarely validated with human participants in a controlled setting. Controlled experiments are rare in crowd research because they can be difficult to organize and dangerous to the participants.

VR has been employed to investigate social behavior using simple and complex virtual environments with one or more computer-simulated agents. Bode and colleagues31,32 asked participants to evacuate a simple virtual environment, viewed from a top-down perspective, among several agents and found that exit choice was affected by static signage and motivation. Presenting participants with a more complex environment from a first-person perspective, Kinateder and colleagues found that the participants were more likely to follow a single computer-simulated agent during the escape from a virtual tunnel fire25. In a complex virtual environment with multiple agents, Drury and colleagues found that the participants tended to assist a fallen agent during an evacuation when they identified with the crowd26. Collectively, these findings suggest that VR can be an effective way of eliciting social behaviors, even with computer-simulated agents. However, some crowd behaviors may only be observed when there is a realistic social signal (i.e., when the participants are aware that the other avatars are controlled by people3). In order to address this shortcoming, the present protocol describes a method for conducting controlled experiments with multiple users in a networked VR setup. This approach has been employed in a recent study by Moussaïd and colleagues in order to investigate the evacuation behavior of 36 networked participants3.

Research on networked VR has focused on topics unrelated to navigation strategies33,34 and/or relied on existing online gaming platforms such as Second Life. For example, Molka-Danielsen and Chabada investigated evacuation behavior in terms of exit choice and spatial knowledge of the building using participants recruited among existing users of Second Life35. While the authors provide some descriptive results (e.g., visualizations of trajectories), this study had difficulties with participant recruitment, experimental control, and generalization beyond this specific case. More recently, Normoyle and colleagues found that existing users of Second Life and participants in a laboratory were comparable in terms of evacuation performance and exit choice and different in terms of self-reported presence and frustration with the control interface36. The findings from these two studies highlight some of the challenges and opportunities afforded by online and laboratory experiments. Online studies are capable of drawing from a much larger and motivated population of potential participants. However, laboratory studies allow for more experimental control of the physical environment and potential distractions. In addition, online studies may pose some ethical concerns regarding data anonymity and confidentiality.

As a networked desktop VR laboratory, the Decision Science Laboratory (DeSciL) at ETH Zürich is primarily used to study economic decision-making and strategic interactions in a controlled environment. The technical infrastructure of the DeSciL consists of hardware, software for laboratory automation, and software that supports the multi-user desktop VR setup. The hardware includes high-performance desktop computers running the Microsoft Windows 10 Enterprise operating system, control interfaces (e.g., mouse and keyboard, joysticks), headphones, and eye trackers (Table of Materials). All client computers are connected to the university network via one-gigabit Ethernet and access the same network file share. Even with 36 connected clients, there is no visible delay or lag, and the frame rate remains consistently above 100 frames per second. Experiments are managed and controlled with laboratory automation software based on Microsoft PowerShell (i.e., PowerShell Desired State Configuration and PowerShell Remoting). All relevant steps of the protocol are preprogrammed as PowerShell scripts called Cmdlets (e.g., Start-Computer, Stop-Computer). During an experiment, these scripts can be executed simultaneously and remotely on all client computers. This type of laboratory automation ensures an identical state across the client computers, reduces potential errors and complexity during scientific testing, and prevents researchers from having to perform repetitive manual tasks. For the navigation experiments, we use the Unity game engine (<https://unity3d.com/>) in order to support the development of 2D and 3D environments for multi-user, interactive desktop VR. The 36 client computers are connected to a server via an authoritative server architecture. At the start of every experiment, each client sends an instantiation request to the server, and the server responds by instantiating an avatar for that user on all of the connected machines. Each user's avatar has a camera with a 50-degree field of view. Throughout the experiment, each client sends its user's input to the server, and the server updates the movement of all of the clients.
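The authoritative-server pattern described above can be sketched as follows. This is an illustrative Python sketch, not the actual DeSciL implementation (which is built on the Unity game engine); all class and method names here are hypothetical.

```python
# Minimal sketch of an authoritative server: clients request avatar
# instantiation and send raw input; only the server mutates world state,
# which it then broadcasts back, so all clients stay consistent.
class Server:
    def __init__(self):
        self.avatars = {}  # client_id -> (x, y) position

    def handle_instantiation_request(self, client_id):
        # The server owns the state: it creates the avatar and returns
        # the full world state for broadcasting to every connected client.
        self.avatars[client_id] = (0.0, 0.0)
        return dict(self.avatars)

    def handle_input(self, client_id, dx, dy):
        # Clients never move themselves; they submit input deltas and the
        # server computes the new positions.
        x, y = self.avatars[client_id]
        self.avatars[client_id] = (x + dx, y + dy)
        return dict(self.avatars)  # updated state to broadcast

server = Server()
server.handle_instantiation_request("client-01")
server.handle_instantiation_request("client-02")
state = server.handle_input("client-01", 1.0, 0.5)
```

Keeping all movement updates on the server is what prevents individual clients from disagreeing about avatar positions, at the cost of one network round-trip per input.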

In the physical laboratory, each computer is contained in a separate cubicle within three semi-independent rooms (Figure 1). The overall size of the laboratory is 170 m² (150 m² for the experiment rooms and 20 m² for the control room). Each of these rooms is equipped with audio and video recording devices. Experiments are controlled from a separate, adjacent room (i.e., by providing instructions and initiating the experimental program). From this control room, the experimenters can also observe the participants in both the physical and virtual environments. Together with the Department of Economics at the University of Zürich, the DeSciL also maintains the University Registration Center for Study Participants, which was implemented based on h-root37.

Although similar systems have been described in the literature38, the DeSciL is, to our knowledge, the first functional laboratory suitable for multi-user desktop VR experiments on navigation and crowd behavior. Here, we describe the protocol for conducting an experiment in the DeSciL, present representative results from one study on social navigation behavior, and discuss the potential and limitations of this system.


Protocol

All methods described here have been approved by the Research Ethics Committee of ETH Zürich as part of the proposal EK 2015-N-37.

1. Recruit Participants for the Planned Experimental Session.

  1. Sample the participants within particular constraints (e.g., age, gender, educational background) using the participant recruitment system.
  2. Send invitations by email to the randomly selected participants using the contact information provided by the recruitment system.
  3. Wait for these participants to register via the online system. Make sure that more participants than required (e.g., 4 overbooked participants for a session that requires 36 people) register. Overbooked participants help ensure that a session is viable in the event of no-shows.
  4. Ensure that a confirmation email is sent to registered participants automatically.

2. Prepare the Experimental Session.

  1. Prepare the laboratory environment.
    1. Print the participant list from the recruitment system.
    2. Turn on the server and the lights in the control room of the DeSciL and organize the testing rooms according to the required number of participants.
    3. Copy the executable experiment program and its corresponding configuration files to the network drive. This executable deploys a custom-written software framework based on the Unity game engine that supports client-server communication among the computers over a local area network. For navigation experiments, the framework provides a bird's-eye observer view on the server for monitoring the clients' behavior during the experiment.
    4. Open the PowerShell Integrated Scripting Environment (ISE) on the Windows desktop. In the PowerShell console, specify an array of computer names (e.g., $pool = "descil-w01", "descil-w02", …) to create a client pool object. Next, type Start-Pool $pool to start the client computers and Register-Pool $pool to connect the server to the client computers.
    5. Prepare the computers on the client side before launching the program. Type Invoke-Pool { Mount-NetworkShare $path } to mount the correct network folder path on each client computer.
    6. Execute the prepared functions on the server (i.e., Start-GameServer) and on the clients (i.e., Invoke-Pool { Start-GameClient }). Specify the IP address of the server as a parameter of each function.
    7. Wait for a message on the server’s monitor that indicates a successful connection.
    8. Distribute the consent forms and pens in each cubicle. The consent forms contain information regarding the study (e.g., the purpose of the study, potential risks and benefits of the experiment), the contact information of the experimenter, and a legal disclaimer.
    9. Shuffle the deck of seating cards that indicate the seating arrangement of the participants.
  2. Welcome the participants.
    1. Ask the participants to wait outside of the laboratory. Five minutes before the official start time, check the participants' identity documents to ensure that they match the list of registered participants. At the same time, let each participant pick a card that indicates his or her seat number. Have the participants walk to the corresponding cubicles and wait for the experiment to begin.
    2. Wait a few minutes for the participants to read and sign the consent forms. Collect these forms before conducting the experiment.
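The pool-automation steps above (2.1.4–2.1.6) boil down to running the same command on every client computer in parallel. The real laboratory uses PowerShell Remoting; the following Python analogue, with a stand-in `run_on_client` function, is purely illustrative of that pattern.

```python
# Sketch of parallel pool automation: execute one command on all 36
# client computers concurrently so that every machine reaches an
# identical state before the experiment starts.
from concurrent.futures import ThreadPoolExecutor

pool = [f"descil-w{i:02d}" for i in range(1, 37)]  # 36 client names

def run_on_client(host, command):
    # Stand-in for a remote invocation (e.g., PowerShell Remoting or SSH);
    # here it simply reports success for illustration.
    return f"{host}: {command} ok"

def invoke_pool(hosts, command):
    # Analogue of the Invoke-Pool Cmdlet: fan the command out to all
    # clients at once and collect the per-host results.
    with ThreadPoolExecutor(max_workers=len(hosts)) as ex:
        return list(ex.map(lambda h: run_on_client(h, command), hosts))

results = invoke_pool(pool, "Start-GameClient")
```

Collecting one result per host makes it easy to verify, before participants arrive, that every client launched successfully.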

3. Conduct the Experiment.

  1. Broadcast the experiment instructions to all of the participants over the microphone. Inform them of the basic rules, including no communication with other participants and no personal electronic devices. Ask the participants to raise their hands if they have any questions regarding the experiment.
  2. Begin the experiment by presenting the demographic questionnaire (e.g., gender and age) on each client.
  3. Deploy the training scene to teach the participants how to maneuver through the virtual environment. If any participants have trouble using the control interface (e.g., mouse and keyboard), walk to their cubicles in order to assist them. Keep monitoring the participants' progress by requesting screenshots from all of the clients (i.e., type Get-ScreenShots in the PowerShell console) until all of the participants have finished the training session.
  4. After the training session, begin the testing phase of the experiment. Observe the participants' behavior from the bird's-eye interface on the server computer. If a participant is doing something abnormal, click on his or her avatar to send a warning message through the program. Otherwise, try not to interfere with the participants during the experiment.
  5. Ensure that there is a short waiting period before each trial for loading the next scene and allowing the participants to read the instructions.
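During the testing phase, each client typically produces timestamped position samples per trial. A minimal sketch of such logging is shown below; the CSV layout and field names are assumptions for illustration, not the DeSciL file format.

```python
# Sketch of per-client trial logging: one CSV row per position sample,
# recording client, trial number, timestamp, x/y position, and whether
# the trial decision was correct.
import csv
import io

def log_trial(writer, client_id, trial, positions, correct):
    # Assume a fixed 0.1 s sampling interval for this illustration.
    for step, (x, y) in enumerate(positions):
        writer.writerow(
            [client_id, trial, round(step * 0.1, 1), x, y, int(correct)]
        )

buf = io.StringIO()
w = csv.writer(buf)
log_trial(w, "client-01", 1, [(0, 0), (1, 0), (2, 1)], correct=True)
rows = buf.getvalue().strip().splitlines()
```

Writing one row per sample keeps the raw trajectories available for the spatial analyses described in the Representative Results.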

4. Finalize the Experiment.

  1. Close the client and server programs by typing Stop-GameClient and Stop-GameServer in the PowerShell console.
  2. Ask the participants to remain seated until their number is called over the microphone.
  3. Extract the participants’ final scores from the file “Score.txt” in the project folder on the server computer and convert their scores into a monetary payment.
  4. Call the cubicle numbers one at a time and meet each participant at the reception desk. Thank the participants and give them the corresponding payment.
  5. Examine the cubicles and collect any remaining pens or forms.
  6. Copy and save the experiment data from the server to an external disk for future analysis.


Representative Results

For each client on each trial, the experiment data from the DeSciL typically include trajectories, time stamps, and measures of performance (e.g., whether the participant turned in the "correct" direction at a particular intersection). A representative study investigated the effects of signage complexity on the route choices of a crowd of human participants (represented by virtual avatars) in a simple Y-shaped virtual environment. In this experiment, 28 participants (12 women and 16 men; mean age = 22.5 years) were given the same goal location (i.e., a gate number) and were asked to choose the corresponding route option at the intersection using a map (see Figure 2).

The map complexity varied over 16 trials, and the hypothesis was that decision times would be longer and accuracy lower for more complex maps. While we expected the decision accuracy to be relatively high overall, the participants' trajectories can be used in future experiments to define the walking paths of agents that convey a realistic social signal (i.e., believable movements). The total experiment time was approximately 1 h, including welcoming the participants, conducting the training session (for the control interface), and testing in the Y-shaped corridor. The obtained data are summarized in Table 1.

Figure 3 indicates the minimum and maximum completion times for each trial. These descriptive statistics provide an indirect measure of congestion during the trial. The obtained data also allow for the visualization of the trajectories generated by the virtual crowd (see Figure 4). Spatial statistics can then be used to analyze changes in the trajectories over trials. For example, researchers may be interested in how closely the participants followed each other or how smoothly the participants maneuvered with a particular control interface.
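Two such spatial statistics can be computed directly from the logged trajectories: the total path length of a trajectory (an indicator of maneuvering smoothness) and the closest approach between two participants (an indicator of following behavior). The pure-Python sketch below is illustrative; the actual analyses may use dedicated spatial tooling.

```python
# Two simple trajectory statistics computed from (x, y) samples.
from math import hypot

def path_length(traj):
    # Sum of straight-line segment lengths between consecutive samples;
    # longer paths for the same route suggest less smooth maneuvering.
    return sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(traj, traj[1:]))

def min_separation(traj_a, traj_b):
    # Closest approach between two avatars sampled at the same
    # timestamps; small values indicate close following.
    return min(hypot(xa - xb, ya - yb)
               for (xa, ya), (xb, yb) in zip(traj_a, traj_b))

a = [(0, 0), (3, 4), (6, 8)]   # hypothetical trajectory of avatar A
b = [(1, 0), (3, 5), (7, 8)]   # hypothetical trajectory of avatar B
```

Applied per trial, such measures make changes in crowd behavior over trials quantifiable rather than merely visible in the trajectory plots.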

Figure 1
Figure 1: Photographs of the DeSciL laboratory. (a) The control room contains the server that receives traffic from the 36 client computers and monitors the participants in their cubicles. This room can be isolated from the testing rooms in terms of sound and vision. Communication to participants is provided via microphone and speaker system. (b) The three testing rooms contain 36 cubicles. (c) Each cubicle contains a desktop computer, a monitor, a mouse and a keyboard interface, headphones, and an eye tracker.

Figure 2
Figure 2: Views of the Y-shaped virtual environment. (a) From the server, the researchers can observe the participants moving towards the intersection. (b) From the clients, the participants can view the virtual environment and other avatars from a first-person perspective during movement.

Figure 3
Figure 3: Representative results from 16 experimental trials. The maximum and minimum times are the times required by the fastest and slowest participants to reach the destination on each trial.

Figure 4
Figure 4: Participants' trajectories from (a) trial 1 and (b) trial 16. The x- and y-axes represent the locations of the avatars in the crowd. The color bar represents the time elapsed during the trial.

Trial  Map type  Accuracy (%)  Average time (s)
1      Simple    100           42.01
2      Complex   96.4          40.51
3      Simple    100           39.15
4      Complex   100           38.66
5      Complex   100           38.52
6      Complex   100           38.87
7      Simple    100           38.43
8      Complex   100           38.26
9      Simple    100           37.43
10     Simple    100           38.44
11     Complex   100           37.08
12     Complex   100           36.80
13     Simple    100           37.67
14     Complex   100           36.52
15     Simple    100           36.83
16     Simple    100           37.88

Table 1: Representative results from 16 experimental trials. Accuracy represents the percentage of correct choices (i.e., turning towards the correct gate) across all participants. Average time is the mean time required to reach the destination (whether the choice was correct or not) across all participants.
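As a worked summary of Table 1, the mean completion time per map type can be computed directly from the trial values reported above (the per-type aggregation itself is our illustration, not a result reported in the study).

```python
# Per-map-type summary of the Table 1 values: (map type, accuracy %,
# average completion time in seconds) for each of the 16 trials.
trials = [
    ("Simple", 100, 42.01), ("Complex", 96.4, 40.51), ("Simple", 100, 39.15),
    ("Complex", 100, 38.66), ("Complex", 100, 38.52), ("Complex", 100, 38.87),
    ("Simple", 100, 38.43), ("Complex", 100, 38.26), ("Simple", 100, 37.43),
    ("Simple", 100, 38.44), ("Complex", 100, 37.08), ("Complex", 100, 36.80),
    ("Simple", 100, 37.67), ("Complex", 100, 36.52), ("Simple", 100, 36.83),
    ("Simple", 100, 37.88),
]

def mean_time(map_type):
    # Average the completion times over all trials of the given map type.
    times = [t for m, _, t in trials if m == map_type]
    return round(sum(times) / len(times), 2)

simple_mean = mean_time("Simple")    # mean time for simple-map trials
complex_mean = mean_time("Complex")  # mean time for complex-map trials
```

Note that, consistent with the overall decrease in completion time over trials, the two map types differ little on average once practice effects are mixed in.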


Discussion

In this article, we described a multi-user desktop virtual reality laboratory in which up to 36 participants can simultaneously interact and navigate through various virtual environments. The experimental protocol details the steps that are necessary for this type of research and unique to multi-user scenarios. Considerations specific to these scenarios include the number of participants in attendance, the cost of seemingly small experimenter errors, rendering and networking capacities (both server- and client-side), training with the control interface, and data security. Overbooking participants is necessary in order to ensure a precise number of participants in an experimental session. If too few participants attend, then the cost of a failed experimental session is relatively high. Similarly, experimenter errors can lead to a failed session, either because the participants' data are contaminated before the error is detected or because the experiment cannot be conducted at all owing to software or hardware failures. For example, if too much information is distributed through the network, then a relaunch of the entire system may be necessary. This is especially problematic if the experiment has already begun. In addition, participants in virtual navigation experiments require experience and/or training with the control interface because the controls are less intuitive than real walking21 and the interaction with the controls can interfere with spatial memory tasks20. Responsible data management also becomes especially important given the large amount of data obtained per session.

While there are many opportunities afforded by the DeSciL, at least three limitations remain. First, the current system is set up for up to 36 simultaneous participants. Experiments on larger virtual crowds may require computer-controlled agents, traces of human participants from several previous sessions, or the capability of including online participants. Second, future hardware upgrades (e.g., better graphics cards and processors) will be much more expensive than for a traditional, single-user system. Third, multi-user desktop virtual reality research cannot yet be conducted with control interfaces that are more similar to real walking. Thus, research on locomotion and the physical interactions among participants is limited.

Despite these limitations, the DeSciL offers several advantages over real-world studies, single-user laboratory studies, and multi-user online studies. The software automation gives researchers the ability to adapt the experimental protocol to their needs. Compared to both real-world and online studies, the DeSciL allows for more experimental control. For example, experiments in the DeSciL may employ systematic variations of the environment and provide direct observation of the participants in both the virtual and physical worlds. Compared to single-user desktop virtual reality studies with computer-controlled agents, the participants can interact with each other in real time, and the emergent behavior of the virtual crowd is less reliant on the experimenter's preconceptions. Computer-controlled agents in VR often rely on scripted actions and do not adapt to the users' movements in real time. In contrast, networked desktop VR provides a more ecological context in which human-controlled avatars affect (and are affected by) each other's movements. In addition, this approach can inform the movement parameters (e.g., walking speed and hesitations) of future agent-based models in crowd research (e.g., for evacuation scenarios39). In general, multi-user desktop virtual reality studies allow for more precise measurement of spatial behavior and the detection of patterns that may have previously been overlooked.

Recently, the DeSciL has been successfully employed in a series of decision-making40,41 and navigation studies3,21. For example, Moussaïd and colleagues used the multi-user desktop VR setup in order to study the effect of stress on crowd behavior during an evacuation3. In this study, the "correct" exit varied from trial to trial, and only a proportion of the participants were informed of the correct exit. The results indicated that stress led to a more efficient evacuation, but this finding may be attributable to the way in which collisions were implemented. In addition, participants tended to follow other avatars under stress, suggesting that a social signal was conveyed among the participants despite the lack of direct physical interaction. These results emphasize the advantages of multi-user VR compared to single-user VR with computer-controlled agents. Future studies will include the comparison of multi-user data acquired either online or in the laboratory, more complex environmental variations, and the addition of peripheral devices such as eye trackers or physiological sensors. These advancements will allow for the collection of different types of complex behavioral data42. For example, low-cost eye trackers can be incorporated in order to monitor the participants' attention or to coarsely detect areas of interest on the screen.


Disclosures

The authors have nothing to disclose.

Acknowledgments

The representative study was funded by the Swiss National Science Foundation as part of the grant "Wayfinding in Social Environments" (No. 100014_162428). We want to thank M. Moussaid for insightful discussions. We also want to thank C. Wilhelm, F. Thaler, H. Abdelrahman, S. Madjiheurem, A. Ingold, and A. Grossrieder for their work during the software development.

Materials

Name | Company | Catalog Number | Comments
PC | Lenovo | IdeaCentre AIO 700 | 24" screen, 16 GB RAM, and SSDs. CPU: Intel Core i7. GPU: NVidia GeForce GTX 950A
Keyboard | Lenovo | LXH-EKB-10YA |
Mouse | Lenovo | SM-8825 |
Eye tracker | Tobii Technology | Tobii EyeX | Data rate: 60 Hz. Tracking screen size: up to 27"
Communication audio system | Biamp Systems | Networked paging station - 1 | Ethernet: 100BaseTX


References

  1. Waller, D., Nadel, L. Handbook of Spatial Cognition. American Psychological Association. Washington D.C. (2013).
  2. Denis, M. Space and Spatial Cognition: A Multidisciplinary Perspective. Routledge. Abingdon, Oxon. (2017).
  3. Moussaïd, M., Kapadia, M., Thrash, T., Sumner, R. W., Gross, M., Helbing, D., Hölscher, C. Crowd behaviour during high-stress evacuations in an immersive virtual environment. Journal of the Royal Society Interface. 13, (122), 20160414 (2016).
  4. Grübel, J., Weibel, R., Jiang, M. H., Hölscher, C., Hackman, D. A., Schinazi, V. R. EVE: A Framework for Experiments in Virtual Environments. Spatial Cognition X: Lecture Notes in Artificial Intelligence. 159-176 (2017).
  5. Loomis, J. M., Blascovich, J. J., Beall, A. C. Immersive virtual environment technology as a basic research tool in psychology. Behavior Research Methods, Instruments, & Computers. 31, (4), 557-564 (1999).
  6. Brooks, F. P. What's Real About Virtual Reality? Proceedings IEEE Virtual Reality. Cat. No. 99CB36316 (1999).
  7. Moorthy, K., Munz, Y., Jiwanji, M., Bann, S., Chang, A., Darzi, A. Validity and reliability of a virtual reality upper gastrointestinal simulator and cross validation using structured assessment of individual performance with video playback. Surgical Endoscopy and Other Interventional Techniques. 18, (2), 328-333 (2004).
  8. Weisberg, S. M., Schinazi, V. R., Newcombe, N. S., Shipley, T. F., Epstein, R. A. Variations in cognitive maps: Understanding individual differences in navigation. Journal of Experimental Psychology: Learning Memory and Cognition. 40, (3), 669-682 (2014).
  9. Schinazi, V. R., Nardi, D., Newcombe, N. S., Shipley, T. F., Epstein, R. A. Hippocampal size predicts rapid learning of a cognitive map in humans. Hippocampus. 23, (6), 515-528 (2013).
  10. Ruddle, R. A., Payne, S. J., Jones, D. M. Navigating Large-Scale "Desk- Top" Virtual Buildings: Effects of orientation aids and familiarity. Presence. 7, (2), 179-192 (1998).
  11. Riva, G. Virtual Reality in Psychotherapy: Review. CyberPsychology & Behavior. 8, (3), 220-230 (2005).
  12. Ruse, S. A., et al. Development of a Virtual Reality Assessment of Everyday Living Skills. Journal of Visualized Experiments. (86), 1-8 (2014).
  13. Ploydanai, K., van den Puttelaar, J., van Herpen, E., van Trijp, H. Using a Virtual Store As a Research Tool to Investigate Consumer In-store Behavior. Journal of Visualized Experiments. (125), 1-15 (2017).
  14. Satava, R. M. Virtual reality surgical simulator - The first steps. Surgical Endoscopy. 7, (3), 203-205 (1993).
  15. Stanney, K. M., Hale, K. S. Handbook of virtual environments: Design, implementation, and applications. CRC Press. 811-834 (2014).
  16. Ryu, J., Kim, G. J. Using a vibro-tactile display for enhanced collision perception and presence. Proceedings of the ACM symposium on Virtual reality software and technology VRST 04. 89 (2004).
  17. Louison, C., Ferlay, F., Mestre, D. R. Spatialized vibrotactile feedback contributes to goal-directed movements in cluttered virtual environments. 2017 IEEE Symposium on 3D User Interfaces (3DUI). 99-102 (2017).
  18. Knierim, P., et al. Tactile Drones - Providing Immersive Tactile Feedback in Virtual Reality through Quadcopters. Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems - CHI EA '17. 433-436 (2017).
  19. Serafin, S., Nordahl, R., De Götzen, A., Erkut, C., Geronazzo, M., Avanzini, F. Sonic interaction in virtual environments. 2015 IEEE 2nd VR Workshop on Sonic Interactions for Virtual Environments (SIVE). 1-2 (2015).
  20. Grübel, J., Thrash, T., Hölscher, C., Schinazi, V. R. Evaluation of a conceptual framework for predicting navigation performance in virtual reality. PLoS One. 12, (9), (2017).
  21. Thrash, T., Kapadia, M., Moussaid, M., Wilhelm, C., Helbing, D., Sumner, R. W., Hölscher, C. Evaluation of control interfaces for desktop virtual environments. Presence. 24, (4), (2015).
  22. Ruddle, R. A., Volkova, E., Bülthoff, H. H. Learning to walk in virtual reality. ACM Transactions on Applied Perception. 10, (2), 1-17 (2013).
  23. Moussaïd, M., Perozo, N., Garnier, S., Helbing, D., Theraulaz, G. The walking behaviour of pedestrian social groups and its Impact on crowd dynamics. PLoS One. 5, (4), e10047 (2010).
  24. Bode, N. W. F., Franks, D. W., Wood, A. J., Piercy, J. J. B., Croft, D. P., Codling, E. A. Distinguishing Social from Nonsocial Navigation in Moving Animal Groups. The American Naturalist. 179, (5), 621-632 (2012).
  25. Kinateder, M., et al. Social influence on route choice in a virtual reality tunnel fire. Transportation Research Part F: Traffic Psychology and Behaviour. 26, 116-125 (2014).
  26. Drury, J., et al. Cooperation versus competition in a mass emergency evacuation: A new laboratory simulation and a new theoretical model. Behavior Research Methods. 41, (3), 957-970 (2009).
  27. Helbing, D., Buzna, L., Johansson, A., Werner, T. Self-Organized Pedestrian Crowd Dynamics: Experiments, Simulations, and Design Solutions. Transportation Science. 39, (1), 1-24 (2005).
  28. Moussaïd, M., Helbing, D., Theraulaz, G. How simple rules determine pedestrian behavior and crowd disasters. Proceedings of the National Academy of Sciences of the United States of America. 108, (17), 6884-6888 (2011).
  29. Singh, S., Kapadia, M., Faloutsos, P., Reinman, G. An open framework for developing, evaluating, and sharing steering algorithms. International Workshop on Motion in Games. Springer. Berlin, Heidelberg. 158-169 (2009).
  30. Singh, S., Kapadia, M., Hewlett, B., Reinman, G., Faloutsos, P. A modular framework for adaptive agent-based steering. Symposium on Interactive 3D Graphics and Games. ACM. (2011).
  31. Bode, N., Codling, E. Human exit route choice in virtual crowd evacuations. Animal Behaviour. 86, 347-358 (2013).
  32. Bode, N. W. F., Kemloh Wagoum, A. U., Codling, E. A. Human responses to multiple sources of directional information in virtual crowd evacuations. Journal of the Royal Society Interface. 11, (91), 20130904 (2014).
  33. Pandzic, I. S., Capin, T., Lee, E., Thalmann, N. M., Thalmann, D. A flexible architecture for virtual humans in networked collaborative virtual environments. Computer Graphics Forum. 16, Blackwell Publishers Ltd. (1997).
  34. Joslin, C., Pandzic, I. S., Thalmann, N. M. Trends in networked collaborative virtual environments. Computer Communications. 26, (5), 430-437 (2003).
  35. Molka-Danielsen, J., Chabada, M. Application of the 3D multi user virtual environment of Second Life to emergency evacuation simulation. System Sciences (HICSS), 2010 43rd Hawaii International Conference. IEEE. 1-9 (2010).
  36. Normoyle, A., Drake, J., Safonova, A. Egress online: Towards leveraging massively, multiplayer environments for evacuation studies. Tech Reports No MS-CIS-12-15 (2012).
  37. Bock, O., Baetge, I., Nicklisch, A. hroot: Hamburg registration and organization online tool. European Economic Review. 71, 117-120 (2014).
  38. Tanvir Ahmed, D., Shirmohammadi, S., Oliveira, J., Bonney, J. Supporting large-scale networked virtual environments. Virtual Environments, Human-Computer Interfaces and Measurement Systems, 2007. IEEE Symposium. 150-154 (2007).
  39. Cipresso, P., Bessi, A., Colombo, D., Pedroli, E., Riva, G. Computational psychometrics for modeling system dynamics during stressful disasters. Frontiers in Psychology. 8, 1-6 (2017).
  40. Bernold, E., Gsottbauer, E., Ackermann, K., Murphy, R. Social framing and cooperation: The roles and interaction of preferences and beliefs. 1-26 (2015).
  41. Balietti, S., Goldstone, R. L., Helbing, D. Peer review and competition in the Art Exhibition Game. Proceedings of the National Academy of Sciences of the United States of America. 113, (30), 8414-8419 (2016).
  42. Gomez-Marin, A., Paton, J. J., Kampff, A. R., Costa, R. M., Mainen, Z. F. Big behavioral data: Psychology, ethology and the foundations of neuroscience. Nature Neuroscience. 17, (11), 1455-1462 (2014).
