Here, we present the computer-based, multi-agent game HoneyComb, which enables experimental investigations of collective human movement behavior via black-dot avatars on a virtual 2D hexagonal playfield. Different experimental conditions, such as variable incentives on goal fields or vision radius, can be set, and their effects on human movement behavior can be investigated.
Boos, M., Pritz, J., Belz, M. The HoneyComb Paradigm for Research on Collective Human Behavior. J. Vis. Exp. (143), e58719, doi:10.3791/58719 (2019).
Collective human behavior such as group movement frequently shows surprising patterns and regularities, such as the emergence of leadership. Recent literature has revealed that these patterns, often visible at the global level of the group, are based on self-organized, individual behaviors that follow several simple local parameters. Understanding the dynamics of human collective behavior can help to improve coordination and leadership in group and crowd scenarios, such as identifying the ideal placement and number of emergency exits in buildings.
In this article, we present the experimental paradigm HoneyComb, which can be used to systematically investigate conditions and effects of human collective behavior. This paradigm uses a computer-based multi-user platform, providing a setting that can be shaped and adapted to various types of research questions. Situational conditions (e.g., cost-benefit ratios for specific behavior, monetary incentives and resources, various degrees of uncertainty) can be set by experimenters, depending on the research question. Each participant's motions are recorded by the server as hexagonal coordinates with timestamps at an accuracy of 50 ms and with individual IDs. Thus, a metric can be defined on the playfield, and movement parameters (e.g., distances, velocity, clustering, etc.) of participants can be measured over time. Movement data can in turn be combined with non-computerized data from questionnaires garnered within the same experiment setup.
The HoneyComb paradigm is paving the way for new types of human movement experiments. We demonstrate here that these experiments can render results with sufficient internal validity to meaningfully deepen our understanding of human collective behavior.
The computer-based multi-agent game HoneyComb1 offers a methodological paradigm to experimentally investigate how collective human movement patterns and group structures emerge from individual behavior. Human participants are visually represented as avatars (black dots) on a hexagonal virtual playfield resembling a honeycomb (Figure 1). Participants move their avatars via mouse-click to reach goal hexagons, spend move resources (Video 1), and maximize their monetary rewards by building cohesive groups (Video 2). Spatial conditions (e.g., vision radius), reward structures (e.g., monetary goal fields), and communication channels can be manipulated in order to discover which of these conditions impact coordination and leadership in collective movement, and to what extent.
The game's procedural/condition rules, goals, and reward motivators have been designed by social psychologists to investigate human collective movement. In animal swarms as well as human crowds, one can observe emergent phenomena (i.e., global patterns) transpiring from individual behavior that follows local rules. For example, schools of fish and flocks of birds seem to move as coherent entities towards a spatial goal2,3,4, despite large group sizes that reduce their capacity for global or inter-individual communication. Empirical research5,6,7, behavioral modeling8,9,10, and computer simulations11,12,13 have shown that in diverse species, including humans14,15,16, complex patterns at the group level emerge without internal control or external supervision. Local individual movement and, oftentimes, simple rules on the microscopic level are sufficient to generate orderly movement on the macroscopic level. Such experiments contribute to increasing evidence2,6 that not only large swarms but also small groups (human groups as well as other animal groups) are coordinated by local interaction rules1.
Our novel approach using computer-based multi-user avatar games offers one main advantage in researching dynamic human collective phenomena. Using the HoneyComb avatar platform1,17,18,19, spatio-temporal data of individual movement behavior (movement governed by actual individuals) can be fully collected by the server, and the development of behavioral patterns and collective structures can be analyzed with an accuracy of 50 ms (Table 1). As visual and auditory sensory communication can be restricted by requiring participants to use earplugs and encasing their workstations with partition walls, swarm and other crowd behavior conditions can be approximated experimentally. In several experiments1,17,18,19, we manipulated vision radius (global vs. local, Figure 2), monetary incentives (Figure 3a,b), subgroups (Figure 4), and the co-presence of other players (Figure 5) in order to test the impact of these variables on the emergence of collective behavioral patterns such as human flocking behavior17, leadership1, and competition18. To collect the data, a setup of ten to twelve notebooks and one server was used (Figure 6).
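The kind of spatio-temporal analysis described above can be illustrated with a short sketch. It assumes axial hexagonal coordinates and a simplified sample format of (timestamp in ms, q, r) per move; the actual server log defines its own layout.

```python
# Illustrative sketch: deriving movement parameters (distance, velocity)
# from timestamped hexagonal coordinates. Axial (q, r) coordinates and the
# (t_ms, q, r) sample format are assumptions, not the server's actual format.

def hex_distance(a, b):
    """Minimum number of moves between two hexagons in axial coordinates."""
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

def velocity(track):
    """Mean hexagons per second over a list of (t_ms, q, r) samples."""
    if len(track) < 2:
        return 0.0
    moved = sum(hex_distance(track[i][1:], track[i + 1][1:])
                for i in range(len(track) - 1))
    elapsed_s = (track[-1][0] - track[0][0]) / 1000.0
    return moved / elapsed_s if elapsed_s > 0 else 0.0

track = [(0, 0, 0), (1000, 1, 0), (2000, 1, 1), (4000, 2, 1)]
print(velocity(track))  # 3 moves in 4 s -> 0.75
```

The same distance function can feed clustering or dispersion measures computed over all players at a given timestamp.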
The self-organized coordination of individual activities in group-living species has attracted much scientific attention, particularly within the last decade. Examinations are wide-ranging, from simple trail formation and path selection in ants to the complex emergence of vortex structures in fish shoals, and even the segregation of bidirectional flows of pedestrians2.
With our HoneyComb paradigm, we contribute a methodological approach to empirically investigate how varied situational options/constraints, diverse behavioral rules, and individual characteristics on the microscopic level impact the emergence of macroscopic behavioral structures in humans. An important advantage is that the paradigm offers strictly controllable experimental settings defined by experimenters, making it possible to manipulate variables and measure outcomes within a single experiment or to compare multiple experiments. The virtual playfield can be configured according to the requirements of the study design, and sensory communication channels between the participants can be eliminated or reduced according to the experiment parameters. Additionally, environmental affordances can be shaped (e.g., competitive, non-competitive consensus, and rescue settings). Thus, our platform strengthens internal validity (i.e., matching the study design as closely as possible to the research questions) by offering the possibility to manipulate/control variables relevant to the specific research question, using human-governed movement data to examine human movement. Field experiments, in contrast, render benefits in terms of external validity (generalizability) of results15,20,21 to the real world, because they do not preclude effects of unknown, uncontrollable/insuppressible social cues as well as non- and para-verbal behaviors in humans1.
The computer-based multi-agent game HoneyComb has served to investigate the emergence of coordination and leadership patterns of human players moving their avatars on the virtual playfield. Participants were provided only local information about monetary incentives obtainable on goal hexagons, which included an incentive for group cohesion: monetary rewards were multiplied by the number of co-players who ended up on the same goal hexagon. In our first series of studies, we restricted the experimental setup to two simple parameters of swarming behavior (alignment and cohesion) and reduced mutual information transfer to the "reading/transmitting" of movement behavior alone. We then varied the sight radius, giving participants either a global or local view of the virtual playfield (which consists of 97 smaller hexagons), and limited the expendable movement resources (possible moves) of the players.
The shape and elements of the virtual platform, as well as the experimenter-defined parameters of the games played on it, can be designed according to the specific research questions. Depending on the research goal, the size of the playfield can be changed; the colors, shapes, and meanings of the avatars can be adapted; resources can be implemented; and the reward structure and content can be varied. More or less information, uncertainty, and conflicting preferences can also be implemented22. Varying global player-view information and control are also possible. Therefore, via experimental instructions, the environmental affordances of the experiment can be altered (e.g., a consensus vs. escape scenario). In the next section, we clarify how these variables can be applied by describing a real study that used some of these parameters to answer specific study questions.
Data collection and data analysis in this project have been approved by the Ethics Committee of the Georg-Elias-Müller Institute for Psychology of the University of Göttingen (proposal 039/2012).
1. Experimental Setup
- Choose a location that is away from high-traffic areas, such as a computer lab or other designated area with individual workstations that can be configured as a LAN (local area network).
- Arrange for 10 to 12 notebooks of the same type to be used for the experiment, as well as one computer to function as the server (Figure 6). The server program and the client programs require a Java runtime environment, which is available for all common operating systems (a Raspberry Pi can suffice as a client).
- Equipment configuration
- Arrange notebooks on individual workstation tables with chairs as shown in Figure 7.
- Connect notebooks to the server computer via Ethernet cables and a network switch to create a local area network.
- Install partition walls between individual workstations to prevent visual sensory communication (eye contact, hand gestures, facial expressions, etc.) between neighboring participants.
- Acquire earplugs (for one-time use) to be distributed to all participants to prevent audible communication between participants.
2. Participant Recruitment
- Choose a recruiting location where there are a large number of people, such as the entrance hall of an auditorium.
- Address potential recruits using the standardized text that explains the purpose and background of the experiment, its duration, the maximum payment calculated according to performance, and the requirement to participate in a multiplayer game on institution-owned laptops.
- Ensure that participants can understand the English and German instructions and questionnaires related to the experiments.
- If the experimental design includes the use of colors, ensure that participants are free of any color blindness that may prevent them from differentiating the colors used.
- Do not recruit previous participants, as participants should be naïve to the experiment.
- Lead willing recruits to a waiting area away from the recruiting area. Kindly request that they await completion of the group recruitment without talking to each other. Explain that this restriction protects the integrity of the experimental results.
- Once 10 to 12 participants have been recruited, lead them into the pre-arranged computer lab or specified area where the experiment will take place.
- Before participants take their places in the partition-encased workstations, have the participants sign a form designating informed consent.
- Distribute the hygienic, one-time use earplugs to all participants. Inform them that audio-visual communication with other participants is prohibited. Therefore, the use of earplugs and partition-encased workstations is mandatory.
- Have participants take their places in the partition-encased workstations.
3. Experimental Procedure
NOTE: In this experimental procedure, the game used by Boos et al.1 is described as an application example.
- Preparation phase
- The program itself is formatted as a zip-file HC.zip containing 1) the runnables HC.jar, 2) three files for configuration, namely hc_server.config, hc_panel.config, and hc_client.config, and 3) two subfolders named intro and rawdata.
- Create a shared folder on the server computer and unzip the HC.zip into this folder.
- On each client computer, mount and access this shared folder and open a terminal (Linux, Mac OS X: spotlight | search | terminal) or a prompt (Windows: search "cmd"), respectively. Use the command “dir” or “ls” to verify that the unzipped files are listed on each terminal.
- Execute the command “java -version” on each terminal to ensure that a java runtime environment is available. If not, install java before continuing.
- Inspect the three configuration files.
- Edit hc_server.config to configure the 1) number of players, 2) minimum and maximum numbers of moves each player can make, 3) values of the so-called nuggets, and 4) perception radius condition (local or global).
NOTE: The two perception conditions are the global condition (player can see the positions of all participants' avatars) and the local condition (player can see only those avatars adjacent to their avatar; see Figure 3).
- Edit hc_client.config to provide the clients with the server's IP address.
- In hc_panel.config, adjust the size of the hexagons according to the screen's resolution.
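For orientation, a server configuration covering the four settings above might look like the sketch below. The key names and syntax are hypothetical, shown for illustration only; consult the hc_server.config file shipped in HC.zip for the authoritative keys.

```ini
# hc_server.config -- hypothetical key names, illustrative values only
players = 10                 # number of connected clients expected
moves_min = 0                # minimum number of moves per player
moves_max = 15               # maximum number of moves per player
nugget_values = 1,1,1,1,1,2  # payoffs (EUR) of the goal hexagons ("nuggets")
perception = local           # perception radius condition: local | global
```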
- First, start the server program HC_Gui.jar (Figure 8) using the command “java -jar HC_Gui.jar”. Then, start the client programs on each workstation using the command “java -jar HC_ClientAppl.jar”.
NOTE: The clients' screens should display the message, "Please wait. The computer is connecting to the server." In the server GUI, a line appears displaying the IP address of each of the clients. When all clients are connected, the server program displays the message, "All Clients are connected. Ready to start?"
NOTE: The experimenter can prepare the session up to this point.
- When all participants have taken their places, give the final instructions before they insert the earplugs.
- Click "OK" to start the session. From here on, the experiment is controlled only by the instructions on the screens visible to participants. Instructions for a single experiment span multiple screen pages; participants can page back and forth as necessary.
NOTE: Each participant indicates, by clicking a designated button on the screen, that he/she has read the instructions. The experiment cannot commence until all participants are finished reading the instructions.
- Testing phase
- Observe whether the participants are mouse-controlling their avatar dot (twice as big as the visible avatar dots of the other participants) on the HoneyComb virtual 97-hexagon playfield (see Figure 1).
- Have participants start the testing phase in the center of the field, then move on the HoneyComb virtual playfield according to the previously provided instructions on screen.
- All instructions on how to play the game are placed as editable html-files within the program folder of the HoneyComb game. See subfolders intro/de and intro/en for German and English instructions, respectively.
- Have players left-click into the adjacent small hexagon of their choice to move their avatar dot. Only adjacent fields can be chosen for the initial and subsequent moves.
NOTE: After each move, a small tail appears for 4000 ms for each participant, indicating the direction from which he/she came.
- Allow each participant to partake only once in order to avoid possible biases.
NOTE: The game described here requires 5-10 min, including the reading of instructions. Overall, 400 participants in 40 ten-person groups were tested by Boos et al.1.
- Do not restart the experiment with the same participants if there is a technical breakdown or if a participant fails.
4. Post-Test Phase
- Once the game is completed, have participants fill out questionnaires assessing demographic data, Big Five personality factors, perceived levels of stress or calmness, and pay satisfaction (to be paid upon completion of the experiment). These questionnaires can be offered as stand-alone html-files.
- While participants fill out questionnaires, prepare anonymous envelopes with the appropriate amount of money earned in the HoneyComb game just completed. The game’s HoneyComb-computed amounts to be paid to each player are stipulated on the server screen.
- Distribute the earned payments to participants as they exit the testing area.
- Close the server program, then close the client programs once the server program has finished closing.
- Transfer the data, in the form of 2 text files marked by day- and time-stamp of the experiment, to a USB stick.
An initial experiment with the HoneyComb paradigm demonstrated that humans showed basic signs of flocking behavior, such as seeking the proximity of others, without being rewarded17. Subsequently, we addressed the question of how individual humans can be behaviorally coordinated to reach the same physical target/goal, also investigated by Boos et al.1, focusing not only on unspecific flocking behavior but also on group coordination and leadership behavior. Using the above-described experimenter-defined parameters, goal hexagon locations were defined, and a monetary reward option was used to examine shared goals based on shared incentives, as well as motivation toward group cohesion. Motivation to achieve group cohesion was enhanced by stipulating an additional reward based on how many other participants ended up in the same goal hexagon. Within each of the 40 ten-person groups, two subgroups (a minority group comprised of two randomly selected individuals and a majority group comprised of the remaining eight) were created by providing different levels of information. The two minority group members were informed about the location of one two-euro prize hexagon and five one-euro prize hexagons (Figure 9, left). The eight members of the majority group were not informed about the two-euro prize hexagon and instead were shown the locations of six equally rewarded one-euro goal hexagons (Figure 9, right). None of the participants were told that there were different subgroups.
We designed our study questions according to Couzin et al.'s23 computer simulation model. Because the only information exchanged among the players was the perceivable movement of other players, we aimed to see (i) whether this information was sufficient for the informed/higher-rewarded minority group to coordinate the movements of the uninformed/lower-rewarded majority group, and if so, (ii) how the minority group informed of the double-prize goal would/could lead the uninformed majority to their two-euro goal hexagon. As stated earlier, we restricted these study designs to two basic parameters of swarming behavior: 1) alignment (group members moving towards a goal hexagon) and 2) cohesion (group members tending to move as a group). For the alignment parameter, we set up the six goal hexagons that granted a monetary payoff. For the cohesion parameter (i.e., making move choices coordinated with the moves of fellow participants), we granted participants a reward based on the number of avatars that ended up in the same hexagon as their own.
The HoneyComb playfield contains 97 hexagons. All participants' avatars began the game together in the honeycomb's middle hexagon. Each player was granted a maximum of 15 moves. All were restricted to moving their avatar (via a mouse click) only across one of a hexagon's six sides to an adjacent hexagon. The game ended when every avatar was on a payoff field or when every player had used up all 15 moves.
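These movement rules can be summarized in a few lines. The sketch below is illustrative only, assuming axial hexagonal coordinates; the game applies its own internal coordinate scheme.

```python
# Illustrative sketch of the HoneyComb movement rules: a move is legal only
# to one of the six adjacent hexagons, and only while moves remain.
# Axial (q, r) coordinates are an assumption for illustration.

AXIAL_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def neighbors(q, r):
    """The six hexagons adjacent to (q, r)."""
    return [(q + dq, r + dr) for dq, dr in AXIAL_DIRECTIONS]

def is_legal_move(src, dst, moves_used, max_moves=15):
    """A move is legal if it targets an adjacent hexagon and resources remain."""
    return moves_used < max_moves and dst in neighbors(*src)

print(is_legal_move((0, 0), (1, 0), moves_used=0))   # adjacent field: True
print(is_legal_move((0, 0), (2, 0), moves_used=0))   # two fields away: False
print(is_legal_move((0, 0), (1, 0), moves_used=15))  # no moves left: False
```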
An additional experimental factor was implemented to answer a third study question: (iii) whether the perception radius (global vs. local condition) of the other participants affects movement coordination. Half of the 40 ten-person groups were randomly assigned to the local condition, in which players could perceive the movement of only those avatars adjacent to their own avatar. The remaining twenty ten-person groups (global condition) could perceive all participants' avatar locations and movements.
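The two perception conditions amount to a simple visibility filter. This sketch assumes axial hexagonal coordinates and a radius of one hexagon (adjacent fields) for the local condition; the function names and layout are illustrative, not the game's implementation.

```python
def hex_distance(a, b):
    """Minimum number of moves between two hexagons in axial coordinates."""
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

def visible_avatars(own, others, condition="local", radius=1):
    """Global condition: see everyone. Local: only avatars within the radius."""
    if condition == "global":
        return list(others)
    return [p for p in others if hex_distance(own, p) <= radius]

others = [(1, 0), (0, -1), (3, 2)]
print(visible_avatars((0, 0), others, condition="local"))   # [(1, 0), (0, -1)]
print(visible_avatars((0, 0), others, condition="global"))  # all three avatars
```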
To answer question (ii) [which movement characteristics of the minority groups led to more success (successfully reaching a goal field as a group and therefore greater monetary reward)], we defined and analyzed various movement behaviors including first mover, shared movement paths/directions of the two minority participants, path lengths, average time between moves, initial-move order among participants, Big Five personality characteristics (extraversion, openness, etc.), and computer literacy. The statistical procedure, a finite mixture model with two binomials, and detailed results are published in Boos et al.1.
Our study demonstrated that in a group of humans assigned avatars on a 2D HoneyComb playfield (moving according to the above-described parameters and conditions), 20% of them (the 2-person minority group) could, based solely on their movements, successfully lead the other 80%, even when their perception was restricted to only adjacent avatars on the playfield. Here, successful leadership of these 2-person minority group participants entailed that their fellow participants made similar initial moves and that these 2-person minority participants were the first to make an initial move1 (Video 2). For detailed parameters of this group’s movement behavior, please see Table 2. An in-depth analysis of the group’s dispersion over time is provided in Figure 10. We also found, surprisingly, that neither personality variables nor computer literacy among these minority participants played a crucial role in their success.
Figure 1: Playfield of the computer-based multi-agent game HoneyComb. Visual representation of human players as avatars (black dots) on a hexagon virtual playfield. Please click here to view a larger version of this figure.
Figure 2: Local vs. global perspectives. Participants with local perspectives can only see other players' avatars within their visual range. In this case, the marked player (red) is only able to see 4 out of 9 co-players. A global perspective, if configured, would provide visibility of all co-players. Please click here to view a larger version of this figure.
Figure 3: Monetary incentives. This illustration shows how monetary incentives can be implemented within the HoneyComb game. Avatars marked as grey are outside of the local perception radius and are thus invisible to the respective player. Two different points of view are shown. (a) Informed player: this player is endowed with one higher-rewarded goal field, which is marked as "€€", (b) uninformed player: this player is provided six equally lower-rewarded goal fields, which are marked as "€". Please click here to view a larger version of this figure.
Figure 4: Subgroup avatar experiment. In this scenario, two sub-groups are created by coloring the participants' avatars as blue and yellow. Please click here to view a larger version of this figure.
Figure 5: Single vs. joint game. This illustration shows two different settings from one player's point of view, comparable to Belz et al.17 (1a/b) Single game: co-players are invisible and cannot be found on the hexagon virtual playfield, (2a/b) joint game: co-players are visible as long as they stay within the local perception radius of other players. Please click here to view a larger version of this figure.
Figure 6: Server and client configuration. Ten to twelve notebooks (clients C1 through C12) should be arranged in the vicinity of (and connected to) the server computer. The use of partitions encasing each participant's workstation (indicated as thick lines) prohibits visual communication with others outside the virtual environment. Use of LAN-cables instead of WLAN is recommended due to less latency and more reliable data throughput. Please click here to view a larger version of this figure.
Figure 7: Contextual setting. Communication (sensory, visual, auditory) among participants is restricted due to the use of partition walls and earplugs. Please click here to view a larger version of this figure.
Figure 8: Graphic interface on the server. For each connected client, there is a line showing IP and other data (e.g., number of moves, position, amount to be paid to each player). Please click here to view a larger version of this figure.
Figure 9: Successful leadership. On the left side, the screenshot shows one informed player approaching a monetary goal field (see also Figure 4), successfully leading five other players to his/her goal field. On the right side, an uninformed player lost sight of his/her co-players. Please click here to view a larger version of this figure.
Figure 10: In-depth analysis of spatial dispersion over gaming time (group 44). Mean distance between group members over time for the whole group (group mean), compared with both players who were informed about the location of the higher-rewarded €€ goal-field (Informed 1, Informed 2), and eight uninformed players (Uninformed). By the end of the game, one uninformed player had lost the group and arrived on a € goal-field (Isolated player). Please click here to view a larger version of this figure.
Video 1: Example of collective movement from the perspective of an uninformed player (group 44). Please click here to view this video. (Right-click to download.)
Video 2: Example of collective movement from the perspectives of the two informed players in the same game as Video 1 (group 44). Please click here to view this video. (Right-click to download.)
Table 1: Data format. Each participant's moves and associated timestamps on the hexagon virtual playfield are recorded as hexagonal coordinates in separate rows, enabling the use of hierarchical/mixed modelling. The table shows an excerpt of the dataset generated by a group consisting of 10 players (group 44).
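A row-per-move format of this kind lends itself to straightforward parsing into per-player trajectories. The whitespace-separated column layout assumed below (time in ms, player ID, q, r) is an illustration, not the exact export format of the server.

```python
# Illustrative sketch: reading a HoneyComb-style movement log into
# per-player trajectories. The column layout (t_ms, player_id, q, r) is an
# assumption for illustration; the server defines its own export format.
from collections import defaultdict

def parse_log(lines):
    """Group rows by player ID, keeping (t_ms, q, r) tuples in order."""
    tracks = defaultdict(list)
    for line in lines:
        t, pid, q, r = line.split()
        tracks[int(pid)].append((int(t), int(q), int(r)))
    return dict(tracks)

raw = [
    "0 0 0 0",
    "0 1 0 0",
    "1450 0 1 0",
    "2300 1 0 1",
]
print(parse_log(raw)[0])  # [(0, 0, 0), (1450, 1, 0)]
```

Trajectories in this shape can be fed directly into hierarchical/mixed models with player ID as the grouping factor.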
Table 2: Detailed results of group movement behavior analysis (group 44). Results are listed (a) for the individual level, and (b) for the group level. On the group level, means were calculated for the uninformed majority (eight players), informed minority (two players), and the whole group (10 players). 1Players with IDs 0 and 7 were randomly chosen to be informed about the location of the higher-rewarded €€ goal-field; ∑ Moves = total number of moves; Rank of 1st move = rank of the 1st move in relation to the other players; Latency = mean movement latency between two steps in sec.; Payout = individual reward after completion of the game in €; Final distance = average distance of each player to all remaining players by the end of the game; Distance to €€-field = distance to the €€ goal-field by the end of the game; Time = total duration of the game in sec.; % of fields explored = percentage of the total field (97 hexagons) explored by the group. Please see also Figure 10 for an in-depth analysis of the group’s dispersion over gaming time, Video 1 and Video 2 for the collective movement of the group, and Table 1 for an excerpt of movement data.
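The dispersion measure underlying Figure 10 and the distance columns of Table 2 can be sketched as a mean pairwise distance at one point in time. Axial hexagonal coordinates and the positions mapping are illustrative assumptions.

```python
# Illustrative sketch of group dispersion: the mean hex distance over all
# unordered player pairs at one timestamp. Axial (q, r) coordinates and the
# {player_id: (q, r)} mapping are assumptions for illustration.
from itertools import combinations

def hex_distance(a, b):
    """Minimum number of moves between two hexagons in axial coordinates."""
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

def mean_pairwise_distance(positions):
    """Average hex distance over all unordered player pairs."""
    pairs = list(combinations(positions.values(), 2))
    return sum(hex_distance(a, b) for a, b in pairs) / len(pairs)

positions = {0: (0, 0), 1: (1, 0), 2: (2, 0)}
print(mean_pairwise_distance(positions))  # (1 + 2 + 1) / 3, about 1.33
```

Evaluating this measure at each timestamp yields a dispersion curve over gaming time of the kind plotted in Figure 10.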
One fundamental question in using multi-client virtual environments as a research paradigm to investigate human collective behavior is whether the results are applicable to actual scenarios. In other words, does the methodological approach yield results with sufficient ecological or external validity? Representing human participants as avatars on a virtual playfield and letting them move via mouse-clicks reduces social cues. Additionally, keeping communication to a minimum allows experimenters to investigate which tacit behavioral cues are transmitted among humans that may affect human group coordination and leadership behavior, as well as under which environmental affordances (e.g., rescue, competition, evacuation) these behaviors are more strongly affected, and to what extent. As long as there is strict adherence to the two pre-testing phases in the protocol and the testing procedures, this reductionist approach guarantees internal validity. In order to allow the transfer of results to "real" group and crowd dynamics, the experimental setup and test phases may be gradually modified to become more complex (e.g., allowing for additional communication beyond the mere transmitting/reading of movement behavior, adding information on individual characteristics embedded semantically into various real-world scenarios, etc.) and described in the on-screen instructions read by the participants before the game starts.
To address the matter of external validity, the hexagonal playfield [initially chosen to standardize players' movements to two-dimensional hexagonal coordinates due to its (pre-tested) usability and reduction of confounding factors] can be varied. A two-dimensional grid with free movement choice would enable players to create more continuous and complex movement data. A three-dimensional environment created with the Unity or Unreal engine, for example, can also heighten the ecological/external validity. However, each step towards lessening the restriction of movement raises a problem: with the rising complexity of freedom-of-movement in the simulated scenario, the influence of confounding factors (e.g., interpersonal differences such as computer experience or familiarity with spatial orientation in three-dimensional games) increases, which can lead to biased results and reduce internal validity.
The advantage of the method outlined in the HoneyComb protocol is that it can be combined with computer simulation models and used as a paradigm to empirically test if collective patterns found in the computer simulations also hold for behavior in groups of humans. To enhance the external validity of such tests, participants should be asked in the post-test phase questionnaire if they felt sufficiently and humanly represented by their avatars and whether they were able to perceive their co-players as human actors. The protocol specifies the physical presence of the co-players sitting in workstations beside each other (even though the protocol parameters preclude sensory auditory or visual communication) in order to enhance these feelings of human embodiment.
In sum, the methods applied by the HoneyComb approach outlined in the protocol's pre-test, test, and post-test phases provide a novel paradigm to investigate basic mechanisms of collective phenomena such as group coordination, leadership, and intra-group differentiation. The method's most important limitation is its vulnerability to human error by the recruiters, particularly if they are not stringent enough in ensuring that participants do not communicate with each other during the pre-test and test phases.
The authors have nothing to disclose.
This research was funded by the German Initiative of Excellence (Institutional Strategy: https://www.uni-goettingen.de/en/32632.html). We thank Margarita Neff-Heinrich for her English proofreading.
Materials: partition walls between work stations; equipment for LAN installation.
- Boos, M., Pritz, J., Lange, S., Belz, M. Leadership in Moving Human Groups. PLoS Computational Biology. 10, (4), e1003541 (2014).
- Moussaid, M., Garnier, S., Theraulaz, G., Helbing, D. Collective information processing and pattern formation in swarms, flocks, and crowds. Topics in Cognitive Science. 1, (3), 469-497 (2009).
- Sumpter, D. J. T. Collective Animal Behavior. Princeton University Press. Princeton. (2010).
- Krause, J., et al. Fish shoal composition: mechanisms and constraints. Proceedings - Royal Society. Biological Sciences. 267, (1456), 2011-2017 (2000).
- Camazine, S., et al. Self-Organization in Biological Systems. Princeton University Press. Princeton. (2003).
- King, A. J., Sueur, C., Huchard, E., Cowlishaw, G. A rule-of-thumb based on social affiliation explains collective movements in desert baboons. Animal Behavior. 82, (6), 1337-1345 (2011).
- Fischer, J., Zinner, D. Communication and cognition in primate group movement. International Journal of Primatology. 32, (6), 1279-1295 (2011).
- Couzin, I. D., Krause, J. Self-organization and collective behavior in vertebrates. Advances in the Study of Behavior. 32, 1-75 (2003).
- Katz, Y., Tunstrøm, K., Ioannou, C. C., Huepe, C., Couzin, I. D. Inferring the structure and dynamics of interactions in schooling fish. Proceedings of the National Academy of Sciences of the United States of America. 108, (46), 18720-18725 (2011).
- Guy, S. J., Curtis, S., Lin, M. C., Manocha, D. Least-effort trajectories lead to emergent crowd behaviors. Physical Review E: Statistical, Nonlinear, and Soft Matter Physics. 85, (1), 016110 (2012).
- Shao, W., Terzopoulos, D. Autonomous pedestrians. Proceedings of the 2005 ACM SIGGRAPH/Eurographics Symposium on Computer Animation. (2005).
- Reynolds, C. W. Flocks, herds and schools: A distributed behavioral model. Seminal Graphics. ACM. 273-282 (1987).
- Pelechano, N., Allbeck, J. M., Badler, N. I. Controlling individual agents in high-density crowd simulation. Proceedings of the 2007 ACM SIGGRAPH/Eurographics Symposium on Computer Animation. 99-108 (2007).
- Helbing, D., Molnár, P., Farkas, I. J., Bolay, K. Self-organizing pedestrian movement. Environment and Planning B: Planning and Design. 28, (3), 361-383 (2001).
- Dyer, J. R. G., Johansson, A., Helbing, D., Couzin, I. D., Krause, J. Leadership, consensus decision making and collective behavior in humans. Philosophical Transactions - Royal Society. Biological Sciences. 364, (1518), 781-789 (2009).
- Moussaid, M., Schinazi, V. R., Kapadia, M., Thrash, T. Virtual Sensing and Virtual Reality: How New Technologies can Boost Research on Crowd Dynamics. Frontiers in Robotics and AI. (2018).
- Belz, M., Pyritz, L. W., Boos, M. Spontaneous flocking in human groups. Behavioral Processes. 92, 6-14 (2013).
- Boos, M., Franiel, X., Belz, M. Competition in human groups - Impact on group cohesion, perceived stress and outcome satisfaction. Behavioral Processes. 120, 64-68 (2015).
- Boos, M., Li, W., Pritz, J. Patterns of Group Movement on a Virtual Playfield: Empirical and Simulation Approaches. Social Network Analysis: Interdisciplinary Approaches and Case Studies. Fu, X., Luo, J. -D., Boos, M. 197-223 (2017).
- Dyer, J. R. G., et al. Consensus decision making in human crowds. Animal Behavior. 75, (2), 461-470 (2008).
- Faria, J. J., Dyer, J. R. G., Tosh, C. R., Krause, J. Leadership and social information use in human crowds. Animal Behavior. 79, (4), 895-901 (2010).
- Conradt, L. Models in animal collective decision-making: information uncertainty and conflicting preferences. Interface Focus. 2, (2), 226-240 (2011).
- Couzin, I. D., Krause, J., Franks, N. R., Levin, S. A. Effective leadership and decision-making in animal groups on the move. Nature. 433, 513-516 (2005).