

The Collective Trust Game: An Online Group Adaptation of the Trust Game Based on the HoneyComb Paradigm

Published: October 20, 2022 doi: 10.3791/63600
* These authors contributed equally


The Collective Trust Game is a computer-based, multi-agent trust game based on the HoneyComb paradigm, which enables researchers to assess the emergence of collective trust and related constructs, such as fairness, reciprocity, or forward-signaling. The game allows detailed observations of group processes through movement behavior in the game.


The need to understand trust in groups holistically has led to a surge in new approaches to measuring collective trust. However, this construct is often not fully captured in its emergent qualities by the available research methods. In this paper, the Collective Trust Game (CTG) is presented, a computer-based, multi-agent trust game based on the HoneyComb paradigm, which enables researchers to assess the emergence of collective trust. The CTG builds on previous research on interpersonal trust and adapts the widely known Trust Game to a group setting in the HoneyComb paradigm. Participants take on the role of either an investor or trustee; both roles can be played by groups. Initially, investors and trustees are endowed with a sum of money. Then, the investors need to decide how much, if any, of their endowment they want to send to the trustees. They communicate their tendencies as well as their final decision by moving back and forth on a playfield displaying possible investment amounts. At the end of their decision time, the amount the investors have agreed upon is multiplied and sent to the trustees. The trustees have to communicate how much of that investment, if any, they want to return to the investors. Again, they do so by moving on the playfield. This procedure is repeated for multiple rounds so that collective trust can emerge as a shared construct through repeated interactions. With this procedure, the CTG provides the opportunity to follow the emergence of collective trust in real time through the recording of movement data. The CTG is highly customizable to specific research questions and can be run as an online experiment with simple, low-cost equipment. This paper shows that the CTG combines the richness of group interaction data with the high internal validity and time-effectiveness of economic games.


The Collective Trust Game (CTG) provides the opportunity to measure collective trust online within a group of humans. It generalizes the original Trust Game by Berg, Dickhaut, and McCabe1 (BDM) to the group level and can capture and quantify collective trust in its emergent qualities2,3,4, as well as related concepts such as fairness, reciprocity, or forward-signaling.

Previous research mostly conceptualizes trust as a solely interpersonal construct, for example, between a leader and a follower5,6, excluding higher levels of analysis. Especially in organizational contexts, this might not be enough to comprehend trust holistically, so there is a great need to understand the processes by which trust builds (and diminishes) on a group level.

Recently, trust research has incorporated more multi-level thinking. Fulmer and Gelfand7 reviewed a number of studies on trust and categorized them according to the level of analysis that is investigated in each study. The three different levels of analysis are interpersonal (dyadic), group, and organizational. Importantly, Fulmer and Gelfand7 additionally distinguish between different referents. The referents are those entities at which trust is directed. This means that when "A trusts B to X", then A (the investor in economic games) is represented by the level (individual, group, organizational) and B (the trustee) is represented by the referent (individual, group, organizational). X represents a specific domain to which trust refers. This means that X can be anything such as a generally positive inclination, active support, reliability, or financial exchanges as in economic games1.

Here, collective trust is defined based on Rousseau and colleagues' definition of interpersonal trust8, and similar to previous studies on collective trust9,10,11,12,13,14; collective trust comprises a group's intention to accept vulnerability based upon positive expectations of the intentions or behavior of another individual, group, or organization. Collective trust is a psychological state shared among a group of humans and formed in interaction among this group. The crucial aspect of collective trust is therefore the sharedness within a group.

This means that research on collective trust needs to look beyond a simple average of individual processes and conceptualize collective trust as an emergent phenomenon2,3,4, as new developments in group science show that group processes are fluid, dynamic, and emergent2,15. We define emergence as a "process by which lower level system elements interact and through those dynamics create phenomena that manifest at a higher level of the system"16 (p. 335). Presumably, this should also apply to collective trust.

Research that reflects the focus on emergence and dynamics of group processes should use appropriate methodologies17 to capture these qualities. However, the current status of collective trust measurement seems to lag behind. Most studies have employed a simple averaging technique across the data of each individual in the group9,10,12,13,18. Arguably, this approach has little predictive validity2 as it disregards that groups are not simply aggregations of individuals but higher-level entities with unique processes. Some studies have tried to address these drawbacks: a study by Adams19 employed a latent variable approach, while Kim and colleagues10 used vignettes to estimate collective trust. These approaches are promising in that they recognize collective trust as a higher-level construct. Yet, as Chetty and colleagues20 note, survey-based measures lack incentives to answer truthfully, so research on trust has increasingly adopted behavioral or incentive-compatible measures21,22.

This concern is addressed by a number of studies which have adapted a behavioral method, namely the BDM1, to be played by groups23,24,25,26. In the BDM, two parties act as either investors (A) or trustees (B). In this sequential economic game, both A and B receive an initial endowment (e.g., 10 Euros). Then, A needs to decide how much, if any, of their endowment they would like to send to B (e.g., 5 Euros). This amount is then tripled by the experimenter, before B can decide how much, if any, of the received money (e.g., 15 Euros) they would like to send back to A (e.g., 7.5 Euros). The amount of money A sends to B is operationalized as the level of trust of A toward B, while the amount that B sends back can be used to measure the trustworthiness of B or the degree of fairness in the dyad of A and B. A large body of research has investigated behavior in dyadic trust games27. The BDM can be played both as a so-called 'one-shot' game, in which participants play the game only once with a specific person, and in repeated rounds, in which aspects such as reciprocity28,29 as well as forward-signaling might play a role.
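The payoff structure of one BDM round can be summarized in a short calculation. The following Python sketch is an illustration only (the function name and structure are our own, not part of any BDM or CTG software); it reproduces the worked example from the text:

```python
def bdm_round(endowment, investment, multiplier, returned):
    """Compute the final payoffs of investor A and trustee B for one
    round of the Berg-Dickhaut-McCabe Trust Game.
    All amounts are in the same monetary unit (e.g., Euros)."""
    assert 0 <= investment <= endowment, "A can only invest the endowment"
    received = investment * multiplier  # amount arriving at the trustee
    assert 0 <= returned <= received, "B can only return what was received"
    payoff_investor = endowment - investment + returned
    payoff_trustee = endowment + received - returned
    return payoff_investor, payoff_trustee

# Worked example from the text: 10 Euro endowments, A invests 5,
# the experimenter triples it to 15, and B returns 7.5.
print(bdm_round(10, 5, 3, 7.5))  # -> (12.5, 17.5)
```

Note that full trust (investing everything) maximizes the joint payoff, while returning nothing maximizes the trustee's individual payoff; this tension is what the investment operationalizes as trust.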

In many studies that have adapted the BDM for groups23,24,25,26, either the investor, the trustee, or both roles were played by groups. However, none of these studies recorded group processes. Simply substituting individuals with groups in study designs does not meet the standards Kolbe and Boos17 or Kozlowski15 set up for investigations of emergent phenomena. To fill this gap, the CTG was developed.

The aim of developing the CTG was to create a paradigm that would combine the widely used BDM1 with an approach that captures collective trust as an emergent behavior-based construct that is shared among a group.

The CTG is based on the HoneyComb paradigm by Boos and colleagues30, which has also been published in the Journal of Visualized Experiments31 and has now been adapted for use in trust research. As described by Ritter and colleagues32, the HoneyComb paradigm is "a multi-agent computer-based virtual game platform that was designed to eliminate all sensory and communication channels except the perception of participant-assigned avatar movements on the playfield" (p. 3). The HoneyComb paradigm is especially suitable to research group processes as it allows researchers to record the movement of members of a real group with spatio-temporal data. It could be argued that, next to group interaction analysis17, HoneyComb is one of the few tools that allows researchers to follow group processes in great detail. In contrast to group interaction analysis, quantitative analysis of the spatio-temporal data of HoneyComb is less time-intensive. Additionally, the reductionist environment and the possibility to exclude all interpersonal communication between participants except the movement on the playfield allow researchers to limit confounding factors (e.g., physical appearance, voice, facial expressions) and create experiments with high internal validity. While it is difficult to identify all influential aspects of a group process in studies employing group discussion designs33, the focus on basic principles of group interaction in a movement paradigm allows researchers to quantify all aspects of the group process in this experiment. Additionally, previous research has used proxemic behavior34 (i.e., reducing the distance between oneself and another individual) to investigate trust35,36.

Figure 1
Figure 1: Schematic overview of the CTG. (A) Schematic procedure of one CTG round. (B) Initial placement of avatars at the beginning of a round. The three blue-colored investors are standing on the initial field "0". The yellow trustee is standing on the initial field "0". (C) Screenshot during the invest phase showing three investors (blue avatars) on the lower half of the playfield. One (big blue avatar) is currently standing on "12", two investors are currently standing on "24". Two avatars have tails (indicated by orange arrows). The tails indicate from which direction they moved to their current field (e.g., one investor (big blue avatar) just moved from "0" to "12"). The avatar without a tail has been standing on this field for at least 4000 ms. (D) Screenshot during the return phase showing one trustee (yellow avatar) and the upper half of the playfield. The trustee is currently standing on "3/6" and has recently moved there from "2/6" as indicated by the tail. The blue number below (36) indicates the investment made by the investors. The yellow number, indicated by the arrow, is the current return (54) as depicted in the middle of the playfield. The return is calculated as follows: (invest (36 cents) x 3) x current return fraction (3/6) = 54 cents. (E) Pop-up window giving feedback to participants on how much they have earned during the round, displayed for 15 s after the trustee time-out expires.

The main procedure of the CTG (Figure 1A) is closely based on the procedure of the BDM1, in order to make results comparable to previous studies using this economic game. As the HoneyComb paradigm is based on the principle of movement, participants indicate the amount they would like to invest or return by moving their avatar onto the small hexagon field that indicates a certain amount of money or fraction to return (Figure 1C,D). Prior to each round, both the investors and trustees are endowed with a certain amount of money (e.g., 72 cents), with the investors being placed in the lower half of the playfield and the trustees being placed in the upper half of the playfield (Figure 1B). In the default setting, the investors are allowed to move first, while the trustees remain still. The investors move across the playfield to indicate how much, if any, of their endowment they would like to send to the trustee (Figure 1C). Through moving back and forth on the field, participants may also communicate to other investors how much they would like to send to the trustee. Depending on the configuration, participants need to reach a unanimous decision on how much they would like to invest by converging on one field when the time-out is reached. Unanimous decisions were required to ensure that investors interact with each other instead of simply playing alongside one another. If the investors do not reach a joint decision, a penalty (e.g., 24 cents) is deducted from their account. This was implemented to ensure that investors would be highly motivated to reach a shared level of collective trust. Once the investors' time is up, the invested money is multiplied and sent to the trustees who are then allowed to move while the investors remain still. The trustees indicate through movement how much they would like to return to the investors (Figure 1D).
The available return options are displayed as fractions on the playfield to keep the cognitive load on trustees comparatively low. The field on which the trustees stand once their allocated time runs out indicates which fraction (e.g., 4/6) is returned to the investors. The round ends with a pop-up (Figure 1E) that summarizes for each participant how much they earned during that round and what their current account balance is.
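The arithmetic of the return, as depicted in Figure 1D, can be made explicit in a few lines. This Python sketch is illustrative only (the CTG itself is a Java program and does not expose such a function):

```python
from fractions import Fraction

def ctg_return(investment_cents, multiplier, return_fraction):
    """Amount (in cents) sent back to the investors, given the fraction
    field the trustees stand on when their time-out expires."""
    return int(investment_cents * multiplier * Fraction(return_fraction))

# Example from Figure 1D: invest of 36 cents, multiplier 3, trustees on "3/6":
print(ctg_return(36, 3, "3/6"))  # -> 54
```

Using exact fractions avoids the rounding issues that the decimal `tscala` values (e.g., 0.166666) can introduce for payouts.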

Rounds should be repeated multiple times. Researchers should have participants play the CTG for at least 10 or 15 rounds in the same roles. This is necessary as collective trust is an emergent construct and needs to develop during repeated interactions within a group. Similarly, other concepts such as forward-signaling (i.e., reciprocating high returns from trustees with high investments in the next round) will only emerge in repeated interactions. It is crucial, however, that participants are unaware of the exact number of rounds to be played as it has been shown that behavior can drastically change when participants are aware that they are playing the last round (i.e., more unfair behavior or defections in economic games37,38).

In this way, the CTG provides information about the emergence of collective trust on multiple levels. First, the level of collective trust exhibited in the final round should be a close representation of the shared level of trust investors hold towards the trustee(s). Second, the amount invested in each round can serve as a proxy for the emergence of collective trust over repeated interactions. Third, movement data sheds light on the group process that determines how much money is invested in each round.



Data collection and data analysis in this project have been approved by the Ethics Committee of the Georg-Elias-Müller Institute for Psychology of the University of Göttingen (proposal 289/2021); the protocol follows the guidelines on human research of the Ethics Committees of the Georg-Elias-Müller-Institute for Psychology. The CTG software can be downloaded from the OSF project (DOI 10.17605/OSF.IO/U24PX) under the link: https://s.gwdg.de/w88YNL.

1. Prepare technical setup

  1. Prepare online consent forms and questionnaires
    1. Prepare an online consent form in an online questionnaire tool.
    2. If applicable, prepare an online questionnaire in an online questionnaire tool.
      NOTE: It is possible to include a short questionnaire within the HoneyComb program (see step 1.3.5). To use longer questionnaires, use a separate online questionnaire tool instead. Examples for online questionnaire tools are given in the Table of Materials.
  2. Prepare remote desktop server
    1. Install a Linux-based operating system on a remote server. If possible, ask technical assistants about the available resources at the institution. Otherwise, follow an installation guideline39.
    2. Create different users on this server40.
      1. Create a user admin which has root permissions and is accessed solely by the technical lead in the experiment.
      2. Create a user experimenter which has permissions to create shared folders, import and export data, and can be accessed by all personnel collecting data (including students/research assistants, etc.).
      3. Create multiple users named participant-1, participant-2, etc.
        NOTE: Researchers will only be able to test as many participants in one experimental session as users that are created.
    3. Execute the command java -version on the admin user to ensure that a Java runtime environment is available on the server. If not, install the most recent Java version before continuing and make sure all users can access it.
    4. Install the program
      1. Download the program.
        NOTE: The program can be downloaded as a zip-file HC_CTG.zip containing 1) the runnable HC.jar, 2) three files for configuration (hc_server.config, hc_panel.config, and hc_client.config), and 3) two subfolders named intro and rawdata.
      2. Create a folder on the experimenter user and share it with the other users41. Extract the files from the compressed file HC_CTG.zip into this folder.
      3. For each participant user, access this shared folder and check that the user can access the files.
  3. Open the three configuration files.
    1. Edit hc_server.config and save the edited file.
      1. Configure the number of players by setting n_Pl to the desired number. For example, enter 4 behind the =.
      2. Configure the number of rounds to play (playOrder) by repeating the game number 54a (e.g., 54a, 54a, 54a, 54a for four rounds).
        NOTE: i54a stands for the instructions and should not be deleted in the configuration file.
      3. Configure whether a questionnaire should be shown in HoneyComb by including 200 at the end of playOrder. Delete 200 if a separate online questionnaire tool is used.
      4. Configure the investment scale. To configure the scale for investors (iscale), enter which values should be available as investment steps (e.g., 0, 12, 24, 36, 48, 60, 72). Use integers that are multiples of three so that payouts are also integers.
        NOTE: These configured values are also displayed as possible investment steps to the investors.
        1. Configure the display scale for trustees (tlabel) by choosing which values should be displayed as possible returns on the playfield (e.g., 0, 1/6, 2/6, 3/6, 4/6, 5/6, 1). NOTE: This scale does not influence the calculation of payouts.
        2. Configure the scale for trustees (tscala) by choosing which return values should be possible as returns (e.g., 0, 0.166666, 0.3333, 0.5, 0.6666, 0.833331, 1). Use decimal values only (i.e., no fractions).
          NOTE: These values are used to calculate payouts and are NOT displayed on the playfield.
      5. Configure the time-ins (timeInI for investors, timeInT for trustees) and time-outs (timeOutI for investors, timeout for trustees) in seconds. For example, timeInI = 0, timeOutI = 30, timeInT = 30, and timeout = 45.
      6. Configure the amount of money investors and trustees are endowed with in each round in cents (r52).
      7. Configure the factor with which the investment is multiplied before being sent to the trustee (f52).
      8. Configure whether the group has to reach a unanimous decision (set bUnanimous to true) or not (set bUnanimous to false).
      9. Configure whether the group is paid out in equal parts (set bCommon to true) or according to how much each investor has contributed to the investment (set bCommon to false).
      10. If bUnanimous is set to true, configure the penalty-the amount of money deducted from the investors if a unanimous decision is not reached (p52).
    2. Edit hc_client.config if necessary. Make sure to set ip_nr to localhost so that the clients can connect to the server running on the same machine.
    3. Edit hc_panel.config.
      1. Adjust the size of the hexagons (radius) according to the screen resolution. Test the experiment on multiple different screens to ensure that the experiment will be visible on a wide variety of screens.
      2. Adjust the text that is displayed on the playfield under labels (e.g., Your role is: investor, Account Balance, etc.)
    4. Adjust and/or translate the instructions, if necessary. To do so, edit and save the simple HTML-files (Figure 2A) in the "intro" folder within the HoneyComb program folder.
    5. If you want to use the questionnaire within the HoneyComb program, adjust and/or translate the questionnaire in the file qq.txt and save the file.
    6. Keep this setup constant across all experiment sessions (within one experiment condition). Document all configurations.
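Putting steps 1.3.1.1 through 1.3.1.10 together, a complete hc_server.config for the default setup described above might look as follows. This is an illustrative sketch only: the exact file syntax is not documented in this protocol and the key=value form is an assumption; only the parameter names and example values are taken from the steps above.

```
# hc_server.config - illustrative values; key=value syntax assumed
n_Pl = 4
playOrder = i54a, 54a, 54a, 54a, 54a
iscale = 0, 12, 24, 36, 48, 60, 72
tlabel = 0, 1/6, 2/6, 3/6, 4/6, 5/6, 1
tscala = 0, 0.166666, 0.3333, 0.5, 0.6666, 0.833331, 1
timeInI = 0
timeOutI = 30
timeInT = 30
timeout = 45
r52 = 72      # endowment per round (cents)
f52 = 3       # investment multiplier
bUnanimous = true
bCommon = true
p52 = 24      # penalty if no unanimous decision (cents)
```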

2. Participant recruitment

  1. Online advertisement
    1. Recruit participants over available channels (e.g., social media, university blog, flyer with QR-code). State important information about the experiment, such as its purpose, duration, and maximum payment calculated according to game behavior.
      ​NOTE: The sample presented here was recruited via an online blog for psychology students at the University of Göttingen as well as unpaid advertisements in social media groups. An example flyer can be seen in Supplemental Figure 1.
    2. Make potential participants aware that participation will require usage of personal laptops/PCs with a stable internet connection and in a quiet, secluded area. Make participants aware that they might need to install a program to establish the Remote Desktop connection.
      NOTE: Participation via mobile phones or tablets is not possible.
    3. Make sure the participants meet the experiment's inclusion criteria, such as language requirements or normal color vision.
    4. Make sure the participants have not taken part in previous experiments on the CTG.
  2. Book experimental sessions with the participants
    1. Ask the participants to book time-slots for their participation.
    2. Use a participant management software to send automated invitation or reminder e-mails.
    3. Overbook time-slots by at least one participant to ensure enough participants are present to run the experiment.
  3. Send participants a confirmation e-mail with the following details: guide on computer setup, installation of Remote Desktop Connection Tool, and establishing connection to Remote Desktop. Make sure to NOT send any login information yet, in order to avoid technical issues due to early login.
  4. Send participants reminder e-mails about 24 h prior to the experiment, including the link to the video conferencing platform. Include the information about installation that was sent in the confirmation e-mail.

3. Experimental setup (before each experimental session)

  1. Prepare the video conferencing platform (Figure 3)
    1. Make sure the participants are blocked from sharing their microphone or camera. Make sure the participants cannot see each other's names.
    2. Share the experimenter's microphone and camera, and share the screen with minimal instructions on the video conferencing platform (Figure 3).
  2. Prepare the remote desktop
    1. User experimenter
      1. Start a remote desktop connection with the experimenter user. Open the shared folder and start a terminal by right clicking in the directory and choosing Open Terminal here.
      2. Start the server program HC_Gui.jar by typing the command java -jar HC_Gui.jar in the terminal and pressing ENTER.
    2. Users participant-1, participant-2, etc.
      1. Establish a remote desktop connection with users participant-1, participant-2, .... Open the shared folder and start a terminal in this folder as before.
      2. Start the client programs for each user by typing the command java -jar HC.jar in the terminal and pressing ENTER.
      3. Check whether the connections are established correctly on all participant users.
        NOTE: The participant users' screens should display the message Please wait. The computer is connecting to the server. It is recommended to have as many laptops present as users (Figure 4).
    3. User experimenter
      1. Check that a line appears in the server GUI, displaying the IP address of each of the participant users. When all participant users are connected, check that the server program displays the message All Clients are connected. Ready to start?. Click on OK.
      2. Check that the screens of the participant users display the welcome screen of the experiment (first instructions page).
        ​NOTE: The experimenter can prepare the session up to this point.

4. Experimental procedure

  1. Admit participants to the video conference at the scheduled experiment time-slot. Welcome all participants using a standardized text. Explain the technical procedure to participants.
  2. Share the link to the online consent form. Check that all participants have given written consent.
  3. Guide participants to open the Remote Desktop Connection tool and send each participant their individual login data via personal chat in the video conference.
    NOTE: When the participants log in to the participant users, the notebooks in the laboratory will lose connection to the participant users. From here on, the experiment runs automatically until the participants reach the final page, instructing them to return to the video conference.
  4. Have participants confirm that they have read the first instructions page by clicking on OK. Once all participants have confirmed, wait until the participants have completed the game.
    NOTE: The participants can page through the instructions at their preferred pace. Once all participants have confirmed that they have read the instructions, the CTG automatically commences. The game progresses automatically through as many rounds as indicated in the server.config file.
  5. Testing phase
    1. Assign participants to one of two roles: investor or trustee.
      NOTE: Multiple participants can be assigned the same role.
    2. Have investors start on the bottom-most field (indicating an investment of 0) and trustees on the uppermost field (indicating a return of 0) (Figure 1B).
    3. Instruct participants to move their avatar by left-click into an adjacent hexagon field. Instruct participants that only adjacent fields can be chosen and fields cannot be skipped. Instruct participants that their avatar will display a small tail for 4000 ms after each move that indicates the last direction from which they moved to the current field (Figure 1C).
    4. Allow investors to move from the beginning (time-in = 0) to indicate through movement how much they would like to invest. After a certain amount of time, prohibit the movement of investors (time-out).
      NOTE: The field on which they stand will then indicate how much is invested. In the middle of the playfield, a blue number will additionally show the amount sent to the trustee. If the experiment is set up to require unanimous invests, investments will only be made if all participants stand on the same field.
    5. Explain in the instructions that the invested amount is multiplied by a factor (e.g., three) and sent to the trustees. Restrict the trustees from moving for as long as the investors are moving by setting the trustee time-in equal to the investor time-out.
    6. Instruct the trustees to move to indicate the fraction they would like to return to the investors. Once the trustee time-out is reached, the field on which the trustees stand is taken to indicate the fraction that is returned to the investors. The amount returned is also indicated in the middle of the playfield by a yellow number (Figure 1D).
    7. Have the pop-up window display the amount of money the person has earned at the end of the round (Figure 1E).
    8. Repeat the game round as needed (i.e., as indicated in the server.config file).
    9. Once all rounds are completed, ask participants to generate a personal unique code so that the in-game earnings can be connected to their name while keeping the behavioral data anonymous.
    10. After participants have generated the code, display a screen which instructs participants to return to the video conference and close the Remote Desktop connection.
      ​NOTE: The experimental procedure (section 4 in this protocol with 15 game rounds) takes 35 min.
    11. If technical issues or failure of a participant require that the experiment session is aborted, refrain from restarting the experiment with the same participants.
  6. Post-testing phase
    1. Once the game is completed, make sure that all participants have closed the Remote Desktop connection. Have the participants fill out questionnaires as seen fit for a specific research question.
    2. While the participants are filling out the questionnaires, close the server program on the experimenter user by clicking on Stop & Exit. This will also close the program on the participant users.
    3. Thank participants for their time and explain how and when their earnings will be transferred to them. Make sure all participants have left the video conference, especially if another experiment time-slot is scheduled directly afterwards.

5. Finishing the experiment

  1. Transfer and back up the data (e.g., in the cloud), in the form of one *.csv and one *.txt file per group and experiment time-slot, marked by a day- and time-stamp of the experiment.
  2. Close all Remote Desktop connections.


Representative Results

This paper presents results of a pilot study conducted with the CTG with 16 participants (five men, 11 women; Age: M = 21, SD = 2.07). According to Johanson and Brooks42, this sample size is sufficient in a pilot experiment, especially when paired with a qualitative approach to reach a high information density about participants' subjective experience during the experiment. It is recommended that whenever researchers intend to adapt the CTG to their specific research idea, for example, by customizing the number of participants within each group, a similar pilot study should be run prior to the main data collection in order to ensure high data quality.

On the basis of the pilot data, this paper provides both an illustration of possible analysis methods of CTG data as well as a first validation of the CTG setup. Results reported here include movement and investment data from the CTG pilot study (example output from one group can be seen in Supplementary Data 1 and Supplementary Data 2 and an example data preprocessing script can be seen in the OSF project: https://s.gwdg.de/Cwx3ex) as well as questionnaire data on participants’ subjective experience during the experiment and remarks on the game.

For this publication, pilot data (N = 16) is used in order to demonstrate how scientific hypotheses might be tested with the CTG when a sufficient sample size has been reached. It should be noted that, usually, much larger sample sizes are needed in order to reach sufficient power for statistical analyses. The results reported here should merely serve as illustrations for possible analyses and visualizations (Figure 5). The CTG is especially suitable for investigating processes of collective trust, and how it emerges or wanes depending on the behavior of other group members or the trustee.

First, the qualities of collective trust as an emergent phenomenon were investigated. It is hypothesized that investments in the collective trust game change over time (i.e., emerge). This means that mean investments in the first, middle (i.e., seventh), and fifteenth round should be significantly different from each other. This hypothesis was tested with paired-sample t-tests (Bonferroni corrected). Due to the small sample size (N = 16 in four groups), no significant differences could be found in the pilot data between the first (M = 27.0, SD = 20.49), seventh (M = 39, SD = 30.0; difference to round 1: t(3) = -0.511, p = 1), and fifteenth round (M = 42, SD = 31.75; difference to round 1: t(3) = -0.678, p = 1; difference to round 7: t(3) = -0.397, p = 1). The data were reanalyzed using only those investments that had been made unanimously. No significant differences were found between the rounds, probably due to the small sample as well (M1 = 24, SD1 = 24; M7 = 52, SD7 = 18.33; M15 = 56, SD15 = 18.33). The accompanying data can be seen in Figure 5A. In studies with sufficient sample sizes, a significant difference between rounds and either a continuous increase or decrease in investments over rounds would indicate emergence of collective trust in the experiment as investors in the group can repeatedly interact and, therefore, establish a shared level of trust.
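With a sufficient number of groups, the round comparisons described above reduce to paired-sample t-tests on one investment value per group and round. The following Python sketch uses hypothetical group means (not the pilot data) and computes the t statistic by hand from the standard library, so no statistics package is assumed:

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-sample t statistic for two equally long lists of
    group-level means (one value per group). Returns (t, df)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    t = mean(d) / (stdev(d) / math.sqrt(n))  # stdev = sample SD (n - 1)
    return t, n - 1

# Hypothetical mean investments of four groups in rounds 1 and 15;
# with Bonferroni correction for three round comparisons, each
# p value obtained from t would be multiplied by 3.
round_1 = [10, 48, 20, 30]
round_15 = [24, 60, 40, 44]
t, df = paired_t(round_1, round_15)
print(round(t, 2), df)  # -> -8.66 3
```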

Additionally, the emergence of collective trust can also be investigated using movement data, as shown in Figure 5B, which displays three behavioral markers of the decision process: (a) decision time (red; time until the last move of investors; M = 12.25, SD = 7.05) as an operationalization of process length, (b) move length (green; average time between two moves; M = 2.42, SD = 2.16) as an operationalization of deliberation, and (c) direction changes (blue; number of times a movement direction was changed; M = 0.25, SD = 0.66) as an operationalization of adjustment to other investors during a decision. If collective trust emerges over rounds, the process as quantified by these three behavioral markers should become less complex over time, as collective trust should form the basis for the group investment decision. This means that if collective trust is an emergent construct, groups should take longer for investment decisions in earlier rounds, when no shared level of trust (i.e., collective trust) has emerged yet. Over repeated interactions, investment decisions should become shorter (as measured by decision time) and easier (as measured by move length and direction changes) as a shared level of collective trust develops and less interaction or coordination is needed to determine a group investment. Researchers with a sufficiently large sample could model the progression of these behavioral markers over rounds; a negative slope might indicate the emergence of collective trust as a basis for group investment decisions.
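The three markers can be computed directly from timestamped positions. Below is a sketch under assumed data structures (times in seconds, positions as indices of investment fields; the actual CTG output format may differ) for one investor in one round:

```python
# Hypothetical movement trace of one investor in one round:
# (time_s, position) pairs, where position is an investment-field index.
moves = [
    (0.0, 0), (1.5, 1), (3.0, 2), (4.0, 1), (6.5, 2),
]

# (a) Decision time: time of the last move in the round.
decision_time = moves[-1][0]

# (b) Mean move length: average time between two consecutive moves.
gaps = [t2 - t1 for (t1, _), (t2, _) in zip(moves, moves[1:])]
mean_move_length = sum(gaps) / len(gaps)

# (c) Direction changes: how often the movement direction reverses,
# i.e., two consecutive steps with opposite signs.
steps = [p2 - p1 for (_, p1), (_, p2) in zip(moves, moves[1:])]
direction_changes = sum(1 for s1, s2 in zip(steps, steps[1:]) if s1 * s2 < 0)

print(decision_time, round(mean_move_length, 3), direction_changes)
```

Averaging these per-round markers across investors and modeling them over rounds (e.g., with a linear slope) would operationalize the predicted decrease in process complexity.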

Second, the behavior of the trustee and dependencies between the trustees' and investors' behavior were analyzed. It was hypothesized that trustees would return a non-zero amount of money to the investors, as has been found in research on individual trust games1,43. A one-sample t-test indeed showed that trustees returned significantly more than zero (M = 43.89, SD = 35.38) to investors; t(59) = 9.608, p < .001. This was even more pronounced when only those returns preceded by non-zero investments were included (M = 62.70, SD = 24.36; t(46) = 16.677, p < .001). Figure 5C shows that trustees most often chose to return 4/6 of the investment.

Additionally, it was investigated whether the trustees' returns are based on reciprocity, such that a higher investment in one round correlates with a higher return fraction (i.e., 0/6, 1/6, 2/6, ...) in the same round. There is a significant correlation between investments and returns, as can be seen in Figure 5D, left panel; t(58) = 9.446, p < .001, r = .78. This indicates that trustees might have reciprocated high investments with high returns. However, this correlation might be driven by the rounds in which investors either invested zero or did not reach a unanimous decision, so that the trustee had no option to return anything. Lastly, it was analyzed whether higher return fractions are perceived as forward signals by investors, such that higher return fractions in round t correlate with high investments in round t+1. As can be seen in Figure 5D, right panel, this was not corroborated by the data; t(54) = 0.207, p = .837, r = .028.
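The two correlations differ only in how the series are aligned, which a small Python sketch can illustrate (illustrative values, not the pilot data): reciprocity pairs investment and return within the same round, while forward signaling pairs the return in round t with the investment in round t + 1.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Illustrative per-round data (assumed, not from the pilot study).
invests = [10, 40, 60, 0, 30, 50]          # cents invested per round
returns = [1/6, 3/6, 4/6, 0/6, 2/6, 4/6]   # return fraction per round

# Reciprocity: investment in round t vs. return in round t.
r_reciprocity = pearson(invests, returns)

# Forward signaling: return in round t vs. investment in round t + 1.
r_forward = pearson(returns[:-1], invests[1:])

print(round(r_reciprocity, 2), round(r_forward, 2))
```

The lag-1 shift (`returns[:-1]` against `invests[1:]`) is the only structural difference between the two analyses.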

To summarize, the quantitative data from the CTG consist of both movement and investment data for each participant in each round. While the investment data provide parallels to previous applications of the individual trust game, the movement data allow researchers to observe the process of collective trust. It should be noted that data are collected in actual groups, which increases external validity but requires that the nested data structure be considered. This was not done for the reported analyses, as the small sample size of the pilot data restricts the application of linear mixed-effects models.

Additionally, data on participants' subjective experience were gathered in the pilot sample with a post-experiment questionnaire (Supplementary File 1) that included 13 items in total, of which 11 were open-ended questions. In addition to subjective experience during the experiment, the items asked about specific aspects of the CTG that might influence data quality, such as participants' subjective principles of behavior during the game, the believed intention of the experiment, or the clarity of instructions. Two closed-format questions assessed on a five-point Likert scale whether participants perceived the investment through movement to be intuitive (-2: "not at all" to +2: "very") and whether the time given to participants to move in the game seemed sufficient (-2: "much too short"; 0: "about right"; +2: "much too long").

Generally, participants reported subjective experiences in line with the intention of the experiment, found the instructions easy to follow, and showed sufficient naïveté regarding the study's intention. On average, participants reported the game to be "quite intuitive" (M = 0.69, SD = 0.79) and perceived the time to be "about right" (M = -0.31, SD = 0.79).

Participants' answers to the open-ended questions were analyzed qualitatively according to Mayring44. Overall, participants were satisfied with the recruitment process and online procedure, the preservation of anonymity in the experiment, the clarity of instructions and information provided, and the logic of the game. Most participants were satisfied with the design of the avatars in that they could be distinguished easily. However, only half of the participants reported that they felt represented by their avatar, and some remarked that symbols or animal faces might have been more interesting. Given these results, researchers should consider including a measure of participants' embodiment in applications of the CTG to control for this experience while still maintaining a minimalist experimental design.

Most participants remarked that they experienced an urge to converge in the middle of the playfield (i.e., at the highest investment option). Participants who experienced this reported that the urge to converge in the middle coincided with their willingness to invest high amounts. Additionally, some participants reported that, instead of feeling drawn to the middle, they felt they had to pull co-players toward the middle. Because of practical constraints of the experiment and potential trade-offs with intuitiveness, the initial design, in which high investments and returns converge in the middle, was retained.

Participants reported a multitude of suppositions about the aim of the study, such as group influence on one's own decisions, trust, or the behavior of trustees. These suppositions are thematically close to the investigated emergence of trust. Moreover, participants reported behavioral strategies such as profit maximization or intentions to influence the behavior of co-players. These strategies fit well with the economic-game character of the CTG and do not counteract the behaviors the study aimed to observe.

On the basis of results on subjective experience, it could be concluded that the CTG satisfies criteria of internal validity. The quantitative data analysis reported here should merely serve as an illustration of how data collected with the CTG can be statistically analyzed.

Figure 2: Example of game instructions. (A) HTML code prepared by the experimenter. (B) HTML file displayed in a browser. (C) Instructions as shown to participants during the experiment. Note the buttons at the bottom to navigate through the instructions.

Figure 3: Screenshot of the video conference platform. The experimenter has shared their camera, microphone, and a presentation with basic information on the video conferencing platform and Remote Desktop connection. One participant has already joined the conference but is prohibited from sharing their microphone, screen, or camera to preserve anonymity.

Figure 4: Setup in the laboratory. Before the experiment starts, the experimenter starts a Remote Desktop connection on all laptops. Notebook 1 is connected to the experiment user and remains connected throughout the experiment. Notebooks 2 through 5 are used to establish and check the connection with the participant users ("participant-1" through "participant-4"). When participants connect to the participant users via the Remote Desktop Connection tool, the notebooks in the laboratory lose their connection.

Figure 5: Results based on pilot data (N = 16 in four groups). (A) Violin plots of group investments (cents) in rounds 1, 7, and 15. Violin shapes indicate the probability density of investments, bold lines indicate the median, boxes in violins indicate the interquartile range, and whiskers indicate 1.5 times the interquartile range. Left; all investments. Right; unanimous investments. (B) Three different markers of movement data that can be used to quantify aspects of the investment decision process in the group. Red; decision time (time until last move in seconds). Green; mean of move lengths (time from one move to the next in seconds). Blue; number of direction changes in the movement pattern (count). (C) Frequency (count) plot of returns. Left; all returns (as return fractions) across rounds are counted. Right; only those returns (as return fractions) are counted prior to which trustees received an investment. (D) Scatterplots of investments (cents) and returns (as return fractions). The blue line indicates predicted values (using a linear model with formula: y ~ x), and the grey ribbon indicates the standard error of predictions. Left; reciprocity correlation: do high investments correlate with high returns in the same round? Right; forward-signaling correlation: do high returns correlate with high investments in the subsequent round?

Supplementary Figure 1: Example of an online advertisement flyer posted on an online blog. This flyer exemplifies what information should be included in the participant recruitment advertisement and how it could be presented.

Supplementary File 1: Full questionnaire used in the pilot study.

Supplementary Data 1: Example data output containing investment data of one group (i.e., four participants: three investors (pid 0-2) and one trustee (pid 4)). This example of a raw data file contains (a) information on play order, (b) the list of players, (c) the starting ("StartSicht") and final positions ("last common playground") of all players, as well as (d) their investment, earnings, and account balance ("Balances: cost reward saldo").

Supplementary Data 2: Example data output containing movement data of one group (i.e., four participants: three investors (pid 0-2) and one trustee (pid 4)). This example of a raw data file contains the coordinates ("sj") of each player ("pid") at any given time in the experiment. The start of a new round is indicated by "-1" as the "pid".
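A minimal parser for a file of this shape might look as follows; the column layout used here (time, pid, coordinate per whitespace-separated line) is an assumption for illustration, not a specification of the actual raw format:

```python
from io import StringIO

# Illustrative raw movement data (assumed layout): time, pid, coordinate
# per line; a pid of -1 marks the start of a new round.
raw = StringIO("""\
0.0 -1 0
0.5 0 2
0.7 1 3
1.0 -1 0
1.4 2 1
""")

rounds, current = [], []
for line in raw:
    time, pid, coord = line.split()
    if pid == "-1":          # round separator
        if current:
            rounds.append(current)
        current = []
    else:
        current.append((float(time), int(pid), int(coord)))
if current:
    rounds.append(current)  # last round has no trailing separator

print(len(rounds))  # number of rounds parsed
```

Splitting the file into per-round records in this way is the natural first preprocessing step before computing markers such as decision time or direction changes.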


Discussion
The CTG provides researchers with the opportunity to adapt the classic BDM1 for groups and to observe emergent processes within groups in depth. While other work23,24,25,26 has already attempted to adapt the BDM1 to group settings, the only way to access group processes in these studies is laborious group interaction analysis of video-taped discussions. As this is often a tedious and time-consuming task17, studies regularly do not report these aspects. Compared with these existing methods, the CTG is, to the authors' knowledge, the first paradigm that allows researchers to follow collective trust as an emergent phenomenon in real time through movement data. The CTG is, therefore, more time-efficient. Additionally, using quantitative analyses to capture group processes allows researchers to preregister process analyses, which is often difficult with more qualitative approaches.

For the paradigm to produce high-quality data, it is crucial to closely follow the protocol. The following five critical steps warrant researchers' special attention. First, the configurations made in the game are to be held constant across all experiment sessions and should be documented. Second, participants who have already participated in similar studies (i.e., studies using any trust game version) should be excluded at the recruitment stage, as prior experience might create biases in behavior and reduce effect sizes45. Third, researchers need to ensure that participants are anonymous by prohibiting participants from sharing their microphone, camera, and full name during the video conference, as the level of anonymity has been shown to affect behavior in economic games27. Fourth, during start-up of the game, researchers need to check thoroughly that a correct connection between the participant user and the experiment user is established by making sure that the participant user is listed in the experimenter GUI. Fifth, research assistants who collect the data need to be trained extensively to be able to troubleshoot technical challenges with participants. In case participants experience problems establishing the Remote Desktop connection, research assistants need to be able to provide support in order to retain participants in the group. If a person drops out due to technical difficulties, all participants within the experiment time-slot might have to be rescheduled, resulting in additional monetary costs and time loss.

If technical difficulties occur during start-up of the game, make sure that (a) a current Java runtime environment is installed on the Remote Desktop machine, (b) all users can access and execute the files in the shared folders, (c) all users are executing the commands in the same directory, and (d) all PCs/laptops accessing the Remote Desktop connection have a stable internet connection. For troubleshooting during the experimental session, check that (a) all participants and the researchers have a stable internet connection, (b) participants received the correct login information for the Remote Desktop Connection, and (c) the server running the Remote Desktop Connection has sufficient resources (e.g., check CPU utilization) during the experimental session.

The CTG is highly adaptable to different research questions, which allows for a breadth of possible applications in research. Depending on the aim of a study, a multitude of parameters can be customized, such as the number of players, the requirement of unanimous decisions, visual appearance, timing, and the monetary parameters of the BDM. While the flexibility of this paradigm is an advantage, it is important to keep in mind that adaptations of the paradigm should always be rigorously founded in theory and piloted. Beyond the configurations that researchers can make in the *.config files, the game can be adjusted only through the source code programmed by Johannes Pritz, which is not available online yet. While many adaptations are possible, the framework of the HoneyComb platform restricts possible applications to movement tasks and to discrete investment options.

In future applications of the CTG, the number of return fractions could be increased (e.g., 1/10, 2/10, 3/10, ...) to provide higher resolution on return behavior. Furthermore, both the investor and the trustee side can be played by individuals or groups, allowing the investigation of different levels and referents of trust, as proposed by Fulmer and Gelfand7. Future applications of this protocol might also combine the online procedure of this method with other experiments from the HoneyComb platform30,32,46,47 or include other forms of communication, such as a chat or even face-to-face interaction between investors and/or trustees in an on-site experiment as presented by Boos and colleagues31. In this way, other cues influencing the emergence of collective trust, such as nonverbal communication, could also be studied using this paradigm.

Overall, the CTG combines the advantages of economic games (high internal validity and simplicity) with rich group process data. By this means, the CTG can serve as a stepping stone in group research on trust and fairness processes.


Disclosures
The authors have nothing to disclose.

Acknowledgments
This research did not receive any external funding.

Materials
Name Company Catalog Number Comments
Data Analysis Software and Packages R version 4.2.1 (2022-06-23 ucrt) R Core Team R: A Language and Environment for Statistical Computing. at [https://www.R-project.org/]. R Foundation for Statistical Computing. Vienna, Austria. (2020).
Data Analysis Software and Packages R Studio version 2022.2.3.492 "Prairie Trillium" RStudio Team RStudio: Integrated Development Environment for R. at [http://www.rstudio.com/]. RStudio, PBC. Boston, MA. (2020).
Data Analysis Software and Packages ggplot2 version 3.3.6 Wickham, H. ggplot2: Elegant Graphics for Data Analysis. at [https://ggplot2.tidyverse.org]. Springer-Verlag New York. (2016).
Data Analysis Software and Packages cowplot version 1.1.1 Wilke, C.O. cowplot: Streamlined Plot Theme and Plot Annotations for “ggplot2.” at [https://CRAN.R-project.org/package=cowplot]. (2020).
Online Questionnaire Tool LimeSurvey Community Edition Version 3.28.16+220621 Any preferred online questionnaire tool can be used. LimeSurvey or SoSciSurvey are recommended.
Notebooks or PCs DELL Latitude 7400 Any laptop that is able to establish a stable Remote Desktop Connection can be used.
Participant Management Software ORSEE version 3.1.0 It is recommended to use ORSEE (Greiner, B. [2015]. Subject pool recruitment procedures: Organizing experiments with ORSEE. Journal of the Economic Science Association, 1, 114–125. https://doi.org/10.1007/s40881-015-0004-4), but other software options might be available.
Program to Open Remote Desktop Connection Remote Desktop Connection (Program distributed with each Windows 10 installation.) The following tools are recommended: Remote Desktop Connection (for Windows), Remmina (for Linux), or Microsoft Remote Desktop (for Mac OS).
Server to Run Remote Desktop Environment VMware vSphere environment based on vSphere ESXi version 6.5 Ideally provided by the IT department of the university/institution.
Video Conference Platform BigBlueButton Version 2.3 It is recommended to use a platform such as BigBlueButton or other free software that does not record participant data on an external server. The platform should provide the following functions: 1) possibility to restrict access to microphone and camera for participants, 2) hide participant names from other participants, 3) possibility to send private chat messages to participants.
Virtual Machine Running Linux Installation Xubuntu version 20.04 "Focal Fossa" Other Linux-based systems are also possible.


References
  1. Berg, J., Dickhaut, J., McCabe, K. Trust, reciprocity, and social history. Games and Economic Behavior. 10 (1), 122-142 (1995).
  2. Costa, A. C., Fulmer, C. A., Anderson, N. R. Trust in work teams: An integrative review, multilevel model, and future directions. Journal of Organizational Behavior. 39 (2), 169-184 (2018).
  3. Kiffin-Petersen, S. Trust: A neglected variable in team effectiveness research. Journal of the Australian and New Zealand Academy of Management. 10 (1), 38-53 (2004).
  4. Grossman, R., Feitosa, J. Team trust over time: Modeling reciprocal and contextual influences in action teams. Human Resource Management Review. 28 (4), 395-410 (2018).
  5. Schoorman, F. D., Mayer, R. C., Davis, J. H. An integrative model of organizational trust: Past, present, and future. Academy of Management Review. 32 (2), 344-354 (2007).
  6. Shamir, B., Lapidot, Y. Trust in organizational superiors: Systemic and collective considerations. Organization Studies. 24 (3), 463-491 (2003).
  7. Fulmer, C. A., Gelfand, M. J. At what level (and in whom) we trust: Trust across multiple organizational levels. Journal of Management. 38 (4), 1167-1230 (2012).
  8. Rousseau, D. M., Sitkin, S. B., Burt, R. S., Camerer, C. Not so different after all: A cross-discipline view of trust. Academy of Management Review. 23 (3), 393-404 (1998).
  9. Dirks, K. T. Trust in leadership and team performance: Evidence from NCAA basketball. Journal of Applied Psychology. 85 (6), 1004-1012 (2000).
  10. Kim, P. H., Cooper, C. D., Dirks, K. T., Ferrin, D. L. Repairing trust with individuals vs. groups. Organizational Behavior and Human Decision Processes. 120 (1), 1-14 (2013).
  11. Forsyth, P. B., Barnes, L. L. B., Adams, C. M. Trust-effectiveness patterns in schools. Journal of Educational Administration. 44 (2), 122-141 (2006).
  12. Gray, J. Investigating the role of collective trust, collective efficacy, and enabling school structures on overall school effectiveness. Education Leadership Review. 17 (1), 114-128 (2016).
  13. Kramer, R. M. Collective trust within organizations: Conceptual foundations and empirical insights. Corporate Reputation Review. 13 (2), 82-97 (2010).
  14. Kramer, R. M. The sinister attribution error: Paranoid cognition and collective distrust in organizations. Motivation and Emotion. 18 (2), 199-230 (1994).
  15. Kozlowski, S. W. J. Advancing research on team process dynamics: Theoretical, methodological, and measurement considerations. Organizational Psychology Review. 5 (4), 270-299 (2015).
  16. Kozlowski, S. W. J., Chao, G. T. The dynamics of emergence: Cognition and cohesion in work teams. Managerial and Decision Economics. 33 (5-6), 335-354 (2012).
  17. Kolbe, M., Boos, M. Laborious but elaborate: The benefits of really studying team dynamics. Frontiers in Psychology. 10, 1478 (2019).
  18. McEvily, B. J., Weber, R. A., Bicchieri, C., Ho, V. Can groups be trusted? An experimental study of collective trust. Handbook of Trust Research. , 52-67 (2002).
  19. Adams, C. M. Collective trust: A social indicator of instructional capacity. Journal of Educational Administration. 51 (3), 363-382 (2013).
  20. Chetty, R., Hofmeyr, A., Kincaid, H., Monroe, B. The trust game does not (only) measure trust: The risk-trust confound revisited. Journal of Behavioral and Experimental Economics. 90, 101520 (2021).
  21. Harrison, G. W. Hypothetical bias over uncertain outcomes. Using Experimental Methods in Environmental and Resource Economics. , 41-69 (2006).
  22. Harrison, G. W. Real choices and hypothetical choices. Handbook of Choice Modelling. , Edward Elgar Publishing. 236-254 (2014).
  23. Holm, H. J., Nystedt, P. Collective trust behavior. The Scandinavian Journal of Economics. 112 (1), 25-53 (2010).
  24. Kugler, T., Kausel, E. E., Kocher, M. G. Are groups more rational than individuals? A review of interactive decision making in groups. WIREs Cognitive Science. 3 (4), 471-482 (2012).
  25. Cox, J. C. Trust, reciprocity, and other-regarding preferences: Groups vs. individuals and males vs. females. Experimental Business Research. Zwick, R., Rapoport, A. , Springer. Boston, MA. 331-350 (2002).
  26. Song, F. Intergroup trust and reciprocity in strategic interactions: Effects of group decision-making mechanisms. Organizational Behavior and Human Decision Processes. 108 (1), 164-173 (2009).
  27. Johnson, N. D., Mislin, A. A. Trust games: A meta-analysis. Journal of Economic Psychology. 32 (5), 865-889 (2011).
  28. Rosanas, J. M., Velilla, M. Loyalty and trust as the ethical bases of organizations. Journal of Business Ethics. 44, 49-59 (2003).
  29. Dunn, J. R., Schweitzer, M. E. Feeling and believing: The influence of emotion on trust. Journal of Personality and Social Psychology. 88 (5), 736-748 (2005).
  30. Boos, M., Pritz, J., Lange, S., Belz, M. Leadership in moving human groups. PLOS Computational Biology. 10 (4), 1003541 (2014).
  31. Boos, M., Pritz, J., Belz, M. The HoneyComb paradigm for research on collective human behavior. Journal of Visualized Experiments. (143), e58719 (2019).
  32. Ritter, M., Wang, M., Pritz, J., Menssen, O., Boos, M. How collective reward structure impedes group decision making: An experimental study using the HoneyComb paradigm. PLOS One. 16 (11), 0259963 (2021).
  33. Kocher, M., Sutter, M. Individual versus group behavior and the role of the decision making process in gift-exchange experiments. Empirica. 34 (1), 63-88 (2007).
  34. Ickinger, W. J. A behavioral game methodology for the study of proxemic behavior. , Doctoral Dissertation (1985).
  35. Deligianis, C., Stanton, C. J., McGarty, C., Stevens, C. J. The impact of intergroup bias on trust and approach behaviour towards a humanoid robot. Journal of Human-Robot Interaction. 6 (3), 4-20 (2017).
  36. Haring, K. S., Matsumoto, Y., Watanabe, K. How do people perceive and trust a lifelike robot. Proceedings of the World Congress on Engineering and Computer Science. 1, 425-430 (2013).
  37. Gintis, H. Behavioral game theory and contemporary economic theory. Analyse & Kritik. 27 (1), 48-72 (2005).
  38. Weimann, J. Individual behaviour in a free riding experiment. Journal of Public Economics. 54 (2), 185-200 (1994).
  39. How to install Xrdp server (remote desktop) on Ubuntu 20.04. Linuxize. , Available from: https://linuxize.com/post/how-to-install-xrdp-on-ubuntu-20-04/ (2020).
  40. How to create users in Linux (useradd Command). Linuxize. , Available from: https://linuxize.com/post/how-to-create-users-in-linux-using-the-useradd-command/ (2018).
  41. How to create a shared folder between two local user in Linux. GeeksforGeeks. , Available from: https://www.geeksforgeeks.org/how-to-create-a-shared-folder-between-two-local-user-in-linux/ (2019).
  42. Johanson, G. A., Brooks, G. P. Initial scale development: Sample size for pilot studies. Educational and Psychological Measurement. 70 (3), 394-400 (2010).
  43. Glaeser, E. L., Laibson, D. I., Scheinkman, J. A., Soutter, C. L. Measuring trust. The Quarterly Journal of Economics. 115 (3), 811-846 (2000).
  44. Mayring, P. Qualitative Content Analysis: Theoretical Background and Procedures. Approaches to Qualitative Research in Mathematics Education: Examples of Methodology and Advances in Mathematics Education. Kikner-Ahsbahs, A., Knipping, C., Presmed, N. , Springer. Dordrecht. 365-380 (2015).
  45. Chandler, J., Paolacci, G., Peer, E., Mueller, P., Ratliff, K. A. Using nonnaive participants can reduce effect sizes. Psychological Science. 26 (7), 1131-1139 (2015).
  46. Belz, M., Pyritz, L. W., Boos, M. Spontaneous flocking in human groups. Behavioural Processes. 92, 6-14 (2013).
  47. Boos, M., Franiel, X., Belz, M. Competition in human groups-Impact on group cohesion, perceived stress and outcome satisfaction. Behavioural Processes. 120, 64-68 (2015).


Cite this Article


Ritter, M., Kroll, C. F., Voigt, H., Pritz, J., Boos, M. The Collective Trust Game: An Online Group Adaptation of the Trust Game Based on the HoneyComb Paradigm. J. Vis. Exp. (188), e63600, doi:10.3791/63600 (2022).
