Measuring the Switch Cost of Smartphone Use While Walking

Gabrielle-Naïmé Mourra1, David Brieugne1, Emma Rucco1, Élise Labonté-Lemoyne1, François Courtemanche1, Sylvain Sénécal1, Marc Fredette1, Ann-Frances Cameron1, Jocelyn Faubert2, Franco Lepore2, François Bellavance1, Pierre-Majorique Léger1

Cite this Article


Mourra, G. N., Brieugne, D., Rucco, E., Labonté-Lemoyne, É., Courtemanche, F., Sénécal, S., Fredette, M., Cameron, A. F., Faubert, J., Lepore, F., Bellavance, F., Léger, P. M. Measuring the Switch Cost of Smartphone Use While Walking. J. Vis. Exp. (158), e60555, doi:10.3791/60555 (2020).


This paper presents a study protocol to measure the task-switching cost of using a smartphone while walking. The method involves having participants walk on a treadmill under two experimental conditions: a control condition (i.e., simply walking) and a multitasking condition (i.e., texting while walking). During these conditions, participants must switch between the tasks related to the experimental condition and a direction-determining task. This direction task is done with a point-light walker figure, seemingly walking towards the left or the right of the participant. Performance on the direction task represents the participant’s task-switching cost. Two performance measures were used: 1) correct identification of the direction and 2) response time. EEG data are recorded in order to measure the alpha oscillations and cognitive engagement occurring during the task switch. This method is limited in its ecological validity: pedestrian environments have many stimuli occurring simultaneously and competing for attention. Nonetheless, the method is appropriate for pinpointing task-switching costs, and the EEG data allow the study of the underlying brain mechanisms related to differing task-switching costs. The design allows comparison between task switching when doing one task at a time and task switching when multitasking prior to the stimulus presentation, which makes it possible to understand and pinpoint both the behavioral and neurophysiological impact of these two task-switching conditions. Furthermore, by correlating the task-switching costs with brain activity, we can learn more about what causes these behavioral effects. This protocol is an appropriate base for studying the switching cost of different smartphone uses. Different tasks, questionnaires, and other measures can be added to it in order to understand the different factors involved in the task-switching cost of smartphone use while walking.


Because both smartphone penetration and the tendency to multitask are increasing, it is important to understand the impact that smartphone use while walking has on attention. The literature has repeatedly demonstrated that task switching comes with a cost1, and smartphone use while walking is no exception. Studies have found that using a smartphone while walking can be distracting and dangerous2,3,4, and these dangers have been linked to the attentional impairments involved in such a task3,4,5,6,7. Due to the complex nature of the pedestrian environment, studying it in an ecologically valid experimental context can be problematic. At the same time, conducting such studies in actual pedestrian environments comes with complications of its own: many extraneous variables can come into play, and distractions put participants at risk of harm. It is therefore important to be able to study such a phenomenon in a relatively safe environment that remains as realistic as possible. In this article, we describe a research methodology that studies the task-switching cost of texting while walking, while both increasing the validity of the task and mitigating the potential risks involved.

When using a smartphone while walking, individuals are forced to switch between smartphone tasks and walking- and environment-related tasks. Hence, to study this phenomenon, we framed the method within the literature on multitasking, specifically the task-switching paradigm1: participants switched between a pre-stimulus task and a post-stimulus task. One of the two pre-stimulus tasks involved multitasking, while the other did not. In the post-stimulus task, participants had to respond to a stimulus whose perception is influenced by divided attention8. Experimental laboratory studies that strive for ecological validity have often used virtual pedestrian environments to understand the attentional impact of smartphone use while walking4,9. Nonetheless, in order to capture the underlying neurophysiological mechanisms, we chose to focus on the task-switching reaction to a single stimulus, minimizing the number of stimuli participants had to react to. In this way, we can pinpoint more precisely the task-switching cost that comes purely from switching attention away from the smartphone and towards the stimulus. With our study design, we use behavioral measures (i.e., task-switching cost) and neurophysiological data to better understand the attentional impairments found during pedestrian smartphone use.

During a task-switching experiment, participants typically perform at least two simple tasks pertaining to a set of stimuli, with each task requiring a different set of cognitive resources referred to as a “task-set”1. When individuals are forced to switch between tasks, their mental resources need to adapt (i.e., inhibition of the previous task-set and activation of the current task-set). This “task-set reconfiguration” process is believed to be the cause of the task-switching cost1. The task-switching cost is usually determined by observing differences in response time and/or error rate between trials where participants switch between tasks and trials where they do not10. In our experiment, there were three task-sets: 1) responding to a point-light walker stimulus; 2) texting on a smartphone while walking; and 3) simply walking. We compared the switch cost between two conditions: 1) simply walking prior to responding to the stimulus; and 2) walking while texting prior to responding. In this way, we captured the cost of multitasking on a smartphone prior to switching tasks and could directly compare it to the non-multitasking switch cost of simply walking before the appearance of the visual stimulus. Because the smartphone used in this study was of a specific brand, all participants were screened prior to the experiment to make sure they knew how to use the device properly.

In order to simulate a realistic experience representative of the pedestrian context, we used a point-light walker figure as the visual stimulus, representing a human form walking with a 3.5° deviation angle towards the left or the right of the participant. This figure is made up of 15 black dots on a white background, with the dots representing the head, shoulders, hips, elbows, wrists, knees, and ankles of a human (Figure 1). The stimulus is based on biological motion, meaning that it follows the pattern of movement typical of humans and animals11. Furthermore, beyond being ecologically valid, this stimulus requires complex visual processing and attention in order to be analyzed successfully12,13. Interestingly, Thornton et al.8 found that proper identification of the point-light walker’s direction is greatly impacted by divided attention, making it a suitable performance measure when studying task-switching costs during multitasking. Participants were asked to state aloud the direction in which the figure was walking. The appearance of the walker was always preceded by an auditory cue signaling its imminent appearance on the screen.

Performance on the point-light walker task and neurophysiological data allowed us to determine the attentional impact of both conditions and helped determine what caused it. Performance was measured by looking at the error rates and response times when determining the direction of the point-light walker figure. In order to understand the underlying cognitive and attentional mechanisms involved in the attentional impairments we found with the performance measure, we assessed the participants' neurophysiological data using an EEG actiCAP with 32 electrodes. EEG is an appropriate tool in terms of temporal precision, which is important when trying to see what causes poor performance at specific moments in time (e.g., the appearance of the point-light walker figure), although movement artefacts may be present in the data. When analyzing the EEG data, two indexes are particularly relevant: 1) alpha oscillations; and 2) cognitive engagement. Research has found that alpha oscillations may represent working memory control as well as active inhibition of task-irrelevant brain circuits14,15,16,17. By comparing the alpha oscillations at baseline levels with those occurring at stimulus presentation18,19, we obtained the alpha ratio. With this ratio, we determined the event-related changes that could underlie the attentional impairment observed when texting while walking. With regard to cognitive engagement, Pope et al.20 developed an index in which beta activity represents increased arousal and attention, while alpha and theta activity reflect decreases in arousal and attention21,22. This analysis was done to determine whether increased engagement prior to the appearance of the stimulus would complicate the task-set reconfiguration required to respond to the walker figure.
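The two EEG indexes above can be expressed compactly. The sketch below assumes band powers (theta, alpha, beta) have already been extracted from the recordings; the function names, the baseline-normalization form of the alpha ratio, and the numeric values are illustrative assumptions, with the engagement index following the beta / (alpha + theta) form attributed to Pope et al.20:

```python
def alpha_ratio(alpha_stim, alpha_baseline):
    """Event-related change in alpha power relative to baseline.
    Negative values indicate alpha suppression around stimulus onset."""
    return (alpha_stim - alpha_baseline) / alpha_baseline

def engagement_index(theta, alpha, beta):
    """Cognitive engagement index in the form used by Pope et al.:
    beta power divided by the sum of alpha and theta power."""
    return beta / (alpha + theta)

# Hypothetical band powers averaged over one epoch
print(round(engagement_index(theta=4.0, alpha=5.0, beta=3.0), 3))  # 0.333
print(round(alpha_ratio(alpha_stim=4.0, alpha_baseline=5.0), 2))   # -0.2
```

Comparing these values between the texting and walking-only conditions is what allows the behavioral switch costs to be related to the underlying brain activity.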

With the methodology described in this paper, we seek to grasp the underlying mechanisms that impact task-switching performance in participants engaged in multitasking episodes. The walking condition represents a non-multitasking task-switch performance that is compared to a multitasking task-switch performance (i.e., texting while walking). By measuring the roles of task-set inhibition and task-set activation, we sought to better understand the switch costs that occur when texting while walking. It is relevant to note that the original study was done in an immersive virtual environment23 but was later replicated in an experimental room (see Figure 2) with a projector displaying the walker figure on a screen in front of the participant. Because this virtual environment is no longer available, the protocol was adapted to the current experimental room design.



Before beginning the data collection, it is important to receive all the necessary ethical research approval for human participants. This should be done through the appropriate review boards and/or human participants review committees.

This protocol was approved and certified by the ethics board from HEC Montréal for the Tech3Lab research facility.

1. Preparation of the visual stimulus

  1. Create the experimental template for the visual stimulus with visual experiment presentation software, such as E-Prime. Create one template for the practice trial (six trials) and one for the experimental conditions (22 trials).
  2. Open the E-Prime software and go to the Structure window, where the logic of the experiment can be created.
    1. Double click on SessionProc (the timeline for sequencing the order of appearance of the E-objects).
    2. Drag the TextDisplay object from the toolbox into the SessionProc line.
      1. Double click the TextDisplay object that has been inserted into the SessionProc and write the study instructions: “When you hear the auditory cue, please raise your head and indicate aloud which direction the walker is going, either towards your left or towards your right. The experiment will begin shortly.”
      2. Click on the Property Pages icon on the top of the TextDisplay window. Click on the Common tab and change the Name box setting to instructions. Click on the Duration drop down menu and select Infinite. Click on the Duration/Input tab and choose Add, select Keyboard and press OK. Click on OK again to exit the Property Pages. This ensures that the instructions stay on the screen until you press to start the experiment.
    3. In SessionProc, now drag and drop a List object into the SessionProc line (place it after the instructions). Double click on the List object. In the Procedure column write “Left-Trial”, press Enter and click YES to the pop-up window asking to create a new procedure. When the next pop-up window asks to make this the default value click NO.
    4. Double click the List object in the SessionProc line. Click on the green button named Add Attribute. Name the attribute as: correct response. Click OK.
    5. Click on the blank space in the column correct response and write L (this is to signal that this List object is for the walker going towards the left).
    6. Return to SessionProc and click on the new object that was created called Left-Trial.
    7. Go to the SessionProc and double click the Left-Trial object.
      1. Drag and drop the InLine object into the Left-Trial line and rename it.
        1. Name it ITI. Double click on the InLine object and write the following code:
          Dim nRandom As Integer
          nRandom = Random(16500, 17500)
          c.SetAttrib "ITIDur", nRandom
        2. This code sets the inter-trial interval, so that the walker stimulus appears at random intervals between 16,500 ms and 17,500 ms.
    8. Double click Left-Trial. Drag and drop a Slide object into the Left-Trial line. Rename it Waiting; this object will be a blank screen that appears between the visual stimuli for the amount of time determined by the code in step 1.2.7.
    9. Double click on the Slide object.
      1. Click on the Sub-Object called SlideText and click somewhere in the slide to place the object there.
      2. Remove the existing text from that image.
      3. Click Sub-Object Property Pages.
      4. In the Frame tab set both the width and height to 100%. Click OK.
    10. Click the Property Pages and go to the Duration/Input tab. In the Duration field, type the following value: [ITIDur].
    11. Double click Left-Trial and drag and drop a SoundOut object onto the Left-Trial line.
      1. Double click the SoundOut object.
      2. Under Filename select the appropriate sound cue file directory.
      3. Change the Buffer size to 1,000 ms.
      4. Click OK.
    12. Go back to Left-Trial and drag and drop a Slide object into the Left-Trial line and rename it Walker Left.
      1. Double click this new object.
      2. Add a SlideMovie sub-object by clicking the sub-object and then clicking the slide.
      3. Click Sub-Object Property Pages and under Filename select the directory of the video file of the left walker.
      4. Set Stop After Mode to OffsetTime.
      5. Click on Stretch and choose YES.
      6. Set End Movie Action to Terminate.
      7. Click on the Frame tab and set the width and the height to 100%.
      8. For the Position, set both X and Y position to 50%.
      9. Finally, set the Border Colour to white.
      10. Click OK.
      11. Click on the Property Pages of the Slide object.
        1. Click on the Duration/Input tab.
        2. Set the Duration to 4,000. Set PreRelease to 500.
        3. Click OK.
    13. Repeat this whole procedure (i.e., steps 1.2.3–1.2.9) for the right trial. Name the procedure Right-Trial. When following the procedure, change only the correct response (i.e., R instead of L) and the video file. Use the video file directory for the right walker.
  3. Double click the SessionProc.
    1. Drag and drop a Slide object into the SessionProc line.
    2. Double click this object and add a SlideText Sub-Object.
    3. Write Pause as the text.
    4. Again, go into the Sub-Object Property Pages and in the Frame tab make the width and height 100%. Make the Position for X and Y 50%.
    5. Click OK.
  4. Double click on the List object that was already created.
    1. Click on the Property Pages of the List object.
    2. In the Selection tab set the Order to Random and click OK.
    3. In the Weight column insert the following numbers:
      1. Practice: enter the number 3 in both the Left-Trial row and the Right-Trial row.
      2. Experiment: enter the number 11 in both the Left-Trial row and the Right-Trial row.
  5. At the top of the window click the icon Generate to create an executable script file. Save it to the desktop for easy access. This is the file that will be run during the experiment.
    1. Save the practice trial as “Practice” and the experimental trials as “Experiment”.
    2. Test the script created by clicking on the Run icon.
  6. In the E-studio folder, an E-run file will be created. Both the files created (one for the practice trial and one for the experimental trials) can be placed in a folder on the computer’s desktop. To run the visual experiment, simply click on the appropriate icon.
  7. Once the visual stimulus’ experimental templates are created attempt to display them with the projector.
    1. With the projector settings, modify the height of the walker figure and make sure that it is centered directly in front of where the participant would be standing on the treadmill.
    2. With a measuring tape, measure the height of the walker directly on the projector screen. Calculate the distance between the screen and the eyes of a person standing on the treadmill in order for the stimulus to cover 25° of visual angle, and move the treadmill accordingly. To calculate the necessary distances, one may use the following website:
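As a cross-check of the distance obtained from such a calculator, the required viewing distance follows directly from the visual angle formula d = h / (2 · tan(θ/2)). A minimal sketch (the function name and the 1.0 m walker height are illustrative; only the 25° visual angle comes from this protocol):

```python
import math

def viewing_distance(stimulus_height_m, visual_angle_deg):
    """Distance at which a stimulus of the given height subtends
    the given visual angle: d = h / (2 * tan(theta / 2))."""
    half_angle = math.radians(visual_angle_deg) / 2
    return stimulus_height_m / (2 * math.tan(half_angle))

# e.g., a walker figure measured at 1.0 m tall on the screen,
# for the 25 degrees of visual angle used in this protocol
print(round(viewing_distance(1.0, 25.0), 2))  # 2.26
```

The treadmill is then moved so that the participant's eyes are at this distance from the screen.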

2. Setup of the laboratory environment

  1. Turn on the four recording computers, the EEG amplifier, the projector, the treadmill, the speakers, and the smartphone.
  2. Set up the recording equipment.
    1. Open the synchronization software with the specific subroutine created for the study with markers at 10 s.
      1. The synchronization software sends a pulse that appears in the form of a marker and light pulse in the EEG and video recordings every 10 s.
    2. Turn on the video recording software. The cameras should automatically turn on as well. If not, manually turn them on.
    3. Open and set up the EEG recording software for the participant.
    4. Open the folder containing the visual stimulus executable script file made with the visual experiment presentation software.
    5. Prepare the EEG setup and materials according to the procedures suggested by the manufacturers.
    6. Delete the conversation from the previous participant from the smartphone.
    7. Place a new bottle of water next to the participant’s resting chair.

3. Participant preparation

  1. Welcome the participant into room 1 and tell them briefly about the study duration and the compensation.
  2. Ask participants to remove their jewelry (e.g., earrings, piercings, necklaces), glasses, smartphone, and any content in their pockets, place these in a bin, and put it in a locker.
  3. Ask participants to dispose of any chewing gum and ensure that they have eaten prior to beginning the experiment.
  4. Ensure that the participant is wearing comfortable walking shoes and have them double knot their shoelaces to guarantee the participant’s safety during the experiment.
  5. Have the participant read and sign the consent form.
    1. Read the following script and have the participant sit so they can read and sign the consent form:

      "Here is a consent form stating that you agree to participate in this study. Read it carefully and sign it. Do not hesitate to ask if you have any questions."
  6. Take the participant to the designated participant preparation room, room 3, where the EEG cap is to be set up.
  7. Read the prepared script that explains the flow of the experimental process:

    "You may notice that every so often I might read a text. This is done to ensure that all participants receive identical instructions. In this study, we are interested in how people interact with a stimulus in front of them while sending text messages and walking at a moderate speed. For about 40 minutes, you will text [name of research assistant], whom you met earlier, with this smartphone [show the smartphone]. While you are texting you will hear a sound from time to time. This sound will be followed by an image of a walking character. Your task is to raise your head to the screen here [point to the screen] and to indicate aloud whether the character is walking towards your right OR your left. You will not be asked to do anything else. I will write down your answers. Note that in all blocks there are two choices for an answer (right and left), so it is impossible that there will be, for example, only left or only right as a single choice. The direction the character comes from is totally random. After dictating your answer, you simply continue to text [name of research assistant]. It is important not to turn around when you answer or if you want to talk to me, because you could be destabilized and fall. Keep your head forward. I will be behind this mirror here [point to the glass] throughout the duration of the experiment. Do you have any questions?"
  8. Measure the participant’s head circumference for the EEG electrode cap. For this experiment an EEG actiCap with 32 preamplified electrodes was used.
    1. Choose the appropriate size EEG cap, place it on a foam head for support, and place all the electrodes in their proper location.
    2. Remeasure the participant’s head circumference to determine the starting point of the cap by using the 10-20 reference system.
    3. Place the cap on the participant’s head starting from the front and hold it in position while pulling it backwards. Make sure the cap is placed properly.
    4. Connect the EEG cap’s cables to the EEG control box.
    5. Show the gel applicator to the participant so they can see that it is not sharp and allow them to touch it if they so desire. Read the following script:

      Here is the applicator and the tip that I will use to put the gel on the EEG cap that you have on your head. You can touch it; it does not hurt. The tip is just short enough that it never touches your head.
    6. Turn on the EEG electrode box so that all the electrode lights turn red.
    7. Activate the electrodes by first moving the hair out of the way and then applying the gel to each electrode: start with the ground electrode and then the reference electrode. Once these two electrodes turn green add the remaining electrodes.
    8. Place the gel until all the electrode sensors turn green.
    9. Measure the impedance on the control box.
    10. Disconnect the cables from the control box and connect them to the move adapter (i.e., the adapter kit that wirelessly transmits the data back to the control box).
    11. Place the adapter kit in a fanny pack and ask the participant to attach it around their waist, with the cables and the adapter kit placed towards the participant’s back.
    12. Go back to the computer room (room 4) and check the impedance of each electrode.
    13. Verify that the data quality is satisfactory by visually inspecting the signal on the EEG software’s monitor screen. If necessary, fix the problematic electrodes.
  9. Take the participant into the experimental room (i.e., room 2).
  10. Have the participant stand on the treadmill and attach the treadmill safety key to the participant.
  11. Turn on the treadmill to a speed of 0.8 mph and have the participant walk for 2 min so they get familiarized with the speed. During these 2 min, remind the participant about the instructions:

    For about 40 minutes, you will text [name of research assistant] with a smartphone. While you are texting, you will hear a sound from time to time. This sound will be followed by an image of a walking character. Your task is to raise your head to the screen at that time and indicate aloud, in your opinion, whether the character is walking towards your right or your left. You will not be asked to do anything else. I will write down your answers. After stating your answer, you simply continue to text [name of research assistant]. It is important to always give an answer. If you are not sure, tell us your best guess. Do not turn around when you give your answer or if you want to talk to me because you could be destabilized and fall. Keep your head forward. There are four parts to the experiment, two where you text [name of research assistant] while walking, and two where you are just walking. Each part lasts about 12 minutes and there is a break of 2 minutes between each part. Do you have any questions?

4. Practice trial

  1. Give the participant the smartphone.
  2. Tell the participant that they will be doing a practice trial.
  3. Click on the stimulus’ executable script file for the Practice trials. Enter the participant number and begin the trial.
  4. Have the participant practice responding to the visual stimuli while partaking in a texting conversation with the research assistant. This practice session will last 3 min.
  5. Once the session begins, follow the texting conversation script created for the study.
  6. Write down the participant's answer to each stimulus appearance on a spreadsheet template.
  7. After the 3 min, have the participant sit on a chair and drink some water. During this time adjust the treadmill speed to 0.4 mph.
  8. Remind the participant of the study instructions.

5. Data collection

  1. Setup
    1. Go to the workflow sheet to choose the condition order for the current participant. Two orders are possible: in order A, trials 1 and 3 use the texting condition, while trials 2 and 4 use the control condition; in order B, trials 1 and 3 use the control condition, while trials 2 and 4 use the texting condition. During each trial the visual stimulus appears 22 times.
    2. Make sure that all the recording software is ready to be started in synchrony.
    3. Turn on all the recording software (e.g., EEG, video).
    4. Have the participant get back on the treadmill and slowly increase the speed back to 0.8 mph.
    5. Turn on the visual stimulus program and start running it.
    6. Read the trial’s instructions depending on the experimental condition.
      1. Run the stimulus’ executable script file for the Experiment trials. Enter the participant number and the code chosen for the specific conditions. Begin the trial.
  2. Control condition
    1. Ensure that the smartphone is out of the participant’s field of vision during this task.
    2. Instruct the participant to simply walk on the treadmill and respond to the visual stimulus every time it appears by answering "left" or "right":

      For this task, you will simply have to walk on the treadmill. From time to time you will hear a sound. This sound will be followed by an image of a walking character. Your task is to raise your head to the screen then and indicate aloud, in your opinion, whether the character is moving to your right or your left. You will not be asked to do anything else. I will write down your answers myself. After dictating your answer, you simply continue walking. It is important to always give an answer. If you are not sure, tell us your best guess. Do not turn around when you give your answer or if you want to talk to me because you could be destabilized and fall. Keep your head forward. Start when I give you the signal. Do you have any questions?
    3. Signal to the participant that the trial is about to begin and start the visual stimulus trial.
    4. Write down the participant's response every time they respond to the visual stimulus. When a participant fails to respond, leave the field blank.
    5. At the end of the trial, have the participant sit down and drink some water.
    6. During these breaks, keep running all the recording software and leave the treadmill on at a speed of 0.4 mph.
    7. After the break get the participant back on the treadmill and as they walk, gradually increase the speed back to 0.8 mph.
  3. Texting condition
    1. While the participant is walking on the treadmill hand them the smartphone.
    2. Instruct the participant to text as they would naturally (e.g., using one hand or two hands) while walking on the treadmill and respond to the visual stimulus every time it appears by answering "left" or "right":

      For this task, you will text [name of research assistant] with a smartphone. On the smartphone, open the message application. Then select the conversation saying "Hello". You will have to actively participate in a texting conversation. While you are texting you will hear a sound from time to time. This sound will be followed by an image of a walking character. Your task is to raise your head to the screen here and indicate, in your opinion, whether the character is moving to your right or your left. You will not be asked to do anything else. I will write down your answers myself. After dictating your answer, you simply continue texting. It is important to always give an answer. If you are not sure, tell us your best guess. Do not turn around when you give your answer or if you want to talk to me because you could be destabilized and fall. Keep your head forward. Start when I give you the signal. Do you have any questions?
    3. Signal to the participant that the trial is about to begin and start the visual stimulus trial.
    4. Instruct the participant to have a texting conversation while walking on the treadmill. Instruct them to also respond to the visual stimulus every time it appears by answering "left" or "right".
    5. Have the research assistant follow the conversation script and keep the conversation going throughout the condition.
    6. Write down the participant's response every time they respond to the visual stimulus. When a participant fails to respond, leave the field blank.
    7. At the end of the trial, take the smartphone from the participant and have the participant sit down and drink some water.
    8. During these breaks, keep running all the recording software and leave the treadmill on at a speed of 0.4 mph.
    9. After the break get the participant back on the treadmill and as they walk, gradually increase the speed back to 0.8 mph.
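The A/B condition ordering described in step 5.1.1 can be sketched as follows. This is an illustrative helper (the function and labels are assumptions), not part of the experimental software:

```python
def condition_order(order):
    """Trial-to-condition mapping for the two counterbalancing orders:
    order A runs texting on trials 1 and 3; order B on trials 2 and 4."""
    texting_trials = (1, 3) if order == "A" else (2, 4)
    return ["texting" if t in texting_trials else "control"
            for t in (1, 2, 3, 4)]

print(condition_order("A"))  # ['texting', 'control', 'texting', 'control']
print(condition_order("B"))  # ['control', 'texting', 'control', 'texting']
```

Alternating the two orders across participants balances out any practice or fatigue effects between the texting and control conditions.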

6. End of data collection

  1. At the end of the experimental manipulation have the participant turn off the treadmill. Have the participant sit down and drink some water.
  2. Remove the EEG cap and take the participant to a shower where they can wash their hair if they so choose.
  3. Give the participant their compensation and thank them for their participation. Ensure that the participant leaves with their copy of the consent form and that they retrieve all their personal items.


Representative Results

This study protocol was originally conducted with 54 participants, each responding to 88 direction trials. Half of those trials occurred when participants were simply walking prior to the stimulus presentation; the other half occurred when the participants were texting while walking prior to the stimulus presentation.

Behavioral results
Performance on the point-light walker’s direction task represents task-switching costs, with lower performance indicating higher task-switching costs. Participants’ responses were analyzed with two response variables: 1) correct identification; and 2) response time. The two experimental conditions represented the two groups: 1) texting while walking; and 2) simply walking before responding to the stimulus. Response times were calculated at the end of the experiment. The video recordings of the experiment were converted into audio files and then analyzed with sound software that marked the peaks in the sound waveform. Once the sound of the cue and the sound of the participant’s verbal response were marked, the time between the two was determined. Response accuracy was analyzed by exporting the correct direction for each of the 88 trials from the experimental presentation software and adding it to the database file containing the participants' responses. In the program used (Excel), a formula (=IF(A1=B1,1,0)) was used to determine whether the information in the first data column (i.e., the participant's response) matched the second column (i.e., the correct direction).
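The spreadsheet accuracy check described above can also be expressed as a short script. This is an illustrative sketch mirroring the =IF(A1=B1,1,0) formula; the trial values are hypothetical:

```python
def score_accuracy(responses, correct_directions):
    """1 if the participant's response matches the trial's correct
    direction, else 0 (mirrors the spreadsheet formula =IF(A1=B1,1,0)).
    Missing responses (None) score 0."""
    return [1 if r == c else 0 for r, c in zip(responses, correct_directions)]

# Hypothetical excerpt of trials ("L"/"R"; None = no response recorded)
responses = ["L", "R", None, "L"]
correct = ["L", "L", "R", "L"]
scores = score_accuracy(responses, correct)
print(scores, sum(scores) / len(scores))  # [1, 0, 0, 1] 0.5
```

The resulting per-trial 0/1 scores are the binary accuracy variable used in the regression model described below.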

Because each participant had to repeatedly determine the orientation of the stimulus, a t-test could not be used to analyze the differences in performance means across conditions. Instead, to account for intra-subject correlation between trials, a generalized linear mixed regression model was used. This analysis was run using Proc Glimmix in SAS 9.4. The group variable was the explanatory variable for the response variables, and a random Gaussian intercept was added for each subject. The accuracy response variable (correct or incorrect response) was binary, and as such, a logit link function was appropriate for this regression model.
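As a toy illustration of the odds-ratio interpretation produced by such a model, the sketch below computes a pooled odds ratio and the logit link from hypothetical counts. Note that, unlike the Proc Glimmix analysis, this pooled calculation ignores the random subject intercepts and intra-subject correlation.

```python
import math

def odds_ratio(correct_a, total_a, correct_b, total_b):
    """Pooled odds ratio of a correct response in condition A vs B
    (ignores intra-subject correlation, unlike the mixed model)."""
    odds_a = correct_a / (total_a - correct_a)
    odds_b = correct_b / (total_b - correct_b)
    return odds_a / odds_b

def log_odds(p):
    """Logit link function used by the mixed logistic regression."""
    return math.log(p / (1 - p))

# Hypothetical pooled counts: 44 trials x 54 subjects per condition
texting = (1900, 2376)   # correct, total while texting-and-walking
walking = (2050, 2376)   # correct, total while simply walking
or_ = odds_ratio(*texting, *walking)
print(round(or_, 3))
```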

We found that participants were more likely to identify the correct direction of the point-light walker stimulus when they were not texting prior to the appearance of the stimulus (Odds ratio = 0.77; T = −3.12; p = 0.001; 95% confidence interval 0.657–0.908). No significant difference in reaction time was found (β = −0.005; T = −0.26; p = 0.799; 95% confidence interval −0.047 to 0.036) (see Figure 3).

In order to combine accuracy with response time, the Inverse Efficiency Score (IES)24 was used. The probability of being accurate on the direction trials was modeled using a logistic regression with response time as the control variable. Again, an individual random intercept was added for each subject to account for potential intra-subject correlations between trials. The results of this mixed-effect regression showed a significant effect of experimental condition: the estimated probability of accurately responding to the stimulus was 18.9% smaller when participants texted while walking, as compared to when they simply walked prior to the appearance of the stimulus (Odds ratio = 0.811; T = −2.46; p = 0.014; 95% confidence interval 0.686–0.959; see Figure 3). This showed that, regardless of response time, accuracy in identifying the stimulus's direction was consistently lower when participants texted while walking.
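The classic IES of Townsend and Ashby24 divides mean correct response time by the proportion of correct responses; a minimal sketch with made-up trial data is:

```python
def inverse_efficiency_score(rts, correct):
    """Classic IES (Townsend & Ashby): mean response time on correct
    trials divided by the proportion of correct trials. Lower is more
    efficient; a high IES flags slow and/or inaccurate responding."""
    correct_rts = [rt for rt, ok in zip(rts, correct) if ok]
    mean_rt = sum(correct_rts) / len(correct_rts)
    accuracy = sum(correct) / len(correct)
    return mean_rt / accuracy

# Hypothetical trials: response times (s) and correctness flags
rts = [0.52, 0.61, 0.48, 0.70]
correct = [1, 1, 0, 1]
print(round(inverse_efficiency_score(rts, correct), 3))  # 0.813
```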

Neurophysiological data
EEG recordings were used to determine the neurophysiological activity involved in task switching by observing alpha oscillations and cognitive engagement. Using EEG during movement introduces more artefacts, so several steps were taken to ensure the quality of the data. First, to allow recording during walking, new active electrode technology with a noise subtraction circuit (i.e., pre-amplified electrodes) was used. Second, the EEG data were filtered offline with a lowpass IIR filter at 20 Hz to isolate the alpha waves and a highpass IIR filter at 1 Hz to reduce noise. Third, an Independent Component Analysis (ICA) was applied in order to attenuate the artefacts caused by eye blinks and ocular saccades in the EEG data25. Fourth, an automatic artefact rejection was used to exclude epochs with voltage differences over 50 μV between two neighboring sampling points or over 50 μV within a 75 ms interval.
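The two rejection criteria can be sketched as follows. The sampling rate and the synthetic epochs are hypothetical, and the actual rejection was performed in the EEG analysis software:

```python
import numpy as np

def reject_epoch(epoch_uv, fs, step_limit=50.0, window_limit=50.0,
                 window_ms=75):
    """Flag a 1-D epoch (in microvolts) for rejection if any
    sample-to-sample step exceeds `step_limit` uV, or if the
    peak-to-peak range within any `window_ms` window exceeds
    `window_limit` uV."""
    # Criterion 1: difference between two neighboring sampling points
    if np.any(np.abs(np.diff(epoch_uv)) > step_limit):
        return True
    # Criterion 2: peak-to-peak range inside a sliding 75 ms window
    win = max(2, int(window_ms / 1000 * fs))
    for start in range(len(epoch_uv) - win + 1):
        seg = epoch_uv[start:start + win]
        if seg.max() - seg.min() > window_limit:
            return True
    return False

fs = 500  # hypothetical sampling rate (Hz)
clean = 10 * np.sin(2 * np.pi * 10 * np.arange(fs) / fs)  # ±10 uV alpha
noisy = clean.copy()
noisy[200] += 80  # a single 80 uV artefact spike
print(reject_epoch(clean, fs), reject_epoch(noisy, fs))  # False True
```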

Data analysis was performed with Vision Analyzer 2. Based on Luck26, data were re-referenced to the common average reference. The data were then segmented to isolate the 2 s after the presentation of the walker stimulus as well as a 2 s baseline. For each stimulus presentation, a baseline representing the activity occurring when the participant solely walked, or texted while walking, was determined. This baseline was obtained from a 2 s window occurring 12 s before the auditory cue of each stimulus appearance. Both segments were analyzed separately with a Fast Fourier Transform on 1 s epochs to obtain power values in the frequency domain. All epochs were averaged separately by experimental condition.
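The per-epoch power computation can be illustrated with a plain FFT in Python. This is a sketch with a synthetic 1 s epoch, not the Vision Analyzer pipeline; the sampling rate is a hypothetical value.

```python
import numpy as np

def band_power(epoch, fs, lo, hi):
    """Mean spectral power of a 1-D epoch in the [lo, hi] Hz band,
    from an FFT over the full epoch (1 s epochs give 1 Hz bins)."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    power = np.abs(np.fft.rfft(epoch)) ** 2 / len(epoch)
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

fs = 250                      # hypothetical sampling rate (Hz)
t = np.arange(fs) / fs        # one 1 s epoch
epoch = np.sin(2 * np.pi * 10 * t)  # a pure 10 Hz "alpha" oscillation
alpha = band_power(epoch, fs, 8, 12)
theta = band_power(epoch, fs, 4, 8)
print(alpha > theta)  # True: the 10 Hz component dominates the alpha band
```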

The aim of this analysis was to determine whether the two sub-steps of task-set inhibition and task-set activation impact the behavioral switch cost (i.e., performance measures) differently. To do this, the EEG data were analyzed with two indexes: 1) alpha oscillations; and 2) cognitive engagement. All the calculations were done using the Cz and Pz sites because their data contained less noise and fewer artefacts. The changes in alpha oscillations due to the stimulus presentation were analyzed with alpha ratios, comparing the baseline alpha power with the alpha power occurring during the stimulus presentation18,19. Using the cognitive engagement index developed by Pope et al.20, a ratio was created of the combined power in the beta band (14–20 Hz) divided by the combined power in the alpha (8–12 Hz) and theta (4–8 Hz) bands. To calculate the combined power of each band, the powers at the Cz and Pz sites were summed.
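Given band powers at the two sites, the engagement index reduces to a simple ratio; the band-power values below are hypothetical:

```python
def engagement_ratio(beta_cz, beta_pz, alpha_cz, alpha_pz,
                     theta_cz, theta_pz):
    """Pope et al. engagement index: beta / (alpha + theta),
    with each band's power summed over the Cz and Pz sites."""
    beta = beta_cz + beta_pz
    alpha = alpha_cz + alpha_pz
    theta = theta_cz + theta_pz
    return beta / (alpha + theta)

# Hypothetical band powers (uV^2) at Cz and Pz
print(round(engagement_ratio(4.0, 3.0, 6.0, 5.0, 2.0, 3.0), 3))  # 0.438
```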

The alpha ratio and its effect on performance were compared between the two conditions. The alpha ratio reflects the processes of task inhibition. Because the alpha ratio was measured for each participant, it was necessary to compare the ratio with the aggregated performance during that condition (i.e., the correct response percentage over the 44 trials of that condition). To compare the correlation coefficients of the two conditions, the z-test proposed by Steiger27 was used as a means to compare correlation coefficients measured from the same individual. At the Pz site, it was found that the correlation between performance and the alpha ratio was statistically different between the two conditions (p = 0.032; 95% confidence interval = 0.054–1.220) (see Figure 4). Because the correlations in the two conditions were of opposite signs, the inhibition processes impacted performance differently in each condition: a higher alpha ratio led to better performance during the walking condition, while in the texting condition performance was hindered by a higher alpha ratio. These results show that when texting while walking, the amount of resources needed to inhibit the previous task set negatively impacted performance. Thus, the extent to which participants engaged resources in task-set inhibition had more effect on upcoming performance when they were texting. With regards to the Cz site, no significant differences were found, suggesting that the effect was mostly located in the parietal region of the scalp.

The cognitive engagement ratio and its effect on performance were also compared between the two conditions. As with the alpha ratio, the z-test proposed by Steiger27 was used for this analysis. The results showed a statistically significant difference between the two conditions: the engagement in the task performed immediately before the appearance of the stimulus (i.e., walking or texting while walking) impacted performance differently in each condition (p = 0.027; 95% confidence interval = −1.062 to −0.061). Here again, the correlations were of opposite signs. Our results suggest that when participants were walking before the task switch, a higher cognitive engagement ratio was related to a decrease in performance, whereas when participants were texting while walking before the task switch, a higher cognitive engagement ratio was related to an increase in performance. This shows that the higher task-switching cost of texting while walking was not due to higher cognitive engagement in that task.
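For intuition, the sketch below compares two correlations with a Fisher r-to-z test as if they came from independent samples. This is a deliberate simplification: Steiger's27 tests additionally use the correlations among the measures themselves to account for their dependence, so the values below (hypothetical opposite-sign correlations for n = 54) are illustrative only.

```python
import math

def fisher_z(r):
    """Fisher r-to-z transform."""
    return 0.5 * math.log((1 + r) / (1 - r))

def compare_correlations(r1, n1, r2, n2):
    """Two-tailed z-test comparing two correlations as if independent.
    A simplified stand-in for Steiger's (1980) tests, which also use
    the correlation between the measures to model their dependence."""
    z = (fisher_z(r1) - fisher_z(r2)) / math.sqrt(
        1 / (n1 - 3) + 1 / (n2 - 3))
    # Two-tailed p-value from the standard normal CDF (via erf)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical opposite-sign correlations from n = 54 participants
z, p = compare_correlations(0.30, 54, -0.25, 54)
print(round(z, 2), p < 0.05)  # 2.85 True
```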

Movie 1: In this video, a figure walking towards the right side of the subject is visible.

Figure 2: Experimental setup of the room.

Figure 3: Effect of texting on accuracy and response time.

Figure 4: Correlation between alpha at Pz and performance.

Discussion

A critical consideration when using this protocol is ensuring the quality of the neurophysiological data. There is an inherent complication to using a tool like EEG during movement, because excessive movement can create a lot of noise in the data. It is therefore important to consider, prior to data collection, how the data will be prepared to remove as many artefacts as possible without modifying the actual signal. Nonetheless, it is still quite likely that there will be higher rates of data exclusion because participants walk on a treadmill throughout the experiment. Certain participants' data will be unusable due to artefacts caused by excessive facial, head, and body movements, as well as excessive sweating and equipment malfunction. To avoid biasing or impacting the results, data exclusion criteria should be determined prior to the behavioral analysis. Since conducting this study, our laboratory has acquired the capability of localizing electrode position, and we hope to use this technology in future studies to better analyze source activity. We recommend that future studies take advantage of electrode localization technology to permit source estimation of related EEG signals.

A critical step in this protocol is the script for the participant's texting conversation with the research assistant. It is important that the texting conversations be guided with predefined topics and some open-ended questions. There is much value in following such a script. First, it ensures that all participants have similar types of conversation, removing the variability that would exist in a naturally occurring conversation; in this way, the level of distraction does not vary because conversations differ excessively between participants. Second, by choosing the topics wisely, it ensures that the conversation does not provoke strong emotional reactions. Emotionally charged interactions may alter EEG analysis and distractibility levels, which would in turn complicate the interpretation of both the behavioral and neurophysiological results. All texting conversations will inevitably vary to some extent, but having a script allows a certain amount of control over this variability. To further limit variability in the conversation, it is preferable to have one designated research assistant responsible for this task throughout the research project. Nonetheless, by adhering to a script we also lose the ecological validity of such a conversation. When individuals have conversations with their friends, for example, these conversations may be emotionally charged, and this may in fact alter the task-switching cost. Yet analyzing the impact of conversation types on the task-switching cost would require a study focused on that aspect, given the complexity of such an analysis. Hence, for our purposes the use of a script was more appropriate.

Caution is also needed when creating the database file where the participants' responses are noted. The formula we used in Excel to test accuracy (i.e., =IF(A1=B1,1,0)) is format dependent (e.g., it is influenced by extra blank spaces and capitalized letters). It is therefore recommended to write R for right or L for left, in the same format as that used in the output extracted from the visual experiment presentation software. Any error in the writing of the file may cause false negatives in the accuracy rating. Finally, for this kind of study, where visual processing plays a large role, it is important that all participants have normal or corrected-to-normal vision. Because EEG is used, it is also relevant to screen for epilepsy as well as neurological and psychiatric diagnoses, which could impact the brain signals of the participants. It is wise to exclude those participants from the study, as differences in brain activity may bias the results.
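A format-robust version of this accuracy check, which trims blank spaces and ignores capitalization before comparing, can be sketched as follows (the example responses are hypothetical):

```python
def score_response(given, expected):
    """Format-robust version of the Excel check =IF(A1=B1,1,0):
    trims whitespace and ignores case so that ' r ' matches 'R'.
    Non-matching strings (e.g., 'right' vs 'R') still score 0."""
    return int(given.strip().upper() == expected.strip().upper())

responses = [" r ", "L", "right", "L"]
answers =   ["R",   "l", "R",     "R"]
print([score_response(g, e) for g, e in zip(responses, answers)])
# [1, 1, 0, 0]
```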

This methodology can be modified to test multiple smartphone uses (e.g., reading, social media, gaming, viewing images, etc.)28. Questionnaires can also be added in between experimental conditions, or at the end of the experiment, to gain more insight into the participants' characteristics and perceptions (see Mourra29). Questionnaires in between the tasks should not be time consuming, to avoid unnecessarily increasing the participants' fatigue for the following conditions. This interval is useful for testing different task-related constructs, such as the perception of time, interest in the task the participant just completed, and perceived difficulty. Questionnaires at the end of the experiment can be more time consuming, but the fatigue of completing the conditions must be taken into account. The questionnaires should be timed so that participants' answers are not biased by their experience during the task, and so that participants' behavior is not biased by the questions asked previously.

This method is limited in that real pedestrian environments have many stimuli presented simultaneously, so the cognitive load required in these environments is probably much higher than in this study (see Pourchon et al.7). Nonetheless, to be able to truly pinpoint the underlying neurophysiological mechanisms, such a trade-off seemed necessary. Depending on the purpose of the particular study, the visual stimulus may be modified to test different factors that may impact the task-switching cost of using a smartphone while walking. In this methodology, the point-light walker figure was used instead of an actual human figure because the point-light walker is less prone to bias. The appearance of an actual human walker could be more pleasing or displeasing to certain participants, and this may impact the attention attributed to it. By using a group of dots representing a human form and human movement, we bypass potential extraneous variables such as the human walker's gender, clothing, and body image, which may skew the results. For example, participants who find the human walker more attractive may be more prone to focusing their attention on the walker than they would have otherwise.

This methodology can be used for different applications in future studies. By modifying, for example, the visual stimulus to have different characteristics, it would be possible to study how the characteristics of an object in an environment influence the task-switching cost. It may also be interesting to use this method with a manual treadmill, where the action of the participants' feet against the deck moves the treadmill belt. In this way, we could determine how speed fluctuates during the experiment due to multitasking or due to the task switching. This would increase the ecological validity while adding a new variable to consider in the analysis (e.g., does stopping, or walking slower or faster, influence the participants' performance?). Thus, both in terms of stimuli and subject movement, there are many other possibilities than the ones proposed in this method (i.e., point-light walker and automatic treadmill) to investigate texting-while-walking behaviors (Pourchon et al.7, Schabrun et al.30). These possibilities would increase the internal or external validity of future studies. It must also be noted that our decision to use EEG data from only two electrodes comes with limitations; future research should try to extend the analysis to regions of interest encompassing multiple electrodes. It would also be possible to forgo the conversation script and let conversation occur naturally. In such instances, the content of the conversation could be examined with a content analysis, and the impact of different types of conversations could be studied in a natural way. In sum, this methodology can serve as a base on which more complex studies can build to grow our knowledge of the different factors that may impact our capacity to multitask with a smartphone while walking.

Disclosures

The authors have nothing to disclose.


Acknowledgments

The authors acknowledge the financial support of the Social Sciences and Humanities Research Council of Canada (SSHRC).


Materials

Name Company Catalog Number Comments
The Observer XT Noldus Integration and synchronization software: The Noldus Observer XT (Noldus Information Technology) is used to synchronize all behavioral, emotional and cognitive engagement data.
MediaRecorder Noldus Audio and video recording software
FaceReader Noldus Software for automatic analysis of the 6 basic facial expressions
E-Prime Psychology Software Tools, Inc. Software for computerized experiment design, data collection, and analysis
BrainVision Recorder Brain Vision Software used for recording neuro-/electrophysiological signals (EEG in this case)
Analyzer Brain Vision EEG signal processing software
Qualtrics Qualtrics Online survey environment
Treadmill ThermoTread GT Office Treadmill
Syncbox Noldus The Syncbox starts the co-registration of EEG and gaze data by sending a Transistor-Transistor Logic (TTL) signal to the EGI amplifier and a keystroke signal to Tobii Studio v 3.2.
Move2actiCAP Brain Vision Add-on for a digital wireless system for EEG
iPhone 6s Apple
iMessage Apple
iPad Apple



References

  1. Monsell, S. Task switching. Trends in Cognitive Sciences. 7, (3), 134-140 (2003).
  2. Haga, S., et al. Effects of using a Smart Phone on Pedestrians' Attention and Walking. Procedia Manufacturing. 3, 2574-2580 (2015).
  3. Hatfield, J., Murphy, S. The effects of mobile phone use on pedestrian crossing behaviour at signalised and unsignalised intersections. Accident Analysis & Prevention. 39, (1), (2007).
  4. Stavrinos, D., Byington, K. W., Schwebel, D. C. Distracted walking: Cell phones increase injury risk for college pedestrians. Journal of Safety Research. 42, (2), 101-107 (2011).
  5. Nasar, J., Hecht, P., Wener, R. Mobile telephones, distracted attention, and pedestrian safety. Accident Analysis & Prevention. 40, (1), 69-75 (2008).
  6. Hyman, I. E., Boss, S. M., Wise, B. M., McKenzie, K. E., Caggiano, J. M. Did you see the unicycling clown? Inattentional blindness while walking and talking on a cell phone. Applied Cognitive Psychology. 24, (5), 597-607 (2010).
  7. Pourchon, R., et al. Is augmented reality leading to more risky behaviors? An experiment with pokémon go. Proceedings of the International Conference on HCI in Business, Government, and Organizations. 354-361 (2017).
  8. Thornton, I. M., Rensink, R. A., Shiffrar, M. Active versus Passive Processing of Biological Motion. Perception. 31, (7), 837-853 (2002).
  9. Neider, M. B., McCarley, J. S., Crowell, J. A., Kaczmarski, H., Kramer, A. F. Pedestrians, vehicles, and cell phones. Accident Analysis & Prevention. 42, 589-594 (2010).
  10. Wylie, G., Allport, A. Task switching and the measurement of "switch costs". Psychological Research. 63, (3-4), 212-233 (2000).
  11. Johansson, G. Visual perception of biological motion and a model for its analysis. Perception & Psychophysics. 14, (2), 201-211 (1973).
  12. Cavanagh, P., Labianca, A. T., Thornton, I. M. Attention-based visual routines: Sprites. Cognition. 80, (1-2), 47-60 (2001).
  13. Troje, N. F. Retrieving Information from Human Movement Patterns. Understanding Events. 308-334 (2008).
  14. Klimesch, W. EEG alpha and theta oscillations reflect cognitive and memory performance: A review and analysis. Brain Research Reviews. 29, (2-3), 169-195 (1999).
  15. Jensen, O., Gelfand, J., Kounios, J., Lisman, J. E. Oscillations in the Alpha Band (9-12 Hz) Increase with Memory Load during Retention in a Short-term Memory Task. Cerebral Cortex. 12, (8), 877-882 (2002).
  16. Busch, N. A., Herrmann, C. S. Object-load and feature-load modulate EEG in a short-term memory task. NeuroReport. 14, (13), 1721-1724 (2003).
  17. Herrmann, C. S., Senkowski, D., Röttger, S. Phase-Locking and Amplitude Modulations of EEG Alpha. Experimental Psychology. 51, (4), 311-318 (2004).
  18. Pfurtscheller, G., Aranibar, A. Event-related cortical desynchronization detected by power measurements of scalp EEG. Electroencephalography and Clinical Neurophysiology. 42, (6), 817-826 (1977).
  19. Sauseng, P., et al. EEG alpha synchronization and functional coupling during top-down processing in a working memory task. Human Brain Mapping. 26, (2), 148-155 (2005).
  20. Pope, A. T., Bogart, E. H., Bartolome, D. S. Biocybernetic system evaluates indices of operator engagement in automated task. Biological Psychology. 40, (1-2), 187-195 (1995).
  21. Scerbo, M. W., Freeman, F. G., Mikulka, P. J. A brain-based system for adaptive automation. Theoretical Issues in Ergonomics Science. 4, (1-2), 200-219 (2003).
  22. Charland, P., et al. Assessing the Multiple Dimensions of Engagement to Characterize Learning: A Neurophysiological Perspective. Journal of Visualized Experiments. (101), e52627 (2015).
  23. Courtemanche, F., et al. Texting while walking: An expensive switch cost. Accident Analysis & Prevention. 127, 1-8 (2019).
  24. Townsend, J. T., Ashby, F. G. The stochastic modeling of elementary psychological processes. Cambridge University Press. Cambridge. (1983).
  25. Jung, T., et al. Removal of eye activity artifacts from visual event-related potentials in normal and clinical subjects. Clinical Neurophysiology. 111, 1745-1758 (2000).
  26. Luck, S. J. An Introduction to the Event-related Potential Technique (Cognitive Neuroscience). MIT Press. Cambridge, MA. (2005).
  27. Steiger, J. H. Tests for comparing elements of a correlation matrix. Psychological Bulletin. 87, (2), 245-251 (1980).
  28. Léger, P. -M., et al. Task Switching and Visual Discrimination in Pedestrian Mobile Multitasking: Influence of IT Mobile Task Type. Information Systems and Neuroscience: Vienna Retreat on NeuroIs 2019. Davis, F., Riedl, R., vom Brocke, J., Léger, P. -M., Randolph, A., Fischer, T. H. Springer. 245-251 (2020).
  29. Mourra, G. N. Addicted to my smartphone: what factors influence the task-switching cost that occurs when using a smartphone while walking. Retrieved from: (2019).
  30. Schabrun, S. M., van den Hoorn, W., Moorcroft, A., Greenland, C., Hodges, P. W. Texting and walking: strategies for postural control and implications for safety. PLoS ONE. 9, (1), e84312 (2014).


