
Behavior

Real-Time Proxy-Control of Re-Parameterized Peripheral Signals using a Close-Loop Interface

Published: May 8, 2021 doi: 10.3791/61943

Summary

We present protocols and methods of analyses to build co-adaptive interfaces that stream, parameterize, analyze, and modify human body and heart signals in a closed loop. This setup interfaces signals derived from the person's peripheral and central nervous systems with external sensory inputs, to help track biophysical change.

Abstract

The fields that develop methods for sensory substitution and sensory augmentation have aimed to control external goals using signals from the central nervous system (CNS). Less frequent, however, are protocols that update external signals self-generated by interactive bodies in motion. There is a paucity of methods that combine the body-heart-brain biorhythms of one moving agent to steer those of another moving agent during dyadic exchange. Part of the challenge in accomplishing such a feat has been the complexity of a setup that uses multimodal bio-signals with different physical units, disparate time scales, and variable sampling frequencies.

In recent years, the advent of wearable bio-sensors that can non-invasively harness multiple signals in tandem has opened the possibility of re-parameterizing and updating the peripheral signals of interacting dyads, in addition to improving brain- and/or body-machine interfaces. Here we present a co-adaptive interface that updates efferent somatic-motor output (including kinematics and heart rate) harnessed with biosensors; parameterizes these stochastic bio-signals; sonifies this output; and feeds it back in re-parameterized form as visuo/audio-kinesthetic reafferent input. We illustrate the methods using two types of interactions, one involving two humans and another involving a human and his/her avatar interacting in near real time. We discuss the new methods in the context of possible new ways to measure the influence of external input on internal somatic-sensory-motor control.

Introduction

The Natural Closed-Loop Controller
Sensory-motor information flows continuously between the brain and the body to produce well-organized, coordinated behaviors. Such behaviors can be studied while focusing on the person's actions alone, as in a monologue style (Figure 1A), or during complex dynamic actions shared between two agents in a dyad, as in a dialogue style (Figure 1B). Yet a third option is to assess such complex interactions through a proxy controller, within the context of a human-computer closed-loop interface (Figure 1C). Such an interface can track the moment-by-moment fluctuations in the movements contributed by each agent in the dyad, and the type of cohesiveness that self-emerges from their synchronous interactions, helping steer the dyad's rhythms in desirable ways.

Figure 1
Figure 1: Different forms of control. (A) Self brain-controlled interfaces rely on the closed-loop relations between the person's brain and the person's own body, which can self-regulate and self-interact in "monologue" style. This mode attempts the control of self-generated motions, or it may also aim to control external devices. (B) "Dialogue" style control is introduced for two dancers who interact with each other through physical entrainment and turn-taking to attain control over each other's motions. (C) "Third-party" dialogue control of the dyad is introduced as mediated by a computer interface that harnesses in tandem the bio-signals from both dancers, parameterizes them, and feeds them back to the dancers in re-parameterized form using audio and/or vision as forms of sensory guidance. The re-parameterization in the examples presented here was attained using audio or visual feedback, enhanced by the real-time kinesthetic motor output of one of the dancers to influence the other, or of both dancers taking turns in some alternating pattern.

The overall goal of this method is to show that it is possible to harness, parameterize and re-parameterize the moment-by-moment fluctuations in biorhythmic activities of bodies in motion, as two agents engage in dyadic exchange that may involve two humans, or a human and his/her self-moving avatar.

Investigations on how the brain may control actions and predict their sensory consequences have generated many lines of theoretical enquiry in the past1,2,3 and produced various models of neuromotor control4,5,6,7,8. One line of research in this multi-disciplinary field has involved the development of closed-loop brain-machine or brain-computer interfaces. These types of setups offer ways to harness and adapt the CNS signals to control an external device, such as a robotic arm9,10,11, an exoskeleton12, or a cursor on a computer screen13 (among others). All these external devices share the property that they have no intelligence of their own. Instead, the brain trying to control them does, and part of the problem that the brain faces is to learn how to predict the consequences of the motions that it generates in these devices (e.g., the cursor's motions, the robotic arm's motions, etc.) while generating other supportive motions that contribute to the overall sensory-motor feedback in the form of kinesthetic reafference. Often, the overarching aim of these interfaces has been to help the person behind that brain bypass an injury or disorder, so as to regain the transformation of his/her intentional thoughts into volitionally controlled physical acts of the external device. Less common, however, has been the development of interfaces that attempt to steer the movements of bodies in motion.

Much of the original research on brain-machine interfaces focuses on the control by the central nervous system (CNS) over body parts that can accomplish goal-directed actions9,14,15,16,17. There are, however, other situations whereby using the signals derived from activities of the peripheral nervous system (PNS), including those of the autonomic nervous system (ANS), is informative enough to influence and steer the signals of external agents, inclusive of another human or avatar, or even of interacting humans (as in Figure 1C). Unlike a robotic arm or cursor, the other agent in this case has intelligence driven by a brain (in the case of the avatar that has been endowed with the person's motions, or of another agent, in the case of an interacting human dyad).

A setup that creates an environment of a co-adaptive closed-loop interface with dyadic exchange may be of use to intervene in disorders of the nervous system whereby the brain cannot volitionally control one's own body in motion at will, despite the bridge between the CNS and the PNS not having been physically severed. This may be the case owing to noisy peripheral signals, whereby the feedback loops that aid the brain to continuously monitor and adjust its own self-generated biorhythms may have been disrupted. This scenario arises in patients with Parkinson's disease18,19, or in participants with autism spectrum disorders with excess noise in their motor output. Indeed, in both cases, we have quantified high levels of noise-to-signal ratio in the returning kinesthetic signals derived from the speed of their intended movements20,21,22 and from the heart23. In such cases, trying to master the brain-control of external signals, while also trying to control the body in motion, may result in a self-reactive signal from the re-entrant (re-afferent) stream of information that the brain receives from the continuous (efferent) motor stream at the periphery. Indeed, the moment-by-moment fluctuations present in such a self-generated efferent motor stream contain important information useful to aid the prediction of the sensory consequences of purposeful actions24. When this feedback is corrupted by noise, it becomes difficult to predictably update the control signals and bridge intentional plans with physical acts.

If we were to extend such a feedback loop to another agent and control the person-agent interactions through a third party (Figure 1C), we may have a chance to steer each other's performances in near real time. This would provide the proof of concept that we would need to extend the notion of co-adaptive brain-body or brain-machine interfaces to treat disorders of the nervous system that result in poor realization of physical volition from mental intent.

Purposeful actions have consequences, which are precisely characterized by motor stochastic signatures that are context-dependent and enable inference of levels of mental intent with high certainty25,26. Thus, an advantage of a new method that leverages dyadic exchange over prior person-centered approaches to brain-machine or brain-computer interfaces is that we can augment the control signals to include the bodily and heart biorhythms that transpire largely beneath the person's awareness, under different levels of intent. In this way, we dampen the reactive interference that conscious control tends to evoke in the process of adapting brain-cursor control17. We can add more certainty to the predictive process by parameterizing the various signals that we can access. Along those lines, prior work exists using brain and bodily signals in tandem27,28,29; but work involving dyadic interactions captured by brain-bodily signals remains scarce. Further, the extant literature has yet to delineate the distinction between deliberate segments of the action performed under full awareness and transitional motions that spontaneously occur as the consequence of the deliberate ones30,31. Here we make that distinction in the context of dyadic exchange, and offer new ways to study this dichotomy32, while providing examples of choreographed (deliberate) vs. improvised (spontaneous) motions in the dance space.

Because of the transduction and transmission delays in the sensory-motor integration and transformation processes33, it is necessary to have such a predictive code in place, to learn to anticipate upcoming sensory input with high certainty. To that end, it is important to be able to characterize the evolution of the noise-to-signal ratio derived from signals in the continuously updating kinesthetic reafferent stream. We then need protocols in place to systematically measure change in motor variability. Variability is inherently present in the moment-by-moment fluctuations of the outgoing efferent motor stream34. Since these signals are non-stationary and sensitive to contextual variations35,36, it is possible to parameterize changes that occur with alterations of the task's context. To minimize interference from reactive signals that emerge from conscious CNS control, and to evoke quantifiable changes in the efferent PNS motor stream, we introduce here a proxy closed-loop interface that indirectly alters the sensory feedback by recruiting the peripheral signal that is changing largely beneath the person's self-awareness. We then show ways to systematically measure the change that ensues from the sensory manipulations, using stochastic analyses amenable to visualizing the process that the proxy closed-loop interface indirectly evokes in both agents.

Introducing a Proxy Closed-Loop Controller
The sensory-motor variability present in the peripheral signals constitutes a rich source of information to guide the performance of the nervous systems while learning, adaptation, and generalization take place across different contexts37. These signals partly emerge as a byproduct of the CNS trying to volitionally control actions but are not the direct goal of the controller. As the person naturally interacts with others, the peripheral signals can be harnessed, standardized, and re-parameterized; that is, their variations can be parameterized and systematically shifted, as one alters the efferent motor stream that continuously re-enters the system as kinesthetic reafference. In such settings, we can visualize the stochastic shifts, capturing with high precision a rich signal that is otherwise lost to the types of grand averaging that more traditional techniques perform.

To achieve the characterization of change under the new statistical platform, we here introduce protocols, standardized data types, and analytics that permit the integration of external sensory input (auditory and visual) with internally self-generated motor signals, while the person naturally interacts with another person, or with an avatar version of the person. In this sense, because we are aiming at controlling the peripheral signals (rather than modifying the CNS signals to directly control the external device or media), we coin this a proxy closed-loop interface (Figure 2). We aim at characterizing the changes in the stochastic signals of the PNS, as they impact those in the CNS.

Figure 2
Figure 2: Proxy control of a dyadic interaction using a closed-loop multi-modal interface. (A) Indirect control of two dancers (dancing salsa) via a computer co-adaptive interface vs. (B) an interactive artificial person-avatar dyad, controlled by harnessing the peripheral nervous systems' signals and re-parameterizing them as sounds and/or as visual input. (C) The concept of sonification using a new standardized data type (the micro-movement spikes, MMS) derived from the moment-by-moment fluctuations in the amplitude/timing of biorhythmic signals, converted to vibrations and then to sound. From physics, we borrow the notions of compressions and rarefactions produced by a tuning fork outputting sound waves as measurable vibrations. Schematics of soundwaves are represented as pressure modulated over time, in parallel to spike concentrations for sonification. An example of a physical signal undergoes the proposed pipeline from MMS to vibrations and sonification. We use the heart rate signal as input to the interface. This takes fluctuations in the signal's amplitude aligned to the movement onset every 4 seconds of motion and builds MMS trains representing the vibrations. The spike trains from the MMS are standardized to the [0,1] range. The color of the spikes, as per the color bar, represents the intensity of the signal. We then sonify these vibrations using Max. This sonified signal can be used to play back in A, or to alter in B the interactions with the avatar. Further, in B it is possible to embed the sound in the environment and use the body position to play the sound back at a region of interest (RoI), or to modulate the audio features as a function of distance to the RoI, or of the speed or acceleration of a body part anchored to another body part, when passing by the RoI.
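To make the sonification step concrete, the sketch below maps a normalized MMS train (values in [0,1]) onto a sequence of short tones whose pitch and loudness scale with spike size. It is a minimal illustration in Python; the function name, the frequency range, and the use of a WAV file (rather than the Max-based pipeline described above) are assumptions made for demonstration only.

```python
# Minimal sonification sketch: one short tone per MMS spike, pitch and loudness
# scaled by spike size. Frequency range and tone duration are illustrative choices.
import numpy as np
from scipy.io import wavfile

def sonify_mms(mms, fs=44100, tone_dur=0.15, f_lo=220.0, f_hi=880.0):
    """Map each MMS value in [0,1] to a Hann-windowed sine tone between f_lo and f_hi Hz."""
    t = np.arange(int(fs * tone_dur)) / fs
    env = np.hanning(t.size)                      # smooth attack/decay to avoid clicks
    tones = [0.5 * s * env * np.sin(2 * np.pi * (f_lo + s * (f_hi - f_lo)) * t)
             for s in mms]
    return np.concatenate(tones)

if __name__ == "__main__":
    mms = np.random.rand(20)                      # placeholder MMS train
    wavfile.write("mms_sonification.wav", 44100, sonify_mms(mms).astype(np.float32))
```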

The PNS signals can be harnessed non-invasively with wearable sensing technologies that co-register multi-modal efferent streams from different functional layers of the nervous systems, ranging from autonomic to voluntary32. We can then measure in near real time the changes in such streams and select those whose changes enhance the signal-to-noise ratio. This efferent motor signal can then be augmented with other forms of sensory guidance (e.g., auditory, visual, etc.). Because the PNS signals escape full awareness, they are easier to manipulate without much resistance38. As such, we use them to help steer the person's performance in ways that may be less stressful to the human system.

Building the Interface
We present the design of the proxy control mediated by a closed-loop co-adaptive multimodal interface. This interface steers the real-time multisensory feedback. Figure 3 displays the general design.

The closed-loop interface is characterized by five main steps. The first step is the multi-modal data collection from multiple wearable instruments. The second step is the synchronization of the multi-modal streams through the platform of LabStreamingLayer (LSL, https://github.com/sccn/labstreaminglayer) developed by the MoBI group39. The third step is the streaming of the LSL data structure to a Python, MATLAB, or other programming-language interface to integrate the signals and to empirically parameterize physiological features (relevant to our experimental setup) in real time. The fourth step is to re-parameterize the selected features extracted from the continuous stream of the bodily signal studied and augment them using a sensory modality of choice (e.g., visual, auditory, kinesthetic, etc.), playing them back in the form of sounds or visuals to augment, substitute, or enhance the sensory modality that is problematic in the person's nervous system. Finally, the fifth step is to re-assess the stochastic signatures of the signals generated by the system in real time, to select which sensory modality brings the stochastic shifts of the bodily fluctuations to a regime of high certainty (noise minimization) in the prediction of the sensory consequences of the impending action. This loop is played continuously throughout the duration of the experiment with the focus on the selected signal, while the full performance is stored for subsequent analyses (as depicted in the schematics of Figure 3; see40,41,42,43,44,45,46,47 for examples of a posteriori analyses).

Figure 3
Figure 3: The architecture of the multi-modal, peripherally driven closed-loop interface concept. Various bodily signals are collected: kinematic data, heart activity, and brain activity (step 1). LSL is used to synchronously co-register and stream the data coming from the various equipment to the interface (step 2). Python/MATLAB/C# code is used to continuously parameterize the fluctuations in the signals using a standardized data type and a common scale that enables selecting the source of sensory guidance most adequate to dampen the system's uncertainty (step 3). This real-time enhancement of signal transmission through the selected channel(s) then allows re-parameterization of the re-entrant sensory signal, to integrate it into the continuous motor stream and enhance the lost or corrupted input stream (sensory substitution, step 4). Continuous re-assessment closes the loop (step 5), and all data are saved for additional future analyses.
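As a concrete, minimal sketch of steps 2-5, the Python fragment below pulls an LSL stream, parameterizes each 4-second window of data, and hands a summary value to a placeholder feedback function. The stream type ("ECG"), the window length, the normalization, and the send_feedback hook are illustrative assumptions; the published code for the actual interfaces is referenced in the Protocol section.

```python
# Hedged sketch of the closed-loop pipeline (steps 2-5), assuming an LSL stream of
# type "ECG" is already being broadcast by the acquisition software (steps 1-2).
import numpy as np
from pylsl import StreamInlet, resolve_byprop

def send_feedback(value):
    """Placeholder for the sensory-augmentation channel (e.g., an OSC message to Max)."""
    print("feedback value:", value)

def run_loop(window_sec=4.0, fs=500):
    streams = resolve_byprop('type', 'ECG', timeout=10)    # step 2: locate the stream
    inlet = StreamInlet(streams[0])
    buf = []
    while True:
        sample, _ = inlet.pull_sample(timeout=1.0)
        if sample is None:
            continue
        buf.append(sample[0])
        if len(buf) >= int(window_sec * fs):               # step 3: parameterize a window
            window = np.asarray(buf); buf = []
            dev = np.abs(window - window.mean())
            norm = dev / (dev + dev.mean())                # stand-in for the MMS normalization
            send_feedback(float(norm.mean()))              # step 4: re-parameterized feedback
            print("spread of fluctuations:", norm.std())   # step 5: monitor the signatures
```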

The following sections present the generic protocol for building a closed-loop interface (as described in Figure 3) and describe representative results of two experimental interfaces (presented in detail in the Supplementary Material) involving a physical dyadic interaction between two dancers (real closed-loop system) and a virtual dyadic interaction between a person and an avatar (artificial closed-loop system).

Protocol

The study was approved by the Rutgers University Institutional Review Board (IRB) in compliance with the Declaration of Helsinki.

1. Participants

  1. Define the population to be studied and invite them to participate in the study. The present interface can be used in various populations. This protocol and the examples used here to provide proof of concept are not limited to a specific group.
  2. Obtain written informed consent per the IRB-approved protocol, in compliance with the Declaration of Helsinki.
  3. Ask the participant or guardian to sign the form before the beginning of the experiment.

2. Setup of the Closed-Loop Interface

  1. Setup of kinematic equipment-PNS
    1. Help the participant to carefully put on the LED-based motion-capture costume (body and head, shown in Figure 3, steps 1 and 5) accompanying the motion-capture system used. The LED markers of the costume will be tracked by the cameras of the system to estimate the location of the moving body in space.
    2. Connect the wireless LED controller (also known as LED driver unit) of the system with the LED cables of the costume by plugging it into the proper port. Turn the device on and set it on the streaming mode.
    3. Turn on the server of the motion-capture system.
    4. Open a web-browser, visit the server address, and sign-in (sign-in info must be provided by the company upon purchase of the product).
    5. Calibrate the system as needed (for example, calibrate if this is the first time the equipment is used; otherwise move to step 2.1.17).
    6. Open the calibration tool of the motion-capture system and select Calibration Wizard.
    7. Make sure that the entry of the server number in the text-field on the left-upper side of the interface is correct and click Continue.
    8. Connect the wand to the first port of the LED controller and turn ON the controller and click Continue. Once the wand is connected, its LED markers will be turned on and will appear on the display, in the camera views.
    9. Place the wand in the center of the camera view-field, confirm that it can be recorded by the cameras, and click Continue.
    10. Move the wand throughout the space by keeping it vertical and drawing cylinders. Make sure that the motion is captured by at least 3 cameras every time and is registered on the view field of each camera making it green. Do this for all cameras.
    11. Once the view-field of each camera has been fully registered (it is all green), click Continue and wait for calibration computations to be executed.
      ​NOTE: Once calibration is completed, the camera locations along with the LED markers will be seen on the display, as they are physically placed in the room. At this point, the user may stop here, as the calibration is done, or continue to align the system.
    12. Hold the wand vertically and place the side with the LED closer to the end of the wand on the ground, where the origin of the 3D space must be set (point (0,0,0)).
    13. Hold the wand stable until registered. Once registered, the screen flashes green. A point indicating the origin of the reference frame on the space will appear on the interface and the next alignment axis, x-axis, will be highlighted green.
    14. Move the wand, maintaining the same orientation (vertically), at the point of the x-axis and hold it stable until registered.
    15. Repeat for the z-axis. Once the point of the z-axis is registered, the calibration is complete.
    16. Click Finish to exit calibration.
    17. Open the interface of the motion-capture system and click Connect to start streaming the data from the LED markers. Once the connection is established, the position of the markers will be displayed on the virtual world of the interface.
    18. Create the virtual skeleton (automatically estimate the bone positions of the body from the position data collected from the LED markers of the costume, as shown in Figure 8, step 2).
    19. Right click on Skeletons on the right side of the window and select New skeleton.
    20. Choose Marker Mapping and then select the proper file (provided by the company based on the interface version that is used). Then, click OK.
    21. Ask the participant to hold a stable T-pose (standing upright with arms extended out to the sides).
    22. Right click on skeleton and select Generate skeleton without training.
    23. If all steps are performed correctly, the skeleton will be generated. Ask the participant to move and check how accurately the virtual skeleton follows the participant's movements.
    24. To stream the skeleton data to LSL, select Settings and Options from the main menu.
    25. Open the Owl emulator and click Start to begin live streaming.
  2. Setup of EEG equipment - CNS
    1. Help the same participant to wear the EEG head-cap.
    2. Place the gel electrodes (the traditional gel-based electrodes used with the EEG head-cap) on the head-cap and 2 sticky electrodes (electrodes that work like stickers) on the back side of the right ear for the CMS and DRL sensors.
    3. Fill electrodes with high-conductive gel, as needed, to improve conductivity between the sensor and the scalp.
    4. Connect the electrode cables to the gel electrodes and to the two sticky electrodes.
    5. Stick the wireless monitor on the back of the head-cap and plug in the electrode cables.
    6. Turn on the monitor.
    7. Open the interface of the EEG system.
    8. Select Use Wi-Fi device and click Scan for devices.
    9. Select NE Wi-Fi and Use this device.
    10. Click on the head icon, select a protocol that allows the recording of all 32 sensors, and click Load.
    11. Make sure that the streamed data of each channel are displayed on the interface.
  3. Setup of ECG equipment- ANS
    1. Follow the exact steps presented in section 2.2, but use channel O1 to connect to the heart rate (HR) extension.
    2. Use a sticky electrode to stick the other end of the extension right below the left ribcage.
  4. Preparation of LSL for synchronized recording and streaming of kinematic data.
    1. Run the LSL application for the motion-capture system by double-clicking on the corresponding icon. Locate the application on the following path of the LSL folder, LSL\labstreaminglayer-master\Apps\PhaseSpace.
    2. On the interface, set the proper server address.
    3. Then, select File and Load configuration.
    4. Select the proper configuration file (it must be provided by the company based on the product version that is used)
    5. Click Link. If no mistakes are made, then no error message will be displayed.
  5. Prepare LSL for synchronized recording and streaming of EEG and ECG data. No extra steps are required for this equipment.
  6. Setup of LSL
    1. Run LabRecorder application by double clicking on the file located in the LSL\labstreaminglayer-master\Apps\LabRecorder path of the LSL folder.
    2. Click Update. If all instructions are correctly executed, all data types of the motion-capture and EEG system will be seen on the panel Record for streams.
    3. Select directory and name for the data on Storage location panel.
    4. Click Start. The data collection of the motion-capture and EEG system will begin synchronously.
    5. At the end of the recording click Stop. If recording was successful, the data will be located on the directory previously selected. Open the files to confirm that they include the recorded information.
  7. Real-time analyses and monitoring of the human system.
    1. Execute the MATLAB, Python, or other code that receives, processes, and augments the streamed data. Example codes corresponding to the representative examples described in the following sections can be found here: https://github.com/VilelminiKala/CloseLoopInterfaceJOVE (a minimal receiver sketch is also shown after this protocol section).
  8. Generation of the augmented sensory feedback
    1. Produce the sensory output using the proper device (e.g., speakers, monitor, among others).
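For readers who prefer a code-level view of step 2.7, the fragment below sketches how two of the synchronized LSL streams could be pulled into Python and compared on a common clock. The stream types ("MoCap" and "EEG") and the use of pylsl are assumptions for illustration; the actual processing code is in the repository linked in step 2.7.1.

```python
# Hedged sketch of step 2.7: pull two LSL streams (e.g., motion capture and EEG)
# and align their samples on the local clock via LSL time correction.
from pylsl import StreamInlet, resolve_byprop

mocap_inlet = StreamInlet(resolve_byprop('type', 'MoCap', timeout=10)[0])
eeg_inlet = StreamInlet(resolve_byprop('type', 'EEG', timeout=10)[0])

while True:
    mocap_chunk, mocap_ts = mocap_inlet.pull_chunk(timeout=0.0)
    eeg_chunk, eeg_ts = eeg_inlet.pull_chunk(timeout=0.0)
    if mocap_ts and eeg_ts:
        # Adding each inlet's time_correction() maps its timestamps onto the local
        # clock, so frames can be matched across modalities despite different rates.
        t_mocap = mocap_ts[-1] + mocap_inlet.time_correction()
        t_eeg = eeg_ts[-1] + eeg_inlet.time_correction()
        print(f"latest mocap/EEG offset: {t_mocap - t_eeg:.4f} s")
```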

3. Experimental procedure

  1. Follow the experimental procedure that is defined by the setup, if any.
    NOTE: The closed-loop interfaces are designed to be intuitively explored and learned. Thus, most of the time no instructions are needed.

Representative Results

There are various interfaces that can be built based on the protocol presented in the previous section, and they can be applied to different populations for numerous purposes. Some possible variations are described in section "Variations of the Presented Close-Loop Interface" of the Supplementary Material.

In this section, we demonstrate representative results of two sample closed-loop interfaces that follow the protocol described in the previous section. The setup, the experimental procedure, and the participants of these studies are explained in depth in sections "Example 1: Audio Close-loop Interface of a Real Dyadic Interaction" and "Example 2: Audio-visual Close-loop Interface of an Artificial Dyadic Interaction" of the Supplementary File.

Results of the Audio Closed-Loop Interface of a Real Dyadic Interaction
In the study of "Audio close-loop interface of a real dyadic interaction" (presented in detail in section "Example 1: Audio Close-loop Interface of a Real Dyadic Interaction" of the Supplementary Material), we used a proxy control interface, illustrated in Figure 4, which uses the female dancer's heart signal to alter the music being danced to. In real time, we performed signal processing to extract the timing of the heartbeat and streamed this information to the Max system to alter the speed of the performed song. In this way, we played the song back altered by the biophysical signals. This process led to further alterations of the motion and heartbeat signals.

Figure 4
Figure 4: The audio-based closed-loop interface. 1. An ECG-HR wearable device monitors the activity of a salsa dancer during the performance of her routines and feeds the signals to the interface at 500 Hz. 2. Our interface analyzes the ECG data in real time: in each frame, it filters the raw data, extracts the R-peaks of the QRS complex, and streams the peak detections to Max. 3. A third-party interface blends the speed of the audio with the speed of the heart rate. 4. The altered song is played back to the dancers.
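A minimal sketch of the processing in steps 1-3 of Figure 4 is given below: R-peaks are detected in each ECG window and the instantaneous heart rate is sent to Max over OSC so that it can rescale the song's tempo. The OSC address and port, the peak-detection thresholds, and the use of the python-osc package are assumptions made for illustration, not the patch used in the study.

```python
# Hedged sketch of Figure 4, steps 1-3: detect R-peaks in an ECG window and send
# the instantaneous heart rate to Max over OSC (address/port are assumed values).
import numpy as np
from scipy.signal import find_peaks
from pythonosc.udp_client import SimpleUDPClient

FS = 500                                        # ECG sampling rate of the wearable (Hz)
client = SimpleUDPClient("127.0.0.1", 7400)     # Max assumed to listen on this UDP port

def process_window(ecg_window):
    ecg = np.asarray(ecg_window) - np.mean(ecg_window)
    # R-peaks: tall maxima separated by at least ~0.4 s (refractory period)
    peaks, _ = find_peaks(ecg, height=2 * np.std(ecg), distance=int(0.4 * FS))
    if len(peaks) > 1:
        rr = np.diff(peaks) / FS                # R-R intervals in seconds
        hr_bpm = 60.0 / np.mean(rr)
        client.send_message("/heart/rate", float(hr_bpm))   # Max maps this onto tempo
```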

Two salsa dancers interacted with the interface and performed a well-rehearsed routine (staged choreography) and a spontaneously improvised dance. The dancers performed once to the original version of the song and also to a version that blended the original tempo of the song with the real-time heartbeat stream. We refer to the latter version, which was performed twice, as alterations 1 and 2 of the song.

In the analysis presented below, we used the recorded heart and audio signals. The peaks of the two signals were extracted to estimate MMS trains (see section "Micro-Movement Spikes" in the Supplementary File), which preserve the high-frequency fluctuations, as shown in Figure 5.

Figure 5
Figure 5: Estimation of the MMS trains in the audio closed-loop system. The ECG time series is used to extract the R-peaks and the amplitude deviations from the overall (empirically estimated) mean amplitude of the R-peaks (mean-shifted data). Then, normalization by equation 1 (see Supplementary File, section "Micro-Movement Spikes") is used to obtain the MMS trains. Similar methods are used to handle the audio waveforms and play the song back according to the person's real-time performance.
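To make Figure 5 concrete, the sketch below extracts peak amplitudes, mean-shifts them, and applies a unitless normalization of the form peak / (peak + local average), which mirrors the micro-movement normalization used in this line of work; the exact equation 1 is defined in the Supplementary File, and the window sizes and thresholds here are illustrative assumptions.

```python
# Hedged sketch of MMS estimation from R-peak amplitudes (Figure 5). The
# normalization peak / (peak + local average) stands in for equation 1 of the
# Supplementary File; thresholds and window sizes are illustrative.
import numpy as np
from scipy.signal import find_peaks

def mms_from_peaks(signal, fs, min_separation_s=0.4):
    signal = np.asarray(signal, dtype=float)
    peaks, props = find_peaks(signal, distance=int(min_separation_s * fs),
                              height=np.mean(signal))
    amplitudes = props["peak_heights"]
    deviations = np.abs(amplitudes - amplitudes.mean())    # mean-shifted peak amplitudes
    local_avg = np.array([np.mean(np.abs(signal[max(0, p - int(fs)):p + int(fs)]))
                          for p in peaks])                  # average activity around each peak
    mms = deviations / (deviations + local_avg)             # scale-free spikes in [0, 1)
    return peaks / fs, mms                                   # spike times (s) and MMS values
```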

The MMS trains were well characterized as a continuous random process, well represented by the continuous Gamma family of probability distributions. MLE deemed this continuous family of distributions the best fit for both data sets (see the explanation in section "Gamma Distribution" of the Supplementary Material and Supplementary Figure 2). This type of random process was used to track the shifts in the stochastic signatures of the biorhythms self-generated by the human nervous systems.

From the empirically estimated shape and scale Gamma parameters, we obtain the Gamma moments: the mean, the variance, the skewness, and the kurtosis (see details of the analysis in section "Stochastic Analysis" of the Supplementary Material). We then plot the estimated PDF. Figure 6 focuses only on the heart signal and the music, but the methods apply similarly to the other biorhythms generated by the kinematics signals presented in41.
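A minimal sketch of this estimation step is shown below: a Gamma family is fit to an MMS train by MLE, and the four Gamma moments are computed from the estimated shape and scale. Fixing the location parameter at zero and using the closed-form moment expressions are standard choices for the Gamma family; the function and variable names are illustrative.

```python
# Hedged sketch: MLE fit of the continuous Gamma family to an MMS train and the
# four Gamma moments plotted in Figure 6C (location fixed at zero, since MMS >= 0).
import numpy as np
from scipy import stats

def gamma_signatures(mms):
    shape, _, scale = stats.gamma.fit(mms, floc=0)   # maximum-likelihood estimates
    return {"shape": shape,
            "scale": scale,
            "mean": shape * scale,
            "variance": shape * scale ** 2,
            "skewness": 2.0 / np.sqrt(shape),
            "kurtosis": 6.0 / shape}                  # excess kurtosis of the Gamma PDF

# The estimated PDF for a condition can then be plotted as:
#   x = np.linspace(1e-3, 1, 200); y = stats.gamma.pdf(x, shape, scale=scale)
```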

The PDFs of the heart and music signals are shown in Figure 6A-B, where we highlight the differences between the datasets of the two conditions, deliberate routine and spontaneous improvisation. For each condition, we underscore the shifts in stochastic signatures induced by the temporal alterations of the song. Initially, the dancers dance to the original song. Then, as the heartbeat changes rhythms in real time, the sonified fluctuations in this signal lead the dancers to follow the temporal alterations of the song.

These are denoted alteration 1 and alteration 2. These systematic shifts are described by the Gamma parameters. Then, using the empirically estimated shape and scale parameters, we obtained the four corresponding Gamma moments for the heartbeat and the songs. These are displayed in Figure 6C for the heart (top) and the song (bottom) signals.

Figure 6
Figure 6: Inducing systematic changes in the empirically estimated Gamma PDFs and in the stochastic trajectories of the four Gamma moments from the performance under proxy control using the audio closed-loop system. (A) PDFs from the MMS trains of each data type (ECG, top; audio file, bottom) for each of the dance contexts, spontaneous improvisation and deliberate routine. Legends are Imp Or (improvisation original), denoting the baseline condition at the start of the session; Imp Alt1, denoting the improvisation during alteration 1; and Imp Alt2, denoting the improvisation during alteration 2. (B) Likewise, for the deliberate rehearsed routine: Rout Or means routine original; Rout Alt1 means routine alteration 1; Rout Alt2 means routine alteration 2. The panels in (C) show the systematic shifts in the Gamma moments as both the audio signals from the songs and those from the heart shift in tandem and in real time.

The shift of the signatures can be appreciated in these panels (PDF and Gamma moments graphs), thus demonstrating that the methods presented can capture the adaptation of the heart to the alterations of the song that the proxy controller produces in real time. As the songs shift rhythms, so do the stochastic signatures of the heart, and the transition of the stochastic signatures is consistent in direction (which is also a finding in41, where we studied the shape and scale parameters). Likewise, as the heart's signatures shift, so do the song's signatures. These mirroring effects (one affects the other, and as one shifts consistently towards a direction, so does the other) follow from the closed-loop nature of this proxy-controller interface. The results underscore the utility of this setup and give proof of concept that we can systematically shift the person's autonomic biorhythms within the context of dyadic exchange.

Parallel shifts in the stochastic signatures of both the songs and the bodily signals demonstrate that the co-adaptation of the whole system (participant and interface) is possible using the peripheral signals. This process smoothly transpires beneath the person's awareness and offers proof of concept for the idea that a person's bio-signals can be remotely and systematically shifted in correspondence with the external sensory feedback of choice. In summary, we can guide the shifting of the stochastic signatures in this continuous random process. The methods enable us to capture changes, and their rates, along the stochastic trajectories that we were able to build in near real time.

To ascertain the statistical significance of the shifts, we used the non-parametric ANOVA (Kruskal-Wallis test) followed by a multiple-comparisons post hoc test. We compared the signatures of the MMS of the heart data among the six conditions. Figure 7 shows the multiple comparison of the MMS heart data and the corresponding Kruskal-Wallis table. The multiple-comparison plot indicates that there is a significant difference between the baseline condition of the original routine dance (Rout. Or) and the baseline condition of the original improvised dance (Imp. Or). It is also important to notice that the first alterations, Rout. Alt1 and Imp. Alt1, shift to distributions that share comparable means, and the same applies to the second alterations, while the variance, skewness, and kurtosis shift in the Gamma moments space (Figure 6C).

Figure 7
Figure 7: Results from the non-parametric Kruskal-Wallis and multiple-comparisons post hoc tests. The results of the non-parametric ANOVA (Kruskal-Wallis test) applied to the MMS of the heart data to compare the six conditions. The plot demonstrates the multiple comparison of the 6 cases, indicating the significant difference between the "Rout. Or" and "Imp. Or" conditions. The table shows the results of the Kruskal-Wallis test.
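The statistics behind Figure 7 can be sketched as follows. The paper reports a Kruskal-Wallis test followed by a multiple-comparisons post hoc test (the plot style suggests MATLAB's kruskalwallis/multcompare, though this is not stated); the Python fragment below is an equivalent sketch that uses Bonferroni-corrected pairwise Mann-Whitney tests as a stand-in for the post hoc step, with placeholder data in place of the MMS trains.

```python
# Hedged sketch of the analysis in Figure 7: Kruskal-Wallis across six conditions,
# then Bonferroni-corrected pairwise comparisons (placeholder data, not the study's MMS).
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
conditions = {name: rng.gamma(2.0, 0.1, 300)       # placeholder MMS values per condition
              for name in ["Rout Or", "Rout Alt1", "Rout Alt2",
                           "Imp Or", "Imp Alt1", "Imp Alt2"]}

h, p = stats.kruskal(*conditions.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3g}")

pairs = list(combinations(conditions, 2))
for a, b in pairs:
    _, p_pair = stats.mannwhitneyu(conditions[a], conditions[b], alternative="two-sided")
    print(f"{a} vs {b}: Bonferroni-corrected p = {min(1.0, p_pair * len(pairs)):.3g}")
```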

Results of the Audio-Visual Closed-Loop Interface of an Artificial Dyadic Interaction
In the study of "Audio-visual close-loop interface of an artificial dyadic interaction" (presented in detail in section "Example 2: Audio-visual close-loop Interface of an artificial dyadic interaction" of the Supplementary Material), 6 participants interacted with the interface, illustrated in Figure 8, which creates a mirrored avatar rendering the person's own movements. The interface embeds position-dependent sounds within the region surrounding the person during the interaction. The participants were naïve as to the purpose of the study. They had to walk around the room and figure out how to control the sound that would surprisingly emerge as they passed by a region of interest (RoI) that the proxy controller had defined.

Figure 8
Figure 8: The visual representation of the audio-visual interface. 1. A motion-capture system is utilized for the acquisition of the peripheral kinematic data. 2. The system collects the positions of the sensors (in our example, LEDs) to estimate the skeleton (the positions of the bones). 3. The bone positions are then aligned in our MATLAB-developed interface using our own forward-kinematics model. 4. The aligned positions are used to map the skeleton information to our 3D-rendered avatar. 5. The mapping of the streamed data to the avatar occurs in real time, which creates the sensation of looking at the person's mirrored image.

Figure 9 demonstrates the results of the audio-visual interface for condition 1 (see the Supplementary File for more conditions), where the hip location activates the song when the hip is located inside the RoI. This figure shows the PDFs and Gamma signatures (see section "Data Types and Analyses" of the Supplementary Material) of the hip speed data of 6 different control participants (C1 to C6), when they were inside and outside the RoI volume. The outcomes presented here highlight the personalized differences in the adaptation rates of the individual participants, as indicated by the shifts of the stochastic signatures and by the individual outcomes emerging inside or outside the RoI volume. For instance, we can notice that the PDFs fit to the frequency histograms of the MMS derived from the hip speed amplitude of C3 and C4 were more symmetric (higher shape value) and less noisy (lower scale value) when inside the volume. In contrast, the rest of the participants showed the opposite pattern.

Empirically, we have found that signatures towards the lower-right corner are those of athletes and dancers performing highly skilled movements. Signatures lying in the upper-left region come from datasets of nervous systems with pathologies, such as those of individuals with a diagnosis of autism spectrum disorder or ADHD22,32 and those of a deafferented participant21. Within the context of shifting patterns along a stochastic trajectory, we obtain the median values of the shape and scale parameters to define the right lower quadrant (RLQ) and the left upper quadrant (LUQ), where we track the overall quality of the signal-to-noise ratio by accumulating this information over time. The median values that dynamically define these quadrants are updated as the person co-adapts his/her internally generated biorhythms to those externally controlled by the proxy, which nonetheless depend upon the person's internal ones.
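As an illustrative sketch of this quadrant tracking, the fragment below accumulates the empirically estimated shape and scale values, defines the quadrants from their running medians, and counts how many estimates fall in the RLQ versus the LUQ. The axis convention (shape on the x-axis, scale on the y-axis, as in Figure 9) and the function names are assumptions.

```python
# Hedged sketch of RLQ/LUQ tracking on the Gamma parameter plane (shape on x, scale on y).
# Quadrants are defined by the running medians of the accumulated estimates.
import numpy as np

def quadrant_counts(shapes, scales):
    shapes, scales = np.asarray(shapes, float), np.asarray(scales, float)
    med_shape, med_scale = np.median(shapes), np.median(scales)
    rlq = int(np.sum((shapes >= med_shape) & (scales <= med_scale)))  # high shape, low noise
    luq = int(np.sum((shapes < med_shape) & (scales > med_scale)))    # low shape, high noise
    return rlq, luq, (med_shape, med_scale)

# Re-running this after each new window lets the medians (and hence the quadrants)
# track the person's co-adaptation as the session unfolds.
```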

Figure 9
Figure 9: Empirically estimated Gamma PDFs and Gamma signatures of the bodily biorhythms during interactions using the audio-visual closed-loop system. Using the MMS trains derived from the speed of the hips of each participant (C1 - C6), we used MLE to fit the best PDF with 95% confidence intervals. Each participant is represented by a different symbol, while the conditions are represented by different colors. The family of Gamma PDFs when in the volume (in) differs from that outside the volume (out). Besides the empirically estimated Gamma PDFs, the estimated Gamma shape and scale parameters are also shown for each person on the Gamma parameter plane.

Table 1 shows the p-values obtained from the raw (speed) and MMS data, comparing the outcome across conditions when the person's body part is inside the RoI vs. outside the RoI. The results depicted in the table were estimated using the non-parametric ANOVA Kruskal-Wallis test.

Kruskal-Wallis Test   Speed data   MMS
C1                    0            1.34E-05
C2                    0            4.72E-15
C3                    0            8.59E-34
C4                    2.70E-21     3.16E-04
C5                    0            1.11E-09
C6                    0            5.95E-05

Table 1: Output of the non-parametric ANOVA (Kruskal-Wallis) test. The results of the Kruskal-Wallis test comparing the recordings from inside versus outside the RoI for the MMS and the speed data. We applied the test to the data of each participant (C1 - C6) separately.

Supplementary Files.

Discussion

This paper introduces the concept of proxy control via closed-loop, co-adaptive, interactive, multi-modal interfaces that harness, parameterize, and re-parameterize the peripheral signals of the person in the context of dyadic exchange. We aimed at characterizing the stochastic shifts in the fluctuations of the person's biorhythms and at parameterizing the change. Further, we aimed at systematically steering the stochastic signatures of their biorhythms towards targeted levels of noise-to-signal regimes in near real time.

We presented a generic protocol for building a closed-loop interface that satisfies five core elements: 1) the collection of multiple bodily data coming from the CNS, PNS, and ANS using various instruments and technologies; 2) the synchronized recording and streaming of the data; 3) the real-time analysis of the selected signals; 4) the creation of sensory augmentation (audio, visual, etc.) using physiological features extracted from the bodily signals; and 5) the continuous tracking of the human system and parallel sensory augmentation, which closes the loop of the interaction between the human and the system.

The generic protocol was applied to two example interfaces. The first investigates the dyadic exchange between two human agents and the second between a human and an avatar agent. The two types of dyads were used to provide proof of concept that the peripheral signal can be systematically changed in real time and that these stochastic changes can be precisely tracked. One dyad was composed of two participants physically interacting, while the other involved a participant interacting with a virtual agent in the form of a 3D-rendered avatar endowed with the person's motions and with altered variants of these real-time motions. Such alterations were evoked by interactive manipulations driven by auditory and/or visual sensory inputs in a setting of augmented sensations. In both the real dyad and the artificial dyad, we demonstrated the feasibility of remotely shifting the peripheral signals, including bodily biorhythms and autonomic signals from the heartbeat.

We presented new experimental protocols to probe such shifts in efferent motor variability as the kinesthetic signal streams are being manipulated and re-parameterized in near real time. This re-entrant information (kinesthetic reafference48) proved valuable to shift the system's performance in real time. These streams bear information about the action's sensory consequences, which can be precisely tracked using the methods that we presented here.

We also showed data types and statistical methods amenable to standardizing our analyses. We provided multiple visualization tools to demonstrate the real-time changes in physiological activities naturally evolving in different contexts, with empirically guided statistical inference that lends itself to the interpretation of the self-generated and self-controlled nervous system signals. Importantly, the changes that were evoked by the proxy controller were smooth and yet quantifiable, thus lending support to the notion that peripheral activity is useful in more than one way. Because we can implement these methods using commercially available wireless wearable sensors, we can systematically induce changes in performance that are capturable in the biophysical rhythms without stressing the system. It is important to translate our methods to the clinical arena and use them as a testbed to develop new intervention models (e.g., as when using augmented reality in autism49). In such models, we will be able to track and quantify the sensory consequences of the person's naturalistic actions, as the sensory inputs are precisely manipulated and the output is parameterized and re-parameterized in near real time.

We offer this protocol as a general model to utilize various biorhythmic activities self-generated by the human nervous systems and harnessed non-invasively with wireless wearables. Although we used a set of biosensors to register EEG, ECG, and kinematics in this paper, the methods of recording, synchronizing, and analyzing the signals are general. The interface can thus incorporate other technologies. Furthermore, the protocols can be modified to include other naturalistic actions and contexts that extend to the medical field. Because we have aimed for natural behaviors, the setup that we have developed can be used in playful settings (e.g., involving children and parents).

Several disorders of the nervous system could benefit from such playful approaches to the control problem. In both types of dyadic interactions that we showed here, the participants could aim at consciously controlling the music, while the proxy controller uses the peripheral output to unconsciously manipulate and systematically shift its signatures. Because scientists have spent years empirically mapping the Gamma parameter plane and the corresponding Gamma moments space across different age groups (neonates to 78 years of age)19,50,51,52,53 and conditions (autism, Parkinson's disease, stroke, coma state, and deafferentation), for different levels of control (voluntary, automatic, spontaneous, involuntary, and autonomic)25,47,54, they have empirically measured criteria denoting where on the Gamma spaces the stochastic signatures should lie for good predictive control. Previous research has also shown where the parameters lie in the presence of spontaneous random noise coming from the self-generated rhythms of the human nervous systems7,19,55,56. Within an optimization schema minimizing biorhythmic motor noise, we can thus aim at driving the signals in such a way as to attain the targeted areas of the Gamma spaces where the shape and dispersion signatures of each person's family of PDFs are conducive to a high signal-to-noise ratio and high predictive value. In this sense, we do not lose gross data, but rather use it effectively to drive the system towards desirable levels of noise within a given situation.

Dyadic interactions are ubiquitous in clinical or training settings. They may occur between the trainer and the trainee; the physician and the patient; the clinical therapist and the patient; and they may also occur in research settings that involve translational science and engage the researcher and the participant. One of the advantages of the present protocols is that, while they are designed for dyads, they are also personalized. As such, it is possible to tailor the co-adaptive interactions to the person's best capabilities and predispositions, according to their ranges of motion, their ranges of sensory processing times, and while considering the ranges in signal amplitude across the functional hierarchy of the person's nervous systems. As the stochastic trajectory emerges and evolves in time, it is also possible to ascertain the rates of change of the signatures and use that time series to forecast several impending events, along with possible sensory consequences.

Finally, closed-loop interfaces could even be used in the art world. They could offer performing artists new avenues to generate computationally driven forms of modern dance, technology dance, and new forms of visualization and sonification of bodily expression. In such contexts, the dancer's body can be turned into a sensory-driven instrument to flexibly explore different sensory modalities through the sonification and visualization of the self-generated biorhythmic activities, as shown by prior work in this area40,41,43,46. Such a performance could augment the role of a dancer on stage and let the audience experience subtle bodily signals beyond visible movement.

Several aspects of this technology require further development and testing to optimize their use in real-time settings. The synchronous streaming demands high-speed CPU/GPU power and memory capacity to really exploit the notion of gaining time and being a step ahead when predicting the sensory consequences of the ongoing motor commands. Sampling rates of the equipment should be comparable in order to be able to truly align the signals, perform proper sensory fusion and explore the transmission of information through the different channels of the nervous system. These are some of the limitations present in this new interface.

All in all, this work offers a new concept to improve the control of our bodily system while employing subliminal means that nonetheless allow for systematic, standardized outcome measurements of stochastic change.

Disclosures

Methods covered by:

•US20190333629A1 “Methods for the diagnosis and treatment of neurological disorders”

•EP3229684B1 “Procédés de mesure d'un mouvement physiologiquement pertinent”

•US20190261909A1 “System and method for determining amount of volition in a subject” 

•US202110989122 “System and Method for measuring physiologically relevant motion”

Acknowledgments

We thank the students who volunteered their time to help perform this research; Kan Anant and PhaseSpace Inc. for providing us with images and videos necessary to describe the setup; and Neuroelectrics for allowing us to use material from the channel www.youtube.com/c/neuroelectrics/ and their manuals. Finally, we thank Prof. Thomas Papathomas from the Rutgers Center for Cognitive Science for professional support during the submission stages of this manuscript, the Nancy Lurie Marks Family Foundation Career Development Award to EBT, and the Gerondelis Foundation Award to VK.

CONTRIBUTIONS
Conceptualization, VK and EBT; methodology, EBT; software, VK, EBT, and SK; validation, VK and SK; formal analysis, VK; investigation, VK, EBT, and SK; resources, EBT; data curation, VK; writing—original draft preparation, EBT; writing—review and editing, VK and SK; visualization, VK and EBT; supervision, EBT; project administration, EBT; funding acquisition, EBT. All authors have read and agreed to the published version of the manuscript.

Materials

Name | Company | Catalog Number | Comments
Enobio 32 | Enobio | | Hardware for EEG data collection
Enobio ECG Extension | Enobio | | Hardware for ECG data collection
LabStreamingLayer (LSL) | | | Synchronization and streaming of data
Matlab | MathWorks | | Analysis and processing of data
Max | Cycling'74 | | Sonification of bodily information
NIC.2 | Enobio | | Software for EEG and ECG data collection
PhaseSpace Impulse | PhaseSpace | | Hardware for collection of the kinematic data (position, speed, acceleration)
Python3 | Python | | Analysis and processing of data
Recap | PhaseSpace | | Software for collection of the kinematic data (position, speed, acceleration)


References

  1. Kawato, M., Wolpert, D. Internal models for motor control. Novartis Foundation Symposium. 218, 291-304 (1998).
  2. Wolpert, D. M., Kawato, M. Multiple paired forward and inverse models for motor control. Neural Networks. 11 (7-8), 1317-1329 (1998).
  3. Wolpert, D. M., Miall, R. C., Kawato, M. Internal models in the cerebellum. Trends in Cognitive Sciences. 2 (9), 338-347 (1998).
  4. Todorov, E., Jordan, M. I. Optimal feedback control as a theory of motor coordination. Nature Neuroscience. 5 (11), 1226-1235 (2002).
  5. Todorov, E. Optimality principles in sensorimotor control. Nature Neuroscience. 7 (9), 907-915 (2004).
  6. Torres, E. B. Theoretical Framework for the Study of Sensori-motor Integration. , University of California, San Diego. (2001).
  7. Harris, C. M., Wolpert, D. M. Signal-dependent noise determines motor planning. Nature. 394 (20), 780-784 (1998).
  8. Torres, E. B., Zipser, D. Reaching to Grasp with a Multi-jointed Arm (I): A Computational Model. Journal of Neurophysiology. 88, 1-13 (2002).
  9. Carmena, J. M., et al. Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biology. 1 (2), 42 (2003).
  10. Hwang, E. J., Andersen, R. A. Cognitively driven brain machine control using neural signals in the parietal reach region. 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology. 2010, 3329-3332 (2010).
  11. Andersen, R. A., Kellis, S., Klaes, C., Aflalo, T. Toward more versatile and intuitive cortical brain-machine interfaces. Current Biology. 24 (18), 885-897 (2014).
  12. Contreras-Vidal, J. L., et al. Powered exoskeletons for bipedal locomotion after spinal cord injury. Journal of Neural Engineering. 13 (3), 031001 (2016).
  13. Choi, K., Torres, E. B. Intentional signal in prefrontal cortex generalizes across different sensory modalities. Journal of Neurophysiology. 112 (1), 61-80 (2014).
  14. Hwang, E. J., Andersen, R. A. Cognitively driven brain machine control using neural signals in the parietal reach region. 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology. 2010, 3329-3332 (2010).
  15. Andersen, R. A., Kellis, S., Klaes, C., Aflalo, T. Toward more versatile and intuitive cortical brain-machine interfaces. Current Biology. 24 (18), 885-897 (2014).
  16. Contreras-Vidal, J. L., et al. Powered exoskeletons for bipedal locomotion after spinal cord injury. Journal of Neural Engineering. 13 (3), 031001 (2016).
  17. Choi, K., Torres, E. B. Intentional signal in prefrontal cortex generalizes across different sensory modalities. Journal of Neurophysiology. 112 (1), 61-80 (2014).
  18. Yanovich, P., Isenhower, R. W., Sage, J., Torres, E. B. Spatial-orientation priming impedes rather than facilitates the spontaneous control of hand-retraction speeds in patients with Parkinson's disease. PLoS One. 8 (7), 66757 (2013).
  19. Torres, E. B., Cole, J., Poizner, H. Motor output variability, deafferentation, and putative deficits in kinesthetic reafference in Parkinson's disease. Frontiers in Human Neuroscience. 8, 823 (2014).
  20. Torres, E. B., et al. Toward Precision Psychiatry: Statistical Platform for the Personalized Characterization of Natural Behaviors. Frontiers in Human Neuroscience. 7, 8 (2016).
  21. Torres, E. B., Cole, J., Poizner, H. Motor output variability, deafferentation, and putative deficits in kinesthetic reafference in Parkinson's disease. Frontiers in Human Neuroscience. 8, 823 (2014).
  22. Torres, E. B., et al. Autism: the micro-movement perspective. Frontiers in Integrative Neuroscience. 7, 32 (2013).
  23. Ryu, J., Vero, J., Torres, E. B. MOCO '17: Proceedings of the 4th International Conference on Movement Computing. , ACM. 1-8 (2017).
  24. Brincker, M., Torres, E. B. Noise from the periphery in autism. Frontiers in Integrative Neuroscience. 7, 34 (2013).
  25. Torres, E. B. Two classes of movements in motor control. Experimental Brain Research. 215 (3-4), 269-283 (2011).
  26. Torres, E. B. Signatures of movement variability anticipate hand speed according to levels of intent. Behavioral and Brain Functions. 9, 10 (2013).
  27. Gramann, K., et al. Cognition in action: imaging brain/body dynamics in mobile humans. Reviews in the Neurosciences. 22 (6), 593-608 (2011).
  28. Makeig, S., Gramann, K., Jung, T. P., Sejnowski, T. J., Poizner, H. Linking brain, mind and behavior. International Journal of Psychophysiology. 73 (2), 95-100 (2009).
  29. Ojeda, A., Bigdely-Shamlo, N., Makeig, S. MoBILAB: an open source toolbox for analysis and visualization of mobile brain/body imaging data. Frontiers in Human Neuroscience. 8, 121 (2014).
  30. Casadio, M., et al. IEEE International Conference on Rehabilitation Robotics. , IEEE Xplore. ed IEEE (2019).
  31. Pierella, C., Sciacchitano, A., Farshchiansadegh, A., Casadio, M., Mussaivaldi, S. A. 7th IEEE International Conference on Biomedical Robotics and Biomechatronics. , IEEE Xplore. (2018).
  32. Torres, E. B. Two classes of movements in motor control. Experimental Brain Research. 215 (3-4), 269-283 (2011).
  33. Purves, D. Neuroscience. Sixth edition. , Oxford University Press. (2018).
  34. Torres, E. B. Signatures of movement variability anticipate hand speed according to levels of intent. Behavioral and Brain Functions. 9, 10 (2013).
  35. Torres, E. B. The rates of change of the stochastic trajectories of acceleration variability are a good predictor of normal aging and of the stage of Parkinson's disease. Frontiers in Integrative Neuroscience. 7, 50 (2013).
  36. Torres, E. B., Vero, J., Rai, R. Statistical Platform for Individualized Behavioral Analyses Using Biophysical Micro-Movement Spikes. Sensors (Basel). 18 (4), (2018).
  37. Torres, E. B. Progress in Motor Control. , Springer. 229-254 (2016).
  38. Torres, E. B., Yanovich, P., Metaxas, D. N. Give spontaneity and self-discovery a chance in ASD: spontaneous peripheral limb variability as a proxy to evoke centrally driven intentional acts. Frontiers in Integrative Neuroscience. 7, 46 (2013).
  39. Gramann, K., Jung, T. P., Ferris, D. P., Lin, C. T., Makeig, S. Toward a new cognitive neuroscience: modeling natural brain dynamics. Frontiers in Human Neuroscience. 8, 444 (2014).
  40. Kalampratsidou, V., Torres, E. B. Bodily Signals Entrainment in the Presence of Music. Proceedings of the 6th International Conference on Movement and Computing. , (2019).
  41. Kalampratsidou, V., Torres, E. B. Sonification of heart rate variability can entrain bodies in motion. Proceedings of the 7th International Conference on Movement and Computing. , Article 2 (2020).
  42. Kalampratsidou, V. Co-adaptive multimodal interface guided by real-time multisensory stochastic feedback. , Rutgers University-School of Graduate Studies. (2018).
  43. Kalampratsidou, V., Kemper, S., Torres, E. B. Real time streaming and closed loop co-adaptive interface to steer multi-layered nervous systems performance. 48th Annual Meeting of Society for Neuroscience. , (2018).
  44. Kalampratsidou, V., Torres, E. B. Body-brain-avatar interface: a tool to study sensory-motor integration and neuroplasticity. Fourth International Symposium on Movement and Computing, MOCO. 17, (2017).
  45. Kalampratsidou, V., Torres, E. B. Peripheral Network Connectivity Analyses for the Real-Time Tracking of Coupled Bodies in Motion. Sensors (Basel). 18 (9), (2018).
  46. Kalampratsidou, V., Zavorskas, M., Albano, J., Kemper, S., Torres, E. B. Dance from the heart: A dance performance of sounds led by the dancer's heart. Sixth International Symposium on Movement and Computing. , (2019).
  47. Kalampratsidou, V., Torres, E. B. Outcome measures of deliberate and spontaneous motions. Proceedings of the 3rd International Symposium on Movement and Computing. , 9 (2016).
  48. Von Holst, E., Mittelstaedt, H. Perceptual Processing: Stimulus equivalence and pattern recognition. Dodwell, P. C. , Appleton-Century-Crofts. 41-72 (1950).
  49. Torres, E. B., Yanovich, P., Metaxas, D. N. Give spontaneity and self-discovery a chance in ASD: spontaneous peripheral limb variability as a proxy to evoke centrally driven intentional acts. Frontiers in Integrative Neuroscience. 7, 46 (2013).
  50. Torres, E. B. Objective Biometric Methods for the Diagnosis and Treatment of Nervous System Disorders. , Academic Press, Elsevier. (2018).
  51. Torres, E. B., et al. Toward Precision Psychiatry: Statistical Platform for the Personalized Characterization of Natural Behaviors. Frontiers in Neurology. 7, 8 (2016).
  52. Wu, D., et al. How doing a dynamical analysis of gait movement may provide information about Autism. APS March Meeting Abstracts. , (2017).
  53. Torres, E. B., et al. Characterization of the statistical signatures of micro-movements underlying natural gait patterns in children with Phelan McDermid syndrome: towards precision-phenotyping of behavior in ASD. Frontiers in Integrative Neuroscience. 10, 22 (2016).
  54. Kalampratsidou, V. Peripheral Network Connectivity Analyses for the Real-Time Tracking of Coupled Bodies in Motion. Sensors. , (2018).
  55. Brincker, M., Torres, E. B. Noise from the periphery in autism. Frontiers in Integrative Neuroscience. 7, 34 (2013).
  56. Torres, E. B., Denisova, K. Motor noise is rich signal in autism research and pharmacological treatments. Scientific Reports. 6, 37422 (2016).


Cite this Article


Kalampratsidou, V., Kemper, S., Torres, E. B. Real-Time Proxy-Control of Re-Parameterized Peripheral Signals using a Close-Loop Interface. J. Vis. Exp. (171), e61943, doi:10.3791/61943 (2021).
