Brain Computer Interface Systems for Neurorobotics: Methods and Applications
Research Article | Open Access
Virtual and Actual Humanoid Robot Control with Four-Class Motor-Imagery-Based Optical Brain-Computer Interface
Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training.
1. Introduction

The ability to direct a robot using only human thoughts could provide a powerful mechanism for human-robot interaction with a wide range of potential applications, from medical to search-and-rescue to industrial manufacturing. As robots become more integrated into our everyday lives, from robotic vacuums to self-driving cars, it will also become more important for humans to be able to reliably communicate with and control them. Current robots are difficult to control, often requiring a large degree of autonomy (which is still an area of active research) or a complex series of commands entered through button presses or a computer terminal. Using thoughts to direct a robot’s actions via a brain-computer interface (BCI) could provide a more intuitive way to issue instructions to a robot. This could augment current efforts to develop semiautonomous robots capable of working in environments unsafe for humans, which was the focus of a recent DARPA robotics challenge. A brain-controlled robot could also be a valuable assistive tool for restoring communication or movement in patients with a neuromuscular injury or disease.
The ideal, field-deployable BCI system should be noninvasive, safe, intuitive, and practical to use. Many previous studies have focused on electroencephalography (EEG) and, to a lesser extent, functional magnetic resonance imaging (fMRI). Using these traditional neuroimaging tools, various proof-of-concept BCIs have been built to control the navigation of humanoid (i.e., human-like) robots [3–9], wheeled robots [10–12], flying robots [13, 14], robotic wheelchairs, and assistive exoskeletons. More recently, functional near-infrared spectroscopy (fNIRS) has emerged as a good candidate for next-generation BCIs, as fNIRS measures the hemodynamic response similar to fMRI [17, 18] but with miniaturized sensors that can be used in field settings and even outdoors [19, 20]. It also provides a balanced trade-off between temporal and spatial resolution, compared to fMRI and EEG, that sets it apart and presents unique opportunities for investigating new approaches, mental tasks, information content, and signal processing for the development of new BCIs. Several fNIRS-based BCI systems have already been investigated for use in robot control [22–27].
Motor imagery, or the act of imagining moving the body while keeping the muscles still, has been a popular choice for use in BCI studies [3, 11, 13, 22, 23, 28–37]. It is a naturalistic task, highly related to actual movements, which could make it a good choice for a BCI input. While motor-execution tasks produce activation levels that are easier to detect, motor imagery is often preferred because issues with possible proprioceptive feedback can be avoided. EEG BCIs have shown success with up to four classes, typically right hand, left hand, feet, and tongue [11, 28, 29]. Other studies have shown potential for EEG to detect differences between right and left foot or leg motor imagery [39, 40] and even individual fingers. Studies have also used fNIRS to detect motor-imagery tasks, with many focusing on a single hand versus resting state, left hand versus right hand [31, 32], or three motor-imagery tasks and rest. Shin and Jeong used fNIRS to detect left and right leg movement tasks in a four-class BCI, and in prior studies we presented preliminary offline classification results using left and right foot tasks separately in a four-class motor-imagery-based fNIRS-BCI [22, 23]. fNIRS has also been used to examine differences in motor imagery due to force of hand clenching or speed of tapping.
Many factors can affect the quality of recorded motor-imagery data. Kinesthetic motor imagery (i.e., imagining the feeling of the movement) has shown higher activation levels in the motor cortex than visual motor imagery (i.e., visualizing the movement) [43, 44]. Additionally, individual participants have varying levels of motor-imagery skill, which also affects the quality of the BCI [45–47]. In some participants, the use of feedback during motor-imagery training can increase the brain activation levels produced during motor imagery [48, 49].
Incorporating robot control into a BCI provides visual feedback and can increase subject motivation. Improved motivation and feedback, both visual and auditory, have demonstrated promise for reducing subject training time and improving BCI accuracy [50, 51]. The realism of feedback provided by a BCI may also have an effect on subject performance during motor imagery. For example, Alimardani et al. found a difference in subject performance in a follow-up session after receiving feedback from viewing a robotic gripper versus a lifelike android arm.
In this study, we report the first online results of a four-class motor-imagery-based fNIRS-BCI used to control both a virtual and physical robot. The four tasks used were imagined movement of upper and lower limbs: the left hand, left foot, right foot, and right hand. To the best of our knowledge, this is the first online four-class motor-imagery-based fNIRS-BCI, as well as the first online fNIRS-BCI to use left and right foot as separate tasks. We also examine the differences in oxygenated hemoglobin (HbO) activation between the virtual and physical-robot BCIs in an offline analysis.
2. Materials and Methods
Participants attended two training sessions, to collect data to train an online classifier for the BCI, followed by a third session in which they used the BCI to control the navigation of both a virtual and actual robot. This section outlines the methods used for data collection, the design of the BCI, and offline analysis of the collected data following the completion of the BCI experiment.
2.1. Participants

Thirteen healthy participants volunteered to take part in this experiment. Subjects were aged 18–35, right-handed, English-speaking, and with vision correctable to 20/20. No subjects reported any physical or neurological disorders or were on medication. The experiment was approved by the Drexel University Institutional Review Board, and participants were informed of the experimental procedure and provided written consent prior to participating.
2.2. Data Acquisition
Data were recorded using fNIRS as described in our previous study. fNIRS is a noninvasive, relatively low-cost, portable, and potentially wireless optical brain imaging technique. Near-infrared light is used to measure changes in HbO and HbR (deoxygenated hemoglobin) levels due to the rapid delivery of oxygenated blood to active cortical areas through neurovascular coupling, known as the hemodynamic response.
Participants sat in a desk chair facing a computer monitor. They were instructed to sit with their feet flat on the floor and their hands in their lap or on chair arm rests with palms facing upwards. Twenty-four optodes (measurement locations) over the primary and supplementary motor cortices were recorded using a Hitachi ETG-4000 optical topography system, as shown in Figure 1. Each location recorded HbO and HbR levels at a 10 Hz sampling rate.
2.3. Experiment Protocol
Motor-imagery and motor-execution data were recorded in three one-hour-long sessions on three separate days. The first two sessions were training days, used to collect initial data to train a classifier, and the third day used this classifier in a BCI to navigate both a virtual and physical robot to the goal location in a series of rooms. The two robots are described below in Section 2.3.3 Robot Control. The training session protocol included five tasks: a “rest” task and tapping of the right hand, left hand, right foot, and left foot. This protocol expands on a preliminary study reported previously [22, 23]. Data collection for the two training days has been described previously.
Subjects performed all five tasks during the two training days (rest, along with the (actual or imagined) tapping of the right hand, left hand, right foot, and left foot). During the third session, only the four motor-imagery tasks were used to control the BCI.
Participants were instructed to self-pace their real or imagined movements at once per second for the duration of the trial. The hand-tapping task involved curling and uncurling the fingers towards the palm as if squeezing an imaginary ball, while the foot-tapping task involved raising and lowering the toes while keeping the heel on the floor. While resting, subjects were instructed to relax their mind and refrain from moving. During motor-imagery tasks, subjects were instructed to refrain from moving and to use kinesthetic imagery (i.e., imagine the feelings and sensations of the movement).
Each trial consisted of 9 seconds of rest, a 2-second cue indicating the type of upcoming task, and a 15-second task period. During the two training sessions, the cue text indicated a specific task (e.g., “Left Foot”), while, during the robot-control task, it read “Free Choice,” indicating that the subject should choose the task corresponding to the desired action of the robot. Trials during the training days ended with a 4-second display indicating that the task period had ended. During the robot-control session, the task was followed by a reporting period so that the subject could indicate which task they had performed. The BCI then predicted which task the user had performed and sent the corresponding command to the robot, which executed the requested action. The timings for training and robot-control days are shown in Figure 2.
2.3.2. Session Organization
In total, 60 motor-execution and 150 motor-imagery trials were collected during the training days, and an additional 60 subject-selected motor-imagery trials were recorded during the robot-control portion. Each training day alternated between runs of 10 motor-execution trials and runs of 25 motor-imagery trials, repeated three times as shown in Figure 3; this alternation was intended to reduce subject fatigue and improve participants' ability to perform motor imagery. Each run had an equal number of the five tasks (rest and motor execution or motor imagery of the right hand, left hand, right foot, and left foot) in a randomized order. The third day (robot control) had two runs of 30 motor-imagery tasks, chosen by the user, which were used to control the BCI. The rest and motor-execution tasks were collected for offline analysis and were not used in the online BCI.
2.3.3. Robot Control
The robot-control session had two parts, beginning with control of a virtual robot using the MazeSuite program (http://www.mazesuite.com) [56, 57] and followed by control of the DARwIn-OP (Dynamic Anthropomorphic Robot with Intelligence-Open Platform) robot. The objective in both scenarios was to use the BCI to navigate through a series of three room designs (shown in Figure 4), in which there was a single goal location (a green cube) and an obstacle (a red cube). A room was successfully completed if the user navigated the robot to the green cube, and it failed if the robot touched the red cube. After completion or failure of a room, the subject would advance to the next room. The sequence was designed such that the robot started closer to the obstacle in each successive room to increase the difficulty as the subject progressed. The run ended if the subject completed (or failed) all three rooms or reached the maximum of 30 trials. Each room could be completed in 5 or fewer movements, assuming perfect accuracy from the BCI.
To issue a command, subjects selected the motor-imagery task corresponding to the desired action of the (virtual or physical) robot. The task-to-command mappings were as follows: left foot/walk forward, left hand/turn left 90°, right hand/turn right 90°, and right foot/walk backward. These four tasks were chosen to emulate a common arrow-pad setup, so that each action had a corresponding opposite action that could undo a movement. During BCI control, the original experiment display showed a reminder of the mapping between the motor-imagery tasks and the robot commands. A second monitor to the left of the experiment display showed a first-person view of the experiment room for either the virtual or physical robot. The experiment setup and example display screens are shown in Figure 5.
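As a concrete illustration, the task-to-command mapping above reduces to a lookup table with a built-in notion of opposites. This is a hypothetical sketch, not the authors' code; the task and command identifiers are ours:

```python
# Illustrative sketch of the four-class mapping from predicted motor-imagery
# task to high-level robot command, as described in the text.
TASK_TO_COMMAND = {
    "left_foot":  "walk_forward",
    "left_hand":  "turn_left_90",
    "right_hand": "turn_right_90",
    "right_foot": "walk_backward",
}

# Each command has an opposite that can undo it, mirroring an arrow-pad layout.
OPPOSITE = {
    "walk_forward":  "walk_backward",
    "walk_backward": "walk_forward",
    "turn_left_90":  "turn_right_90",
    "turn_right_90": "turn_left_90",
}

def command_for(task: str) -> str:
    """Map a classifier prediction to the corresponding robot command."""
    return TASK_TO_COMMAND[task]
```

The arrow-pad symmetry means, for example, that a misclassified "walk forward" can always be undone with a subsequent "walk backward".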
The virtual robot was controlled using the built-in control functions of the MazeSuite program [56, 57]. The virtual environment and movements of the virtual robot were designed to replicate as closely as possible the physical room and movements of the DARwIn-OP, allowing the participants to acquaint themselves with the new robot-control paradigm before adding the complexities inherent in using a real robot. The virtual robot could make perfect 90° turns in place, and the forward and backward distance was adjusted to match the relative distance traveled by the DARwIn-OP robot as closely as possible. The goal and obstacle were shown as floating green and red cubes, respectively, that would trigger a success or failure state on contact with the virtual robot.
During the second run, the user controlled the DARwIn-OP in an enclosed area with a green box and red box marking the location of the goal and obstacle, respectively. Success or failure was determined by an experimenter watching the robot during the experiment. The DARwIn-OP is a small humanoid robot that stands 0.455 m tall, has 20 degrees of freedom, and walks on two legs in a similar manner to humans . The robot received high-level commands from the primary experiment computer using TCP/IP over a wireless connection. Control of the DARwIn-OP was handled via a custom-built C++ class that called the robot’s built-in standing and walking functions using prespecified parameters to control the movements at a high level. This class was then wrapped in a Python class for ease of communication with the experiment computer. The head position was lowered from the standard walking pose, in order to give a better view of the goal and obstacle. In order to turn as closely to 90° in place as possible, the robot used a step size of zero for approximately 3 seconds with a step angle of approximately 25° or −25°. When moving forward or backward, the DARwIn-OP used a step size of approximately 1 cm for 2 or 3 seconds, respectively. The exact values were empirically chosen for this particular robot.
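The command link can be pictured as a small TCP client on the experiment computer. The sketch below is an assumption-laden illustration: the command names, the newline framing, and the `send_command` helper are ours; the study's actual wire protocol (handled by a custom C++ class with a Python wrapper) is not specified in the text.

```python
import socket

# Hypothetical sketch of sending one high-level command over TCP/IP.
VALID_COMMANDS = {"walk_forward", "walk_backward", "turn_left_90", "turn_right_90"}

def encode_command(command: str) -> bytes:
    """Frame one high-level command for the wire (newline-delimited ASCII text)."""
    if command not in VALID_COMMANDS:
        raise ValueError(f"unknown command: {command}")
    return (command + "\n").encode("ascii")

def send_command(host: str, port: int, command: str) -> None:
    """Open a connection to the robot, send one framed command, and close."""
    with socket.create_connection((host, port), timeout=5.0) as conn:
        conn.sendall(encode_command(command))
```

In such a design the robot-side wrapper would read one line per trial and invoke the corresponding built-in walking routine with its prespecified step parameters.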
2.4. Data Analysis
In addition to the evaluation of the classifier performance during the online BCI, a secondary offline analysis of the data was performed to further compare the two robot BCIs.
2.4.1. Online Processing
Motor-imagery data from the two training days were used to train a subject-specific classifier to control the BCI during the third day. The rest and motor-execution trials were excluded from the training set, as the BCI only used the four motor-imagery tasks. All data recordings from the training days for HbO, HbR, and HbT (total hemoglobin) were filtered using a 20th-order FIR filter with a 0.1 Hz cutoff. Artifacts and optodes with poor signal quality were noted and removed by the researcher. One subject was excluded from the online results due to insufficient data quality.
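The filtering step above can be sketched as follows. The paper specifies only the filter order and cutoff, so the FIR design method (`scipy.signal.firwin` with its default Hamming window) is our assumption:

```python
import numpy as np
from scipy.signal import firwin, lfilter

FS = 10.0  # Hz, the ETG-4000 sampling rate reported above

def lowpass(signal: np.ndarray, order: int = 20, cutoff_hz: float = 0.1) -> np.ndarray:
    """Low-pass an fNIRS time series with a 20th-order (21-tap) FIR filter.

    A sketch of the filtering described in the text; the window choice is an
    assumption, since the design method is not specified in the paper.
    """
    taps = firwin(order + 1, cutoff_hz, fs=FS)  # order N -> N + 1 taps
    return lfilter(taps, 1.0, signal)
```

Applied to each HbO, HbR, and HbT recording, this passes the slow hemodynamic response while attenuating higher-frequency physiological and instrument noise.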
In addition to using only the low-pass filter, a variety of preprocessing methods were evaluated: correlation-based signal improvement (CBSI), common average referencing (CAR), task-related component analysis (TRCA), or both CAR and TRCA. CBSI uses the typically strong negative correlation between HbO and HbR to reduce head motion noise. CAR is a simple method, commonly used in EEG, in which the average value of all optodes at each time point is used as a common reference (i.e., that value is subtracted from each optode at that time point). This enhances changes in small sets of optodes while removing global spatial trends from the data. TRCA creates signal components from a weighted sum of the recorded data signals. It attempts to find components that maximize the covariance between instances of the same task while minimizing the covariance between instances of different tasks.
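On a (time × optodes) array, the CAR step described above reduces to a single subtraction. A minimal sketch (our code, not the authors'):

```python
import numpy as np

def common_average_reference(data: np.ndarray) -> np.ndarray:
    """Common average referencing for fNIRS data.

    `data` has shape (time, optodes). At each time point, the mean across all
    optodes is subtracted from every optode, removing global spatial trends
    while preserving localized differences between optodes.
    """
    return data - data.mean(axis=1, keepdims=True)
```

After CAR, the spatial mean of every sample is exactly zero, so only deviations from the global trend survive into feature extraction.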
Individual task periods were extracted and baseline corrected, using the first 2 seconds of each task as the baseline level. Figure 6 shows an example of how preprocessing methods affect the recorded HbO and HbR for a single optode during one task period. Comparing Figures 6(a) and 6(b) shows how filtering removes a significant quantity of high-frequency noise from the signal. Figure 6(c) shows the change in the signal after applying CAR and baseline correction.
Four different types of features were calculated individually on each optode for HbO, HbR, and HbT. The features used were as follows: mean (average value of the last 10 seconds of the task), median (median of the last 10 seconds of the task), max (maximum value of the last 10 seconds of the task), and slope (slope of the line of best fit of the first 7 seconds of the task). Datasets were created using features calculated on HbO, HbT, or both HbO and HbR. Each feature set was reduced to between 4 and 8 features using recursive feature elimination. If both HbO and HbR were used, the specified number of features was selected for each chromophore. This resulted in 300 possible datasets (5 preprocessing methods, 3 chromophore combinations, 4 types of features, and 5 levels of feature reduction). Features in each dataset were normalized to have zero mean and unit variance.
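For one optode and one chromophore, the four feature definitions above can be sketched directly from the 10 Hz task window. This is our illustration of the stated definitions, not the authors' code:

```python
import numpy as np

FS = 10  # Hz sampling rate

def trial_features(task_signal: np.ndarray) -> dict:
    """Compute the four descriptive features for one optode's 15 s task period.

    `task_signal` is one chromophore's time course (150 samples at 10 Hz).
    Mean, median, and max use the last 10 s of the task; slope is the line of
    best fit over the first 7 s.
    """
    last10 = task_signal[-10 * FS:]      # final 10 s of the task period
    first7 = task_signal[:7 * FS]        # initial 7 s of the task period
    t = np.arange(first7.size) / FS
    slope = np.polyfit(t, first7, 1)[0]  # slope of the least-squares line
    return {
        "mean": last10.mean(),
        "median": np.median(last10),
        "max": last10.max(),
        "slope": slope,
    }
```

Computing one feature type per optode and chromophore, then crossing preprocessing method, chromophore combination, feature type, and feature-reduction level, yields the 300-dataset grid described above.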
Prior to the BCI session, a linear discriminant analysis (LDA) classifier was trained on the data from the two training days, following the flow chart shown in Figure 7. LDA is one of the simplest classification methods commonly used in BCIs, requiring no parameter tuning, which reduces the number of possible choices when selecting a classifier. LDA was implemented using the Scikit-learn toolkit.
To select an online classifier, an LDA classifier was trained on one training day (60 motor-imagery trials) and tested on the other for each of the 300 feature sets. This was repeated with the two days reversed, and the feature set with the highest average accuracy was selected. The classifier was then retrained on both training days (120 motor-imagery trials) using the selected feature set and was used as the online classifier for both robot-control BCIs.
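The cross-day selection procedure can be sketched as follows: score each candidate feature set by training on one day and testing on the other in both directions, pick the set with the highest average accuracy, and retrain on both days. A simplified sketch with our own data layout (feature sets as column-index lists), not the authors' code:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def select_and_train(day1, day2, feature_sets):
    """Cross-day selection of the online classifier.

    `day1`/`day2` are (X, y) tuples for the two training days; `feature_sets`
    maps a candidate name to column indices into X.
    """
    def cross_day_accuracy(cols):
        accs = []
        # Train on one day, test on the other, in both directions.
        for (Xtr, ytr), (Xte, yte) in [(day1, day2), (day2, day1)]:
            clf = LinearDiscriminantAnalysis().fit(Xtr[:, cols], ytr)
            accs.append(clf.score(Xte[:, cols], yte))
        return np.mean(accs)

    best = max(feature_sets, key=lambda name: cross_day_accuracy(feature_sets[name]))
    # Retrain on both days with the winning feature set for online use.
    X = np.vstack([day1[0], day2[0]])[:, feature_sets[best]]
    y = np.concatenate([day1[1], day2[1]])
    return best, LinearDiscriminantAnalysis().fit(X, y)
```

In the study itself, the candidates were the 300 preprocessed feature sets, each already reduced by recursive feature elimination.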
Results are reported as accuracy (proportion of correct classifications), precision (positive predictive value), recall (sensitivity, or true positive rate), F1-score (the balance between precision and recall), and the area under the ROC curve (AUC). The F1-score is calculated as F1 = 2 × (precision × recall) / (precision + recall).
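As a worked example on made-up labels (not the study's data), these metrics can be computed with scikit-learn. Per class, the F1-score is the harmonic mean of that class's precision and recall; macro averaging then averages the per-class values across the four tasks:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical true and predicted labels for eight trials (LH/RH/LF/RF =
# left hand, right hand, left foot, right foot).
y_true = ["LH", "RH", "LF", "RF", "LH", "RH", "LF", "RF"]
y_pred = ["LH", "RH", "LF", "LH", "LH", "LF", "LF", "RF"]

acc = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred, average="macro", zero_division=0)
rec = recall_score(y_true, y_pred, average="macro", zero_division=0)
f1 = f1_score(y_true, y_pred, average="macro", zero_division=0)
```

Here 6 of 8 trials are correct (accuracy 0.75), while the macro F1 averages the four per-class harmonic means.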
2.4.2. Offline Processing
For the offline analysis, an automatic data-quality analysis was used on the BCI session data to determine which optodes and trials should be removed due to poor quality. This was done separately for the virtual and DARwIn-OP runs using a modified version of the method described by Takizawa et al. for fNIRS data. Any optodes with a very high (near maximum) digital or analog gain were removed, as these were likely contaminated by noise. Areas with a standard deviation of 0 in a 2-second window of the raw light-intensity data were considered to have been saturated, and artifacts were defined as areas with a change exceeding 0.15 [mM] during a 2-second period in the HbO and HbR data after application of the low-pass filter. Optodes that had at least 20 (of the original 30) artifact- and saturation-free trials were kept, with the remaining optodes being removed. Then, any trials with artifacts or saturated areas in any remaining good optodes were removed. An additional 5 subjects were excluded from the offline analysis due to insufficient data quality.
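The two per-optode checks above can be sketched as sliding-window tests. The thresholds follow the text; the exact windowing (a 2-second window slid sample by sample) is our assumption:

```python
import numpy as np

FS = 10  # Hz sampling rate

def bad_windows(raw: np.ndarray, hb_filtered: np.ndarray):
    """Sketch of the saturation and artifact checks for one optode.

    `raw` is the raw light-intensity series; `hb_filtered` is the low-pass
    filtered HbO or HbR series (both at 10 Hz). Returns (saturated, artifact)
    booleans.
    """
    win = 2 * FS  # 2-second window
    # Saturation: a 2 s window of raw intensity with zero standard deviation.
    saturated = any(np.std(raw[i:i + win]) == 0
                    for i in range(0, raw.size - win + 1))
    # Artifact: a change exceeding 0.15 within any 2 s window of filtered data.
    artifact = any(np.ptp(hb_filtered[i:i + win]) > 0.15
                   for i in range(0, hb_filtered.size - win + 1))
    return saturated, artifact
```

Optodes and trials flagged by these checks would then be dropped according to the 20-of-30 clean-trial criterion described above.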
CAR was used for all offline analysis, followed by task data extraction and baseline correction as in the online analysis. Offline analysis examined the average HbO activation levels during the first and last second of each trial. Statistical analysis was done using linear mixed models, with multiple tests being corrected using false discovery rate (FDR).
3. Results

Offline analysis found that optode (24 levels), the interaction of optode and task (4 levels: right hand, left hand, right foot, and left foot), and the interaction of optode and robot type (2 levels: virtual and DARwIn-OP) had a significant effect on the average HbO activation during the last second of each trial. A post hoc analysis run individually for each optode found no significant effect for task, robot type, or their interaction. F-values and p-values for the main effects are shown in Table 1, with the post hoc analysis available in Table S1 in Supplementary Material available online at https://doi.org/10.1155/2017/1463512.
A second post hoc analysis, run individually for each optode under each task condition separately, showed that robot type had a significant effect on at least one optode under each task condition (FDR corrected). The effect was found for two optodes (14 and 16) for the left hand task, one optode (14) for left foot, 6 optodes (4, 9, 16, 18, 20, and 23) for right foot, and one optode (6) for right hand. The full table of p-values is available in Table S2 in Supplementary Material.
A comparison of topographic HbO activation levels demonstrated differences between individual tasks as well as the two BCIs. Left hand showed a much more contralateral activation pattern with the DARwIn-OP robot, with two optodes on the ipsilateral side showing a significant decrease in HbO levels between the first and last second of the task, whereas, during control of the virtual robot, it had a more ipsilateral activation pattern and no optodes with statistically significant changes in activation over the course of the task. Right hand, however, became strongly ipsilateral, with one ipsilateral optode showing significant activation, during the DARwIn-OP BCI.
Right foot activation became more contralateral, with stronger activation closer to the midline on the contralateral side and a significant decrease in activation on the ipsilateral side. Left foot changed from a centralized bilateral activation near the midline when controlling the virtual robot to a more diffuse and ipsilateral activation pattern during DARwIn-OP control. It did, however, show an optode with a significant decrease in HbO activation on the ipsilateral side during DARwIn-OP control.
Topographic plots of the average HbO activation during the last second of each task across all subjects are shown in Figure 8. Optodes showing a significant difference in average HbO level between the first and last second of the task are circled (FDR corrected).
While controlling the online four-class BCI, participants achieved an average accuracy of 27.12% for the entire session, above the 25% chance level for four classes. Five participants (S1, S5, S7, S8, and S11) achieved an accuracy of 30% or higher, with a maximum of 36.67% (S8). The online accuracy, precision, recall, F1-score, and AUC for each subject are detailed in Table 2.
There was a significant increase in classification accuracy during DARwIn-OP control as compared to virtual robot control (one-sided paired t-test), with the average accuracy increasing by 5.21 ± 2.51% (mean ± standard error). All but one subject achieved the same or better performance in the second run while controlling the DARwIn-OP compared to during the first run with the virtual robot, and two subjects achieved 40% accuracy. The online accuracy, precision, recall, F1-score, and AUC for each subject for each BCI individually are detailed in Table 3. One subject (S5) did not use the left hand task during the virtual robot run, and therefore no AUC value is listed.
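The accuracy comparison can be illustrated with SciPy on made-up numbers (not the study's data): a one-sided paired t-test of each subject's DARwIn-OP-run accuracy against their virtual-run accuracy.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-subject accuracies for the two runs (illustrative only).
virtual = np.array([0.20, 0.25, 0.22, 0.30, 0.27, 0.24])
darwin  = np.array([0.28, 0.30, 0.25, 0.36, 0.30, 0.31])

# alternative="greater" tests whether DARwIn-OP accuracies exceed virtual ones.
t_stat, p_value = ttest_rel(darwin, virtual, alternative="greater")
```

Pairing by subject removes between-subject variability from the comparison, which matters here given the wide spread in individual motor-imagery ability.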
This improvement in performance appears to be reflected in the number of goals reached by the participants. While controlling the virtual robot, subject S11 was the only participant to run into an obstacle, and they were also the only participant to reach a goal. During control of the DARwIn-OP robot, two subjects (S2 and S5) reached two of the goals, and two others (S1 and S11) reached a single goal. Two subjects (S1 and S7) collided with an obstacle while navigating the DARwIn-OP.
Subjects S1 and S6, who showed the largest improvement between the virtual and DARwIn-OP BCIs, have confusion matrices that indicate differing methods used to increase accuracy. The confusion matrix of online classification results for subject S1 shows a strong diagonal pattern when controlling the DARwIn-OP, as expected for a well-performing classifier. Interestingly, left foot and right foot are never misclassified as the opposite foot, as might be expected based on their close proximity in homuncular organization, even though such misclassifications were present when controlling the virtual robot. Left hand was the most frequently misclassified task, commonly confused with left foot and right hand. Left foot tasks were also misclassified as left hand tasks but were correctly classified much more often. Subject S6, on the other hand, achieved higher accuracy when controlling the DARwIn-OP by primarily classifying the two hand tasks correctly. This subject’s classifier had a strong tendency to predict right hand tasks during both BCIs, although actual right hand tasks were often misclassified during virtual robot control. The two foot tasks in both scenarios were frequently misclassified, typically as right hand. The confusion matrices are shown in Figure 9.
4. Discussion

In this work, we present the results of a four-class motor-imagery-based BCI used to control a virtual and physical robot. There were significant differences in performance between controlling the virtual robot and the physical DARwIn-OP robot with the BCI. Subjects had significantly higher accuracy when controlling the DARwIn-OP than when controlling the virtual robot (29.72% versus 24.51% accuracy, respectively). An offline analysis showed that the interaction between optode and robot type had a significant effect on HbO levels, indicating that this increase in accuracy may be at least partially due to changes in HbO activation patterns during the tasks. Topographic plots of HbO activation also show changes in activation pattern between the virtual and DARwIn-OP BCIs, with left hand and right foot tasks moving to a more contralateral activation pattern while right hand and left foot became more ipsilateral in the second BCI.
These changes could be due to the participants adapting their mental strategy based on the BCI’s classifier while controlling the virtual robot, thereby modifying their motor-imagery activation patterns. Confusion matrices of the online BCI classifiers show different patterns of correct and incorrect classification between subjects and between control of the virtual and physical robot. Such changes could reflect differences in the activation patterns generated during motor imagery, potentially showing differences in mental strategy developed by the participants while using the BCIs. This is in line with previous findings that feedback, especially from a BCI, can improve motor-imagery activation [49, 52, 64, 65]. Participants could also have improved as they became more familiar with the BCI experiment protocol, increasing their confidence in using the BCI, which has also been shown to have an effect on motor-imagery ability.
It is also possible that the differences between the virtual and DARwIn-OP robots themselves contributed to differences in subject performance. The more realistic visuals when using the DARwIn-OP could have had an effect, similar to the results found by Alimardani et al. There has been limited study on this topic, and further experiments would be needed in order to determine if this was a factor in subject performance.
There was a large difference between the accuracy of the highest-accuracy and lowest-accuracy subjects (40% versus 16% accuracy), in line with previous findings that people have different motor-imagery abilities [45–47]. Future studies could be improved by screening participants for motor-imagery abilities, as suggested by Marchesotti et al., and potentially using feedback to improve the performance of participants identified as low motor-imagery ability. As Bauer et al. found that the use of a robot BCI could improve motor-imagery performance, longer or additional BCI sessions could be incorporated in order to improve motor-imagery performance.
In this work, we adapted the preprocessing pipeline for each subject based on classifier performance on the two training days. While this allows one more element of customization for each subject-specific classifier, it also increases the likelihood of overfitting on the training data, which can result in poor performance on the online BCI. Future work could compare the different preprocessing methods and select a single method that performs best across subjects. Additionally, the ability to distinguish between four motor-imagery tasks with simple descriptive features and classifiers may be limited. Future work could employ more intelligent feature reduction methods (e.g., Sequential Floating Forward Selection) or explore more powerful feature design methods using deep neural networks or autoencoders. Support vector machines with nonlinear kernels may be able to achieve higher classification accuracy than LDA classifiers. The more powerful classification abilities of neural networks may also prove beneficial for improving BCI performance, as has been explored recently with EEG-based BCIs [66–69].
5. Conclusions

This study reports the first online results of a motor-imagery-based fNIRS-BCI to control robot navigation using four motor-imagery tasks. Subjects used the BCI to control first a virtual avatar and then a DARwIn-OP humanoid robot to navigate to goal locations within a series of three rooms. Classification accuracy was significantly greater during the DARwIn-OP BCI, and an offline analysis found a significant interaction between optode and both task and robot type on HbO activation levels. These findings corroborate previous studies showing that feedback, including feedback from controlling a robot BCI, can improve motor-imagery performance. It is also possible that the use of a physical, as opposed to virtual, robot had an effect on the results, but future study would be needed to assess that. Furthermore, the activation patterns for left hand and right foot changed to show a more strongly contralateral activation pattern during the second BCI, becoming more in line with the expected activation patterns based on the cortical homunculus layout of the motor cortex.
These findings indicate that future studies could benefit from an additional focus on feedback during training, in particular from additional training time spent controlling the actual BCI. There was also a large discrepancy between the accuracies of the highest- and lowest-performing subjects, suggesting that future studies could be improved by screening potential subjects for BCI aptitude and providing these subjects with extra feedback training.
Conflicts of Interest
fNIR Devices, LLC, manufactures optical brain imaging instruments and licensed IP and know-how from Drexel University. Dr. Ayaz was involved in the technology development and was thus offered a minor share in the startup firm fNIR Devices, LLC. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.
Acknowledgments
This work was supported in part by the National Science Foundation Graduate Research Fellowship under Grant no. DGE-1002809. Work reported here was run on hardware supported by Drexel's University Research Computing Facility.
Supplementary Materials
Results tables for post-hoc analyses.
References
- E. Guizzo and E. Ackerman, “The hard lessons of DARPA's robotics challenge,” IEEE Spectrum, vol. 52, no. 8, pp. 11–13, 2015.
- J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain-computer interfaces for communication and control,” Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002.
- Y. Chae, J. Jeong, and S. Jo, “Toward brain-actuated humanoid robots: asynchronous direct control using an EEG-Based BCI,” IEEE Transactions on Robotics, vol. 28, no. 5, pp. 1131–1144, 2012.
- W. Li, M. Li, and J. Zhao, “Control of humanoid robot via motion-onset visual evoked potentials,” Frontiers in Systems Neuroscience, vol. 8, 2015.
- B. J. Choi and S. H. Jo, “A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition,” PLoS ONE, vol. 8, no. 9, article e74583, 2013.
- A. Guneysu and L. H. Akin, “An SSVEP based BCI to control a humanoid robot by using portable EEG device,” in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 6905–6908, Osaka, Japan, July 2013.
- O. Cohen, S. Druon, S. Lengagne et al., “fMRI-Based robotic embodiment: controlling a humanoid robot by thought using real-time fMRI,” Presence: Teleoperators and Virtual Environments, vol. 23, no. 3, pp. 229–241, 2014.
- M. Bryan, J. Green, M. Chung et al., “An adaptive brain-computer interface for humanoid robot control,” in Proceedings of the 11th IEEE-RAS International Conference on Humanoid Robots (Humanoids '11), pp. 199–204, October 2011.
- C. J. Bell, P. Shenoy, R. Chalodhorn, and R. P. N. Rao, “Control of a humanoid robot by a noninvasive brain-computer interface in humans,” Journal of Neural Engineering, vol. 5, no. 2, pp. 214–220, 2008.
- C. Escolano, J. M. Antelis, and J. Minguez, “A telepresence mobile robot controlled with a noninvasive brain-computer interface,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 3, pp. 793–804, 2012.
- A. O. G. Barbosa, D. R. Achanccaray, and M. A. Meggiolaro, “Activation of a mobile robot through a brain computer interface,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '10), pp. 4815–4821, May 2010.
- J. D. R. Millán, F. Renkens, J. Mouriño, and W. Gerstner, “Noninvasive brain-actuated control of a mobile robot by human EEG,” IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 1026–1033, 2004.
- K. Lafleur, K. Cassady, A. Doud, K. Shades, E. Rogin, and B. He, “Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface,” Journal of Neural Engineering, vol. 10, no. 4, pp. 711–726, 2013.
- A. Akce, M. Johnson, O. Dantsker, and T. Bretl, “A brain-machine interface to navigate a mobile robot in a planar workspace: enabling humans to fly simulated aircraft with EEG,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 2, pp. 306–318, 2013.
- R. Leeb, D. Friedman, G. R. Müller-Putz, R. Scherer, M. Slater, and G. Pfurtscheller, “Self-paced (asynchronous) BCI control of a wheelchair in virtual environments: a case study with a tetraplegic,” Computational Intelligence and Neuroscience, vol. 2007, Article ID 79642, 8 pages, 2007.
- E. López-Larraz, F. Trincado-Alonso, V. Rajasekaran et al., “Control of an ambulatory exoskeleton with a brain-machine interface for spinal cord injury gait rehabilitation,” Frontiers in Neuroscience, vol. 10, 2016.
- X. Cui, S. Bray, D. M. Bryant, G. H. Glover, and A. L. Reiss, “A quantitative comparison of NIRS and fMRI across multiple cognitive tasks,” NeuroImage, vol. 54, no. 4, pp. 2808–2821, 2011.
- Y. Liu, E. A. Piazza, E. Simony et al., “Measuring speaker-listener neural coupling with functional near infrared spectroscopy,” Scientific Reports, vol. 7, article 43293, 2017.
- H. Ayaz, B. Onaral, K. Izzetoglu, P. A. Shewokis, R. Mckendrick, and R. Parasuraman, “Continuous monitoring of brain dynamics with functional near infrared spectroscopy as a tool for neuroergonomic research: empirical examples and a technological development,” Frontiers in Human Neuroscience, vol. 7, no. 871, 2013.
- R. McKendrick, R. Mehta, H. Ayaz, M. Scheldrup, and R. Parasuraman, “Prefrontal hemodynamics of physical activity and environmental complexity during cognitive work,” Human Factors, vol. 59, no. 1, pp. 147–162, 2017.
- K. Gramann, S. H. Fairclough, T. O. Zander, and H. Ayaz, “Editorial: trends in neuroergonomics,” Frontiers in Human Neuroscience, vol. 11, article 165, 2017.
- A. M. Batula, H. Ayaz, and Y. E. Kim, “Evaluating a four-class motor-imagery-based optical brain-computer interface,” in Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '14), pp. 2000–2003, Chicago, Ill, USA, August 2014.
- A. M. Batula, J. Mark, Y. E. Kim, and H. Ayaz, “Developing an optical brain-computer interface for humanoid robot control,” in Lecture Notes in Computer Science, D. D. Schmorrow and M. C. Fidopiastis, Eds., vol. 9743, pp. 3–13, Springer International Publishing, Toronto, Canada, 2016.
- C. Canning and M. Scheutz, “Functional near-infrared spectroscopy in human-robot interaction,” Journal of Human-Robot Interaction, vol. 2, no. 3, pp. 62–84, 2013.
- S. Kishi, Z. Luo, A. Nagano, M. Okumura, Y. Nagano, and Y. Yamanaka, “On NIRS-based BRI for a human-interactive robot RI-MAN,” in Proceedings of the Joint 4th International Conference on Soft Computing and Intelligent Systems and 9th International Symposium on Advanced Intelligent Systems (SCIS & ISIS), pp. 124–129, 2008.
- K. Takahashi, S. Maekawa, and M. Hashimoto, “Remarks on fuzzy reasoning-based brain activity recognition with a compact near infrared spectroscopy device and its application to robot control interface,” in Proceedings of the 2014 International Conference on Control, Decision and Information Technologies (CoDIT '14), pp. 615–620, November 2014.
- K. Tumanov, R. Goebel, R. Möckel, B. Sorger, and G. Weiss, “fNIRS-based BCI for robot control (demonstration),” in Proceedings of the 14th International Conference on Autonomous Agents and Multiagent Systems (AAMAS '15), pp. 1953–1954, May 2015.
- A. J. Doud, J. P. Lucas, M. T. Pisansky, and B. He, “Continuous three-dimensional control of a virtual helicopter using a motor imagery based brain-computer interface,” PLoS ONE, vol. 6, no. 10, article e26322, 2011.
- S. Ge, R. Wang, and D. Yu, “Classification of four-class motor imagery employing single-channel electroencephalography,” PLoS ONE, vol. 9, no. 6, article e98019, 2014.
- S. M. Coyle, T. E. Ward, and C. M. Markham, “Brain-computer interface using a simplified functional near-infrared spectroscopy system,” Journal of Neural Engineering, vol. 4, no. 3, pp. 219–226, 2007.
- N. Naseer and K.-S. Hong, “Classification of functional near-infrared spectroscopy signals corresponding to the right- and left-wrist motor imagery for development of a brain-computer interface,” Neuroscience Letters, vol. 553, pp. 84–89, 2013.
- R. Sitaram, H. Zhang, C. Guan et al., “Temporal classification of multichannel near-infrared spectroscopy signals of motor imagery for developing a brain-computer interface,” NeuroImage, vol. 34, no. 4, pp. 1416–1427, 2007.
- T. Ito, H. Akiyama, and T. Hirano, “Brain machine interface using portable near-infrared spectroscopy—improvement of classification performance based on ICA analysis and self-proliferating LVQ,” in Proceedings of the 26th IEEE/RSJ International Conference on Intelligent Robots and Systems: New Horizon (IROS '13), pp. 851–858, 2013.
- X. Yin, B. Xu, C. Jiang et al., “Classification of hemodynamic responses associated with force and speed imagery for a brain-computer interface,” Journal of Medical Systems, vol. 39, no. 5, article 53, 2015.
- L. Acqualagna, L. Botrel, C. Vidaurre, A. Kübler, and B. Blankertz, “Large-scale assessment of a fully automatic co-adaptive motor imagery-based brain computer interface,” PLoS ONE, vol. 11, no. 2, article e0148886, 2016.
- B. Koo, H.-G. Lee, Y. Nam et al., “A hybrid NIRS-EEG system for self-paced brain computer interface with online motor imagery,” Journal of Neuroscience Methods, vol. 244, no. 1, pp. 26–32, 2015.
- W. Yi, L. Zhang, K. Wang et al., “Evaluation and comparison of effective connectivity during simple and compound limb motor imagery,” in Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '14), pp. 4892–4895, Chicago, Ill, USA, August 2014.
- N. Naseer and K. Hong, “fNIRS-based brain-computer interfaces: a review,” Frontiers in Human Neuroscience, vol. 9, pp. 1–15, 2015.
- Y. Hashimoto and J. Ushiba, “EEG-based classification of imaginary left and right foot movements using beta rebound,” Clinical Neurophysiology, vol. 124, no. 11, pp. 2153–2160, 2013.
- W.-C. Hsu, L.-F. Lin, C.-W. Chou, Y.-T. Hsiao, and Y.-H. Liu, “EEG classification of imaginary lower limb stepping movements based on fuzzy support vector machine with kernel-induced membership function,” International Journal of Fuzzy Systems, vol. 19, no. 2, pp. 566–579, 2017.
- L. Stankevich and K. Sonkin, “Human-robot interaction using brain-computer interface based on EEG signal decoding,” in Proceedings of the Interactive Collaborative Robotics (ICR), A. Ronzhin, G. Rigoll, and R. Meshcheryakov, Eds., vol. 9812, pp. 99–106, Springer International Publishing.
- J. Shin and J. Jeong, “Multiclass classification of hemodynamic responses for performance improvement of functional near-infrared spectroscopy-based brain-computer interface,” Journal of Biomedical Optics, vol. 19, no. 6, article 067009, 2014.
- M. Lotze and U. Halsband, “Motor imagery,” Journal of Physiology Paris, vol. 99, no. 4–6, pp. 386–395, 2006.
- A. Guillot, C. Collet, V. A. Nguyen, F. Malouin, C. Richards, and J. Doyon, “Brain activity during visual versus kinesthetic imagery: an fMRI study,” Human Brain Mapping, vol. 30, no. 7, pp. 2157–2172, 2009.
- C. Jeunet, B. N'Kaoua, and F. Lotte, “Advances in user-training for mental-imagery-based BCI control: psychological and cognitive factors and their neural correlates,” Progress in Brain Research, vol. 228, pp. 3–35, 2016.
- S. Marchesotti, M. Bassolino, A. Serino, H. Bleuler, and O. Blanke, “Quantifying the role of motor imagery in brain-machine interfaces,” Scientific Reports, vol. 6, article 24076, 2016.
- F. Lebon, W. D. Byblow, C. Collet, A. Guillot, and C. M. Stinear, “The modulation of motor cortex excitability during motor imagery depends on imagery quality,” European Journal of Neuroscience, vol. 35, no. 2, pp. 323–331, 2012.
- K. J. Miller, G. Schalk, E. E. Fetz, M. Den Nijs, J. G. Ojemann, and R. P. N. Rao, “Cortical activity during motor execution, motor imagery, and imagery-based online feedback,” Proceedings of the National Academy of Sciences of the United States of America, vol. 107, no. 9, pp. 4430–4435, 2010.
- R. Bauer, M. Fels, M. Vukelić, U. Ziemann, and A. Gharabaghi, “Bridging the gap between motor imagery and motor execution with a brain-robot interface,” NeuroImage, vol. 108, pp. 319–327, 2015.
- M. Ahn and S. C. Jun, “Performance variation in motor imagery brain-computer interface: a brief review,” Journal of Neuroscience Methods, vol. 243, pp. 103–110, 2015.
- E. Tidoni, P. Gergondet, A. Kheddar, and S. M. Aglioti, “Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot,” Frontiers in Neurorobotics, vol. 8, 2014.
- M. Alimardani, S. Nishio, and H. Ishiguro, “The importance of visual feedback design in BCIs; from embodiment to motor imagery learning,” PLoS ONE, vol. 11, no. 9, 2016.
- A. M. Batula, J. A. Mark, Y. E. Kim, and H. Ayaz, “Comparison of brain activation during motor imagery and motor movement using fNIRS,” Computational Intelligence and Neuroscience, vol. 2017, Article ID 5491296, 12 pages, 2017.
- A. Villringer and B. Chance, “Non-invasive optical spectroscopy and imaging of human brain function,” Trends in Neurosciences, vol. 20, no. 10, pp. 435–442, 1997.
- V. Rozand, F. Lebon, P. J. Stapley, C. Papaxanthis, and R. Lepers, “A prolonged motor imagery session alter imagined and actual movement durations: potential implications for neurorehabilitation,” Behavioural Brain Research, vol. 297, pp. 67–75, 2016.
- H. Ayaz, S. L. Allen, S. M. Platek, and B. Onaral, “Maze Suite 1.0: a complete set of tools to prepare, present, and analyze navigational and spatial cognitive neuroscience experiments,” Behavior Research Methods, vol. 40, no. 1, pp. 353–359, 2008.
- H. Ayaz, P. A. Shewokis, A. Curtin, M. Izzetoglu, K. Izzetoglu, and B. Onaral, “Using MazeSuite and functional near infrared spectroscopy to study learning in spatial navigation,” Journal of Visualized Experiments, no. 56, article 3443, 2011.
- I. Ha, Y. Tamura, H. Asama, J. Han, and D. W. Hong, “Development of open humanoid platform DARwIn-OP,” in Proceedings of the SICE Annual Conference 2011, pp. 2178–2181, 2011.
- X. Cui, S. Bray, and A. L. Reiss, “Functional near infrared spectroscopy (NIRS) signal improvement based on negative correlation between oxygenated and deoxygenated hemoglobin dynamics,” NeuroImage, vol. 49, no. 4, pp. 3039–3046, 2010.
- H. Tanaka, T. Katura, and H. Sato, “Task-related component analysis for functional neuroimaging and application to near-infrared spectroscopy data,” NeuroImage, vol. 64, no. 1, pp. 308–327, 2013.
- C. M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
- F. Pedregosa, G. Varoquaux, and A. Gramfort, “Scikit-learn: machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
- R. Takizawa, K. Kasai, Y. Kawakubo et al., “Reduced frontopolar activation during verbal fluency task in schizophrenia: a multi-channel near-infrared spectroscopy study,” Schizophrenia Research, vol. 99, no. 1–3, pp. 250–262, 2008.
- C. L. Friesen, T. Bardouille, H. F. Neyedli, and S. G. Boe, “Combined action observation and motor imagery neurofeedback for modulation of brain activity,” Frontiers in Human Neuroscience, vol. 10, 2017.
- A. Vourvopoulos and S. Bermúdezi Badia, “Motor priming in virtual reality can augment motor-imagery training efficacy in restorative brain-computer interaction: a within-subject analysis,” Journal of NeuroEngineering and Rehabilitation, vol. 13, no. 1, article 69, 2016.
- A. Yuksel and T. Olmez, “A neural network-based optimal spatial filter design method for motor imagery classification,” PLoS ONE, vol. 10, no. 5, article e0125039, 2015.
- N. Lu, T. Li, X. Ren, and H. Miao, “A deep learning scheme for motor imagery classification based on restricted Boltzmann machines,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. PP, no. 99, 1 page, 2016.
- Y. R. Tabar and U. Halici, “A novel deep learning approach for classification of EEG motor imagery signals,” Journal of Neural Engineering, vol. 14, no. 1, article 016003, 2017.
- Z. Tang, C. Li, and S. Sun, “Single-trial EEG classification of motor imagery using deep convolutional neural networks,” Optik - International Journal for Light and Electron Optics, vol. 130, pp. 11–18, 2017.
Copyright © 2017 Alyssa M. Batula et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.