A new gaze-BCI-driven control of an upper limb exoskeleton for rehabilitation in real-world tasks
- 27 November 2019
This paper proposes a new multimodal architecture for gaze-independent brain-computer interface (BCI)-driven control of a robotic upper limb exoskeleton for stroke rehabilitation, providing active assistance in the execution of reaching tasks in a real-world setting. At the level of the action plan, the patient's intention is decoded by an active vision system that combines a Kinect-based vision system, which robustly identifies and tracks 3-D objects online, with an eye-tracking system for object selection. At the level of action generation, a BCI decodes the patient's intention to move his/her own arm from brain activity analyzed during motor imagery. The main kinematic parameters of the robot-assisted reaching movement (i.e., speed, acceleration, and jerk) are modulated by the output of the BCI classifier, so that the movement is performed under continuous control of the patient's brain activity. The system was experimentally evaluated in a group of three healthy volunteers and four chronic stroke patients. Experimental results show that all subjects were able to operate the exoskeleton through the BCI, with a classification accuracy of 89.4 ± 5.0% in the robot-assisted condition and no difference in performance between stroke patients and healthy subjects. This indicates the high potential of the proposed gaze-BCI-driven robotic assistance for the neurorehabilitation of patients with motor impairments after stroke, starting from the earliest phase of recovery.
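To illustrate the action-plan stage, the sketch below shows one plausible way to resolve the user's gaze onto a Kinect-tracked object. This is a minimal sketch under our own assumptions, not the paper's implementation: `select_target`, the image-plane distance criterion, and the `max_dist_px` tolerance are all hypothetical, and a real system would likely also use dwell time and 3-D information.

```python
import numpy as np

def select_target(gaze_px, objects_px, max_dist_px=60.0):
    """Return the ID of the tracked object whose image-plane centroid
    lies closest to the current gaze point, or None if the gaze is not
    near any object.

    gaze_px     -- (x, y) gaze point in camera pixels, from the eye tracker
    objects_px  -- {object_id: (x, y)} centroids of Kinect-tracked objects
    max_dist_px -- hypothetical selection tolerance in pixels (assumption)
    """
    best_id, best_d = None, float("inf")
    for obj_id, (cx, cy) in objects_px.items():
        d = np.hypot(gaze_px[0] - cx, gaze_px[1] - cy)
        if d < best_d:
            best_id, best_d = obj_id, d
    return best_id if best_d <= max_dist_px else None

# Example: the gaze falls closest to the cup, within tolerance.
print(select_target((310, 240), {"cup": (300, 250), "book": (520, 130)}))
```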
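For the action-generation stage, the abstract states that the speed, acceleration, and jerk of the assisted reach are modulated by the BCI classifier output, but does not spell out the coupling law. The following minimal sketch assumes a minimum-jerk reference trajectory whose phase advances in proportion to the decoded motor-imagery confidence; `bci_gated_reach` and `bci_confidence` are hypothetical names, not the paper's API.

```python
import numpy as np

def minimum_jerk(s):
    """Minimum-jerk position profile over normalized phase s in [0, 1]."""
    return 10 * s**3 - 15 * s**4 + 6 * s**5

def bci_gated_reach(start, goal, bci_confidence, duration=4.0, dt=0.02):
    """Generate reference positions for a robot-assisted reach whose
    progression is scaled by the BCI classifier output (assumed
    proportional coupling, not the paper's stated law).

    start, goal    -- 3-D points in metres (e.g., hand and target from Kinect)
    bci_confidence -- callable returning P(motor imagery) in [0, 1] per tick
    """
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    s, path = 0.0, [start.copy()]
    while s < 1.0:
        p = float(np.clip(bci_confidence(), 0.0, 1.0))
        # p = 1 advances the phase at nominal speed; p = 0 pauses the
        # assisted movement, which also scales velocity, acceleration, jerk.
        s = min(1.0, s + p * dt / duration)
        path.append(start + (goal - start) * minimum_jerk(s))
    return np.array(path)

# Example: steady 80 % motor-imagery confidence slows the nominal reach.
traj = bci_gated_reach([0.30, -0.20, 0.10], [0.55, 0.05, 0.25], lambda: 0.8)
```

With this coupling, sustained motor imagery drives the reach at close to its nominal pace, while lapses slow or pause it, keeping the movement under continuous control of brain activity as described above; a real controller would additionally need a timeout or safe-stop behavior for sustained low confidence.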