Deep learning models for brain machine interfaces

Lachezar Bozhkov1 · Petia Georgieva2

1 Technical University of Sofia, Sofia, Bulgaria
2 University of Aveiro, DETI/IEETA, Aveiro, Portugal

Corresponding author: Petia Georgieva ([email protected])

© Springer Nature Switzerland AG 2019

Abstract Deep Learning methods have risen in popularity in the past few years and are now used as a fundamental component in various application domains such as computer vision, natural language processing and bioinformatics. Supervised learning with Convolutional Neural Networks has become the state-of-the-art approach in many image-related tasks. However, despite the great success of deep learning methods in other areas, they remain relatively unexplored in the brain imaging field. In this paper we give an overview of recent achievements of Deep Learning in automatically extracting features from brain signals that enable building Brain-Machine Interfaces (BMI). A major challenge in BMI research is to find common, subject-independent neural signatures, given the high variability of brain data across subjects. To address this problem, we propose a Deep Neural Autoencoder with a sparsity constraint as a promising approach to extract hidden features from Electroencephalogram (EEG) data (in-depth feature learning) and to build a subject-independent noninvasive BMI in the affective neurocomputing framework. Future directions for research are also outlined.

Keywords Deep learning · Convolutional neural networks · Autoencoders · Brain machine interface · Affective computing

Mathematics Subject Classification (2010) 68T30
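To make the proposed approach concrete, the following is a minimal sketch of a sparsity-constrained autoencoder for EEG feature vectors, written with Keras/TensorFlow. The library choice, the input length (128), the hidden size (32), the L1 activity-regularization weight, and the placeholder data are all illustrative assumptions and not the configuration used in the paper, which may instead impose sparsity via a KL-divergence penalty.

```python
# Minimal sparse-autoencoder sketch for EEG feature vectors (illustrative only).
# Input size, hidden size and sparsity weight are assumptions, not the paper's settings.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

n_features = 128   # hypothetical length of one EEG feature vector (e.g. ERP amplitudes)
n_hidden = 32      # compressed representation learned by the encoder

encoder_input = layers.Input(shape=(n_features,))
# L1 activity regularization encourages sparse hidden activations,
# standing in here for the sparsity constraint discussed in the paper.
hidden = layers.Dense(n_hidden, activation="sigmoid",
                      activity_regularizer=regularizers.l1(1e-4))(encoder_input)
reconstruction = layers.Dense(n_features, activation="linear")(hidden)

autoencoder = models.Model(encoder_input, reconstruction)
encoder = models.Model(encoder_input, hidden)
autoencoder.compile(optimizer="adam", loss="mse")

# X: (n_trials, n_features) matrix pooled over subjects for subject-independent training.
X = np.random.randn(1000, n_features).astype("float32")  # placeholder data
autoencoder.fit(X, X, epochs=10, batch_size=32, verbose=0)

# The learned hidden activations can then feed any classifier (e.g. valence/arousal labels).
hidden_features = encoder.predict(X)
```

In such a setup the encoder output would replace hand-crafted EEG features before a classifier is trained across subjects.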

1 Introduction

Deep Learning (DL) refers to computational models composed of multiple processing layers capable of learning representations of data with multiple levels of abstraction [1]. Since the seminal works of LeCun et al. [2] in 1990 and of LeCun, Bottou, Bengio and Haffner [3] in 1998 on training convolutional networks with backpropagation and gradient-based optimization, DL has outperformed other machine learning techniques in many domains of science and industry. Hinton et al. [4] achieved a breakthrough with deep learning on the task of automatic speech recognition, the first major industrial application of deep learning. DL has since achieved great success in recognition tasks within a wide range of applications, including images [5], videos [6], speech [7] and text [8, 9], to mention a few.

Convolutional Neural Networks

Convolutional Neural Networks (CNN) lie at the core of the most powerful DL architectures for image and video data, primarily due to their ability to extract representations that are robust to partial translation and deformation of the input patterns (Fig. 1). The key element of a CNN is the convolution operation with small filter patches (kernels). These filters automatically learn local patterns, which can be combined into more complex features when multiple CNN layers are stacked together. Within the stack of convolution layers, pooling layers are often placed intermittently. The p
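As an illustration of the convolution-plus-pooling pattern described above, here is a minimal sketch of a small CNN in Keras/TensorFlow. The input shape, filter counts, kernel sizes and the two-class output head are illustrative assumptions, not an architecture used in the paper.

```python
# Toy CNN sketch: stacked convolution and pooling layers (illustrative only).
# The 64x64 single-channel input shape and two-class head are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),                                       # hypothetical 2-D input grid
    layers.Conv2D(16, kernel_size=3, activation="relu", padding="same"),   # small local filters (kernels)
    layers.MaxPooling2D(pool_size=2),                                      # pooling adds partial translation invariance
    layers.Conv2D(32, kernel_size=3, activation="relu", padding="same"),   # deeper layer combines local patterns
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),                                 # e.g. a binary classification head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Stacking convolution and pooling in this way is what lets the learned local filters compose into progressively more abstract, deformation-tolerant features.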