Primary color decoding using deep learning on source reconstructed EEG signal responses
Peer reviewed, Journal article
Published version
Date
2023
Original version
10.1109/EMBC40787.2023.10340033
Abstract
The brain's response to visual stimuli of different colors might be used in a brain-computer interface (BCI) paradigm, letting a user control their surroundings by looking at specific colors. Allowing the user to control elements of their environment, such as lighting and doors, by looking at corresponding signs of different colors could serve as an intuitive interface. This paper presents work on the development of an intra-subject classifier for red, green, and blue (RGB) visual evoked potentials (VEPs) in recordings performed with an electroencephalogram (EEG). Three deep neural networks (DNNs), proposed in earlier papers, were employed and tested on data in both source space and electrode space. All tests performed in electrode space yielded better results than those in source space. The best classifier achieved an accuracy of 77% averaged over all subjects, with the best subject reaching 96%.
Clinical relevance: This paper demonstrates that deep learning can be used to classify between red, green, and blue visual evoked potentials in EEG recordings with an average accuracy of 77%.
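To make the classification setup concrete, the sketch below shows a minimal 3-class (red/green/blue) VEP classifier operating on epoched EEG in electrode space. It is an illustrative assumption only: the network, channel count, epoch length, and hyperparameters are hypothetical stand-ins, not the DNN architectures evaluated in the paper (which were taken from earlier published work).

```python
# Minimal sketch of a 3-class (R/G/B) VEP classifier on epoched EEG data.
# All shapes and hyperparameters are illustrative assumptions, not the
# architectures used in the paper.
import torch
import torch.nn as nn

N_CHANNELS = 64   # assumed number of EEG electrodes (electrode space)
N_SAMPLES = 256   # assumed samples per epoch (e.g., 1 s at 256 Hz)
N_CLASSES = 3     # red, green, blue visual evoked potentials


class SimpleVEPNet(nn.Module):
    """Compact CNN over (channels x time) EEG epochs; a hypothetical
    stand-in for the deep networks evaluated in the paper."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # temporal convolution applied to each channel
            nn.Conv2d(1, 8, kernel_size=(1, 32), padding=(0, 16)),
            nn.BatchNorm2d(8),
            # spatial convolution across all electrodes
            nn.Conv2d(8, 16, kernel_size=(N_CHANNELS, 1)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
            nn.Dropout(0.5),
        )
        self.classifier = nn.LazyLinear(N_CLASSES)

    def forward(self, x):
        # x: (batch, 1, channels, samples)
        x = self.features(x)
        x = torch.flatten(x, start_dim=1)
        return self.classifier(x)


if __name__ == "__main__":
    model = SimpleVEPNet()
    epochs = torch.randn(4, 1, N_CHANNELS, N_SAMPLES)  # dummy EEG epochs
    logits = model(epochs)                             # (4, 3) class scores
    print(logits.argmax(dim=1))                        # predicted R/G/B labels
```

Trained per subject (intra-subject), such a model would be fitted and evaluated on each participant's own epochs, which matches the paper's reported per-subject accuracies.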