Deep Learning for the Classification of EEG Time-Frequency Representations
This thesis reports on the implementation and evaluation of a new method for classifying EEG signals. The method applies the Short-time Fourier Transform (STFT), the Continuous Wavelet Transform (CWT), or the Hilbert-Huang Transform (HHT) to produce a two-dimensional time-frequency representation of the signal, known as a spectrogram, scalogram, or Hilbert spectrum, respectively. These two-dimensional representations are then classified using a Convolutional Neural Network (CNN).

The method was evaluated on two datasets. The first is a synthetic dataset generated by simulating a non-stationary and noisy process, in which each signal belongs to one of three classes. The second consists of real EEG data from one of the tasks in the BCI Competition III: 1,400 EEG recordings, each 3.5 seconds long, of a subject imagining movement of either the right hand or the left foot, where the task is to decide which of the two the subject was imagining. Four different CNN architectures were evaluated with each of the three representations using k-fold cross-validation.

On the synthetic data, the best classification accuracies with the spectrogram and Hilbert spectrum representations were 98.3% and 88.19%, respectively. The scalogram representation fared less well, with a highest accuracy of 59.29%. Results on the real data followed the same pattern: the highest accuracy was 72.50% for EEG spectrograms, 58.00% for Hilbert spectra, and 55.93% for scalograms.
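To illustrate the first stage of the pipeline described above, the sketch below computes an STFT spectrogram for a single synthetic signal. This is not the thesis code: the sampling rate, window length, and the simulated "EEG" signal (a 10 Hz burst in noise) are assumptions chosen only to make the example self-contained.

```python
import numpy as np
from scipy import signal

fs = 250        # sampling rate in Hz (assumed, not taken from the thesis)
duration = 3.5  # seconds, matching the length of the recordings described above
t = np.arange(int(fs * duration)) / fs

# Synthetic stand-in for one EEG channel: a 10 Hz burst plus Gaussian noise
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) * np.exp(-((t - 1.5) ** 2)) + 0.5 * rng.standard_normal(t.size)

# STFT-based time-frequency representation (the spectrogram);
# window parameters are illustrative, not the thesis settings
f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=64, noverlap=48)

# Sxx is a 2-D (frequency x time) array: the "image" a CNN would then classify
print(Sxx.shape)
```

The resulting 2-D array can be treated as a single-channel image and passed to a CNN, which is the second stage of the method the abstract describes.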