Universiti Teknologi Malaysia Institutional Repository

Joint common spatial pattern and short-time fourier transform with attention-based convolutional neural networks for brain computer interface

Che Man, Muhammad Afiq (2022) Joint common spatial pattern and short-time fourier transform with attention-based convolutional neural networks for brain computer interface. Masters thesis, Universiti Teknologi Malaysia.

PDF (588kB)

Official URL: http://dms.library.utm.my:8080/vital/access/manage...

Abstract

Motor imagery on electroencephalogram (EEG) signals is widely used in brain-computer interface (BCI) systems with many exciting applications. There are three major types of filtering for EEG signals: temporal, spectral, and spatial. Spatial filtering using the Common Spatial Pattern (CSP) is an established method of processing EEG signals into classifier inputs. With the recent advent of deep learning, many deep learning classifiers have been adopted, including Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). In early applications of CNNs to EEG-based BCI, the raw EEG signal was fed directly to the CNN for classification. More recently, various input representations have been used for CNN-based BCI EEG classification: spatial only, temporal only, a combination of both, or other derived features that further enhance the signal. Multiple implementations of attention networks also exist for BCI EEG classification. However, most existing work does not combine a well-chosen filter and a spatial or temporal representation with attention networks. This study develops a framework using CSP and the Short-Time Fourier Transform (STFT) together with an attention-based Convolutional Neural Network (CSP-STFT-attCNN) for EEG BCI classification. The features extracted by CSP are transformed into time-frequency images using STFT and fed to the attention-based CNN as the classifier. The first step is to preprocess the raw EEG signals, perform channel selection, separate them into training and test data, and apply CSP-STFT. Then, the model architecture to be trained on the data is defined. The framework uses attention-based CNNs to classify the resulting spectrogram images across different test subjects. Finally, the performance of CSP-STFT-attCNN was validated on two BCI benchmark datasets: 1) Competition III dataset IVa and 2) Competition IV dataset I.
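The CSP-STFT feature-extraction step described above can be sketched as follows. This is a minimal illustration, not the thesis code: the function names, trial shapes, sampling rate, and window length are assumptions, and CSP is implemented here as the standard generalised eigenvalue decomposition of the two class covariance matrices.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.signal import stft

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Compute CSP spatial filters from two classes of EEG trials.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns a (2 * n_pairs, n_channels) spatial-filter matrix.
    """
    def class_cov(trials):
        # Trace-normalised covariance, averaged over trials.
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)

    c_a, c_b = class_cov(trials_a), class_cov(trials_b)
    # Generalised eigenvalue problem: c_a w = lambda (c_a + c_b) w.
    vals, vecs = eigh(c_a, c_a + c_b)
    order = np.argsort(vals)                       # ascending eigenvalues
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, picks].T

def csp_stft_features(trial, filters, fs=100, nperseg=64):
    """Project one trial through the CSP filters, then take the STFT
    magnitude of each surrogate channel to form spectrogram images."""
    projected = filters @ trial                    # (n_filters, n_samples)
    _, _, Z = stft(projected, fs=fs, nperseg=nperseg)
    return np.abs(Z)                               # (n_filters, freqs, frames)

# Toy usage with synthetic two-class data (a 10 Hz rhythm placed on
# different channels per class stands in for motor-imagery activity).
rng = np.random.default_rng(0)
fs, n_samples, n_ch = 100, 500, 8
t = np.arange(n_samples) / fs
a = rng.standard_normal((20, n_ch, n_samples))
b = rng.standard_normal((20, n_ch, n_samples))
a[:, 0] += 3 * np.sin(2 * np.pi * 10 * t)          # class A: 10 Hz on channel 0
b[:, 5] += 3 * np.sin(2 * np.pi * 10 * t)          # class B: 10 Hz on channel 5
W = csp_filters(a, b)
feats = csp_stft_features(a[0], W, fs=fs)
print(W.shape, feats.shape[:2])
```

The resulting `feats` array is a stack of spectrogram images, one per CSP surrogate channel, which is the kind of joint spatio-temporal input the abstract describes feeding to the attention-based CNN.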
The proposed CSP-STFT-attCNN demonstrates that a framework based on CSP-STFT as the feature extractor and an attention-based CNN as the classifier offers promising results: the classifier achieved better classification accuracy, averaging 80% across all five subjects of Competition III dataset IVa. Precision and recall were also strong, ranging from 0.8 to 0.9. Nonetheless, CSP-STFT-attCNN did not perform as well on the other dataset, and the reasons for this are explored further. In general, the proposed CSP-STFT-attCNN offers richer joint spatio-temporal features as classifier inputs, while the attention-based CNN classifier mitigates the problems suffered by earlier CNNs.
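To make the attention idea concrete, the sketch below shows one common form of channel attention for CNN feature maps: global average pooling produces a per-channel descriptor, a learned projection scores each channel, and a softmax reweights the maps. This is a generic illustration under assumed shapes, not the specific attention layer used in the thesis.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(feature_maps, w):
    """Reweight CNN feature maps by per-channel attention scores.

    feature_maps: (channels, height, width) activations.
    w: (channels, channels) projection standing in for the learned
       attention layer (a hypothetical parameter for this sketch).
    """
    descriptor = feature_maps.mean(axis=(1, 2))   # global average pooling
    scores = softmax(w @ descriptor)              # attention weights, sum to 1
    return feature_maps * scores[:, None, None], scores

# Toy usage on random feature maps.
rng = np.random.default_rng(1)
fm = rng.standard_normal((16, 8, 8))
w = rng.standard_normal((16, 16))
out, scores = channel_attention(fm, w)
print(out.shape, scores.shape)
```

The reweighting lets the classifier emphasise the most informative feature maps, which is one way an attention mechanism can address the limitations of a plain CNN noted above.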

Item Type: Thesis (Masters)
Uncontrolled Keywords: electroencephalogram (EEG) signals, Recurrent Neural Network (RNN)
Subjects: T Technology > T Technology (General)
Divisions: Malaysia-Japan International Institute of Technology
ID Code: 99624
Deposited By: Narimah Nawil
Deposited On: 08 Mar 2023 03:38
Last Modified: 08 Mar 2023 03:38
