Downloads: 57

Files in this item:
File: TNSRE.2021.3111689.pdf (3.13 MB, Adobe PDF)
Title: EEGFuseNet: Hybrid Unsupervised Deep Feature Characterization and Fusion for High-Dimensional EEG With an Application to Emotion Recognition
Authors: Liang, Zhen
Zhou, Rushuang
Zhang, Li
Li, Linling
Huang, Gan
Zhang, Zhiguo
Ishii, Shin
Alternative author name: 石井, 信
Keywords: Electroencephalography
information fusion
hybrid deep encoder-decoder network
CNN-RNN-GAN
unsupervised
emotion recognition
Issue date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Journal title: IEEE Transactions on Neural Systems and Rehabilitation Engineering
Volume: 29
Start page: 1913
End page: 1925
Abstract: How to effectively and efficiently extract valid and reliable features from high-dimensional electroencephalography (EEG), and particularly how to fuse spatial and temporal dynamic brain information into a better feature representation, is a critical issue in brain data analysis. Most current EEG studies work in a task-driven manner and explore valid EEG features with a supervised model, which is largely limited by the given labels. In this paper, we propose a practical hybrid unsupervised deep convolutional recurrent generative adversarial network for EEG feature characterization and fusion, termed EEGFuseNet. EEGFuseNet is trained in an unsupervised manner, and deep EEG features covering both spatial and temporal dynamics are characterized automatically. Compared to existing features, the characterized deep EEG features can be considered more generic and independent of any specific EEG task. The performance of the deep, low-dimensional features extracted by EEGFuseNet is carefully evaluated in an unsupervised emotion recognition application on three public emotion databases. The results demonstrate that the proposed EEGFuseNet is a robust and reliable model that is easy to train and performs efficiently in the representation and fusion of dynamic EEG features. In particular, EEGFuseNet is established as an optimal unsupervised fusion model with promising cross-subject emotion recognition performance. This shows that EEGFuseNet is capable of characterizing and fusing deep features that carry comparable cortical dynamic significance corresponding to changes in emotional state, and also demonstrates the possibility of realizing EEG-based cross-subject emotion recognition in a purely unsupervised manner.
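The abstract describes an unsupervised encoder-decoder that compresses high-dimensional EEG into low-dimensional fused features using only a reconstruction signal, with no labels. A minimal sketch of that core idea follows; note this is a plain linear autoencoder standing in for the paper's actual CNN-RNN-GAN hybrid, and all shapes, names, and hyperparameters here are hypothetical.

```python
import numpy as np

# Minimal sketch, NOT the paper's model: EEGFuseNet is a hybrid CNN-RNN-GAN,
# but its core idea -- learning low-dimensional EEG features without labels
# by reconstructing the input -- can be illustrated with a linear autoencoder.

rng = np.random.default_rng(0)
n_trials = 64                 # hypothetical number of EEG trials
n_features = 32 * 128         # e.g. 32 channels x 128 time samples, flattened
n_latent = 16                 # low-dimensional fused feature size

X = rng.standard_normal((n_trials, n_features))
X -= X.mean(axis=0)           # center the data (no labels used anywhere)

W_enc = 0.01 * rng.standard_normal((n_features, n_latent))
W_dec = 0.01 * rng.standard_normal((n_latent, n_features))

lr, losses = 1e-3, []
for _ in range(200):
    Z = X @ W_enc             # encode: compact feature representation
    X_hat = Z @ W_dec         # decode: reconstruct the input
    err = X_hat - X
    losses.append(float((err ** 2).mean()))
    # Gradient descent on the reconstruction loss (the unsupervised signal)
    grad_dec = (Z.T @ err) / n_trials
    grad_enc = (X.T @ (err @ W_dec.T)) / n_trials
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

features = X @ W_enc          # one 16-dimensional feature vector per trial
```

After training, `features` plays the role of the "deep and low-dimensional features" mentioned in the abstract: since only reconstruction error drives learning, the representation is independent of any specific task label, which is what makes downstream unsupervised emotion recognition possible.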
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.
URI: http://hdl.handle.net/2433/277585
DOI (publisher version): 10.1109/TNSRE.2021.3111689
PubMed ID: 34506287
Appears in collections: Journal articles