A Comprehensive Review of Imagined Speech Decoding in Brain-Computer Interfaces: Utilizing EEG and fNIRS Technologies - Basic and Clinical Neuroscience




Abstract:
The use of brain–computer interfaces (BCIs) to decode imagined speech has significant clinical and assistive potential. This review covers twenty-six studies of covert speech decoding published between 2009 and 2025 that used EEG, fNIRS, or hybrid EEG–fNIRS systems. Early research (2009–2012) primarily focused on analyzing phonemes and syllables with EEG, achieving accuracy rates around 75%. From 2013 to 2017, phoneme decoding based on convolutional neural networks (CNNs) produced highly variable results (40%–83%), with more complex multiclass tasks occasionally performing poorly (as low as 26.7%). Since 2018, binary paradigms such as yes/no responses have reached 64%–100% accuracy. CNN variants (about 83.4%), AlexNet (90.3%), and LSTM-RNNs (92.5%) demonstrated notable improvements, whereas architectures such as EEGNet and SPDNet often underperformed (24.79%–66.93%). In hybrid EEG–fNIRS approaches, CNNs achieved roughly 53% accuracy, while traditional classifiers such as SVM and LDA performed better, reaching 78%–79%. These results indicate that although deep learning and multimodal systems show promise for enhancing imagined speech decoding, major challenges remain in generalization, inter-subject variability, and robustness.
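To make the binary yes/no paradigm concrete, below is a minimal sketch (not taken from any of the reviewed studies) of the kind of linear discriminant analysis (LDA) classifier the abstract mentions, applied to synthetic "EEG feature" vectors. The data, feature dimensionality, and class separation are illustrative assumptions only.

```python
# Hypothetical illustration: binary LDA classifier for a yes/no
# imagined-speech paradigm, trained on synthetic feature vectors.
import numpy as np

rng = np.random.default_rng(0)
n_per_class, n_features = 100, 8  # assumed trials x band-power features

# Two synthetic classes ("no" vs "yes" imagery) with shifted means
X0 = rng.normal(0.0, 1.0, (n_per_class, n_features))
X1 = rng.normal(1.0, 1.0, (n_per_class, n_features))

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
# Pooled within-class covariance, lightly regularized for stability
cov = (np.cov(X0.T) + np.cov(X1.T)) / 2 + 1e-3 * np.eye(n_features)

# Fisher discriminant direction and midpoint decision threshold
w = np.linalg.solve(cov, mu1 - mu0)
b = -0.5 * w @ (mu0 + mu1)

def predict(X):
    """Label a trial 1 ("yes") when its projection exceeds the threshold."""
    return (X @ w + b > 0).astype(int)

acc = np.mean(np.r_[predict(X0) == 0, predict(X1) == 1])
print(f"training accuracy: {acc:.2f}")
```

Real pipelines would of course precede this step with artifact rejection, band-pass filtering, and feature extraction from multichannel recordings, and would report cross-validated rather than training accuracy.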
Article Type: Review | Subject: Cognitive Neuroscience
Received: 1404/5/16 | Accepted: 1404/10/3 (Solar Hijri calendar)


Republishing information
This article may be republished under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License.

All rights of this website belong to Basic and Clinical Neuroscience.


© 2026 CC BY-NC 4.0 | Basic and Clinical Neuroscience

Designed & Developed by : Yektaweb