Human intention inference with a large language model can enhance brain-computer interface control: A proof-of-concept study
Shiinoki, S.; Iwama, S.; Ushiba, J.
Abstract
Brain-computer interface (BCI) control enables direct communication between the brain and external devices. However, BCI control accuracy with intention inferred from non-invasive modalities is limited, even when data-driven approaches are used to tailor neural decoders. In this study, we propose a knowledge-driven framework for inferring human intention that leverages large language models (LLMs) as an alternative to conventional data-driven approaches. We developed a neural decoder that integrates neural and oculomotor signals with contextual information using an LLM agent, and tested its feasibility in a real-world BCI task controlling a computer application. The LLM-based decoder achieved an average accuracy of 79% among responders (11 of 20 participants) in inferring the intention to select arbitrary posts on a social networking service. Ablation analyses revealed that the integration of contextual information, multimodal signals, and empirical knowledge is critical for decoding accuracy. This study demonstrates the feasibility of an LLM-based neural decoding framework, paving the way for improved performance in BCI-driven operation of external devices by patients with disabilities.
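To make the described framework concrete, the minimal Python sketch below illustrates one plausible way an LLM agent could fuse neural features, oculomotor features, and on-screen context into a single intention query. Every name here (NeuralFeatures, GazeFeatures, build_prompt, call_llm) and every feature choice is an illustrative assumption for exposition, not the authors' implementation; the LLM call is a stand-in placeholder so the sketch runs end to end.

```python
"""Hypothetical sketch of an LLM-based intention decoder.

All identifiers and feature definitions are assumptions made for
illustration; they do not reproduce the study's actual decoder.
"""
from dataclasses import dataclass


@dataclass
class NeuralFeatures:
    # Assumed example: event-related desynchronization (ERD) magnitude
    # from sensorimotor EEG, expressed as a relative power decrease.
    erd_magnitude: float


@dataclass
class GazeFeatures:
    fixated_post_id: str  # post currently under the user's gaze
    dwell_time_ms: float  # fixation duration on that post


def build_prompt(neural: NeuralFeatures, gaze: GazeFeatures,
                 context: list) -> str:
    """Fuse multimodal signals and visible context into one LLM query."""
    feed = "\n".join("- " + post for post in context)
    return (
        "You are a decoder inferring whether a user intends to select "
        "a post in a social networking feed.\n"
        f"Visible feed:\n{feed}\n"
        f"Gaze: fixating post '{gaze.fixated_post_id}' "
        f"for {gaze.dwell_time_ms:.0f} ms.\n"
        f"EEG: sensorimotor ERD magnitude {neural.erd_magnitude:.2f} "
        "(higher suggests stronger motor intent).\n"
        "Answer 'select' or 'no-select' with a one-line rationale."
    )


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (assumption, not a real API)."""
    # A trivial stand-in response so the sketch is runnable offline.
    return "select: long dwell plus strong ERD suggests a deliberate choice."


if __name__ == "__main__":
    decision = call_llm(
        build_prompt(
            NeuralFeatures(erd_magnitude=0.62),
            GazeFeatures(fixated_post_id="post_3", dwell_time_ms=850.0),
            context=[
                "post_1: weather update",
                "post_2: concert photos",
                "post_3: job announcement",
            ],
        )
    )
    print(decision)
```

In a design like this, the LLM is prompted with structured evidence rather than raw signals, which is one way the abstract's "knowledge-driven" framing could be realized: empirical knowledge about signal meaning is encoded in the prompt, and the ablations reported above correspond to removing the context, gaze, or EEG fields from that query.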