{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cdc","dataset_id":"nm000338","associated_paper_doi":null,"authors":["Min-Ho Lee","O-Yeon Kwon","Yong-Jeong Kim","Hong-Kyung Kim","Young-Eun Lee","John Williamson","Siamac Fazli","Seong-Whan Lee"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.1093/gigascience/giz002","datatypes":["eeg"],"demographics":{"subjects_count":54,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000338","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"25cf013ddb75d0c8b6258aa8c332a209b5b00e363a861988f6401059b88cd2f9","license":"GPL-3.0","n_contributing_labs":null,"name":"Lee et al. 2019 (Motor Imagery) — EEG dataset and OpenBMI toolbox for three BCI paradigms: an investigation into BCI illiteracy","readme":"Lee2019-MI\n==========\nBMI/OpenBMI dataset for MI.\nDataset Overview\n----------------\n  Code: Lee2019-MI\n  Paradigm: imagery\n  DOI: 10.5524/100542\n  Subjects: 54\n  Sessions per subject: 2\n  Events: left_hand=2, right_hand=1\n  Trial interval: [0.0, 4.0] s\n  File format: MAT\nAcquisition\n-----------\n  Sampling rate: 1000.0 Hz\n  Number of channels: 62\n  Channel types: eeg=62, emg=4\n  Channel names: AF3, AF4, AF7, AF8, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EMG1, EMG2, EMG3, EMG4, F10, F3, F4, F7, F8, F9, FC1, FC2, FC3, FC4, FC5, FC6, FT10, FT9, FTT10h, FTT9h, Fp1, Fp2, Fz, O1, O2, Oz, P1, P2, P3, P4, P7, P8, PO10, PO3, PO4, PO9, POz, Pz, T7, T8, TP10, TP7, TP8, TP9, TPP10h, TPP8h, TPP9h, TTP7h\n  Montage: standard_1005\n  Hardware: BrainAmp\n  Reference: nasion\n  Ground: AFz\n  Sensor type: Ag/AgCl\n  Line frequency: 60.0 Hz\n  Impedance threshold: 10.0 kOhm\n  Auxiliary channels: EMG (4 ch)\nParticipants\n------------\n  Number of subjects: 54\n  Health status: healthy\n  Age: min=24, max=35\n  Gender distribution: female=25, male=29\n  Handedness: {'right': 50, 'left': 2, 'ambidexter': 2}\n  BCI experience: mixed\nExperimental Protocol\n---------------------\n  Paradigm: imagery\n  Number of classes: 2\n  Class labels: left_hand, right_hand\n  Trial duration: 4.0 s\n  Tasks: MI\n  Study design: Binary-class motor imagery (left/right hand grasping). Two sessions on different days, each with offline training and online test phases of 100 trials each.\n  Feedback type: visual\n  Stimulus type: arrow\n  Stimulus modalities: visual\n  Primary modality: visual\n  Synchronicity: synchronous\n  Mode: both\n  Training/test split: True\n  Instructions: Subjects performed the imagery task of grasping with the appropriate hand for 4 s when the right or left arrow appeared as a visual cue. First 3 s of each trial began with a black fixation cross to prepare subjects for the MI task. 
After each task, the screen remained blank for 6 s (± 1.5 s).\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  left_hand\n    ├─ Sensory-event\n    │  ├─ Experimental-stimulus\n    │  ├─ Visual-presentation\n    │  └─ Leftward, Arrow\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  right_hand\n    ├─ Sensory-event\n    │  ├─ Experimental-stimulus\n    │  ├─ Visual-presentation\n    │  └─ Rightward, Arrow\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: motor_imagery\n  Imagery tasks: left_hand, right_hand\n  Cue duration: 3.0 s\n  Imagery duration: 4.0 s\nData Structure\n--------------\n  Trials: 200\n  Trials per class: left_hand=100, right_hand=100\n  Trials context: 100 trials per session per phase (50 per class per phase). Training: 50 left + 50 right. Test: 50 left + 50 right. Total per session: 200.\nPreprocessing\n-------------\n  Data state: raw\n  Preprocessing applied: False\nSignal Processing\n-----------------\n  Classifiers: CSP+LDA, CSSP, FBCSP, BSSFO\n  Feature extraction: CSP, CSSP, FBCSP, BSSFO, log-variance\n  Frequency bands: mu=[8.0, 12.0] Hz; analyzed=[8.0, 30.0] Hz\n  Spatial filters: CSP, CSSP, FBCSP, BSSFO\nCross-Validation\n----------------\n  Method: train-test split\n  Evaluation type: within_session, cross_session\nPerformance (Original Study)\n----------------------------\n  Accuracy: 71.1%\n  Accuracy Std: 0.15\n  Illiteracy Rate: 53.7%\n  Session1 Accuracy: 70.0%\n  Session2 Accuracy: 72.2%\nBCI Application\n---------------\n  Applications: motor_control\n  Environment: laboratory\n  Online feedback: True\nTags\n----\n  Pathology: Healthy\n  Modality: Motor\n  Type: Research\nDocumentation\n-------------\n  Description: EEG dataset and OpenBMI toolbox for three BCI paradigms: an investigation into BCI illiteracy. Includes MI, ERP, and SSVEP paradigms with a large number of subjects over multiple sessions.\n  DOI: 10.1093/gigascience/giz002\n  License: GPL-3.0\n  Investigators: Min-Ho Lee, O-Yeon Kwon, Yong-Jeong Kim, Hong-Kyung Kim, Young-Eun Lee, John Williamson, Siamac Fazli, Seong-Whan Lee\n  Senior author: Seong-Whan Lee\n  Contact: sw.lee@korea.ac.kr\n  Institution: Korea University\n  Department: Department of Brain and Cognitive Engineering\n  Address: 145 Anam-ro, Seongbuk-gu, Seoul, 02841, Korea\n  Country: KR\n  Repository: GigaDB\n  Publication year: 2019\n  How to acknowledge: This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.\n  Keywords: EEG datasets, brain-computer interface, event-related potential, steady-state visually evoked potential, motor-imagery, OpenBMI toolbox, BCI illiteracy\nAbstract\n--------\nElectroencephalography (EEG)-based brain-computer interface (BCI) systems are mainly divided into three major paradigms: motor imagery (MI), event-related potential (ERP), and steady-state visually evoked potential (SSVEP). Here, we present a BCI dataset that includes the three major BCI paradigms with a large number of subjects over multiple sessions. 
In addition, information about the psychological and physiological conditions of BCI users was obtained using a questionnaire, and task-unrelated parameters such as resting state, artifacts, and electromyography of both arms were also recorded. We evaluated the decoding accuracies for the individual paradigms and determined performance variations across both subjects and sessions. Furthermore, we looked for more general, severe cases of BCI illiteracy than have been previously reported in the literature. Average decoding accuracies across all subjects and sessions were 71.1% (± 0.15), 96.7% (± 0.05), and 95.1% (± 0.09), and rates of BCI illiteracy were 53.7%, 11.1%, and 10.2% for MI, ERP, and SSVEP, respectively. Compared to the ERP and SSVEP paradigms, the MI paradigm exhibited large performance variations between both subjects and sessions. Furthermore, we found that 27.8% (15 out of 54) of users were universally BCI literate, i.e., they were able to proficiently perform all three paradigms. Interestingly, we found no universally illiterate BCI user, i.e., all participants were able to control at least one type of BCI system.\nMethodology\n-----------\nExperimental procedure: 54 healthy subjects participated in two sessions on different days. Each session consisted of three BCI paradigms performed sequentially: ERP speller (36 symbols, row-column presentation with face stimuli), MI task (binary left/right hand imagery), and SSVEP (four target frequencies: 5.45, 6.67, 8.57, 12 Hz). Each paradigm had offline training and online test phases. EEG recorded at 1000 Hz with 62 Ag/AgCl electrodes using BrainAmp amplifier, nasion-referenced, grounded to AFz. Impedance maintained below 10 kOhm. Subjects seated 60 cm from 21-inch LCD monitor. Questionnaires collected demographic, physiological, and psychological data. Artifact data (eye blinking, eye movements, teeth clenching, arm flexing) and resting state EEG also recorded. Total experiment duration: ~205 minutes per session.\nReferences\n----------\nLee, M. H., Kwon, O. Y., Kim, Y. J., Kim, H. K., Lee, Y. E., Williamson, J., … Lee, S. W. (2019). EEG dataset and OpenBMI toolbox for three BCI paradigms: An investigation into BCI illiteracy. GigaScience, 8(5), 1–16. https://doi.org/10.1093/gigascience/giz002\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["1","2"],"size_bytes":65269047953,"source":"openneuro","storage":{"backend":"s3","base":"s3://openneuro.org/nm000338","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-22T12:52:27.730485+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":216,"computed_title":"Lee et al. 2019 (Motor Imagery) — EEG dataset and OpenBMI toolbox for three BCI paradigms: an investigation into BCI illiteracy","nchans_counts":[{"val":66,"count":216}],"sfreq_counts":[{"val":1000.0,"count":216}],"stats_computed_at":"2026-04-22T23:16:00.314600+00:00","total_duration_s":329549.784,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"98233e1903e56494","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example by paradigm is the labeled “EEG Motor Movement/Imagery Dataset” (motor/imagery BCI task). That example maps a motor imagery paradigm to Type=“Motor” while keeping Modality=“Visual” because the task is cued by on-screen targets. This convention directly guides labeling here because Lee2019-MI is also a left/right hand motor imagery BCI dataset with visual arrow cues.","metadata_analysis":"Pathology (population): explicitly healthy: “Health status: healthy” and “54 healthy subjects participated in two sessions on different days.”\n\nModality (stimulus channel): explicitly visual cues: “Stimulus type: arrow”, “Stimulus modalities: visual”, and “Subjects performed the imagery task ... when the right or left arrow appeared as a visual cue.”\n\nType (research purpose): explicitly motor imagery / BCI: “Study design: Binary-class motor imagery (left/right hand grasping)”, “Tasks: MI”, and “Detected paradigm: motor_imagery”.","paper_abstract_analysis":"The included abstract reinforces that the dataset is a BCI dataset centered on motor imagery: “EEG-based brain-computer interface (BCI) systems ... motor imagery (MI) ... we present a BCI dataset ...” and reports MI performance/illiteracy, consistent with a Motor/BCI purpose rather than sensory perception.","evidence_alignment_check":"Pathology:\n- Metadata says: “Health status: healthy”, “54 healthy subjects”.\n- Few-shot pattern suggests: motor imagery datasets are typically healthy volunteers unless a disorder is stated.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: “Stimulus modalities: visual”, “Stimulus type: arrow”, “right or left arrow appeared as a visual cue.”\n- Few-shot pattern suggests: for motor imagery tasks with on-screen cues, Modality is labeled “Visual” (see motor movement/imagery example).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: “Binary-class motor imagery”, “Tasks: MI”, “Detected paradigm: motor_imagery”.\n- Few-shot pattern suggests: motor imagery/BCI paradigms map to Type=“Motor”.\n- Alignment: ALIGN.","decision_summary":"Pathology top-2:\n1) Healthy (winner) — supported by “Health status: healthy” and “54 healthy subjects participated...”.\n2) Unknown (runner-up) — only if health status were missing; not needed here.\nFinal: Healthy. 
Confidence justified by 2 explicit population quotes.\n\nModality top-2:\n1) Visual (winner) — “Stimulus modalities: visual”, “Stimulus type: arrow”, and “arrow appeared as a visual cue”.\n2) Motor (runner-up) — plausible if one incorrectly used effector/imagery as modality, but guidelines specify modality is stimulus channel.\nFinal: Visual. Confidence justified by 3 explicit visual-stimulus quotes plus few-shot convention match.\n\nType top-2:\n1) Motor (winner) — “Binary-class motor imagery”, “Tasks: MI”, “Detected paradigm: motor_imagery”.\n2) Perception (runner-up) — could apply if the study were primarily visual discrimination; however the task goal is imagining hand grasping for BCI control.\nFinal: Motor. Confidence justified by 3 explicit motor-imagery/MI paradigm quotes plus close few-shot match."}},"canonical_name":null,"name_confidence":0.78,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Lee2019_MI"}}
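
For downstream use, here is a minimal sketch of how a record with this envelope might be consumed. It is an illustration under assumptions, not part of the eegdash API: the local filename "nm000338.json" is hypothetical, and only field names visible in the payload above (success, data, demographics.subjects_count, sessions, total_files, size_bytes, sfreq_counts, total_duration_s) are relied on.

```python
import json

# Sketch: parse an eegdash-style response saved locally.
# ASSUMPTION: "nm000338.json" is a hypothetical filename holding the payload above.
with open("nm000338.json", encoding="utf-8") as f:
    payload = json.load(f)

if not payload.get("success"):
    raise RuntimeError("query reported failure")

rec = payload["data"]
print(rec["computed_title"])
print(f"subjects: {rec['demographics']['subjects_count']}")
print(f"sessions: {', '.join(rec['sessions'])}")
print(f"files: {rec['total_files']} ({rec['size_bytes'] / 1e9:.1f} GB)")

# sfreq_counts is a histogram of per-file sampling rates ({"val", "count"} pairs).
for entry in rec["sfreq_counts"]:
    print(f"{entry['count']} files at {entry['val']:g} Hz")

# total_duration_s is the summed recording time across all files, in seconds.
print(f"total recording time: {rec['total_duration_s'] / 3600:.1f} h")
```

Nothing in the sketch is specific to this dataset; any record sharing this response shape should print analogous summary lines.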