{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cc0","dataset_id":"nm000245","associated_paper_doi":null,"authors":["Hohyun Cho","Minkyu Ahn","Sangtae Ahn","Moonyoung Kwon","Sung Chan Jun"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":52,"ages":[24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24],"age_min":24,"age_max":24,"age_mean":24.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000245","osf_url":null,"github_url":null,"paper_url":null},"funding":["GIST Research Institute (GRI) grant funded by the GIST in 2017","Institute for Information & Communication Technology Promotion (IITP) grant funded by the Korea government (No. 2017-0-00451)"],"ingestion_fingerprint":"32eebd02a05201ad8e009e603151a70dacc13bdf25e6e08e945de10311e5df8e","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Motor Imagery dataset from Cho et al 2017","readme":"# Motor Imagery dataset from Cho et al 2017\nMotor Imagery dataset from Cho et al 2017.\n## Dataset Overview\n- **Code**: Cho2017\n- **Paradigm**: imagery\n- **DOI**: 10.5524/100295\n- **Subjects**: 52\n- **Sessions per subject**: 1\n- **Events**: left_hand=1, right_hand=2\n- **Trial interval**: [0, 3] s\n- **File format**: .mat (MATLAB)\n## Acquisition\n- **Sampling rate**: 512.0 Hz\n- **Number of channels**: 68\n- **Channel types**: eeg=64, emg=4\n- **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EMG1, EMG2, EMG3, EMG4, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, Iz, O1, O2, Oz, P1, P10, P2, P3, P4, P5, P6, P7, P8, P9, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP7, TP8\n- **Montage**: standard_1005\n- **Hardware**: Biosemi ActiveTwo\n- **Software**: BCI2000 3.0.2\n- **Reference**: CMS/DRL\n- **Sensor type**: active electrodes\n- **Line frequency**: 60.0 Hz\n- **Electrode type**: active\n- **Auxiliary channels**: EMG (4 ch)\n## Participants\n- **Number of subjects**: 52\n- **Health status**: healthy\n- **Age**: mean=24.8, std=3.86\n- **Gender distribution**: female=19, male=33\n- **Handedness**: {'right': 50, 'both': 2}\n- **BCI experience**: collected via questionnaire (0 = no, number = how many times)\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 2\n- **Class labels**: left_hand, right_hand\n- **Trial duration**: 3.0 s\n- **Study design**: motor imagery\n- **Feedback type**: none\n- **Stimulus type**: visual instruction\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Mode**: offline\n- **Instructions**: Subjects were asked to imagine kinesthetic finger movements (touching index, middle, ring, and little finger to thumb within 3 seconds)\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  left_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n```\n## Paradigm-Specific Parameters\n- 
**Detected paradigm**: motor_imagery\n- **Imagery tasks**: left_hand, right_hand\n- **Cue duration**: 3.0 s\n- **Imagery duration**: 3.0 s\n## Data Structure\n- **Trials**: 100 or 120 per class (200-240 total)\n- **Blocks per session**: 5 or 6\n- **Trials context**: per_class\n## Preprocessing\n- **Data state**: raw\n- **Preprocessing applied**: False\n- **Notes**: Bad trial indices provided separately in .mat files (bad_trial_indices); raw EEG data is unfiltered\n## Signal Processing\n- **Classifiers**: FLDA\n- **Feature extraction**: CSP, ERD, ERS\n- **Frequency bands**: alpha=[8.0, 14.0] Hz; mu=[8, 12] Hz; analyzed=[8.0, 30.0] Hz\n## Cross-Validation\n- **Method**: random subset selection\n- **Folds**: 10\n- **Evaluation type**: within_session\n## Performance (Original Study)\n- **Accuracy**: 67.46%\n- **Accuracy Std**: 13.17\n- **Discriminative Subjects**: 38\n- **Total Subjects**: 50\n## BCI Application\n- **Applications**: motor_control\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Research\n## Documentation\n- **Description**: EEG datasets for motor imagery brain-computer interface from 52 subjects with psychological and physiological questionnaire, EMG datasets, 3D EEG electrode locations, and non-task-related states\n- **DOI**: 10.5524/100295\n- **Associated paper DOI**: 10.1093/gigascience/gix034\n- **License**: CC-BY-4.0\n- **Investigators**: Hohyun Cho, Minkyu Ahn, Sangtae Ahn, Moonyoung Kwon, Sung Chan Jun\n- **Senior author**: Sung Chan Jun\n- **Contact**: scjun@gist.ac.kr; TEL: +82-62-715-2216; FAX: +82-62-715-2204\n- **Institution**: Gwangju Institute of Science and Technology\n- **Department**: School of Electrical Engineering and Computer Science\n- **Address**: 123 Cheomdangwagi-ro, Buk-gu, Gwangju 61005, Korea\n- **Country**: KR\n- **Repository**: GigaDB\n- **Data URL**: http://dx.doi.org/10.5524/100295\n- **Publication year**: 2017\n- **Funding**: GIST Research Institute (GRI) grant funded by the GIST in 2017; Institute for Information & Communication Technology Promotion (IITP) grant funded by the Korea government (No. 2017-0-00451)\n- **Ethics approval**: Institutional Review Board of Gwangju Institute of Science and Technology\n- **Keywords**: motor imagery, EEG, brain-computer interface, performance variation, subject-to-subject transfer\n## Abstract\nMotor imagery (MI)-based brain-computer interface (BCI) dataset from 52 subjects with EEG, EMG, psychological and physiological questionnaire, 3D EEG electrode locations, and non-task-related states. The dataset includes 100 or 120 trials per class (left/right hand) with validation showing 73.08% (38 subjects) had discriminative information. Mean accuracy of 67.46% (±13.17%) over 50 subjects (excluding 2 bad subjects). Dataset stored in GigaDB and validated using bad trial percentage, ERD/ERS analysis, and classification analysis.\n## Methodology\nSubjects performed motor imagery of left and right hand finger movements (kinesthetic imagery). Each trial consisted of: 2 seconds fixation cross, 3 seconds instruction (left/right hand), followed by random 4.1-4.8 second break. Five or six runs performed with feedback after each run. Additional data collected: 6 types of non-task-related data (eye blinking, eyeball movements, head movement, jaw clenching, resting state) and 20 trials of real hand movement per class. 3D electrode coordinates measured with Polhemus Fastrak digitizer. 
Experiments conducted August-September 2011 in four time slots (9:30-12:00, 12:30-15:00, 15:30-18:00, 19:00-21:30) with background noise 37-39 dB.\n## References\nCho, H., Ahn, M., Ahn, S., Kwon, M. and Jun, S.C., 2017. EEG datasets for motor imagery brain computer interface. GigaScience. https://doi.org/10.1093/gigascience/gix034\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":7243977868,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000245","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:09:37.137235+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-25T22:40:40Z"},"total_files":52,"computed_title":"Motor Imagery dataset from Cho et al 2017","nchans_counts":[{"val":64,"count":52}],"sfreq_counts":[{"val":512.0,"count":52}],"stats_computed_at":"2026-05-01T13:49:34.646042+00:00","total_duration_s":73639.8984375,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"0a1fa2ac80212e1e","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example: \"EEG Motor Movement/Imagery Dataset\" (Schalk et al.). It shows the catalog convention for motor imagery datasets with visually presented cues: Modality is labeled by stimulus channel (Visual) while Type captures the research construct (Motor). 
That example is labeled Pathology=Healthy, Modality=Visual, Type=Motor, matching the present dataset’s description of left/right hand motor imagery prompted by visual instruction cues in healthy volunteers.","metadata_analysis":"Pathology evidence (recruited population): \"Health status: healthy\" and also the embedded tag line \"Tags\\n- **Pathology**: Healthy\"; additionally \"Participants\\n- **Number of subjects**: 52\" with no clinical condition described.\n\nModality evidence (stimulus/input channel): explicitly states \"Stimulus type: visual instruction\", \"Stimulus modalities: visual\", and \"Primary modality: visual\".\n\nType evidence (construct/purpose): multiple explicit statements of motor imagery focus: \"Paradigm: imagery\", \"Study design: motor imagery\", and \"Subjects were asked to imagine kinesthetic finger movements\"; also events are \"left_hand\" and \"right_hand\" consistent with MI class labels.","paper_abstract_analysis":"Useful information is included in the provided abstract text in the README: \"Motor imagery (MI)-based brain-computer interface (BCI) dataset\" and \"The dataset includes 100 or 120 trials per class (left/right hand)\", reinforcing that the purpose is motor imagery/BCI rather than (e.g.) perception or memory.","evidence_alignment_check":"Pathology:\n1) Metadata says: \"Health status: healthy\" (and \"Tags - Pathology: Healthy\").\n2) Few-shot pattern suggests: MI benchmark datasets with volunteers are labeled Healthy.\n3) ALIGN.\n\nModality:\n1) Metadata says: \"Stimulus type: visual instruction\", \"Stimulus modalities: visual\", \"Primary modality: visual\".\n2) Few-shot pattern suggests: MI tasks with screen cues are labeled Visual (see Schalk MI example labeled Modality=Visual).\n3) ALIGN.\n\nType:\n1) Metadata says: \"Study design: motor imagery\" and \"Subjects were asked to imagine kinesthetic finger movements\".\n2) Few-shot pattern suggests: MI datasets are labeled Type=Motor.\n3) ALIGN.","decision_summary":"Pathology top-2 candidates: (1) Healthy (supported by \"Health status: healthy\"; \"Tags - Pathology: Healthy\"; no patient recruitment described) vs (2) Unknown (would apply if health status were absent). Winner: Healthy. Alignment: aligned with few-shot MI example convention.\n\nModality top-2 candidates: (1) Visual (supported by \"Stimulus type: visual instruction\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\") vs (2) Motor (could be tempting given motor imagery, but modality is defined by stimulus channel, not the imagined movement). Winner: Visual. Alignment: aligned with few-shot MI example (Visual modality + Motor type).\n\nType top-2 candidates: (1) Motor (supported by \"Study design: motor imagery\"; \"Paradigm: imagery\"; \"imagine kinesthetic finger movements\") vs (2) Perception/Other (not supported; no sensory discrimination primary aim stated). Winner: Motor. Alignment: aligned with few-shot MI example.\n\nConfidence justification: Each winning label has 3+ explicit metadata quotes plus a strong few-shot analog (motor movement/imagery dataset)."}},"canonical_name":null,"name_confidence":0.78,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Cho2017"}}
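The README embedded above states the record was generated by MOABB 1.5.0 and gives the dataset code as `Cho2017`. For orientation, below is a minimal, hedged sketch of how these recordings could be accessed through MOABB's public dataset/paradigm API; it assumes `moabb` is installed, that `Cho2017` and `LeftRightImagery` are available under their usual import paths, and that `LeftRightImagery` accepts `fmin`/`fmax` band limits (chosen here to match the 8-30 Hz analysis band listed under "Signal Processing"). Treat it as an illustration, not a verified loader for this exact database entry.

```python
# Sketch: pulling the Cho et al. 2017 motor imagery data via MOABB
# (the toolkit credited in the README footer). The first call downloads
# the GigaDB .mat files to MOABB's local data directory.
from moabb.datasets import Cho2017
from moabb.paradigms import LeftRightImagery

dataset = Cho2017()                            # 52 subjects, left/right hand imagery
paradigm = LeftRightImagery(fmin=8, fmax=30)   # assumed band-pass matching the 8-30 Hz analysis band

# Epoch subject 1 into (n_trials, n_channels, n_samples); `labels` holds
# "left_hand" / "right_hand" and `meta` carries subject/session metadata.
X, labels, meta = paradigm.get_data(dataset=dataset, subjects=[1])
print(X.shape, sorted(set(labels)))
```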