{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c72","dataset_id":"nm000137","associated_paper_doi":null,"authors":["Murat Kaya","Mustafa Kemal Binli","Erkan Ozbay","Hilmi Yanar","Yuriy Mishchenko"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":"10.82901/nemar.nm000137","datatypes":["eeg"],"demographics":{"subjects_count":7,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000137","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"3fc3a319757e3c89359c5ed747d2b91ec2984e3fcdb875c6c45eb00107672194","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Classical motor imagery dataset with left hand, right hand, and rest","readme":"[![DOI](https://img.shields.io/badge/DOI-10.82901%2Fnemar.nm000137-blue)](https://doi.org/10.82901/nemar.nm000137)\n# Classical motor imagery dataset with left hand, right hand, and rest\nClassical motor imagery dataset with left hand, right hand, and rest.\n## Dataset Overview\n- **Code**: Kaya2018\n- **Paradigm**: imagery\n- **DOI**: 10.1038/sdata.2018.211\n- **Subjects**: 7\n- **Sessions per subject**: 1\n- **Events**: left_hand=1, right_hand=2, passive=3\n- **Trial interval**: [0, 1] s\n- **File format**: MAT\n## Acquisition\n- **Sampling rate**: 200.0 Hz\n- **Number of channels**: 19\n- **Channel types**: eeg=19\n- **Channel names**: Fp1, Fp2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T3, T4, T5, T6, Fz, Cz, Pz\n- **Montage**: standard_1020\n- **Hardware**: Nihon Kohden EEG-1200\n- **Reference**: System 0V (0.55*(C3+C4))\n- **Ground**: A1, A2 (earlobes)\n- **Line frequency**: 50.0 Hz\n## Participants\n- **Number of subjects**: 7\n- **Health status**: healthy\n- **Age**: min=20, max=35\n- **Gender distribution**: male=5, female=2\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Task type**: left_right_hand\n- **Number of classes**: 3\n- **Class labels**: left_hand, right_hand, passive\n- **Trial duration**: 1.0 s\n- **Study design**: Classical left/right hand motor imagery with passive rest\n- **Feedback type**: none\n- **Stimulus type**: visual arrow cue\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  left_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n  passive\n    ├─ Sensory-event\n    └─ Label/passive\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: left_hand, right_hand, passive\n- **Cue duration**: 1.0 s\n## Data Structure\n- **Trials context**: Variable number of trials per session; 1s cue + 1.5-2.5s ITI\n## Preprocessing\n- **Data state**: raw\n## Signal Processing\n- **Classifiers**: SVM\n- **Feature extraction**: fourier_transform_amplitudes\n- **Frequency bands**: low_pass=[0.0, 5.0] Hz\n## Cross-Validation\n- **Method**: repeated_random_split\n- **Folds**: 5\n- **Evaluation type**: within_subject\n## BCI Application\n- **Environment**: lab\n- **Online 
feedback**: False\n## Tags\n- **Pathology**: healthy\n- **Modality**: motor\n- **Type**: imagery\n## Documentation\n- **DOI**: 10.1038/sdata.2018.211\n- **License**: CC-BY-4.0\n- **Investigators**: Murat Kaya, Mustafa Kemal Binli, Erkan Ozbay, Hilmi Yanar, Yuriy Mishchenko\n- **Senior author**: Yuriy Mishchenko\n- **Institution**: Mersin University\n- **Country**: TR\n- **Repository**: Figshare\n- **Data URL**: https://figshare.com/collections/A_large_electroencephalographic_motor_imagery_dataset_for_electroencephalographic_brain_computer_interfaces/3917698\n- **Publication year**: 2018\n- **Keywords**: EEG, motor imagery, brain-computer interface, BCI\n## References\nM. Kaya, M. K. Binli, E. Ozbay, H. Yanar, and Y. Mishchenko, \"A large electroencephalographic motor imagery dataset for electroencephalographic brain computer interfaces,\" Scientific Data, vol. 5, p. 180211, 2018. DOI: 10.1038/sdata.2018.211\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.4.3 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1","2"],"size_bytes":653722394,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000137","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:39.084698+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-17T13:47:27Z"},"total_files":17,"computed_title":"Classical motor imagery dataset with left hand, right hand, and rest","nchans_counts":[{"val":19,"count":17}],"sfreq_counts":[{"val":200.0,"count":17}],"stats_computed_at":"2026-05-01T13:49:34.645013+00:00","total_duration_s":56507.634999999995,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"c323c4f317b51378","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.85,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example: 'EEG Motor Movement/Imagery Dataset' (Schalk et al.). It uses a motor/imagery paradigm with visual cues (targets on screen) and is labeled Pathology=Healthy, Modality=Visual, Type=Motor. The current dataset is also a classical left/right hand motor imagery paradigm with a visual cue, so the same mapping convention applies: cue modality drives Modality=Visual, while the studied construct (motor imagery) drives Type=Motor.","metadata_analysis":"Key population/task/stimulus facts from metadata/readme include: (1) Population: \"Health status: healthy\" and also \"Tags\\n- **Pathology**: healthy\". (2) Task/construct: \"Classical left/right hand motor imagery with passive rest\" plus \"Paradigm: imagery\" and \"Detected paradigm: motor_imagery\". 
(3) Stimulus channel: \"Stimulus type: visual arrow cue\" plus \"Stimulus modalities: visual\" and \"Primary modality: visual\". (4) Event classes: \"Events: left_hand=1, right_hand=2, passive=3\" indicating left/right imagery and rest/passive trials.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says participants are \"healthy\" (\"Health status: healthy\"; \"Tags - Pathology: healthy\"). Few-shot pattern for motor imagery datasets typically uses Healthy unless a clinical cohort is stated. ALIGN.\nModality: Metadata says the stimulus is visual (\"Stimulus type: visual arrow cue\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\"). Few-shot convention (motor imagery paradigms with on-screen cues) labels Modality as Visual (see Schalk motor imagery example). ALIGN.\nType: Metadata indicates motor imagery is the primary construct (\"Classical left/right hand motor imagery\"; \"Detected paradigm: motor_imagery\"; events left_hand/right_hand). Few-shot convention labels imagery/execution studies as Type=Motor. ALIGN.","decision_summary":"Top-2 candidates and selection:\n- Pathology: (1) Healthy vs (2) Unknown. Healthy wins because metadata explicitly states \"Health status: healthy\" and repeats it in \"Tags - Pathology: healthy\" (plus non-clinical demographics). Alignment: aligned with few-shot motor imagery example. Confidence basis: 2+ explicit quotes + strong few-shot analog.\n- Modality: (1) Visual vs (2) Motor. Visual wins because Modality is defined by stimulus input channel, and metadata explicitly states \"Stimulus type: visual arrow cue\" and \"Primary modality: visual\". Motor is the response/imagery domain, handled under Type. Alignment: aligned with few-shot motor imagery example (Visual cues). Confidence basis: 3 explicit stimulus-modality quotes + few-shot analog.\n- Type: (1) Motor vs (2) Perception. Motor wins because the study design is \"left/right hand motor imagery\" (not sensory discrimination), and the detected paradigm is \"motor_imagery\" with left/right/passive classes. Alignment: aligned with few-shot motor imagery example (Type=Motor). Confidence basis: 3+ explicit motor-imagery quotes/features + few-shot analog."}},"canonical_name":null,"name_confidence":0.86,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Kaya2018"}}
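
The record above is a single eegdash API response, with the human-readable dataset card embedded as an escaped Markdown string under `data.readme`. A minimal sketch of pulling the headline fields back out of it with the Python standard library, assuming the response has been saved verbatim to a file named `nm000137.json` (the filename is an assumption, not part of the record):

```python
import json

# Load the saved API response; the filename is hypothetical.
with open("nm000137.json") as f:
    record = json.load(f)["data"]

print(record["dataset_id"])               # nm000137
print(record["nchans_counts"][0]["val"])  # 19 channels in all 17 files
print(record["sfreq_counts"][0]["val"])   # 200.0 Hz
print(record["sessions"])                 # ["0", "1", "2"]; the embedded readme says "Sessions per subject: 1"
print(f"{record['total_duration_s'] / 3600:.1f} h of EEG across {record['total_files']} files")

# The dataset card itself is one long escaped-Markdown string.
print(record["readme"][:200])
```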
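The record describes a BIDS-formatted EEG dataset (`bids_version` 1.9.0, `raw_key` of `dataset_description.json`, stored under `s3://nemar/nm000137`). One plausible way to read a single recording after downloading a local copy is via MNE-BIDS; this is a sketch only, and the local root, subject label, and session label are assumptions inferred from `data.sessions` and `data.tasks`, not values given by the record:

```python
from mne_bids import BIDSPath, read_raw_bids

# Hypothetical local copy of s3://nemar/nm000137; subject/session entities
# are assumed for illustration (task "imagery" matches data.tasks).
bids_path = BIDSPath(root="./nm000137", subject="1", session="0",
                     task="imagery", datatype="eeg")
raw = read_raw_bids(bids_path)
print(raw.info["sfreq"], len(raw.ch_names))  # expect 200.0 Hz and 19 channels
```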
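The readme's Signal Processing and Cross-Validation sections name the reference analysis: Fourier-transform amplitude features restricted to a 0-5 Hz band ("low_pass=[0.0, 5.0] Hz"), an SVM classifier, and five repeated random splits evaluated within subject. A minimal sketch of that pipeline with NumPy and scikit-learn; the epoch geometry (19 channels, 200 Hz, 1 s trials, three classes) follows the record, but the synthetic data, 80/20 split ratio, and linear kernel are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for one subject's epoched data:
# 120 trials x 19 channels x 200 samples (1 s at 200 Hz), 3 classes.
n_trials, n_chans, sfreq = 120, 19, 200
X_raw = rng.standard_normal((n_trials, n_chans, sfreq))
y = rng.integers(0, 3, n_trials)  # left_hand=0, right_hand=1, passive=2

# Fourier-amplitude features restricted to the 0-5 Hz band named in the record.
freqs = np.fft.rfftfreq(sfreq, d=1.0 / sfreq)
band = freqs <= 5.0
X = np.abs(np.fft.rfft(X_raw, axis=-1))[..., band].reshape(n_trials, -1)

# Within-subject evaluation with 5 repeated random splits, as described.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"mean accuracy: {scores.mean():.2f} (chance is ~0.33 on random data)")
```

For 1 s epochs at 200 Hz the FFT bin width is 1 Hz, so the 0-5 Hz band keeps six amplitude bins per channel, or 114 features across the 19 channels.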