{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c87","dataset_id":"nm000167","associated_paper_doi":null,"authors":["Xuelin Ma","Shuang Qiu","Changde Du","Junfeng Xing","Huiguang He"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":25,"ages":[26,25,27,26,29,26,25,24,26,26,25,27,28,25,27,26,23,25,24,25,25,24,26,24,24],"age_min":23,"age_max":29,"age_mean":25.52,"species":null,"sex_distribution":{"m":18,"f":7},"handedness_distribution":{"r":25}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000167","osf_url":null,"github_url":null,"paper_url":null},"funding":["National Key Research and Development Plan of China (No. 2017YFB1002502)","National Natural Science Foundation of China (No. 61976209)","National Natural Science Foundation of China (No. 61906188)"],"ingestion_fingerprint":"b87121670ebab530dd1f57ec2cfa74ac9f08fcdabb615c60844cdd80bae28ea8","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Motor imagery dataset from Ma et al. 2020","readme":"# Motor imagery dataset from Ma et al. 2020\nMotor imagery dataset from Ma et al. 2020.\n## Dataset Overview\n- **Code**: Ma2020\n- **Paradigm**: imagery\n- **DOI**: 10.1038/s41597-020-0535-2\n- **Subjects**: 25\n- **Sessions per subject**: 15\n- **Events**: right_hand=1, right_elbow=2\n- **Trial interval**: [0, 4] s\n- **File format**: CNT\n## Acquisition\n- **Sampling rate**: 1000.0 Hz\n- **Number of channels**: 62\n- **Channel types**: eeg=62\n- **Channel names**: Fp1, Fpz, Fp2, AF3, AF4, F7, F5, F3, F1, Fz, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCz, FC2, FC4, FC6, FT8, T7, C5, C3, C1, Cz, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPz, CP2, CP4, CP6, TP8, P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO7, PO5, PO3, POz, PO4, PO6, PO8, CB1, O1, Oz, O2, CB2\n- **Montage**: standard_1005\n- **Hardware**: Neuroscan SynAmps2\n- **Ground**: AFz\n- **Line frequency**: 50.0 Hz\n- **Impedance threshold**: 5 kOhm\n- **Auxiliary channels**: EOG (2 ch, horizontal, vertical), M2\n## Participants\n- **Number of subjects**: 25\n- **Health status**: healthy\n- **Age**: mean=25.56, min=23, max=29\n- **Gender distribution**: male=18, female=7\n- **Handedness**: {'right': 25}\n- **BCI experience**: naive\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Task type**: motor_imagery_same_limb\n- **Number of classes**: 2\n- **Class labels**: right_hand, right_elbow\n- **Trial duration**: 4.0 s\n- **Feedback type**: none\n- **Stimulus type**: visual cue\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Training/test split**: False\n- **Instructions**: Subjects were asked to concentrate on performing the indicated motor imagery task (right hand or right elbow) using kinesthetic, not visual, motor imagery while avoiding any motion during imagination.\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n  right_elbow\n    ├─ Sensory-event\n    └─ Label/right_elbow\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: right_hand, right_elbow\n- **Cue duration**: 1.0 s\n- **Imagery duration**: 4.0 s\n## Data Structure\n- **Trials**: 600\n- **Trials per 
class**: right_hand=300, right_elbow=300\n- **Blocks per session**: 15\n- **Trials context**: 3 days x 5 MI sessions/day = 15 sessions, 40 trials/session (20 hand + 20 elbow)\n## Signal Processing\n- **Classifiers**: FBCSP+SVM\n- **Feature extraction**: FBCSP\n- **Frequency bands**: alpha=[8.0, 13.0] Hz; beta=[20.0, 25.0] Hz\n- **Spatial filters**: CAR, FBCSP\n## Cross-Validation\n- **Method**: 5-fold\n- **Folds**: 5\n- **Evaluation type**: within_subject\n## BCI Application\n- **Applications**: motor_rehabilitation, prosthetic_control\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: healthy\n- **Modality**: motor\n- **Type**: imagery\n## Documentation\n- **DOI**: 10.1038/s41597-020-0535-2\n- **License**: CC-BY-4.0\n- **Investigators**: Xuelin Ma, Shuang Qiu, Changde Du, Junfeng Xing, Huiguang He\n- **Senior author**: Huiguang He\n- **Institution**: Chinese Academy of Sciences\n- **Department**: Institute of Automation\n- **Country**: CN\n- **Repository**: Harvard Dataverse\n- **Data URL**: https://doi.org/10.7910/DVN/RBN3XG\n- **Publication year**: 2020\n- **Funding**: National Key Research and Development Plan of China (No. 2017YFB1002502); National Natural Science Foundation of China (No. 61976209); National Natural Science Foundation of China (No. 61906188)\n- **Ethics approval**: Ethics Committee of the Institute of Automation, Chinese Academy of Sciences\n- **Keywords**: motor imagery, EEG, BCI, same limb, hand, elbow\n## References\nMa, X., Qiu, S., Du, C., Xing, J. and He, H. (2020). Multi-channel EEG recording during motor imagery of different joints from the same limb. Scientific Data, 7, 191. https://doi.org/10.1038/s41597-020-0535-2\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1","10","11","12","13","14","2","3","4","5","6","7","8","9"],"size_bytes":24091691149,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000167","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:45.721354+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-02T21:18:33Z"},"total_files":375,"computed_title":"Motor imagery dataset from Ma et al. 
2020","nchans_counts":[{"val":64,"count":225},{"val":62,"count":150}],"sfreq_counts":[{"val":1000.0,"count":375}],"stats_computed_at":"2026-05-01T13:49:34.645311+00:00","total_duration_s":126737.265,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"3c669a554bf65ef2","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example by paradigm is the labeled dataset \"EEG Motor Movement/Imagery Dataset\" (healthy volunteers performing motor/imagery tasks with visual targets). That example maps motor imagery paradigms to Type=Motor. For Modality, EEG motor imagery datasets often include visual target/cue presentation; in this dataset the metadata explicitly states the cue is visual, so Modality should follow the stimulus channel convention (Visual), not the response/imagery domain. This matches the catalog rule that Modality is based on stimulus type, not the intended action.","metadata_analysis":"Pathology/population: explicit healthy cohort: \"Health status: healthy\" and also \"Subjects: 25\" with \"BCI experience: naive\".\nStimulus modality: explicit visual cueing: \"Stimulus type: visual cue\" plus \"Stimulus modalities: visual\" and \"Primary modality: visual\".\nTask/type: explicit motor imagery construct: \"Paradigm: imagery\" and \"Task type: motor_imagery_same_limb\" with class labels \"right_hand, right_elbow\" and instruction: \"performing the indicated motor imagery task (right hand or right elbow) using kinesthetic... motor imagery\".","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n1) Metadata says: \"Health status: healthy\".\n2) Few-shot pattern suggests: motor imagery benchmark datasets typically are Healthy unless a patient group is stated.\n3) ALIGN.\n\nModality:\n1) Metadata says: \"Stimulus type: visual cue\" and \"Primary modality: visual\".\n2) Few-shot pattern suggests: for motor imagery tasks, Modality is assigned by presented cue modality (often Visual), not by motor domain.\n3) ALIGN.\n\nType:\n1) Metadata says: \"Task type: motor_imagery_same_limb\" and instructions to perform \"motor imagery\" of \"right hand or right elbow\".\n2) Few-shot pattern suggests: motor imagery/movement paradigms map to Type=Motor.\n3) ALIGN.","decision_summary":"Top-2 candidates per category with head-to-head selection:\n\nPathology candidates: (1) Healthy vs (2) Unknown.\n- Healthy evidence: \"Health status: healthy\"; age/gender/handedness given for typical non-clinical cohort.\n- Unknown evidence: none.\nWinner: Healthy. Alignment: aligned with few-shot motor imagery example convention.\n\nModality candidates: (1) Visual vs (2) Motor.\n- Visual evidence: \"Stimulus type: visual cue\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\".\n- Motor evidence: task is motor imagery (right hand/right elbow), but this is the cognitive domain/intent rather than stimulus input.\nWinner: Visual (stimulus-channel rule). Alignment: aligned with few-shot convention and explicit metadata.\n\nType candidates: (1) Motor vs (2) Perception.\n- Motor evidence: \"Task type: motor_imagery_same_limb\"; class labels \"right_hand, right_elbow\"; instruction to perform \"motor imagery\".\n- Perception evidence: visual cues exist, but the study purpose is not visual perception/discrimination.\nWinner: Motor. 
Alignment: aligned with few-shot motor imagery dataset labeling.\n\nConfidence justification: multiple explicit metadata quotes support each chosen label (not just inference), and the few-shot analog strongly matches the paradigm for Type."}},"canonical_name":null,"name_confidence":0.74,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Ma2020"}}
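The payload above is plain JSON, so its summary figures can be checked directly against the raw fields. A minimal sketch, assuming the response has been saved locally as `record.json` (a hypothetical filename); every field name used here appears in the payload itself:

```python
import json
from statistics import mean

# Load the API payload shown above (assumed saved locally as record.json).
with open("record.json") as f:
    record = json.load(f)

data = record["data"]
demo = data["demographics"]

# Recompute the mean age from the raw ages and check the stored summary.
assert abs(mean(demo["ages"]) - demo["age_mean"]) < 0.01   # both are 25.52

# Derive human-readable size and duration from the raw byte/second counts.
size_gib = data["size_bytes"] / 2**30       # ~22.4 GiB
hours = data["total_duration_s"] / 3600     # ~35.2 h of recordings
print(f"{data['name']}: {demo['subjects_count']} subjects, "
      f"{len(data['sessions'])} sessions, {size_gib:.1f} GiB, {hours:.1f} h")
```

On this record the script prints 25 subjects, 15 sessions, roughly 22.4 GiB, and roughly 35.2 hours of EEG.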
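The README's Signal Processing and Cross-Validation sections name an FBCSP+SVM pipeline over the listed alpha ([8, 13] Hz) and beta ([20, 25] Hz) bands with 5-fold within-subject evaluation. Below is a sketch of that kind of filter-bank CSP pipeline using MNE-Python and scikit-learn; it is a generic reconstruction from the README's parameters, not the authors' code, and `FilterBankCSP` is a helper class introduced here for illustration:

```python
# Sketch of an FBCSP+SVM pipeline matching the README's parameters.
# Assumes MNE-Python and scikit-learn; FilterBankCSP is not a library class.
import numpy as np
from mne.decoding import CSP
from mne.filter import filter_data
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

SFREQ = 1000.0                          # sampling rate from the README
BANDS = [(8.0, 13.0), (20.0, 25.0)]     # alpha and beta bands from the README


class FilterBankCSP(BaseEstimator, TransformerMixin):
    """Fit one CSP per frequency band and concatenate log-variance features."""

    def __init__(self, bands=BANDS, sfreq=SFREQ, n_components=4):
        self.bands = bands
        self.sfreq = sfreq
        self.n_components = n_components

    def _filter(self, X, lo, hi):
        # Band-pass each epoch; filter_data filters along the last axis.
        return filter_data(X.astype(np.float64), self.sfreq, lo, hi,
                           verbose=False)

    def fit(self, X, y):
        # One spatial filter set per band, trained on band-limited epochs.
        self.csps_ = [
            CSP(n_components=self.n_components, log=True)
            .fit(self._filter(X, lo, hi), y)
            for lo, hi in self.bands
        ]
        return self

    def transform(self, X):
        # Stack per-band CSP features into a single feature vector per trial.
        return np.concatenate(
            [csp.transform(self._filter(X, lo, hi))
             for (lo, hi), csp in zip(self.bands, self.csps_)], axis=1)


# Synthetic stand-in for one session: 40 trials x 62 channels x 4 s epochs
# (20 right_hand=1, 20 right_elbow=2), mirroring the trial structure above.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 62, int(4 * SFREQ)))
y = np.repeat([1, 2], 20)

clf = make_pipeline(FilterBankCSP(), StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold within-subject, as documented
print(f"mean CV accuracy: {scores.mean():.2f}")  # ~chance on random data
```

On real epochs the synthetic block would be replaced by data loaded from the CNT/BIDS recordings; the README's CAR spatial filter would also normally be applied before CSP, which is omitted here for brevity.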