{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c83","dataset_id":"nm000160","associated_paper_doi":null,"authors":["Weibo Yi","Jiaming Chen","Dan Wang","Xinkang Hu","Meng Xu","Fangda Li","Shuhan Wu","Jin Qian"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":18,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":{"r":18}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000160","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"f0faa5f13694fefeadf88f330dbafb55387b3181488e0884b2bac30633f59b6b","license":"CC-BY-NC-ND-4.0","n_contributing_labs":null,"name":"Multi-joint upper-limb MI dataset from Yi et al. 2025","readme":"# Multi-joint upper-limb MI dataset from Yi et al. 2025\nMulti-joint upper-limb MI dataset from Yi et al. 2025.\n## Dataset Overview\n- **Code**: Yi2025\n- **Paradigm**: imagery\n- **DOI**: 10.1038/s41597-025-05286-0\n- **Subjects**: 18\n- **Sessions per subject**: 1\n- **Events**: hand_open_close=1, wrist_flex_ext=2, wrist_abd_add=3, elbow_pron_sup=4, elbow_flex_ext=5, shoulder_pron_sup=6, shoulder_abd_add=7, shoulder_flex_ext=8\n- **Trial interval**: [0, 4] s\n- **Runs per session**: 8\n- **File format**: CNT\n## Acquisition\n- **Sampling rate**: 1000.0 Hz\n- **Number of channels**: 62\n- **Channel types**: eeg=62\n- **Channel names**: Fp1, Fpz, Fp2, AF3, AF4, F7, F5, F3, F1, Fz, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCz, FC2, FC4, FC6, FT8, T7, C5, C3, C1, Cz, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPz, CP2, CP4, CP6, TP8, P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO7, PO5, PO3, POz, PO4, PO6, PO8, CB1, O1, Oz, O2, CB2\n- **Montage**: standard_1005\n- **Hardware**: Neuroscan SynAmps2\n- **Reference**: left mastoid (M1)\n- **Line frequency**: 50.0 Hz\n## Participants\n- **Number of subjects**: 18\n- **Health status**: healthy\n- **Age**: min=22, max=27\n- **Gender distribution**: female=10, male=8\n- **Handedness**: right\n- **BCI experience**: naive\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 8\n- **Class labels**: hand_open_close, wrist_flex_ext, wrist_abd_add, elbow_pron_sup, elbow_flex_ext, shoulder_pron_sup, shoulder_abd_add, shoulder_flex_ext\n- **Trial duration**: 4.0 s\n- **Study design**: 8-class multi-joint upper-limb MI. 
8 blocks of 40 trials (5 per class), 320 total trials per subject.\n- **Feedback type**: none\n- **Stimulus type**: video + text\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: cue-based\n- **Mode**: offline\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  hand_open_close\n    ├─ Sensory-event\n    └─ Label/hand_open_close\n  wrist_flex_ext\n    ├─ Sensory-event\n    └─ Label/wrist_flex_ext\n  wrist_abd_add\n    ├─ Sensory-event\n    └─ Label/wrist_abd_add\n  elbow_pron_sup\n    ├─ Sensory-event\n    └─ Label/elbow_pron_sup\n  elbow_flex_ext\n    ├─ Sensory-event\n    └─ Label/elbow_flex_ext\n  shoulder_pron_sup\n    ├─ Sensory-event\n    └─ Label/shoulder_pron_sup\n  shoulder_abd_add\n    ├─ Sensory-event\n    └─ Label/shoulder_abd_add\n  shoulder_flex_ext\n    ├─ Sensory-event\n    └─ Label/shoulder_flex_ext\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: hand_open_close, wrist_flex_ext, wrist_abd_add, elbow_pron_sup, elbow_flex_ext, shoulder_pron_sup, shoulder_abd_add, shoulder_flex_ext\n- **Cue duration**: 2.0 s\n- **Imagery duration**: 4.0 s\n## Data Structure\n- **Trials**: 320\n- **Trials per class**: hand_open_close=40, wrist_flex_ext=40, wrist_abd_add=40, elbow_pron_sup=40, elbow_flex_ext=40, shoulder_pron_sup=40, shoulder_abd_add=40, shoulder_flex_ext=40\n- **Blocks per session**: 8\n- **Trials context**: 8 blocks x 40 trials (5 per class x 8 classes)\n## Signal Processing\n- **Classifiers**: ShallowConvNet\n- **Feature extraction**: ERSP\n- **Frequency bands**: alpha=[8.0, 13.0] Hz; beta=[13.0, 30.0] Hz; bandpass=[4.0, 40.0] Hz\n- **Spatial filters**: CAR\n## Cross-Validation\n- **Method**: 5-fold\n- **Folds**: 5\n- **Evaluation type**: within_subject\n## BCI Application\n- **Applications**: rehabilitation\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Motor Imagery\n## Documentation\n- **DOI**: 10.1038/s41597-025-05286-0\n- **License**: CC-BY-NC-ND-4.0\n- **Investigators**: Weibo Yi, Jiaming Chen, Dan Wang, Xinkang Hu, Meng Xu, Fangda Li, Shuhan Wu, Jin Qian\n- **Institution**: Beijing University of Technology\n- **Country**: CN\n- **Data URL**: https://figshare.com/articles/dataset/Data/24123303\n- **Publication year**: 2025\n## References\nYi, W., Chen, J., Wang, D., et al. (2025). A multi-modal dataset of EEG and fNIRS for motor imagery of multi-types of joints from unilateral upper limb. Scientific Data, 12, 953. https://doi.org/10.1038/s41597-025-05286-0\nNotes\n.. versionadded:: 1.2.0\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
"recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":21794532522,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000160","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:44.772351+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-02T21:18:29Z"},"total_files":141,"computed_title":"Multi-joint upper-limb MI dataset from Yi et al. 2025","nchans_counts":[{"val":62,"count":141}],"sfreq_counts":[{"val":1000.0,"count":141}],"stats_computed_at":"2026-05-01T13:49:34.645255+00:00","total_duration_s":116937.219,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"59fb222791f0889f","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Closest few-shot match is the \"EEG Motor Movement/Imagery Dataset\" example (Schalk et al.), which is motor execution/imagery with visual cues and was labeled Pathology=Healthy, Modality=Visual, Type=Motor. The current dataset is also a cue-based motor imagery paradigm (upper-limb joint imagery) with visually presented cues (\"video + text\"), so the same mapping convention applies. Other few-shot examples emphasize the Modality rule as stimulus-channel (e.g., digit span labeled Auditory because digits are presented auditorily), supporting labeling this dataset as Visual (not Motor) for Modality.","metadata_analysis":"Key population facts: \"Health status: healthy\" and \"Number of subjects: 18\" with \"Age: min=22, max=27\".\nKey paradigm/task facts: \"Paradigm: imagery\", \"Detected paradigm: motor_imagery\", and class labels are joint actions (e.g., \"hand_open_close\", \"wrist_flex_ext\").\nKey stimulus/modality facts: \"Stimulus type: video + text\", \"Stimulus modalities: visual\", and \"Primary modality: visual\".","paper_abstract_analysis":"No useful paper information. (Only the DOI/citation is provided in metadata; no abstract text included.)","evidence_alignment_check":"Pathology: Metadata says participants are \"healthy\" (\"Health status: healthy\"). Few-shot pattern for non-clinical BCI/MI datasets suggests Healthy. ALIGN.\nModality: Metadata explicitly says cues are visual (\"Stimulus type: video + text\", \"Stimulus modalities: visual\", \"Primary modality: visual\"). Few-shot convention (e.g., Schalk motor imagery dataset labeled Modality=Visual) suggests Visual when cues are on-screen. ALIGN.\nType: Metadata explicitly indicates motor imagery (\"Detected paradigm: motor_imagery\", \"Paradigm: imagery\", joint movement imagery class labels). Few-shot motor imagery example maps to Type=Motor. ALIGN.","decision_summary":"Top-2 candidates — Pathology: (1) Healthy vs (2) Unknown. Healthy wins due to explicit metadata: \"Health status: healthy\" plus consistent dataset tags (\"Pathology: Healthy\") and normal-volunteer demographics.\nTop-2 candidates — Modality: (1) Visual vs (2) Motor. Visual wins because Modality is defined by stimulus/input channel and metadata explicitly states \"Stimulus modalities: visual\" and \"Stimulus type: video + text\"; motor imagery is captured under Type, not Modality.\nTop-2 candidates — Type: (1) Motor vs (2) Perception. Motor wins because the paradigm is explicitly \"motor_imagery\" with multiple upper-limb joint imagery classes; cues are just instructions, not a perceptual discrimination goal.\nConfidence justification: Pathology has 2+ explicit statements (\"Health status: healthy\", \"Pathology: Healthy\") and strong few-shot analog. Modality has 3 explicit statements (\"Stimulus modalities: visual\", \"Primary modality: visual\", \"Stimulus type: video + text\") plus few-shot analog. Type has multiple explicit motor imagery statements (\"Detected paradigm: motor_imagery\", \"Paradigm: imagery\", imagery class list) plus few-shot analog."}},"canonical_name":null,"name_confidence":0.66,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Yi2025"}}
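The record above is plain JSON, so its headline numbers can be read back with the Python standard library. A minimal sketch, assuming the response has been saved verbatim to a local file named `nm000160.json` (the filename is illustrative, not part of eegdash):

```python
import json

# Load the eegdash API response shown above (hypothetical local copy).
with open("nm000160.json", encoding="utf-8") as f:
    record = json.load(f)

assert record["success"] and record["database"] == "eegdash"
data = record["data"]

# Summary statistics carried by the record itself.
hours = data["total_duration_s"] / 3600        # ~32.5 h of EEG in total
n_files = data["total_files"]                  # 141 recordings
n_chans = data["nchans_counts"][0]["val"]      # 62 channels in every file
sfreq = data["sfreq_counts"][0]["val"]         # 1000.0 Hz in every file

print(f"{data['dataset_id']}: {n_files} files, {n_chans} ch @ {sfreq:.0f} Hz, "
      f"{hours:.1f} h total, license {data['license']}")
```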
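The readme names the MOABB dataset code as `Yi2025`, and the notes date the class to MOABB 1.2.0, so the recordings should be loadable through MOABB's `MotorImagery` paradigm. A sketch under those assumptions, using the parameters the record reports (8 classes, [0, 4] s trial interval, 4-40 Hz band-pass); verify the class name against your installed MOABB version:

```python
from moabb.datasets import Yi2025        # dataset code per the readme above
from moabb.paradigms import MotorImagery

dataset = Yi2025()

# Mirror the record: 8 imagery classes, [0, 4] s trials, 4-40 Hz band-pass.
paradigm = MotorImagery(n_classes=8, tmin=0.0, tmax=4.0, fmin=4.0, fmax=40.0)

# X: (n_trials, n_channels, n_samples); y: class labels; meta: trial metadata.
X, y, meta = paradigm.get_data(dataset=dataset, subjects=[1])
print(X.shape, sorted(set(y)))  # expect 320 trials x 62 channels for a subject
```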
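The Signal Processing and Cross-Validation sections describe a CAR spatial filter and a 5-fold within-subject evaluation with ShallowConvNet. A hedged sketch of that evaluation scheme, reusing `X` and `y` from the previous snippet; CSP+LDA stands in for ShallowConvNet, which would additionally require a deep-learning stack such as braindecode:

```python
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

# Common average reference (the record's "CAR" spatial filter): subtract the
# instantaneous mean across channels from every channel, per trial.
X_car = X - X.mean(axis=1, keepdims=True)

# Stand-in pipeline: CSP features + LDA instead of the paper's ShallowConvNet.
clf = make_pipeline(CSP(n_components=8), LinearDiscriminantAnalysis())

# 5-fold within-subject cross-validation, as stated in the record.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(clf, X_car, y, cv=cv)
print(f"mean accuracy: {scores.mean():.3f} (chance = 1/8 = 0.125)")
```

Since only subject 1 was loaded above, the folds split that subject's 320 trials, which matches the record's "within_subject" evaluation type.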