{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c7b","dataset_id":"nm000146","associated_paper_doi":null,"authors":["Weibo Yi","Shuang Qiu","Kun Wang","Hongzhi Qi","Lixin Zhang","Peng Zhou","Feng He","Dong Ming"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":"10.82901/nemar.nm000146","datatypes":["eeg"],"demographics":{"subjects_count":10,"ages":[24,24,24,24,24,24,24,24,24,24],"age_min":24,"age_max":24,"age_mean":24.0,"species":null,"sex_distribution":null,"handedness_distribution":{"r":10}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000146","osf_url":null,"github_url":null,"paper_url":null},"funding":["National Natural Science Foundation of China (No. 81222021, 61172008, 81171423, 51377120, 31271062)","National Key Technology R&D Program of the Ministry of Science and Technology of China (No. 2012BAI34B02)","Program for New Century Excellent Talents in University of the Ministry of Education of China (No. NCET-10-0618)","Natural Science Foundation of Tianjin (No. 13JCQNJC13900)"],"ingestion_fingerprint":"222de2c6ce2d5fb2be70c4fe8a63a06301d294e490799b322e97f1af239f1ba6","license":"CC0-1.0","n_contributing_labs":null,"name":"Motor Imagery dataset from Weibo et al 2014","readme":"[![DOI](https://img.shields.io/badge/DOI-10.82901%2Fnemar.nm000146-blue)](https://doi.org/10.82901/nemar.nm000146)\n# Motor Imagery dataset from Weibo et al 2014\nMotor Imagery dataset from Weibo et al 2014.\n## Dataset Overview\n- **Code**: Weibo2014\n- **Paradigm**: imagery\n- **DOI**: 10.1371/journal.pone.0114853\n- **Subjects**: 10\n- **Sessions per subject**: 1\n- **Events**: left_hand=1, right_hand=2, hands=3, feet=4, left_hand_right_foot=5, right_hand_left_foot=6, rest=7\n- **Trial interval**: [3, 7] s\n- **File format**: MAT\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 200.0 Hz\n- **Number of channels**: 60\n- **Channel types**: eeg=60, eog=2, misc=2\n- **Channel names**: AF3, AF4, C1, C2, C3, C4, C5, C6, CB1, CB2, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, HEO, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO5, PO6, PO7, PO8, POz, Pz, T7, T8, TP7, TP8, VEO\n- **Montage**: standard_1005\n- **Hardware**: Neuroscan SynAmps2\n- **Reference**: nose\n- **Ground**: prefrontal lobe\n- **Sensor type**: Ag/AgCl\n- **Line frequency**: 50.0 Hz\n- **Online filters**: {'bandpass': [0.5, 100], 'notch_hz': 50}\n- **Auxiliary channels**: EOG (2 ch, HEO, VEO)\n## Participants\n- **Number of subjects**: 10\n- **Health status**: healthy\n- **Age**: mean=24.0, min=23.0, max=25.0\n- **Gender distribution**: female=7, male=3\n- **Handedness**: right-handed\n- **BCI experience**: naive\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 7\n- **Class labels**: left_hand, right_hand, hands, feet, left_hand_right_foot, right_hand_left_foot, rest\n- **Trial duration**: 8.0 s\n- **Study design**: Simple limb motor imagery (left hand, right hand, feet) and compound limb motor imagery (both hands, left hand combined with right foot, right hand combined with left foot)\n- **Feedback type**: none\n- **Stimulus type**: text cues\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Instructions**: Participants were asked to perform kinesthetic motor imagery 
rather than a visual type of imagery while avoiding any muscle movement\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  left_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n  hands\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine, Move, Hand\n  feet\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine, Move, Foot\n  left_hand_right_foot\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       ├─ Imagine\n       │  ├─ Move\n       │  └─ Left, Hand\n       └─ Imagine\n          ├─ Move\n          └─ Right, Foot\n  right_hand_left_foot\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       ├─ Imagine\n       │  ├─ Move\n       │  └─ Right, Hand\n       └─ Imagine\n          ├─ Move\n          └─ Left, Foot\n  rest\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Rest\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: left_hand, right_hand, feet, both_hands, left_hand_right_foot, right_hand_left_foot\n- **Cue duration**: 1.0 s\n- **Imagery duration**: 4.0 s\n## Data Structure\n- **Trials**: 560\n- **Trials context**: 8 sections with 60 trials each (10 trials per MI task per section) for 6 MI tasks, plus 1 section with 80 trials for rest state\n## Preprocessing\n- **Data state**: preprocessed\n- **Preprocessing applied**: True\n- **Steps**: bandpass filtering, downsampling\n- **Highpass filter**: 0.5 Hz\n- **Lowpass filter**: 50.0 Hz\n- **Bandpass filter**: {'low_cutoff_hz': 0.5, 'high_cutoff_hz': 50.0}\n- **Re-reference**: nose\n- **Downsampled to**: 200.0 Hz\n## Signal Processing\n- **Feature extraction**: Bandpower, ERD, ERS, ERSP, Time-Frequency, AR, DTF, PLV\n- **Frequency bands**: theta=[4.0, 5.0] Hz; alpha=[8.0, 13.0] Hz; beta=[13.0, 30.0] Hz; analyzed=[1.0, 40.0] Hz\n## BCI Application\n- **Applications**: motor_control\n- **Environment**: laboratory\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Research\n## Documentation\n- **DOI**: 10.1371/journal.pone.0114853\n- **License**: CC0-1.0\n- **Investigators**: Weibo Yi, Shuang Qiu, Kun Wang, Hongzhi Qi, Lixin Zhang, Peng Zhou, Feng He, Dong Ming\n- **Senior author**: Dong Ming\n- **Contact**: qhz@tju.edu.cn; richardming@tju.edu.cn\n- **Institution**: Tianjin University\n- **Department**: Department of Biomedical Engineering\n- **Country**: CN\n- **Repository**: Harvard Dataverse Database\n- **Data URL**: http://dx.doi.org/10.7910/DVN/27306\n- **Publication year**: 2014\n- **Funding**: National Natural Science Foundation of China (No. 81222021, 61172008, 81171423, 51377120, 31271062); National Key Technology R&D Program of the Ministry of Science and Technology of China (No. 2012BAI34B02); Program for New Century Excellent Talents in University of the Ministry of Education of China (No. NCET-10-0618); Natural Science Foundation of Tianjin (No. 
13JCQNJC13900)\n- **Ethics approval**: Ethical committee of Tianjin University\n- **Keywords**: motor imagery, compound limb motor imagery, EEG oscillatory patterns, cognitive process, effective connectivity, ERD, ERS\n## References\nYi, Weibo, et al. \"Evaluation of EEG oscillatory patterns and cognitive process during simple and compound limb motor imagery.\" PloS one 9.12 (2014). https://doi.org/10.1371/journal.pone.0114853\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":1720643017,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000146","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:40.430736+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-29T22:29:24Z"},"total_files":10,"computed_title":"Motor Imagery dataset from Weibo et al 2014","nchans_counts":[{"val":60,"count":10}],"sfreq_counts":[{"val":200.0,"count":10}],"stats_computed_at":"2026-05-01T13:49:34.645143+00:00","total_duration_s":47089.95,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"cf763bcc4270d431","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example by paradigm is the 'EEG Motor Movement/Imagery Dataset' (Schalk et al.). It is a motor movement/imagery paradigm with visually presented cues, labeled as Modality=Visual and Type=Motor. This guides the convention that for motor imagery datasets, the Modality label follows the cue/stimulus channel (often Visual) while the Type label captures the construct (Motor).","metadata_analysis":"Key participant/pathology facts: the README explicitly states \"Health status: healthy\" and also includes a tag line \"Pathology: Healthy\".\n\nKey stimulus/modality facts: the protocol states \"Stimulus type: text cues\", \"Stimulus modalities: visual\", and \"Primary modality: visual\". The HED annotations also mark each class as \"Visual-presentation\" (e.g., left_hand/right_hand etc.).\n\nKey construct/type facts: the dataset is explicitly motor imagery: \"Paradigm: imagery\", \"Detected paradigm: motor_imagery\", and \"Study design: Simple limb motor imagery ... and compound limb motor imagery ...\". It also instructs \"perform kinesthetic motor imagery\".","paper_abstract_analysis":"No useful paper information (abstract not provided in metadata).","evidence_alignment_check":"Pathology: Metadata says participants are \"healthy\" (\"Health status: healthy\"; \"Pathology: Healthy\"). 
Few-shot pattern: motor imagery benchmark datasets are typically healthy volunteers. ALIGN.\n\nModality: Metadata explicitly specifies \"Stimulus modalities: visual\" and \"Primary modality: visual\" with \"text cues\" and HED \"Visual-presentation\". Few-shot pattern: the motor movement/imagery example is labeled Modality=Visual due to visual cueing. ALIGN.\n\nType: Metadata describes the construct as motor imagery (\"Detected paradigm: motor_imagery\"; \"Study design: ... motor imagery\"). Few-shot pattern: motor imagery datasets are labeled Type=Motor. ALIGN.","decision_summary":"Pathology top-2: (1) Healthy — supported by \"Health status: healthy\" and \"Pathology: Healthy\"; (2) Unknown — only if health status were missing. Winner: Healthy. \n\nModality top-2: (1) Visual — supported by \"Stimulus type: text cues\", \"Stimulus modalities: visual\", \"Primary modality: visual\" and HED \"Visual-presentation\"; (2) Motor — plausible if labeling by action domain rather than stimulus, but guidelines and metadata emphasize stimulus channel. Winner: Visual.\n\nType top-2: (1) Motor — supported by \"Detected paradigm: motor_imagery\" and \"Study design: ... motor imagery\" plus kinesthetic imagery instructions; (2) Attention/Other — less plausible since the primary aim is motor imagery classes/ERD-ERS rather than attentional manipulation. Winner: Motor.\n\nConfidence justification: Each chosen label has 3+ explicit metadata quotes/features and a strong few-shot analog (motor movement/imagery)."}},"canonical_name":null,"name_confidence":0.7,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Yi2014"}}
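
A minimal sketch of consuming this response with Python's standard library. The local filename `nm000146.json` is a hypothetical stand-in for however the response was fetched; every field path below matches the record above, and the trial arithmetic restates the README's "Trials context" line.

```python
import json

# Hypothetical local copy of the eegdash API response shown above.
with open("nm000146.json", encoding="utf-8") as f:
    response = json.load(f)

assert response["success"] and response["database"] == "eegdash"
record = response["data"]

# Core identification and size fields from the record.
print(record["dataset_id"])                      # nm000146
print(record["computed_title"])                  # Motor Imagery dataset from Weibo et al 2014
print(record["demographics"]["subjects_count"])  # 10
print(f'{record["size_bytes"] / 1e9:.2f} GB')    # 1.72 GB

# Per-recording stats: all 10 recordings report 60 channels at 200.0 Hz.
for nch, sf in zip(record["nchans_counts"], record["sfreq_counts"]):
    print(nch["count"], "recordings:", nch["val"], "channels @", sf["val"], "Hz")

# Sanity check of the trial structure in the README: 8 sections of
# 60 MI trials each, plus 1 rest section of 80 trials, is 560 total.
assert 8 * 60 + 80 == 560
```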
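The README footer says the record was generated by MOABB and names the dataset code `Weibo2014`. Assuming that code maps to the `moabb.datasets.Weibo2014` class shipped with current MOABB releases, a sketch of loading the underlying EEG rather than just the metadata; `n_classes=7` reflects the 7 documented classes and may need adjusting to the subset you actually want.

```python
from moabb.datasets import Weibo2014
from moabb.paradigms import MotorImagery

# Dataset class named by the README's "Code: Weibo2014" field.
dataset = Weibo2014()

# The record documents 7 classes (6 imagery tasks plus rest); the
# MotorImagery paradigm epochs trials per the dataset's trial interval.
paradigm = MotorImagery(n_classes=7)
X, labels, meta = paradigm.get_data(dataset=dataset, subjects=[1])

print(X.shape)      # (n_trials, n_channels, n_samples) at 200 Hz
print(set(labels))  # left_hand, right_hand, hands, feet, ...
```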
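For working with raw recordings directly (e.g., after downloading the BIDS dataset from the NEMAR `source_url`), the event codes and the [3, 7] s trial interval in the README translate to an MNE epoching step like the sketch below. The assumption that annotation descriptions match the class labels is untested here; verify against `raw.annotations` before relying on it.

```python
import mne

# Event codes exactly as documented in the README's "Events" line.
EVENT_ID = {
    "left_hand": 1, "right_hand": 2, "hands": 3, "feet": 4,
    "left_hand_right_foot": 5, "right_hand_left_foot": 6, "rest": 7,
}

def epoch_imagery(raw: mne.io.BaseRaw) -> mne.Epochs:
    """Cut epochs over the documented [3, 7] s trial interval."""
    events, _ = mne.events_from_annotations(raw, event_id=EVENT_ID)
    return mne.Epochs(raw, events, event_id=EVENT_ID,
                      tmin=3.0, tmax=7.0, baseline=None, preload=True)
```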