{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cba","dataset_id":"nm000237","associated_paper_doi":null,"authors":["Qing Zhou","Jiafan Lin","Lin Yao","Yueming Wang","Yan Han","Kedi Xu"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":20,"ages":[23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23],"age_min":23,"age_max":23,"age_mean":23.0,"species":null,"sex_distribution":null,"handedness_distribution":{"r":20}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000237","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"0a92146bc2880112e77a5385fe36db26c8592571ee1a9eda21103977791826b8","license":"CC-BY-4.0","n_contributing_labs":null,"name":"7-day motor imagery BCI EEG dataset from Zhou et al 2021","readme":"# 7-day motor imagery BCI EEG dataset from Zhou et al 2021\n7-day motor imagery BCI EEG dataset from Zhou et al 2021.\n## Dataset Overview\n- **Code**: Zhou2020\n- **Paradigm**: imagery\n- **DOI**: 10.3389/fnhum.2021.701091\n- **Subjects**: 20\n- **Sessions per subject**: 7\n- **Events**: left_hand=1, right_hand=2, feet=3, rest=4\n- **Trial interval**: [0, 5] s\n- **Runs per session**: 6\n- **File format**: NPZ\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 500.0 Hz\n- **Number of channels**: 41\n- **Channel types**: eeg=41\n- **Channel names**: F3, F1, Fz, F2, F4, FC5, FC3, FC1, FCz, FC2, FC4, FC6, C5, C3, C1, Cz, C2, C4, C6, CP5, CP3, CP1, CPz, CP2, CP4, CP6\n- **Montage**: standard_1005\n- **Hardware**: Neuroscan SynAmps2\n- **Reference**: vertex (Cz)\n- **Ground**: AFz\n- **Sensor type**: Ag/AgCl\n- **Line frequency**: 50.0 Hz\n- **Online filters**: {'bandpass': [0.5, 100], 'notch_hz': 50}\n## Participants\n- **Number of subjects**: 20\n- **Health status**: healthy\n- **Age**: mean=23.2, std=1.47, min=21, max=27\n- **Gender distribution**: female=9, male=11\n- **Handedness**: right-handed\n- **BCI experience**: mixed\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 4\n- **Class labels**: left_hand, right_hand, feet, rest\n- **Trial duration**: 5.0 s\n- **Study design**: 7-day longitudinal MI-BCI study without feedback training. 
4 classes: left hand, right hand, both feet, idle\n- **Feedback type**: none\n- **Stimulus type**: arrow cues\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  left_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n  feet\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine, Move, Foot\n  rest\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Rest\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: left_hand, right_hand, feet, rest\n- **Imagery duration**: 5.0 s\n## Data Structure\n- **Trials**: 33600\n- **Trials context**: 20 subjects x 7 sessions x 6 runs x 40 trials = 33600\n## Signal Processing\n- **Classifiers**: SVM\n- **Feature extraction**: CSP\n- **Frequency bands**: classification=[8.0, 30.0] Hz\n- **Spatial filters**: CSP\n## Cross-Validation\n- **Method**: 10-fold\n- **Folds**: 10\n- **Evaluation type**: within_session\n## BCI Application\n- **Applications**: research\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Research\n## Documentation\n- **DOI**: 10.3389/fnhum.2021.701091\n- **License**: CC-BY-4.0\n- **Investigators**: Qing Zhou, Jiafan Lin, Lin Yao, Yueming Wang, Yan Han, Kedi Xu\n- **Institution**: Zhejiang University\n- **Country**: CN\n- **Repository**: Zenodo\n- **Data URL**: https://zenodo.org/records/18988317\n- **Publication year**: 2021\n## References\nZhou, Q., Lin, J., Yao, L., Wang, Y., Han, Y., Xu, K. (2021). Relative Power Correlates With the Decoding Performance of Motor Imagery Both Across Time and Subjects. Frontiers in Human Neuroscience, 15, 701091. https://doi.org/10.3389/fnhum.2021.701091\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1","2","3","4","5","6"],"size_bytes":17139582972,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000237","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:09:26.976327+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-25T19:46:34Z"},"total_files":833,"computed_title":"7-day motor imagery BCI EEG dataset from Zhou et al 2021","nchans_counts":[{"val":41,"count":506},{"val":26,"count":327}],"sfreq_counts":[{"val":500.0,"count":833}],"stats_computed_at":"2026-05-01T13:49:34.645969+00:00","total_duration_s":324261.334,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"b20398ee919022ce","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.7,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example by paradigm is the 'EEG Motor Movement/Imagery Dataset' (Schalk et al.). It uses motor execution/imagery runs with visual target cues and is labeled Pathology=Healthy, Modality=Visual, Type=Motor. This guides the convention that for motor imagery BCI tasks, the Modality reflects the presented cue (often visual), while Type reflects the cognitive/motor construct (motor imagery).","metadata_analysis":"Population/health: the README explicitly states \"Health status: healthy\" and \"Number of subjects: 20\".\nTask/paradigm: it is a \"7-day longitudinal MI-BCI study\" with \"Paradigm: imagery\" and \"Detected paradigm: motor_imagery\"; events are \"left_hand=1, right_hand=2, feet=3, rest=4\".\nStimulus modality: cues are visual: \"Stimulus type: arrow cues\" and \"Stimulus modalities: visual\" (also \"Primary modality: visual\").\nThese lines support Healthy participants, visually cued motor imagery task, and a Motor-focused construct.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says \"Health status: healthy\" (ALIGN) with few-shot motor imagery example (also Healthy). No conflict.\nModality: Metadata says \"Stimulus type: arrow cues\" and \"Stimulus modalities: visual\". Few-shot convention for motor imagery datasets with on-screen targets/cues maps Modality to Visual (ALIGN). No conflict.\nType: Metadata says \"motor imagery BCI\" / \"Detected paradigm: motor_imagery\" and imagery classes \"left_hand, right_hand, feet\". Few-shot convention maps this construct to Type=Motor (ALIGN). No conflict.","decision_summary":"Pathology top-2: (1) Healthy — supported by quotes \"Health status: healthy\" and typical volunteer demographics (mean age 23.2). (2) Unknown — would apply only if no health info were provided. Winner: Healthy (clear explicit statement). Alignment: aligned with few-shot motor-imagery Healthy example.\nModality top-2: (1) Visual — supported by \"Stimulus type: arrow cues\", \"Stimulus modalities: visual\", and \"Primary modality: visual\". (2) Motor — plausible because the task is motor imagery, but modality is defined by stimulus channel, not response/imagery content. 
Winner: Visual (multiple explicit modality lines; matches few-shot convention).\nType top-2: (1) Motor — supported by \"Detected paradigm: motor_imagery\", \"Study design: ... MI-BCI study\", and imagery classes (left hand/right hand/feet). (2) Attention — could be argued due to cue-following, but not the primary construct. Winner: Motor.\nConfidence justification: Pathology has 1 strong explicit quote (0.7). Modality has 3 explicit quotes + few-shot analog (0.9). Type has 2+ explicit task/paradigm quotes + few-shot analog (0.9)."}},"canonical_name":null,"name_confidence":0.74,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Zhou2021"}}