{"success":true,"database":"eegdash","data":{"_id":"69d16e06897a7725c66f4ce6","dataset_id":"nm000348","associated_paper_doi":null,"authors":["Banghua Yang","Fenqi Rong","Yunlong Xie","Du Li","Jiayang Zhang","Fu Li","Guangming Shi","Xiaorong Gao"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.1038/s41597-025-04826-y","datatypes":["eeg"],"demographics":{"subjects_count":51,"ages":[29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29],"age_min":29,"age_max":29,"age_mean":29.0,"species":null,"sex_distribution":{"f":45,"m":6},"handedness_distribution":{"r":51}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000348","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"a8b1064ee777e0826d4b7d96858ff7d9ecbba8c9c19f2d8677b08d61dd5b35ac","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Yang et al. 2025 — A multi-day and high-quality EEG dataset for motor imagery brain-computer interface","readme":"Yang2025\n========\nMulti-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025.\nDataset Overview\n----------------\n  Code: Yang2025\n  Paradigm: imagery\n  DOI: 10.1038/s41597-025-04826-y\n  Subjects: 51\n  Sessions per subject: 3\n  Events: left_hand=1, right_hand=2\n  Trial interval: [1.5, 5.5] s\n  File format: BDF\nAcquisition\n-----------\n  Sampling rate: 1000.0 Hz\n  Number of channels: 59\n  Channel types: eeg=59, ecg=1, eog=4\n  Channel names: Fpz, Fp1, Fp2, AF3, AF4, AF7, AF8, Fz, F1, F2, F3, F4, F5, F6, F7, F8, FCz, FC1, FC2, FC3, FC4, FC5, FC6, FT7, FT8, Cz, C1, C2, C3, C4, C5, C6, T7, T8, CP1, CP2, CP3, CP4, CP5, CP6, TP7, TP8, Pz, P3, P4, P5, P6, P7, P8, POz, PO3, PO4, PO5, PO6, PO7, PO8, Oz, O1, O2\n  Montage: standard_1005\n  Hardware: Neuracle NeuSen W\n  Sensor type: Ag/AgCl\n  Line frequency: 50.0 Hz\n  Online filters: {}\nParticipants\n------------\n  Number of subjects: 51\n  Health status: healthy\n  Age: min=17.0, max=30.0\n  Gender distribution: female=18, male=44\n  Handedness: right-handed\n  BCI experience: naive\n  Species: human\nExperimental Protocol\n---------------------\n  Paradigm: imagery\n  Number of classes: 2\n  Class labels: left_hand, right_hand\n  Trial duration: 7.5 s\n  Study design: Multi-day MI-BCI: 2C (left/right hand, 51 subj) and 3C (left hand, right hand, foot-hooking, 11 subj). 3 sessions per subject on different days.\n  Feedback type: none\n  Stimulus type: video cues\n  Stimulus modalities: visual, auditory\n  Primary modality: visual\n  Synchronicity: synchronous\n  Mode: offline\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  left_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: motor_imagery\n  Imagery tasks: left_hand, right_hand, feet\n  Cue duration: 1.5 s\n  Imagery duration: 4.0 s\nData Structure\n--------------\n  Trials: 39600\n  Trials context: 51 subjects x 3 sessions x 200 trials (2C) + 11 subjects x 3 sessions x 300 trials (3C) = 39600\nSignal Processing\n-----------------\n  Classifiers: CSP+SVM, FBCSP+SVM, EEGNet, deepConvNet, FBCNet\n  Feature extraction: CSP, FBCSP\n  Frequency bands: bandpass=[0.5, 40.0] Hz\n  Spatial filters: CSP, FBCSP\nCross-Validation\n----------------\n  Method: 10-fold\n  Folds: 10\n  Evaluation type: within_session\nBCI Application\n---------------\n  Applications: motor_control\n  Environment: laboratory\n  Online feedback: False\nTags\n----\n  Pathology: Healthy\n  Modality: Motor\n  Type: Research\nDocumentation\n-------------\n  DOI: 10.1038/s41597-025-04826-y\n  License: CC-BY-4.0\n  Investigators: Banghua Yang, Fenqi Rong, Yunlong Xie, Du Li, Jiayang Zhang, Fu Li, Guangming Shi, Xiaorong Gao\n  Institution: Shanghai University\n  Country: CN\n  Data URL: https://plus.figshare.com/articles/dataset/22671172\n  Publication year: 2025\nReferences\n----------\nYang, B., Rong, F., Xie, Y., et al. (2025). A multi-day and high-quality EEG dataset for motor imagery brain-computer interface. Scientific Data, 12, 488. https://doi.org/10.1038/s41597-025-04826-y\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1","2"],"size_bytes":68048964135,"source":"nemar","storage":{"backend":"s3","base":"s3://openneuro.org/nm000348","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-22T12:52:30.775660+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":153,"computed_title":"Yang et al. 2025 — A multi-day and high-quality EEG dataset for motor imagery brain-computer interface","nchans_counts":[{"val":64,"count":153}],"sfreq_counts":[{"val":1000.0,"count":153}],"stats_computed_at":"2026-04-22T23:16:00.314729+00:00","total_duration_s":354333.84699999995,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"4a0991b9bf4ca730","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example by paradigm is **“EEG Motor Movement/Imagery Dataset”** (healthy volunteers performing motor/imagery tasks with on-screen cues) which is labeled Pathology=Healthy, Modality=Visual, Type=Motor. The current dataset is also an MI-BCI left/right hand imagery paradigm with cueing, so the same convention applies: classify the construct as Motor and the stimulus channel as primarily Visual (cue-driven MI tasks are typically categorized as Visual modality when cues are on-screen, even though the measured construct is motor imagery). A secondary comparison is the Parkinson’s cross-modal oddball example, which uses both auditory+visual cues and is labeled Multisensory; this guides the Visual vs Multisensory runner-up choice here, but the metadata explicitly states a primary modality (visual), pushing the final selection to Visual.","metadata_analysis":"Key population facts: (1) “**Health status: healthy**” (Participants section). (2) “**Tags — Pathology: Healthy**”. (3) “**Species: human**” and recruitment context shows no disorder-focused cohort.\n\nKey stimulus/modality facts: (1) “**Stimulus type: video cues**”. (2) “**Stimulus modalities: visual, auditory**”. (3) “**Primary modality: visual**”. (4) HED annotations for events include “**Visual-presentation**”.\n\nKey task/type facts: (1) “**Paradigm: imagery**” and “**Detected paradigm: motor_imagery**”. (2) “**Events: left_hand=1, right_hand=2**” with HED “**Imagine → Move → Left/Right, Hand**”. (3) “**BCI Application — Applications: motor_control**”.\n\nNote: there is an internal inconsistency in the provided participant sex counts (readme: “female=18, male=44” vs participants_overview: “Sex: {'f': 45, 'm': 6}”), but both still indicate a non-clinical cohort and do not affect pathology labeling.","paper_abstract_analysis":"No useful paper information (abstract text not provided in the input).","evidence_alignment_check":"Pathology: Metadata says “Health status: healthy” and “Tags — Pathology: Healthy”. Few-shot convention: MI/BCI datasets with no diagnosis are labeled Healthy. ALIGN.\n\nModality: Metadata says “Stimulus modalities: visual, auditory” but also explicitly “Primary modality: visual” and HED includes “Visual-presentation”. Few-shot convention: motor imagery with on-screen cues is often labeled Visual modality (see EEG Motor Movement/Imagery Dataset). ALIGN overall; minor ambiguity (Multisensory runner-up) resolved by explicit “Primary modality: visual”.\n\nType: Metadata says “Detected paradigm: motor_imagery”, events are “left_hand/right_hand” imagery with HED “Imagine → Move”, and application is “motor_control”. Few-shot convention: motor imagery paradigms map to Type=Motor. ALIGN.","decision_summary":"Top-2 candidates per category and final choice:\n\n1) Pathology\n- Candidate A: Healthy — Evidence: “Health status: healthy”; “Tags — Pathology: Healthy”; no clinical recruitment described.\n- Candidate B: Unknown — Would apply if population health status were not specified.\nDecision: Healthy (metadata explicitly states healthy). Confidence=0.9 supported by 2+ explicit quotes plus consistent non-clinical context.\n\n2) Modality\n- Candidate A: Visual — Evidence: “Stimulus type: video cues”; “Primary modality: visual”; HED includes “Visual-presentation”. Few-shot analog: EEG Motor Movement/Imagery Dataset labeled Visual.\n- Candidate B: Multisensory — Evidence: “Stimulus modalities: visual, auditory”.\nDecision: Visual because the dataset explicitly designates a dominant channel (“Primary modality: visual”) and the event annotations are visual-presented cues. Confidence=0.9 (3+ explicit modality quotes + strong few-shot analog).\n\n3) Type\n- Candidate A: Motor — Evidence: “Detected paradigm: motor_imagery”; events left/right hand imagery with HED “Imagine → Move”; “Applications: motor_control”. Few-shot analog maps MI to Motor.\n- Candidate B: Attention/Perception — Could be argued if the primary aim were cue perception, but the paradigm and labels emphasize motor imagery/BCI.\nDecision: Motor. Confidence=0.9 (3+ explicit quotes + strong few-shot match)."}},"canonical_name":null,"name_confidence":0.74,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Yang2025"}}
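A minimal sketch of how a consumer might parse this record and derive summary figures before committing to a ~68 GB download. The field names (`success`, `data.dataset_id`, `size_bytes`, `nchans_counts`, `total_duration_s`) are taken from the payload above; the trimmed inline excerpt stands in for the full response, and no particular eegdash client API is assumed.

```python
import json

# Trimmed excerpt of the record above; in practice the full response would
# come from the eegdash API or a file on disk.
raw = """{"success": true, "database": "eegdash", "data": {
  "dataset_id": "nm000348",
  "sessions": ["0", "1", "2"],
  "size_bytes": 68048964135,
  "demographics": {"subjects_count": 51},
  "nchans_counts": [{"val": 64, "count": 153}],
  "sfreq_counts": [{"val": 1000.0, "count": 153}],
  "total_duration_s": 354333.847
}}"""

record = json.loads(raw)
data = record["data"]

# Basic sanity checks before acting on the record.
assert record["success"]
assert data["demographics"]["subjects_count"] == 51

# Per-file channel and sampling-rate histograms are stored as
# {"val": ..., "count": ...} pairs; summing the counts gives the file total.
n_files = sum(entry["count"] for entry in data["nchans_counts"])
size_gb = data["size_bytes"] / 1e9
hours = data["total_duration_s"] / 3600

print(f"{data['dataset_id']}: {n_files} files, "
      f"{size_gb:.1f} GB, {hours:.1f} h of EEG")
```

Note that `size_bytes` and the histogram counts agree with the record's `total_files` of 153, which is a cheap consistency check for any downstream tooling.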