{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cc1","dataset_id":"nm000246","associated_paper_doi":null,"authors":["Banghua Yang","Fenqi Rong","Yunlong Xie","Du Li","Jiayang Zhang","Fu Li","Guangming Shi","Xiaorong Gao"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":51,"ages":[29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29],"age_min":29,"age_max":29,"age_mean":29.0,"species":null,"sex_distribution":{"f":45,"m":6},"handedness_distribution":{"r":51}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000246","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"0355be1cfaece6dd83f288171c243443bb3963020a378fee49297da0a8e548c3","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Multi-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025","readme":"# Multi-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025\nMulti-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025.\n## Dataset Overview\n- **Code**: Yang2025\n- **Paradigm**: imagery\n- **DOI**: 10.1038/s41597-025-04826-y\n- **Subjects**: 51\n- **Sessions per subject**: 3\n- **Events**: left_hand=1, right_hand=2\n- **Trial interval**: [1.5, 5.5] s\n- **File format**: BDF\n## Acquisition\n- **Sampling rate**: 1000.0 Hz\n- **Number of channels**: 59\n- **Channel types**: eeg=59, ecg=1, eog=4\n- **Channel names**: Fpz, Fp1, Fp2, AF3, AF4, AF7, AF8, Fz, F1, F2, F3, F4, F5, F6, F7, F8, FCz, FC1, FC2, FC3, FC4, FC5, FC6, FT7, FT8, Cz, C1, C2, C3, C4, C5, C6, T7, T8, CP1, CP2, CP3, CP4, CP5, CP6, TP7, TP8, Pz, P3, P4, P5, P6, P7, P8, POz, PO3, PO4, PO5, PO6, PO7, PO8, Oz, O1, O2\n- **Montage**: standard_1005\n- **Hardware**: Neuracle NeuSen W\n- **Sensor type**: Ag/AgCl\n- **Line frequency**: 50.0 Hz\n- **Online filters**: {}\n## Participants\n- **Number of subjects**: 51\n- **Health status**: healthy\n- **Age**: min=17.0, max=30.0\n- **Gender distribution**: female=18, male=44\n- **Handedness**: right-handed\n- **BCI experience**: naive\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 2\n- **Class labels**: left_hand, right_hand\n- **Trial duration**: 7.5 s\n- **Study design**: Multi-day MI-BCI: 2C (left/right hand, 51 subj) and 3C (left hand, right hand, foot-hooking, 11 subj). 
3 sessions per subject on different days.\n- **Feedback type**: none\n- **Stimulus type**: video cues\n- **Stimulus modalities**: visual, auditory\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  left_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: left_hand, right_hand, feet\n- **Cue duration**: 1.5 s\n- **Imagery duration**: 4.0 s\n## Data Structure\n- **Trials**: 39600\n- **Trials context**: 51 subjects x 3 sessions x 200 trials (2C) + 11 subjects x 3 sessions x 300 trials (3C) = 39600\n## Signal Processing\n- **Classifiers**: CSP+SVM, FBCSP+SVM, EEGNet, deepConvNet, FBCNet\n- **Feature extraction**: CSP, FBCSP\n- **Frequency bands**: bandpass=[0.5, 40.0] Hz\n- **Spatial filters**: CSP, FBCSP\n## Cross-Validation\n- **Method**: 10-fold\n- **Folds**: 10\n- **Evaluation type**: within_session\n## BCI Application\n- **Applications**: motor_control\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Research\n## Documentation\n- **DOI**: 10.1038/s41597-025-04826-y\n- **License**: CC-BY-4.0\n- **Investigators**: Banghua Yang, Fenqi Rong, Yunlong Xie, Du Li, Jiayang Zhang, Fu Li, Guangming Shi, Xiaorong Gao\n- **Institution**: Shanghai University\n- **Country**: CN\n- **Data URL**: https://plus.figshare.com/articles/dataset/22671172\n- **Publication year**: 2025\n## References\nYang, B., Rong, F., Xie, Y., et al. (2025). A multi-day and high-quality EEG dataset for motor imagery brain-computer interface. Scientific Data, 12, 488. https://doi.org/10.1038/s41597-025-04826-y\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A. and Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1","2"],"size_bytes":62733834741,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000246","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:09:37.486733+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-26T00:58:04Z"},"total_files":153,"computed_title":"Multi-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025","nchans_counts":[{"val":59,"count":153}],"sfreq_counts":[{"val":1000.0,"count":153}],"stats_computed_at":"2026-05-01T13:49:34.646298+00:00","total_duration_s":354333.84699999995,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"58800bfea9ec93bd","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot is **\"EEG Motor Movement/Imagery Dataset\"** (Healthy, Visual, Motor): it is a motor imagery paradigm where cues are presented on a screen and the research aim is motor/MI decoding. This convention supports labeling MI-BCI datasets as **Type=Motor** and **Modality=Visual** (because cues are visually presented), with **Pathology=Healthy** when participants are healthy volunteers. The current dataset is explicitly described as a multi-day motor imagery BCI dataset with left/right hand imagery and visual (video) cues, matching that few-shot paradigm closely.","metadata_analysis":"Key metadata facts:\n- Population: \"**Health status**: healthy\" and \"**Species**: human\"; also \"**Tags**\\n- **Pathology**: Healthy\".\n- Task/paradigm: \"**Paradigm**: imagery\", \"**Detected paradigm**: motor_imagery\", and \"**Events**: left_hand=1, right_hand=2\".\n- Stimulus modality: \"**Stimulus type**: video cues\", \"**Stimulus modalities**: visual, auditory\", and \"**Primary modality**: visual\"; HED annotations also include \"Visual-presentation\" for left_hand/right_hand.\n\nNote: there is an internal inconsistency in the README demographics (\"Gender distribution: female=18, male=44\") vs participants_overview (\"Sex: {'f': 45, 'm': 6}\"). This affects gender counts but not the explicitly stated recruitment health status or task nature.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\" (also \"Tags... Pathology: Healthy\").\n- Few-shot suggests: MI/motor imagery datasets in examples are typically healthy cohorts unless a disorder is explicitly recruited.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"Stimulus type: video cues\" and \"Primary modality: visual\" (even though \"Stimulus modalities: visual, auditory\"). 
HED also tags \"Visual-presentation\".\n- Few-shot suggests: motor imagery tasks with screen cues are labeled **Visual** for Modality (see EEG Motor Movement/Imagery Dataset).\n- Alignment: ALIGN (choose dominant/primary stimulus channel).\n\nType:\n- Metadata says: \"Detected paradigm: motor_imagery\" and imagery classes \"left_hand, right_hand\" with HED \"Imagine -> Move\".\n- Few-shot suggests: motor movement/imagery paradigms map to **Type=Motor**.\n- Alignment: ALIGN.","decision_summary":"Top-2 comparative selections:\n\n1) Pathology\n- Candidate A: Healthy\n  - Evidence: \"Health status: healthy\"; \"Tags... Pathology: Healthy\"; dataset described as MI-BCI with no clinical recruitment.\n- Candidate B: Unknown\n  - Evidence: minor demographic inconsistency (gender counts) but no indication of clinical recruitment.\n- Decision: Healthy (metadata explicitly states healthy). Alignment: aligns with few-shot MI convention.\n\n2) Modality\n- Candidate A: Visual\n  - Evidence: \"Stimulus type: video cues\"; \"Primary modality: visual\"; HED includes \"Visual-presentation\".\n- Candidate B: Multisensory\n  - Evidence: \"Stimulus modalities: visual, auditory\" mentions two channels.\n- Decision: Visual because the dataset explicitly declares \"Primary modality: visual\" and cues are video-based. Alignment: aligns with few-shot motor imagery example labeled Visual modality.\n\n3) Type\n- Candidate A: Motor\n  - Evidence: \"Detected paradigm: motor_imagery\"; \"Paradigm: imagery\"; events are left/right hand imagery and HED \"Imagine -> Move\".\n- Candidate B: Perception\n  - Evidence: presence of sensory cues (video/audio) could superficially suggest perception, but cues serve to instruct imagery rather than study sensory discrimination.\n- Decision: Motor (motor imagery BCI decoding purpose). Alignment: aligns with few-shot motor imagery dataset labeled Motor.\n\nConfidence justification:\n- Pathology 0.9: multiple explicit statements (\"Health status: healthy\"; \"Tags... Pathology: Healthy\") + strong few-shot match.\n- Modality 0.9: explicit primary modality (\"Primary modality: visual\"), explicit cue type (\"video cues\"), and HED \"Visual-presentation\" + strong few-shot match.\n- Type 0.9: explicit paradigm labels (\"motor_imagery\", \"imagery\") and class definitions (left/right hand imagery) + strong few-shot match."}},"canonical_name":null,"name_confidence":0.7,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Yang2025_Multi"}}
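The record's "Signal Processing" and "Cross-Validation" fields name a CSP+SVM baseline scored with 10-fold within-session cross-validation, and the README gives the epoching parameters (events left_hand=1/right_hand=2, trial interval [1.5, 5.5] s relative to the cue, 0.5-40 Hz band-pass). Below is a minimal sketch of that pipeline using MNE-Python, MNE-BIDS, and scikit-learn. The local path, the subject/session labels, and the assumption that trial annotations are literally named left_hand/right_hand are illustrative, not guaranteed by the record.

```python
"""Minimal sketch: within-session 10-fold CSP+SVM baseline for the
WBCIC-SHU motor imagery dataset (nm000246), following the record's
"Signal Processing" / "Cross-Validation" fields. Assumes the BIDS tree
has been downloaded locally (e.g. from s3://nemar/nm000246)."""
import mne
from mne.decoding import CSP
from mne_bids import BIDSPath, read_raw_bids
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

BIDS_ROOT = "path/to/nm000246"  # hypothetical local copy of the dataset

# One recording; task and session labels come from the record itself
# ("tasks": ["imagery"], "sessions": ["0", "1", "2"]). The subject
# label "001" is a guess at the BIDS naming scheme.
bp = BIDSPath(root=BIDS_ROOT, subject="001", session="0",
              task="imagery", datatype="eeg")
raw = read_raw_bids(bp)
raw.load_data()

# Band-pass per "Frequency bands: bandpass=[0.5, 40.0] Hz".
raw.filter(0.5, 40.0, picks="eeg")

# Epoch over the trial interval [1.5, 5.5] s (cue onset at t=0),
# assuming the annotations carry the left_hand/right_hand labels.
events, event_id = mne.events_from_annotations(raw)
event_id = {k: v for k, v in event_id.items()
            if k in ("left_hand", "right_hand")}
epochs = mne.Epochs(raw, events, event_id, tmin=1.5, tmax=5.5,
                    baseline=None, picks="eeg", preload=True)

X = epochs.get_data()     # shape: (n_trials, n_channels, n_times)
y = epochs.events[:, -1]  # class codes assigned by events_from_annotations

# CSP spatial filtering (log-variance features) + linear SVM, scored
# with 10-fold CV inside one session, mirroring the record's
# "Method: 10-fold" / "Evaluation type: within_session".
clf = make_pipeline(CSP(n_components=8), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"within-session accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

CSP's log-variance features feeding a linear SVM matches the simplest entry in the record's classifier list (CSP+SVM); the FBCSP+SVM, EEGNet, deepConvNet, and FBCNet variants listed alongside it would need a filter-bank or deep-learning stack that this sketch does not attempt.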