{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cc9","dataset_id":"nm000265","associated_paper_doi":null,"authors":["Eva Guttmann-Flury","Xinjun Sheng","Xiangyang Zhu"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.1038/s41597-025-04861-9","datatypes":["eeg"],"demographics":{"subjects_count":31,"ages":[28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28],"age_min":28,"age_max":28,"age_mean":28.0,"species":null,"sex_distribution":{"f":11,"m":20},"handedness_distribution":{"r":24,"l":2}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000265","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"5c683d5a4bac9a406f5fd25298dffbc9fa8d95abdd1532ebd861baa07b81b88c","license":"CC0","n_contributing_labs":null,"name":"Guttmann-Flury et al. 2025 (Motor Imagery) — Dataset combining EEG, eye-tracking, and high-speed video for ocular activity analysis across BCI paradigms","readme":"GuttmannFlury2025-MI\n====================\nEye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025.\nDataset Overview\n----------------\n  Code: GuttmannFlury2025-MI\n  Paradigm: imagery\n  DOI: 10.1038/s41597-025-04861-9\n  Subjects: 31\n  Sessions per subject: 3\n  Events: left_hand=1, right_hand=2\n  Trial interval: [0, 4] s\n  File format: BDF\nAcquisition\n-----------\n  Sampling rate: 1000.0 Hz\n  Number of channels: 66\n  Channel types: eeg=64, eog=1, stim=1\n  Channel names: FP1, FPZ, FP2, AF3, AF4, F7, F5, F3, F1, FZ, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCZ, FC2, FC4, FC6, FT8, T7, C5, C3, C1, CZ, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPZ, CP2, CP4, CP6, TP8, P7, P5, P3, P1, PZ, P2, P4, P6, P8, PO7, PO5, PO3, POZ, PO4, PO6, PO8, O1, OZ, O2, CB1, CB2\n  Montage: standard_1005\n  Hardware: Neuroscan Quik-Cap 65-ch, SynAmps2\n  Reference: right mastoid (M1)\n  Ground: forehead\n  Sensor type: Ag/AgCl\n  Line frequency: 50.0 Hz\n  Online filters: {'highpass_time_constant_s': 10}\nParticipants\n------------\n  Number of subjects: 31\n  Health status: healthy\n  Age: mean=28.3, min=20.0, max=57.0\n  Gender distribution: female=11, male=20\n  Species: human\nExperimental Protocol\n---------------------\n  Paradigm: imagery\n  Number of classes: 2\n  Class labels: left_hand, right_hand\n  Trial duration: 7.5 s\n  Study design: Multi-paradigm BCI (MI/ME/SSVEP/P300). 
MI and ME: 2-class hand grasping, 40 trials/session, up to 3 sessions per subject.\n  Feedback type: none\n  Stimulus type: visual rectangle cue\n  Stimulus modalities: visual\n  Primary modality: visual\n  Synchronicity: synchronous\n  Mode: offline\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  left_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: motor_imagery\n  Imagery tasks: left_hand, right_hand\n  Cue duration: 2.0 s\n  Imagery duration: 4.0 s\nData Structure\n--------------\n  Trials: 2520\n  Trials context: 63 sessions x 40 trials = 2520 (MI only, default)\nBCI Application\n---------------\n  Applications: motor_control\n  Environment: laboratory\n  Online feedback: False\nTags\n----\n  Pathology: Healthy\n  Modality: Motor\n  Type: Research\nDocumentation\n-------------\n  DOI: 10.1038/s41597-025-04861-9\n  License: CC0\n  Investigators: Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu\n  Institution: Shanghai Jiao Tong University\n  Country: CN\n  Publication year: 2025\nReferences\n----------\nGuttmann-Flury, E., Sheng, X., & Zhu, X. (2025). Dataset combining EEG, eye-tracking, and high-speed video for ocular activity analysis across BCI paradigms. Scientific Data, 12, 587. https://doi.org/10.1038/s41597-025-04861-9\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1","2"],"size_bytes":9896509531,"source":"nemar","storage":{"backend":"s3","base":"s3://openneuro.org/nm000265","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-22T12:52:16.894879+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":126,"computed_title":"Guttmann-Flury et al. 
2025 (Motor Imagery) — Dataset combining EEG, eye-tracking, and high-speed video for ocular activity analysis across BCI paradigms","nchans_counts":[{"val":65,"count":126}],"sfreq_counts":[{"val":1000.0,"count":126}],"stats_computed_at":"2026-04-22T23:16:00.314326+00:00","total_duration_s":50723.874,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"40f65f4f831d3b7a","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.85},"reasoning":{"few_shot_analysis":"Closest match is the few-shot example “EEG Motor Movement/Imagery Dataset” (Schalk et al.). It uses visual on-screen cues to elicit actual/imagined limb movement and is labeled Pathology=Healthy, Modality=Visual, Type=Motor. The current dataset is likewise a motor-imagery BCI paradigm with visual cues for left/right hand imagery, so the same convention applies: Modality is determined by stimulus channel (visual cue), while Type captures the motor imagery construct.","metadata_analysis":"Key facts from provided metadata/readme: (1) Population: “Health status: healthy” and also “Tags\\n----\\n  Pathology: Healthy”. (2) Task/paradigm: “Eye-BCI multimodal MI/ME dataset” and “Paradigm: imagery” plus “Detected paradigm: motor_imagery”. (3) Motor imagery content: “Events: left_hand=1, right_hand=2” and “Imagery tasks: left_hand, right_hand”. (4) Stimulus channel: “Stimulus type: visual rectangle cue” and “Stimulus modalities: visual” / “Primary modality: visual”.","paper_abstract_analysis":"No useful paper abstract text was provided in the input (only a citation/DOI).","evidence_alignment_check":"Pathology: Metadata says “Health status: healthy” and “Pathology: Healthy”. Few-shot pattern for motor imagery datasets commonly uses Healthy unless a patient group is explicitly recruited (e.g., Parkinson’s, TBI). ALIGN.\n\nModality: Metadata says “Stimulus type: visual rectangle cue” and “Stimulus modalities: visual” / “Primary modality: visual”. Few-shot convention (e.g., Schalk motor imagery example labeled Modality=Visual) maps cue-driven MI to Visual modality because the dominant input is a visual cue rather than a sensory motor stimulus. ALIGN.\n\nType: Metadata says “Detected paradigm: motor_imagery” and HED annotations include “Agent-action → Imagine → Move”. Few-shot convention labels motor imagery datasets as Type=Motor. ALIGN.","decision_summary":"Pathology top-2: (A) Healthy — supported by “Health status: healthy” and “Pathology: Healthy”. (B) Unknown — would apply only if health status were not stated; weaker because health is explicit. Final: Healthy. Evidence alignment: align.\n\nModality top-2: (A) Visual — supported by “Stimulus type: visual rectangle cue”, “Stimulus modalities: visual”, and “Primary modality: visual”. (B) Motor — could be considered because the participant imagines movement, but modality is defined by stimulus channel, not response/imagery. Final: Visual. Evidence alignment: align.\n\nType top-2: (A) Motor — supported by “Detected paradigm: motor_imagery” and “MI/ME: 2-class hand grasping” / imagery of left vs right hand. (B) Perception — would fit if the study were mainly about sensory discrimination; no evidence for that. Final: Motor. 
Evidence alignment: align.\n\nConfidence justification: Pathology has 2 explicit quotes; Modality has 3 explicit quotes; Type has 2 explicit quotes plus a strong few-shot analog (Schalk motor imagery dataset)."}},"canonical_name":null,"name_confidence":0.82,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"GuttmannFlury2025_MI"}}
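
The record above is a single eegdash API response returned as one JSON object. Below is a minimal sketch of how such a record might be consumed, assuming the response has been saved locally as record.json; the field names are taken verbatim from the record, while the filename and the HTTPS form of the S3 location are illustrative assumptions rather than anything the service guarantees.

import json

# Minimal sketch: parse a saved copy of the eegdash record above
# (assumed to be stored locally as "record.json") and pull out the
# fields most analyses need. Field names follow the record itself.
with open("record.json") as f:
    record = json.load(f)["data"]

print(record["name"])
print("License:", record["license"], "| DOI:", record["dataset_doi"])

# Acquisition summary: sampling rate and per-file channel counts are
# reported as value/count pairs (here 1000.0 Hz and 65 channels for
# all 126 files).
sfreq = record["sfreq_counts"][0]["val"]
nchan = record["nchans_counts"][0]["val"]
print(f"{record['total_files']} files, {nchan} channels @ {sfreq} Hz")
print(f"Total duration: {record['total_duration_s'] / 3600:.1f} h")

# Trial bookkeeping quoted in the embedded README: 63 completed MI
# sessions of 40 trials each (not every subject finished 3 sessions).
assert 63 * 40 == 2520

# Storage block: the record points at an S3 location. A path-style
# HTTPS form of the raw key would look like this; whether that URL
# resolves depends on the bucket's layout and access policy (an
# assumption, not confirmed by the record).
storage = record["storage"]
base = storage["base"].replace("s3://", "https://s3.amazonaws.com/")
print(base + "/" + storage["raw_key"])

The embedded README also gives the MOABB dataset code (GuttmannFlury2025-MI); if a matching dataset class ships with the reader's MOABB installation, that is likely a more convenient entry point for epoching and benchmarking than working from the raw storage keys directly.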