{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cb8","dataset_id":"nm000235","associated_paper_doi":null,"authors":["Eva Guttmann-Flury","Xinjun Sheng","Xiangyang Zhu"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":31,"ages":[28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28],"age_min":28,"age_max":28,"age_mean":28.0,"species":null,"sex_distribution":{"f":11,"m":20},"handedness_distribution":{"r":24,"l":2}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000235","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"751036cfcb315150223bf5d07fcb040cad79bd471405a4c5f8d36e0a9c8cac9d","license":"CC0","n_contributing_labs":null,"name":"Eye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025","readme":"# Eye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025\nEye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025.\n## Dataset Overview\n- **Code**: GuttmannFlury2025-MI\n- **Paradigm**: imagery\n- **DOI**: 10.1038/s41597-025-04861-9\n- **Subjects**: 31\n- **Sessions per subject**: 3\n- **Events**: left_hand=1, right_hand=2\n- **Trial interval**: [0, 4] s\n- **File format**: BDF\n## Acquisition\n- **Sampling rate**: 1000.0 Hz\n- **Number of channels**: 66\n- **Channel types**: eeg=64, eog=1, stim=1\n- **Channel names**: FP1, FPZ, FP2, AF3, AF4, F7, F5, F3, F1, FZ, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCZ, FC2, FC4, FC6, FT8, T7, C5, C3, C1, CZ, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPZ, CP2, CP4, CP6, TP8, P7, P5, P3, P1, PZ, P2, P4, P6, P8, PO7, PO5, PO3, POZ, PO4, PO6, PO8, O1, OZ, O2, CB1, CB2\n- **Montage**: standard_1005\n- **Hardware**: Neuroscan Quik-Cap 65-ch, SynAmps2\n- **Reference**: right mastoid (M1)\n- **Ground**: forehead\n- **Sensor type**: Ag/AgCl\n- **Line frequency**: 50.0 Hz\n- **Online filters**: {'highpass_time_constant_s': 10}\n## Participants\n- **Number of subjects**: 31\n- **Health status**: healthy\n- **Age**: mean=28.3, min=20.0, max=57.0\n- **Gender distribution**: female=11, male=20\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 2\n- **Class labels**: left_hand, right_hand\n- **Trial duration**: 7.5 s\n- **Study design**: Multi-paradigm BCI (MI/ME/SSVEP/P300). 
MI and ME: 2-class hand grasping, 40 trials/session, up to 3 sessions per subject.\n- **Feedback type**: none\n- **Stimulus type**: visual rectangle cue\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  left_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: left_hand, right_hand\n- **Cue duration**: 2.0 s\n- **Imagery duration**: 4.0 s\n## Data Structure\n- **Trials**: 2520\n- **Trials context**: 63 sessions x 40 trials = 2520 (MI only, default)\n## BCI Application\n- **Applications**: motor_control\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Research\n## Documentation\n- **DOI**: 10.1038/s41597-025-04861-9\n- **License**: CC0\n- **Investigators**: Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu\n- **Institution**: Shanghai Jiao Tong University\n- **Country**: CN\n- **Publication year**: 2025\n## References\nGuttmann-Flury, E., Sheng, X., & Zhu, X. (2025). Dataset combining EEG, eye-tracking, and high-speed video for ocular activity analysis across BCI paradigms. Scientific Data, 12, 587. https://doi.org/10.1038/s41597-025-04861-9\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1","2"],"size_bytes":4990033886,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000235","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:09:22.045556+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-25T18:21:17Z"},"total_files":63,"computed_title":"Eye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025","nchans_counts":[{"val":66,"count":63}],"sfreq_counts":[{"val":1000.0,"count":63}],"stats_computed_at":"2026-05-01T13:49:34.645945+00:00","total_duration_s":25186.937,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"6f57af39906ea7cc","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.7,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example by paradigm is the “EEG Motor Movement/Imagery Dataset” (Schalk et al.), which is explicitly a motor execution/imagery BCI-style protocol. In that example, despite the motor focus, the Modality label is set to “Visual” because trials are cued by on-screen targets (“A target appears...”), while the Type is “Motor” because the studied construct is movement execution/imagery. This convention directly guides labeling here because the present dataset is also a cued left-vs-right hand motor imagery task with visual cues.","metadata_analysis":"Pathology/population is explicitly healthy: “**Health status**: healthy”.\nStimulus channel is explicitly visual: “**Stimulus type**: visual rectangle cue” and “**Stimulus modalities**: visual” (also “**Primary modality**: visual”).\nTask construct is motor imagery (and multi-paradigm BCI including motor execution): “**Paradigm**: imagery”, “**Detected paradigm**: motor_imagery”, and “MI and ME: **2-class hand grasping**”. HED tags further support MI: for left/right hand events: “Agent-action → Imagine → Move → Left/Right, Hand”.","paper_abstract_analysis":"No useful paper information (abstract text not provided in the metadata payload; only the citation/DOI is given).","evidence_alignment_check":"Pathology: Metadata says “Health status: healthy” (ALIGNS with few-shot conventions where explicitly healthy participants are labeled Healthy).\nModality: Metadata says “Stimulus type: visual rectangle cue” and “Stimulus modalities: visual”; few-shot motor imagery example labels Modality as Visual when cues are on-screen targets (ALIGNS).\nType: Metadata indicates “Detected paradigm: motor_imagery” and HED “Imagine → Move → ... Hand”; few-shot motor imagery example labels Type as Motor for MI/ME paradigms (ALIGNS). No conflicts requiring override.","decision_summary":"Pathology top-2: (1) Healthy — supported by “Health status: healthy”. (2) Unknown — only if health were unspecified; not the case. Final: Healthy.\nModality top-2: (1) Visual — supported by “Stimulus type: visual rectangle cue”, “Stimulus modalities: visual”, “Primary modality: visual”, and consistent with the few-shot MI dataset where visual cues drive Modality=Visual. 
(2) Motor — plausible if labeling by action rather than stimulus, but conventions/examples indicate Modality follows stimulus channel, not imagined movement. Final: Visual.\nType top-2: (1) Motor — supported by “Detected paradigm: motor_imagery”, “Class labels: left_hand, right_hand”, and HED “Imagine → Move → ... Hand”. (2) Attention/Perception — less plausible because the core aim is MI/ME for BCI motor control, not sensory discrimination. Final: Motor.\nConfidence justification: Pathology has 1 explicit quote; Modality has 3 explicit quotes; Type has 3+ explicit cues (motor_imagery + MI/ME hand grasping + HED Imagine/Move), plus a strong few-shot match to the motor imagery dataset."}},"canonical_name":null,"name_confidence":0.72,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"GuttmannFlury2025_Eye_BCI"}}
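For quick orientation, here is a minimal sketch of reading one recording from a local copy of this dataset with MNE-BIDS, which the embedded README already cites. The values task="imagery", sessions "0"/"1"/"2", the BDF format, and the 66-channel/1000 Hz layout come from the record itself; the subject label "01" and the local root path are placeholders, since the actual subject labels live in participants.tsv.

```python
# Minimal sketch: read one BDF recording from a local sync of the
# nm000235 BIDS tree (s3://nemar/nm000235) with MNE-BIDS.
from mne_bids import BIDSPath, read_raw_bids

bids_path = BIDSPath(
    root="/data/nm000235",  # placeholder: local copy of the NEMAR dataset
    subject="01",           # placeholder: actual labels are in participants.tsv
    session="0",            # record lists sessions "0", "1", "2"
    task="imagery",         # record lists tasks: ["imagery"]
    datatype="eeg",
    suffix="eeg",
)
raw = read_raw_bids(bids_path)  # BDF, 66 channels @ 1000 Hz per the README
raw.load_data()
print(raw.info["sfreq"], len(raw.ch_names))
print(raw.annotations)          # left_hand / right_hand events (HED-annotated)
```

Note that read_raw_bids does not preload the data, so load_data() is deferred until needed; that matters here given the roughly 5 GB dataset size reported in size_bytes.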
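Since the README was generated by MOABB 1.5.0 and lists the dataset code GuttmannFlury2025-MI, the documented [0, 4] s MI trials would normally be epoched through a MOABB paradigm. The sketch below assumes that workflow; the dataset argument stands for whichever class in moabb.datasets implements this code, as the exact class name is not stated in this record.

```python
from moabb.paradigms import LeftRightImagery

def load_mi_trials(dataset, subjects=None):
    """Epoch the 2-class MI task (left_hand=1, right_hand=2) described above.

    ``dataset`` is assumed to be the moabb.datasets class implementing
    "GuttmannFlury2025-MI"; its exact name is not given in this record.
    """
    # LeftRightImagery restricts MOABB's MotorImagery paradigm to the
    # left_hand/right_hand event pair, matching the README's class labels.
    paradigm = LeftRightImagery()
    # X: (n_trials, n_channels, n_samples) over the documented [0, 4] s
    # interval; labels: "left_hand"/"right_hand"; meta: subject/session info.
    X, labels, meta = paradigm.get_data(dataset=dataset, subjects=subjects)
    return X, labels, meta
```

With all 31 subjects and the default MI-only selection, this should yield the 2520 trials listed under Data Structure (63 sessions x 40 trials).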