{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cd3","dataset_id":"nm000310","associated_paper_doi":null,"authors":["Eva Guttmann-Flury","Xinjun Sheng","Xiangyang Zhu"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.1038/s41597-025-04861-9","datatypes":["eeg"],"demographics":{"subjects_count":11,"ages":[28,28,28,28,28,28,28,28,28,28,28],"age_min":28,"age_max":28,"age_mean":28.0,"species":null,"sex_distribution":{"f":5,"m":6},"handedness_distribution":{"r":9,"l":1}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000310","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"438f9a3dbd8f2320b60475882b5c5f88fa395aa46f3a408127cd8684c8610d85","license":"CC0","n_contributing_labs":null,"name":"Guttmann-Flury et al. 2025 (SSVEP) — Dataset combining EEG, eye-tracking, and high-speed video for ocular activity analysis across BCI paradigms","readme":"GuttmannFlury2025-SSVEP\n=======================\nEye-BCI multimodal SSVEP dataset from Guttmann-Flury et al. 2025.\nDataset Overview\n----------------\n  Code: GuttmannFlury2025-SSVEP\n  Paradigm: ssvep\n  DOI: 10.1038/s41597-025-04861-9\n  Subjects: 31\n  Sessions per subject: 3\n  Events: 10.0=1, 11.0=2, 12.0=3, 13.0=4\n  Trial interval: [0, 5] s\n  File format: BDF\nAcquisition\n-----------\n  Sampling rate: 1000.0 Hz\n  Number of channels: 66\n  Channel types: eeg=64, eog=1, stim=1\n  Channel names: FP1, FPZ, FP2, AF3, AF4, F7, F5, F3, F1, FZ, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCZ, FC2, FC4, FC6, FT8, T7, C5, C3, C1, CZ, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPZ, CP2, CP4, CP6, TP8, P7, P5, P3, P1, PZ, P2, P4, P6, P8, PO7, PO5, PO3, POZ, PO4, PO6, PO8, O1, OZ, O2, CB1, CB2\n  Montage: standard_1005\n  Hardware: Neuroscan Quik-Cap 65-ch, SynAmps2\n  Reference: right mastoid (M1)\n  Ground: forehead\n  Sensor type: Ag/AgCl\n  Line frequency: 50.0 Hz\n  
Online filters: {'highpass_time_constant_s': 10}\nParticipants\n------------\n  Number of subjects: 31\n  Health status: healthy\n  Age: mean=28.3, min=20.0, max=57.0\n  Gender distribution: female=11, male=20\n  Species: human\nExperimental Protocol\n---------------------\n  Paradigm: ssvep\n  Number of classes: 4\n  Class labels: 10.0, 11.0, 12.0, 13.0\n  Trial duration: 7.0 s\n  Study design: Multi-paradigm BCI (MI/ME/SSVEP/P300). SSVEP: 4-class frequency flickering, 48 trials/session, up to 3 sessions per subject.\n  Feedback type: none\n  Stimulus type: flickering LED\n  Stimulus modalities: visual\n  Primary modality: visual\n  Synchronicity: synchronous\n  Mode: offline\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  10.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_0\n  11.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11_0\n  12.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_0\n  13.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13_0\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: ssvep\n  Stimulus frequencies: [8.0, 10.0, 12.0, 15.0] Hz\nData Structure\n--------------\n  Trials: 3024\n  Trials context: 63 sessions x 48 trials = 3024\nBCI Application\n---------------\n  Applications: communication\n  Environment: laboratory\nTags\n----\n  Pathology: Healthy\n  Modality: Visual\n  Type: Research\nDocumentation\n-------------\n  DOI: 10.1038/s41597-025-04861-9\n  License: CC0\n  Investigators: Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu\n  Institution: Shanghai Jiao Tong University\n  Country: CN\n  Publication year: 2025\nReferences\n----------\nGuttmann-Flury, E., Sheng, X., & Zhu, X. (2025). 
Dataset combining EEG, eye-tracking, and high-speed video for ocular activity analysis across BCI paradigms. Scientific Data, 12, 587. https://doi.org/10.1038/s41597-025-04861-9\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1","2"],"size_bytes":2217164832,"source":"nemar","storage":{"backend":"s3","base":"s3://openneuro.org/nm000310","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["ssvep"],"timestamps":{"digested_at":"2026-04-22T12:52:23.063062+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":26,"computed_title":"Guttmann-Flury et al. 
2025 (SSVEP) — Dataset combining EEG, eye-tracking, and high-speed video for ocular activity analysis across BCI paradigms","nchans_counts":[{"val":65,"count":26}],"sfreq_counts":[{"val":1000.0,"count":26}],"stats_computed_at":"2026-04-22T23:16:00.314487+00:00","total_duration_s":11363.974,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"efb63d931bd7e176","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.85,"modality":0.95,"type":0.7},"reasoning":{"few_shot_analysis":"Closest few-shot conventions by paradigm/stimulus-purpose: (1) The schizophrenia visual moving-dots dataset is labeled Visual + Perception, where the core construct is sensory discrimination/evoked visual processing. This guides mapping a visually-evoked stimulation paradigm (here SSVEP flicker) to Type=Perception rather than Motor. (2) The “Subcortical responses to music and speech…” dataset is labeled Auditory + Perception for stimulus-evoked responses without a higher-level cognitive construct being primary; similarly, SSVEP is primarily stimulus-evoked visual response used for decoding. There is no direct SSVEP example in the few-shot set, so we follow the broader convention: stimulus-evoked sensory paradigms → Perception unless the metadata emphasizes another construct (e.g., attention/executive control/learning).","metadata_analysis":"Key facts from metadata: (1) Population: explicitly healthy — \"Health status: healthy\" and also \"Tags\\n----\\n  Pathology: Healthy\". (2) Stimulus channel: explicitly visual flicker — \"Stimulus type: flickering LED\", \"Stimulus modalities: visual\", and \"Primary modality: visual\". HED annotations also repeatedly specify \"Visual-presentation\" under events (10.0/11.0/12.0/13.0). 
(3) Paradigm/purpose: \"Paradigm: ssvep\" and \"BCI Application\\n---------------\\n  Applications: communication\" indicate an SSVEP BCI-style dataset with 4-class frequency flickering.","paper_abstract_analysis":"No useful paper information (abstract text not provided in the metadata snippet).","evidence_alignment_check":"Pathology: Metadata says \"Health status: healthy\" / \"Pathology: Healthy\". Few-shot pattern: when participants are not recruited for a disorder, label Healthy. ALIGN.\nModality: Metadata says \"Stimulus type: flickering LED\" and \"Stimulus modalities: visual\" (plus HED \"Visual-presentation\"). Few-shot pattern: map dominant stimulus channel to Visual. ALIGN.\nType: Metadata says \"Paradigm: ssvep\" with flickering visual stimulation for BCI \"communication\". Few-shot pattern: stimulus-evoked sensory paradigms (visual discrimination; auditory ABR) map to Perception unless the study primarily targets another construct. No explicit metadata emphasis on attention, learning, memory, or motor beyond gaze/BCI use. ALIGN (weakly; requires inference).","decision_summary":"Top-2 candidates per category:\nPathology — (A) Healthy: supported by \"Health status: healthy\", \"Tags... Pathology: Healthy\"; (B) Unknown: would apply only if health status were not stated. Winner: Healthy. Alignment: aligns with few-shot conventions for normative cohorts. Confidence evidence: 2 explicit quotes + consistent tags.\nModality — (A) Visual: supported by \"Stimulus type: flickering LED\", \"Stimulus modalities: visual\", \"Primary modality: visual\" and HED \"Visual-presentation\"; (B) Multisensory/Other: not supported (no auditory/tactile described). Winner: Visual. Alignment: aligns with few-shot modality mapping. 
Confidence evidence: 3+ explicit quotes.\nType — (A) Perception: SSVEP is a visually evoked response to flicker; supported by \"Paradigm: ssvep\" plus explicit visual stimulation lines; matches few-shot convention that stimulus-evoked sensory paradigms map to Perception. (B) Attention: plausible because SSVEP BCIs often require focusing attention/gaze on a target, but metadata does not explicitly foreground attention as the research construct. Winner: Perception. Alignment: consistent with few-shot style; inference needed because construct is not explicitly named as 'perception'. Confidence evidence: 1 explicit paradigm quote + contextual inference from SSVEP/visual evoked stimulation."}},"canonical_name":null,"name_confidence":0.72,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"GuttmannFlury2025_SSVEP"}}