{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c64","dataset_id":"nm000123","associated_paper_doi":null,"authors":["Emmanuel K. Kalunga","Sylvain Chevallier","Quentin Barthélemy","Karim Djouani","Eric Monacelli","Yskandar Hamam"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":12,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000123","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"2c9ab23c099a4fdabd4584b651e7bdbee192ef01ef2ced8a316b5f41fac6654b","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Kalunga2016 – SSVEP Exo dataset","readme":"# SSVEP Exo dataset\nSSVEP Exo dataset.\n## Dataset Overview\n- **Code**: Kalunga2016\n- **Paradigm**: ssvep\n- **DOI**: 10.1016/j.neucom.2016.01.007\n- **Subjects**: 12\n- **Sessions per subject**: 1\n- **Events**: 13=2, 17=4, 21=3, rest=1\n- **Trial interval**: [2, 4] s\n- **File format**: fif\n## Acquisition\n- **Sampling rate**: 256.0 Hz\n- **Number of channels**: 8\n- **Channel types**: eeg=8\n- **Channel names**: Oz, O1, O2, POz, PO3, PO4, PO7, PO8\n- **Montage**: standard_1005\n- **Hardware**: g.tec MobiLab\n- **Reference**: right mastoid\n- **Sensor type**: EEG\n- **Line frequency**: 50.0 Hz\n## Participants\n- **Number of subjects**: 12\n- **Health status**: healthy\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: ssvep\n- **Number of classes**: 4\n- **Class labels**: 13, 17, 21, rest\n- **Trial duration**: 6.0 s\n- **Study design**: SSVEP\n- **Feedback type**: none\n- **Stimulus type**: flickering\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Stimulus presentation**: device=LED stimuli, frequencies=13 Hz, 17 Hz, 21 Hz, note=No phase synchronization required\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  13\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13\n  17\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/17\n  21\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/21\n  rest\n    ├─ Experiment-structure\n    └─ Rest\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: ssvep\n- **Stimulus frequencies**: [13.0, 17.0, 21.0] Hz\n- **Number of targets**: 3\n## Data Structure\n- **Trials**: 32 trials per session (8 per visual stimulus, 8 for resting class)\n- **Trials context**: per session\n## Preprocessing\n- **Preprocessing applied**: False\n## Signal Processing\n- **Classifiers**: MDRM, CCA\n- **Feature extraction**: Covariance/Riemannian\n## Cross-Validation\n- **Method**: bootstrap\n- **Evaluation type**: cross_subject, cross_session\n## BCI Application\n- **Applications**: assistive_robotics\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Visual\n- **Type**: Perception\n## Documentation\n- **Description**: Online SSVEP-based BCI using Riemannian geometry for assistive robotics with shared control scheme\n- **DOI**: 10.1016/j.neucom.2016.01.007\n- **License**: CC-BY-4.0\n- **Investigators**: Emmanuel K. 
Kalunga, Sylvain Chevallier, Quentin Barthélemy, Karim Djouani, Eric Monacelli, Yskandar Hamam\n- **Senior author**: Sylvain Chevallier\n- **Institution**: Universite de Versailles Saint-Quentin\n- **Department**: Laboratoire d'Ingénierie des Systèmes de Versailles\n- **Address**: 78140 Velizy, France\n- **Country**: FR\n- **Repository**: Zenodo\n- **Data URL**: https://zenodo.org/record/2392979\n- **Publication year**: 2016\n- **Keywords**: Riemannian geometry, Online, Asynchronous, Brain-Computer Interfaces, Steady State Visually Evoked Potentials\n## References\nEmmanuel K. Kalunga, Sylvain Chevallier, Quentin Barthelemy. \"Online SSVEP-based BCI using Riemannian Geometry\". Neurocomputing, 2016. arXiv report: https://arxiv.org/abs/1501.03227\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.4.3 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":81996149,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000123","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["ssvep"],"timestamps":{"digested_at":"2026-04-30T14:08:34.361408+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-29T22:38:35Z"},"total_files":30,"computed_title":"Kalunga2016 – SSVEP Exo dataset","nchans_counts":[{"val":8,"count":30}],"sfreq_counts":[{"val":256.0,"count":30}],"stats_computed_at":"2026-05-01T13:49:34.644792+00:00","total_duration_s":9322.7578125,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"e0d031a5cac9b754","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"No few-shot example is explicitly SSVEP, but the conventions transfer from similar stimulus-driven paradigms. The schizophrenia dataset uses an explicit sensory discrimination paradigm and is labeled Visual + Perception, showing that stimulus-driven sensory paradigms map to Type=Perception rather than Motor (even if responses are used). The auditory ABR/music-vs-speech dataset is labeled Auditory + Perception, reinforcing that evoked sensory responses (stimulus-locked physiology) are categorized as Perception. 
These conventions guide mapping SSVEP (steady-state visually evoked potentials from flickering visual stimuli) to Modality=Visual and Type=Perception.","metadata_analysis":"Key explicit metadata facts:\n- Population/health: \"Health status: healthy\" and later \"Tags\\n- **Pathology**: Healthy\".\n- Stimulus modality: \"Stimulus type: flickering\", \"Stimulus modalities: visual\", and \"Primary modality: visual\".\n- Paradigm/construct: \"**Paradigm**: ssvep\" plus description: \"Online SSVEP-based BCI\" and keyword \"Steady State Visually Evoked Potentials\". Also HED annotations label events as \"Sensory-event\" and \"Visual-presentation\" for the 13/17/21 Hz classes.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\" / \"Pathology: Healthy\".\n- Few-shot pattern suggests: when participants are typical volunteers and no disorder recruitment is stated, label Healthy.\n- ALIGN.\n\nModality:\n- Metadata says: \"Stimulus modalities: visual\" / \"Primary modality: visual\" and \"Stimulus type: flickering\".\n- Few-shot pattern suggests: sensory channel of presented stimuli determines Modality (e.g., visual discrimination -> Visual; auditory ABR -> Auditory).\n- ALIGN.\n\nType:\n- Metadata says: \"Paradigm: ssvep\" and describes an \"SSVEP-based BCI\" driven by flickering visual stimulation (13/17/21 Hz) with a rest class.\n- Few-shot pattern suggests: stimulus-driven sensory/evoked-response paradigms map to Type=Perception (not Motor), even if used for BCI control.\n- ALIGN.","decision_summary":"Top-2 candidates with head-to-head comparison:\n\nPathology:\n1) Healthy (WIN) — explicit: \"Health status: healthy\"; \"Tags - Pathology: Healthy\".\n2) Unknown (runner-up) — would apply only if health status were not stated.\nDecision: Healthy. Alignment: aligns with few-shot conventions for normative cohorts.\nConfidence evidence: 2 explicit quotes.\n\nModality:\n1) Visual (WIN) — explicit: \"Stimulus modalities: visual\"; \"Primary modality: visual\"; plus \"Stimulus type: flickering\" supports visual flicker.\n2) Other (runner-up) — only if modality were ambiguous.\nDecision: Visual. Alignment: matches few-shot mapping of stimulus channel to Modality.\nConfidence evidence: 3 explicit quotes.\n\nType:\n1) Perception (WIN) — SSVEP is a visually evoked potential paradigm: \"Paradigm: ssvep\" with flickering stimuli at specific frequencies; HED marks events as \"Sensory-event\" and \"Visual-presentation\".\n2) Attention (runner-up) — SSVEP BCIs can involve selective attention to one flicker, but the primary construct here is steady-state visual evoked response classification rather than attentional manipulation.\nDecision: Perception. Alignment: consistent with few-shot convention labeling evoked sensory paradigms as Perception.\nConfidence evidence: 2+ explicit task/paradigm quotes plus strong few-shot analog."}},"canonical_name":null,"name_confidence":0.72,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Kalunga2016"}}
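The record above bundles summary statistics (`nchans_counts`, `sfreq_counts`, `total_duration_s`) alongside the readme and storage pointers. A minimal sketch of reading those fields with the standard library only; the filename `kalunga2016.json` is a hypothetical local copy of this response:

```python
# Sketch: extract headline facts from the eegdash response saved locally
# as "kalunga2016.json" (hypothetical filename, standard library only).
import json

with open("kalunga2016.json", encoding="utf-8") as f:
    record = json.load(f)["data"]

print(record["name"])                            # Kalunga2016 – SSVEP Exo dataset
print(record["demographics"]["subjects_count"])  # 12 subjects
print(record["sfreq_counts"][0]["val"],          # 256.0 Hz sampling rate ...
      record["nchans_counts"][0]["val"])         # ... over 8 occipital channels
print(record["total_duration_s"] / 3600)         # ~2.6 h of EEG across 30 files
print(record["storage"]["base"])                 # s3://nemar/nm000123
```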
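The readme is generated by MOABB 1.4.3 and lists the dataset code `Kalunga2016`. A minimal loading sketch, assuming the dataset is exposed in MOABB under that code name (earlier releases called it `SSVEPExo`); the band limits and class count are illustrative choices, not part of this record:

```python
# Sketch: fetch epochs through MOABB, assuming moabb.datasets.Kalunga2016
# matches the dataset code in the readme.
from moabb.datasets import Kalunga2016
from moabb.paradigms import SSVEP

dataset = Kalunga2016()
# Band-pass around the 13/17/21 Hz flicker frequencies; n_classes=3 keeps
# the three visual stimuli (the dataset also has a "rest" class).
paradigm = SSVEP(fmin=10, fmax=25, n_classes=3)

# X: (n_trials, n_channels, n_samples) at 256 Hz over the 8 channels;
# y: labels "13"/"17"/"21"; metadata: subject/session bookkeeping.
X, y, metadata = paradigm.get_data(dataset=dataset, subjects=[1])
print(X.shape, sorted(set(y)))
```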
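Under "Signal Processing" the record names MDRM with Covariance/Riemannian features. A sketch of that pipeline rebuilt with pyriemann and scikit-learn; note the published approach builds extended covariances from frequency-band-filtered signals, while this shows only the generic covariance-plus-MDM skeleton on stand-in data shaped like the record's trials (32 trials, 8 channels, 6 s at 256 Hz):

```python
# Sketch: minimum-distance-to-Riemannian-mean (MDRM) classification,
# illustrative rather than the authors' exact code.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from pyriemann.estimation import Covariances
from pyriemann.classification import MDM

rng = np.random.default_rng(0)
# Stand-in epochs matching the acquisition figures; real data would come
# from the MOABB paradigm above.
X = rng.standard_normal((32, 8, 6 * 256))
y = np.repeat(["13", "17", "21", "rest"], 8)

# Covariance matrices as features (Ledoit-Wolf shrinkage), then nearest
# Riemannian class mean.
clf = make_pipeline(Covariances(estimator="lwf"), MDM(metric="riemann"))
print(cross_val_score(clf, X, y, cv=4).mean())  # ~chance (0.25) on noise
```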