{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c63","dataset_id":"nm000122","associated_paper_doi":null,"authors":["Jingjing Chen","Dan Zhang","Andreas K. Engel","Qin Gong","Alexander Maye"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":12,"ages":[23,23,23,23,23,23,23,23,23,23,23,23],"age_min":23,"age_max":23,"age_mean":23.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000122","osf_url":null,"github_url":null,"paper_url":null},"funding":["DFG TRR169/B1/Z2 Crossmodal Learning","Landesforschungsfoerderung Hamburg CROSS FV25"],"ingestion_fingerprint":"e4ee46b2fa026ae7a8f3da242caf5881a93e858d7f4dd8f111aa2149d3883a7b","license":"CC BY 4.0","n_contributing_labs":null,"name":"Chen2017 – Single-flicker online SSVEP BCI dataset","readme":"# Single-flicker online SSVEP BCI dataset\nSingle-flicker online SSVEP BCI dataset.\n## Dataset Overview\n- **Code**: Chen2017SingleFlicker\n- **Paradigm**: ssvep\n- **DOI**: 10.1371/journal.pone.0178385\n- **Subjects**: 12\n- **Sessions per subject**: 2\n- **Events**: north=1, east=2, west=3, south=4\n- **Trial interval**: [0.0, 3.5] s\n- **File format**: XDF/MAT\n## Acquisition\n- **Sampling rate**: 512.0 Hz\n- **Number of channels**: 32\n- **Channel types**: eeg=32\n- **Montage**: biosemi32\n- **Hardware**: BioSemi ActiveTwo\n- **Reference**: CMS/DRL\n- **Sensor type**: active\n- **Line frequency**: 50.0 Hz\n- **Cap manufacturer**: BioSemi\n- **Electrode material**: sintered Ag/AgCl\n## Participants\n- **Number of subjects**: 12\n- **Health status**: healthy\n- **Age**: mean=23.5, min=19, max=32\n- **Gender distribution**: male=5, female=7\n## Experimental Protocol\n- **Paradigm**: ssvep\n- **Task type**: spatial navigation\n- **Number of classes**: 4\n- **Class 
labels**: north, east, west, south\n- **Study design**: Spatial navigation with single 15 Hz flicker\n- **Feedback type**: visual\n- **Stimulus type**: single-flicker spatially coded\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: online\n- **Training/test split**: True\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  north\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/north\n  east\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/east\n  west\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/west\n  south\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/south\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: ssvep\n- **Stimulus frequencies**: [15.0] Hz\n## Signal Processing\n- **Classifiers**: LDA\n- **Feature extraction**: CCA\n- **Frequency bands**: bandpass=[1.0, 80.0] Hz\n- **Spatial filters**: CCA\n## Cross-Validation\n- **Evaluation type**: within_subject\n## BCI Application\n- **Applications**: spatial_navigation\n- **Environment**: lab\n- **Online feedback**: True\n## Tags\n- **Pathology**: healthy\n- **Modality**: visual\n- **Type**: perception\n## Documentation\n- **DOI**: 10.1371/journal.pone.0178385\n- **License**: CC BY 4.0\n- **Investigators**: Jingjing Chen, Dan Zhang, Andreas K. 
Engel, Qin Gong, Alexander Maye\n- **Senior author**: Alexander Maye\n- **Institution**: University Medical Center Hamburg-Eppendorf\n- **Department**: Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf\n- **Country**: DE\n- **Repository**: Zenodo\n- **Data URL**: https://zenodo.org/records/580485\n- **Publication year**: 2017\n- **Funding**: DFG TRR169/B1/Z2 Crossmodal Learning; Landesforschungsfoerderung Hamburg CROSS FV25\n- **Ethics approval**: Ethics committee of the medical association, Hamburg\n- **Keywords**: SSVEP, BCI, spatial navigation, single-flicker, online BCI\n## References\nJ. Chen, D. Zhang, A. K. Engel, Q. Gong, and A. Maye, \"Application of a single-flicker online SSVEP BCI for spatial navigation,\" PLoS ONE, vol. 12, no. 5, e0178385, 2017. DOI: 10.1371/journal.pone.0178385\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.4.3 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["1"],"size_bytes":777917745,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000122","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["ssvep"],"timestamps":{"digested_at":"2026-04-30T14:08:34.268726+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-29T22:38:43Z"},"total_files":12,"computed_title":"Chen2017 – Single-flicker online SSVEP BCI dataset","nchans_counts":[{"val":32,"count":12}],"sfreq_counts":[{"val":512.0,"count":12}],"stats_computed_at":"2026-05-01T13:49:34.644775+00:00","total_duration_s":11775.03515625,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"ccda3c26594a5bee","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"No few-shot example is explicitly SSVEP-BCI, but the closest convention match is the visual discrimination ERP-style dataset (Meta-rdk) labeled as (Modality=Visual, Type=Perception) because the primary manipulation is visual stimulus processing and discrimination. In contrast, the motor imagery dataset (EEG Motor Movement/Imagery Dataset) is labeled Type=Motor because movement/imagery is the research focus. 
This Chen2017 dataset centers on visually-evoked SSVEP responses to a flicker stimulus for BCI classification, which aligns more with the perception-style convention than motor/action-focused conventions.","metadata_analysis":"Key quoted facts from metadata:\n1) Population: \"Health status: healthy\" and \"Subjects: 12\".\n2) Stimulus modality: \"Stimulus modalities: visual\" and \"Primary modality: visual\".\n3) Paradigm/stimulus: \"Paradigm: ssvep\" and \"Study design: Spatial navigation with single 15 Hz flicker\" plus \"Stimulus type: single-flicker spatially coded\".\n4) BCI/online classification context: \"Single-flicker online SSVEP BCI dataset\" and \"Online feedback: True\".\nThese indicate healthy participants performing a visually driven SSVEP BCI task based on a 15 Hz flicker.","paper_abstract_analysis":"No useful paper information (abstract not provided in the dataset metadata beyond citation).","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\".\n- Few-shot pattern suggests: when no disorder and typical volunteer sample, label \"Healthy\".\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"Stimulus modalities: visual\" and \"Primary modality: visual\".\n- Few-shot pattern suggests: visual stimulus presentation -> \"Visual\".\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"Paradigm: ssvep\", \"single 15 Hz flicker\", and events are stimulus labels (\"north/east/west/south\" with HED tags \"Sensory-event\" and \"Visual-presentation\").\n- Few-shot pattern suggests: tasks centered on sensory stimulus processing/discrimination (even with choice/BCI output) map to \"Perception\" rather than Motor.\n- Alignment: ALIGN (Perception best matches SSVEP as visually-evoked response paradigm).","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n1) Healthy (SELECTED): supported by \"Health status: healthy\" and typical non-clinical demographics (\"Age: mean=23.5...\").\n2) Unknown (runner-up): would apply only 
if health status absent/unclear, but it is explicit.\nConfidence basis: explicit health-status quote + consistent context.\n\nModality:\n1) Visual (SELECTED): \"Stimulus modalities: visual\", \"Primary modality: visual\", and \"single 15 Hz flicker\".\n2) Multisensory (runner-up): not supported; no auditory/tactile stimuli described.\nConfidence basis: multiple explicit modality lines.\n\nType:\n1) Perception (SELECTED): SSVEP is a visually evoked steady-state response paradigm (\"Paradigm: ssvep\"; \"single 15 Hz flicker\"), with stimulus-class events (north/east/west/south) and visual feedback.\n2) Attention (runner-up): plausible because SSVEP BCIs often require attentional selection, but the metadata emphasizes stimulus/flicker-evoked SSVEP and visual coding rather than attention as the explicit construct.\nConfidence basis: explicit SSVEP + visual flicker design strongly supports a perception-focused label, with attention as a secondary interpretation."}},"canonical_name":null,"name_confidence":0.9,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Chen2017"}}
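The object above is a single JSON response from the eegdash API. A minimal sketch of consuming it in Python follows; the field names (`dataset_id`, `sfreq_counts`, `nchans_counts`, `total_duration_s`) are taken verbatim from the record, but the excerpt embedded in the snippet is a hand-trimmed illustration of the payload, not the full response:

```python
import json

# Hand-trimmed excerpt of the response above; field names match the record,
# but most keys are omitted for brevity.
response_text = """
{"success": true,
 "database": "eegdash",
 "data": {"dataset_id": "nm000122",
          "sfreq_counts": [{"val": 512.0, "count": 12}],
          "nchans_counts": [{"val": 32, "count": 12}],
          "total_duration_s": 11775.03515625,
          "demographics": {"subjects_count": 12}}}
"""

record = json.loads(response_text)
data = record["data"]

# Sampling rate and channel count are stored as value/count histograms:
# one {"val", "count"} entry per distinct configuration across files.
sfreq = data["sfreq_counts"][0]["val"]      # 512.0 Hz for all 12 files
nchans = data["nchans_counts"][0]["val"]    # 32 channels
hours = data["total_duration_s"] / 3600.0   # total recording time in hours

print(data["dataset_id"], sfreq, nchans, round(hours, 2))
```

Because `sfreq_counts` and `nchans_counts` are lists, a record mixing acquisition setups would carry multiple entries; indexing `[0]` here is safe only because this dataset reports a single configuration.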