{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c69","dataset_id":"nm000128","associated_paper_doi":null,"authors":["Yue Dong","Sen Tian"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":59,"ages":[15,11,11,16,11,12,13,12,14,11,11,11,11,12,13,10,12,11,11,13,14,12,13,13,13,11,12,13,13,15,14,13,14,13,11,12,11,13,13,14,11,11,13,14,13,14,11,14,14,10,12,11,11,13,12,12,12,11,13],"age_min":10,"age_max":16,"age_mean":12.372881355932204,"species":null,"sex_distribution":{"m":37,"f":22},"handedness_distribution":{"r":59}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000128","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"e3e43033d564e4405e5b76e1317958c789816e3feb045fa4b959ebebce4b36df","license":"CC BY-NC 4.0","n_contributing_labs":null,"name":"Dong2023 – 59-subject 40-class SSVEP dataset","readme":"# 59-subject 40-class SSVEP dataset\n59-subject 40-class SSVEP dataset.\n## Dataset Overview\n- **Code**: Dong2023\n- **Paradigm**: ssvep\n- **DOI**: 10.26599/BSA.2023.9050020\n- **Subjects**: 59\n- **Sessions per subject**: 1\n- **Events**: 8=1, 8.2=2, 8.4=3, 8.6=4, 8.8=5, 9=6, 9.2=7, 9.4=8, 9.6=9, 9.8=10, 10=11, 10.2=12, 10.4=13, 10.6=14, 10.8=15, 11=16, 11.2=17, 11.4=18, 11.6=19, 11.8=20, 12=21, 12.2=22, 12.4=23, 12.6=24, 12.8=25, 13=26, 13.2=27, 13.4=28, 13.6=29, 13.8=30, 14=31, 14.2=32, 14.4=33, 14.6=34, 14.8=35, 15=36, 15.2=37, 15.4=38, 15.6=39, 15.8=40\n- **Trial interval**: [0.5, 4.5] s\n- **File format**: MAT\n## Acquisition\n- **Sampling rate**: 250.0 Hz\n- **Number of channels**: 8\n- **Channel types**: eeg=8\n- **Channel names**: POz, PO3, PO4, PO7, PO8, Oz, O1, O2\n- **Montage**: standard_1005\n- **Hardware**: NeuSenW (Neuracle)\n- **Reference**: Fp1\n- **Ground**: Fp2\n- **Sensor type**: semi-dry (pre-gelled)\n- **Line frequency**: 50.0 Hz\n## Participants\n- **Number of subjects**: 59\n- **Health status**: healthy\n- **Age**: mean=12.4, min=10, max=16\n- **Gender distribution**: male=37, female=22\n## Experimental Protocol\n- **Paradigm**: ssvep\n- **Task type**: SSVEP speller\n- **Number of classes**: 40\n- **Class labels**: 8, 8.2, 8.4, 8.6, 8.8, 9, 9.2, 9.4, 9.6, 9.8, 10, 10.2, 10.4, 10.6, 10.8, 11, 11.2, 11.4, 11.6, 11.8, 12, 12.2, 12.4, 12.6, 12.8, 13, 13.2, 13.4, 13.6, 13.8, 14, 14.2, 14.4, 14.6, 14.8, 15, 15.2, 15.4, 15.6, 15.8\n- **Trial duration**: 4.0 s\n- **Feedback type**: visual\n- **Stimulus type**: JFPM visual flicker\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Training/test split**: False\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8\n  8.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8_2\n  8.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8_4\n  8.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8_6\n  8.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8_8\n  9\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9\n  9.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n 
   ├─ Visual-presentation\n    └─ Label/9_2\n  9.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9_4\n  9.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9_6\n  9.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9_8\n  10\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10\n  10.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_2\n  10.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_4\n  10.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_6\n  10.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_8\n  11\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11\n  11.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11_2\n  11.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11_4\n  11.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11_6\n  11.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11_8\n  12\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12\n  12.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_2\n  12.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_4\n  12.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_6\n  12.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_8\n  13\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13\n  13.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13_2\n  13.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13_4\n  13.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13_6\n  13.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13_8\n  14\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14\n  14.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14_2\n  14.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14_4\n  14.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14_6\n  14.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14_8\n  15\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/15\n  15.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/15_2\n  15.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/15_4\n  15.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/15_6\n  15.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/15_8\n```\n## 
Paradigm-Specific Parameters\n- **Detected paradigm**: ssvep\n- **Stimulus frequencies**: [8.0, 8.2, 8.4, 8.6, 8.8, 9.0, 9.2, 9.4, 9.6, 9.8, 10.0, 10.2, 10.4, 10.6, 10.8, 11.0, 11.2, 11.4, 11.6, 11.8, 12.0, 12.2, 12.4, 12.6, 12.8, 13.0, 13.2, 13.4, 13.6, 13.8, 14.0, 14.2, 14.4, 14.6, 14.8, 15.0, 15.2, 15.4, 15.6, 15.8] Hz\n- **Frequency resolution**: 0.2 Hz\n## Data Structure\n- **Trials**: 160\n- **Blocks per session**: 4\n## Preprocessing\n- **Data state**: epoched\n- **Downsampled to**: 250.0 Hz\n## Signal Processing\n- **Classifiers**: FBCCA, eTRCA, msTRCA\n- **Spatial filters**: CCA, TRCA\n## Cross-Validation\n- **Method**: leave-one-block-out\n- **Folds**: 4\n- **Evaluation type**: within_subject\n## BCI Application\n- **Environment**: non-shielded\n- **Online feedback**: True\n## Tags\n- **Pathology**: healthy\n- **Modality**: visual\n- **Type**: perception\n## Documentation\n- **DOI**: 10.26599/BSA.2023.9050020\n- **License**: CC BY-NC 4.0\n- **Investigators**: Yue Dong, Sen Tian\n- **Senior author**: Yue Dong\n- **Institution**: Jiangsu JITRI Brain Machine Fusion Intelligence Institute\n- **Country**: CN\n- **Repository**: Zenodo\n- **Data URL**: https://zenodo.org/records/18847318\n- **Publication year**: 2023\n## References\nY. Dong and S. Tian, \"A large database towards user-friendly SSVEP-based BCI,\" Brain Science Advances, vol. 9, no. 4, pp. 297-309, 2023. DOI: 10.26599/BSA.2023.9050020\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.4.3 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":416390010,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000128","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["ssvep"],"timestamps":{"digested_at":"2026-04-30T14:08:35.315029+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-29T22:38:56Z"},"total_files":59,"computed_title":"Dong2023 – 59-subject 40-class SSVEP dataset","nchans_counts":[{"val":8,"count":59}],"sfreq_counts":[{"val":250.0,"count":59}],"stats_computed_at":"2026-05-01T13:49:34.644886+00:00","total_duration_s":50975.764,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"2ffeb16b34439511","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"No few-shot example is explicitly an SSVEP-speller/BCI dataset, but the conventions for mapping stimulus-driven sensory paradigms to Modality/Type are clear. 
For Modality and Type, the closest analog is the healthy auditory stimulus dataset (“Subcortical responses to music and speech...”) labeled Modality=Auditory and Type=Perception, showing that stimulus-driven sensory-evoked EEG studies are labeled by the stimulus channel and generally as Perception when the goal is evoked responses/classification. Also, the schizophrenia visual discrimination example is labeled Modality=Visual and Type=Perception, reinforcing the convention that visually presented stimulus paradigms map to Visual + Perception when the core is sensory-evoked responses rather than motor execution or resting-state.","metadata_analysis":"Key participant/pathology facts: (1) “Health status: healthy” and (2) “Number of subjects: 59” with “Age: mean=12.4, min=10, max=16” indicate a non-clinical cohort (adolescents but not recruited for a disorder). Key modality/task facts: (1) “Stimulus type: JFPM visual flicker”, (2) “Stimulus modalities: visual” and “Primary modality: visual”, and (3) HED annotations repeatedly include “Visual-presentation”. Key purpose/type facts: (1) “Paradigm: ssvep” and “Task type: SSVEP speller” indicate an evoked visual steady-state response paradigm, and (2) the dataset’s own tag states “Type: perception”.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says participants are non-clinical (e.g., “Health status: healthy”; also dataset tag “Pathology: healthy”). Few-shot pattern suggests labeling by recruited clinical condition (or Healthy when none). ALIGN (Healthy).\n\nModality: Metadata explicitly indicates visual stimulation (e.g., “Stimulus modalities: visual”, “Stimulus type: ... visual flicker”, HED “Visual-presentation”). Few-shot pattern suggests modality follows stimulus channel (e.g., auditory stimuli -> Auditory; visual discrimination -> Visual). ALIGN (Visual).\n\nType: Metadata indicates an SSVEP speller with flicker-evoked responses (e.g., “Paradigm: ssvep”, “Task type: SSVEP speller”) and even provides “Type: perception”. Few-shot convention suggests sensory-evoked/discrimination paradigms are labeled Perception (e.g., auditory/music-speech; visual discrimination). ALIGN (Perception). Potential alternative is Attention (SSVEP spellers often require selective visual attention), but the dataset framing and provided tag favor Perception.","decision_summary":"Top-2 candidates with head-to-head selection:\n\nPathology:\n- Healthy (WIN): Supported by “Health status: healthy”, dataset tag “Pathology: healthy”, and the absence of any diagnosis-based recruitment; age range (10–16) alone is not a pathology.\n- Development (RUNNER-UP): Considered only because the cohort is adolescent (“Age: mean=12.4, min=10, max=16”), but there is no developmental disorder/mental health recruitment.\nAlignment: Align.\nConfidence justification: 3+ explicit metadata statements supporting Healthy.\n\nModality:\n- Visual (WIN): “Stimulus modalities: visual”, “Primary modality: visual”, “Stimulus type: ... visual flicker”, plus HED “Visual-presentation”.\n- Other (RUNNER-UP): Only if treating BCI/speller as non-sensory, but stimulus is clearly visual.\nAlignment: Align.\nConfidence justification: 3+ explicit visual-stimulus quotes/features.\n\nType:\n- Perception (WIN): “Paradigm: ssvep”, “Task type: SSVEP speller” (evoked visual steady-state response to flicker), and explicit tag “Type: perception”. 
Few-shot convention maps stimulus-evoked sensory paradigms to Perception.\n- Attention (RUNNER-UP): Plausible because SSVEP spellers involve selecting/attending to one of multiple flickers, but this is not stated as the primary construct in the provided metadata.\nAlignment: Align.\nConfidence justification: explicit “Type: perception” plus strong contextual support from SSVEP visual-evoked paradigm."}},"canonical_name":null,"name_confidence":0.74,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Dong2023"}}
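A minimal Python sketch of how a consumer might work with this record, assuming it is saved verbatim as `nm000128.json` (the filename is hypothetical, and only the standard library is used, not any eegdash client API). It reads the demographics block, reconstructs the 40-class event-code to flicker-frequency mapping implied by the README's Events table (codes 1–40 on an 8.0–15.8 Hz grid in 0.2 Hz steps), and derives the average recording length per file:

```python
import json
from statistics import mean

# Load the record shown above; the filename is an assumption for illustration.
with open("nm000128.json") as f:
    record = json.load(f)["data"]

# Sanity-check the demographics block against its own summary fields.
demo = record["demographics"]
assert demo["subjects_count"] == len(demo["ages"]) == 59
print(f"age mean = {mean(demo['ages']):.1f}")  # ~12.4, matching age_mean

# The README's Events table maps codes 1..40 onto flicker frequencies
# 8.0..15.8 Hz in 0.2 Hz steps (8=1, 8.2=2, ..., 15.8=40), i.e.
#   frequency = 8.0 + 0.2 * (code - 1)
code_to_freq = {code: round(8.0 + 0.2 * (code - 1), 1) for code in range(1, 41)}
assert code_to_freq[1] == 8.0 and code_to_freq[40] == 15.8

# Average recording length per file, from the computed stats fields.
per_file_s = record["total_duration_s"] / record["total_files"]
print(f"~{per_file_s:.0f} s of EEG per subject file")
```

The frequency formula follows from the documented `Frequency resolution: 0.2 Hz` and the Events listing; it is a convenience reconstruction, not a field stored in the record itself.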