{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c68","dataset_id":"nm000127","associated_paper_doi":null,"authors":["Heegyu Kim","Kyungho Won","Minkyu Ahn","Sung Chan Jun"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":40,"ages":[22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22],"age_min":22,"age_max":22,"age_mean":22.0,"species":null,"sex_distribution":{"m":25,"f":15},"handedness_distribution":{"r":35,"l":4}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000127","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"95eb262c6e78430c0b841d66039a4445b40c18261da7dc3745c76237e97bc341","license":"CC BY 4.0","n_contributing_labs":null,"name":"Kim2025 – 40-class beta-range SSVEP speller dataset","readme":"# 40-class beta-range SSVEP speller dataset\n40-class beta-range SSVEP speller dataset.\n## Dataset Overview\n- **Code**: Kim2025BetaRange\n- **Paradigm**: ssvep\n- **DOI**: 10.1038/s41597-025-06032-2\n- **Subjects**: 40\n- **Sessions per subject**: 6\n- **Events**: 14=1, 15=2, 16=3, 17=4, 18=5, 19=6, 20=7, 21=8, 14.2=9, 15.2=10, 16.2=11, 17.2=12, 18.2=13, 19.2=14, 20.2=15, 21.2=16, 14.4=17, 15.4=18, 16.4=19, 17.4=20, 18.4=21, 19.4=22, 20.4=23, 21.4=24, 14.6=25, 15.6=26, 16.6=27, 17.6=28, 18.6=29, 19.6=30, 20.6=31, 21.6=32, 14.8=33, 15.8=34, 16.8=35, 17.8=36, 18.8=37, 19.8=38, 20.8=39, 21.8=40\n- **Trial interval**: [0.0, 5.0] s\n- **File format**: MAT\n## Acquisition\n- **Sampling rate**: 1024.0 Hz\n- **Number of channels**: 31\n- **Channel types**: eeg=31, misc=2\n- **Montage**: standard_1005\n- **Hardware**: BioSemi ActiveTwo\n- **Software**: OpenViBE\n- **Reference**: CMS/DRL\n- **Ground**: CMS/DRL near Pz\n- **Sensor type**: active\n- **Line frequency**: 60.0 Hz\n- **Impedance threshold**: 5 kOhm\n- **Cap manufacturer**: BioSemi\n- **Electrode type**: wet\n- **Electrode material**: Ag/AgCl\n## Participants\n- **Number of subjects**: 40\n- **Health status**: healthy\n- **Age**: mean=22.8, std=3.34, min=20, max=35\n- **Gender distribution**: male=25, female=15\n- **BCI experience**: 3 of 40 had prior SSVEP-BCI experience\n## Experimental Protocol\n- **Paradigm**: ssvep\n- **Task type**: speller\n- **Number of classes**: 40\n- **Class labels**: 14, 15, 16, 17, 18, 19, 20, 21, 14.2, 15.2, 16.2, 17.2, 18.2, 19.2, 20.2, 21.2, 14.4, 15.4, 16.4, 17.4, 18.4, 19.4, 20.4, 21.4, 14.6, 15.6, 16.6, 17.6, 18.6, 19.6, 20.6, 21.6, 14.8, 15.8, 16.8, 17.8, 18.8, 19.8, 20.8, 21.8\n- **Trial duration**: 5.0 s\n- **Feedback type**: none\n- **Stimulus type**: JFPM visual flicker\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Training/test split**: True\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  14\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14\n  15\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/15\n  16\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/16\n  17\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/17\n  18\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ 
## Acquisition
- **Sampling rate**: 1024.0 Hz
- **Number of channels**: 31
- **Channel types**: eeg=31, misc=2
- **Montage**: standard_1005
- **Hardware**: BioSemi ActiveTwo
- **Software**: OpenViBE
- **Reference**: CMS/DRL
- **Ground**: CMS/DRL near Pz
- **Sensor type**: active
- **Line frequency**: 60.0 Hz
- **Impedance threshold**: 5 kOhm
- **Cap manufacturer**: BioSemi
- **Electrode type**: wet
- **Electrode material**: Ag/AgCl
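A minimal MNE-Python sketch assembling an `Info` object consistent with this acquisition block (1024 Hz, 31 EEG + 2 misc channels, `standard_1005` montage). The channel names are illustrative assumptions, since the record does not list them; substitute the names from the dataset's BIDS `channels.tsv` files:

```python
# Sketch only: builds an MNE Info matching the acquisition metadata above.
import mne

SFREQ = 1024.0  # Hz, from "Sampling rate"

# HYPOTHETICAL 31-channel layout (occipito-parietal-heavy, typical for SSVEP);
# the record does not name the channels, so replace with the real list.
eeg_names = [
    "Fp1", "Fp2", "F7", "F3", "Fz", "F4", "F8",
    "FC5", "FC1", "FC2", "FC6", "T7", "C3", "Cz", "C4", "T8",
    "CP5", "CP1", "CP2", "CP6", "P7", "P3", "Pz", "P4", "P8",
    "PO7", "PO3", "POz", "PO4", "PO8", "Oz",
]
misc_names = ["Misc1", "Misc2"]  # "Channel types: eeg=31, misc=2"

info = mne.create_info(
    ch_names=eeg_names + misc_names,
    sfreq=SFREQ,
    ch_types=["eeg"] * 31 + ["misc"] * 2,
)
info.set_montage("standard_1005")  # misc channels get no positions

# Trial interval is [0.0, 5.0] s, so each epoch spans 5 * 1024 = 5120 samples.
assert int(5.0 * SFREQ) == 5120
```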
## Participants
- **Number of subjects**: 40
- **Health status**: healthy
- **Age**: mean=22.8, std=3.34, min=20, max=35
- **Gender distribution**: male=25, female=15
- **BCI experience**: 3 of 40 had prior SSVEP-BCI experience

## Experimental Protocol
- **Paradigm**: ssvep
- **Task type**: speller
- **Number of classes**: 40
- **Class labels**: the 40 frequency labels listed under Events above (14–21.8 Hz in 0.2 Hz steps)
- **Trial duration**: 5.0 s
- **Feedback type**: none
- **Stimulus type**: JFPM visual flicker
- **Stimulus modalities**: visual
- **Primary modality**: visual
- **Synchronicity**: synchronous
- **Mode**: offline
- **Training/test split**: provided

## HED Event Annotations
Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser

All 40 event codes share the same annotation, differing only in the trailing `Label` tag (periods in the event name become underscores, e.g. 14.2 → `Label/14_2`):
```
  14.2
    ├─ Sensory-event
    ├─ Experimental-stimulus
    ├─ Visual-presentation
    └─ Label/14_2
```

## Paradigm-Specific Parameters
- **Detected paradigm**: ssvep
- **Stimulus frequencies**: 14.0–21.8 Hz in 0.2 Hz steps (40 frequencies)
- **Frequency resolution**: 0.2 Hz

## Data Structure
- **Trials**: 240
- **Blocks per session**: 6

## Preprocessing
- **Data state**: epoched

## Signal Processing
- **Classifiers**: CCA, FBCCA, ITCCA, TRCA, EEGNet
- **Feature extraction**: CCA, FBCCA, TRCA
- **Frequency bands**: stimulus range = [14.0, 22.0] Hz; analysis = [13.0, 89.0] Hz
- **Spatial filters**: CCA, TRCA
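Of the listed baselines, standard CCA is the simplest to sketch: correlate each 5 s epoch against sine/cosine references at every candidate frequency and pick the best match. The harmonic count here is an assumption (4 harmonics keeps the references inside the stated 13–89 Hz analysis band, since 21.8 × 4 = 87.2 Hz); this is the textbook method, not necessarily the authors' exact pipeline:

```python
# Textbook CCA-based SSVEP classification over the 40 stimulus frequencies.
import numpy as np
from sklearn.cross_decomposition import CCA

SFREQ = 1024.0
FREQS = np.round(np.arange(14.0, 22.0, 0.2), 1)   # the 40 stimulus frequencies

def make_reference(freq, n_samples, n_harmonics=4, sfreq=SFREQ):
    """Sine/cosine reference matrix (n_samples, 2 * n_harmonics) for one frequency."""
    t = np.arange(n_samples) / sfreq
    comps = []
    for h in range(1, n_harmonics + 1):
        comps += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(comps)

def classify_cca(epoch):
    """epoch: (n_channels, n_samples) band-passed EEG; returns the predicted frequency."""
    X = epoch.T                                    # CCA expects (n_samples, n_features)
    scores = []
    for f in FREQS:
        Y = make_reference(f, X.shape[0])
        x_sc, y_sc = CCA(n_components=1).fit_transform(X, Y)
        scores.append(np.corrcoef(x_sc[:, 0], y_sc[:, 0])[0, 1])
    return FREQS[int(np.argmax(scores))]

# Smoke test on a synthetic 17.4 Hz "SSVEP" buried in noise (8 fake channels):
rng = np.random.default_rng(0)
t = np.arange(int(5.0 * SFREQ)) / SFREQ
sig = np.sin(2 * np.pi * 17.4 * t)
fake = np.vstack([sig + 0.5 * rng.standard_normal(t.size) for _ in range(8)])
print(classify_cca(fake))                          # expected: 17.4
```

A 5 s window gives exactly the 0.2 Hz spectral resolution needed to separate neighbouring stimulus frequencies, which is presumably why the trial interval is [0.0, 5.0] s.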
## Cross-Validation
- **Method**: leave-one-subject-out
- **Folds**: 6
- **Evaluation type**: within_subject, cross_subject
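Both evaluation types can be expressed with scikit-learn's `LeaveOneGroupOut`: grouping a subject's 240 trials into six groups of 40 yields the 6-fold within-subject split, and grouping pooled data by subject yields leave-one-subject-out. A sketch with placeholder features (whether the six groups are blocks or sessions is an assumption; the record lists both):

```python
# Both listed evaluation schemes via LeaveOneGroupOut; features are placeholders.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_trials, n_feats = 240, 16                     # one subject: 40 classes x 6 groups
X = rng.standard_normal((n_trials, n_feats))    # placeholder feature matrix
y = np.tile(np.arange(40), 6)                   # class label per trial
groups6 = np.repeat(np.arange(6), 40)           # block/session index per trial

logo = LeaveOneGroupOut()
for fold, (train_idx, test_idx) in enumerate(logo.split(X, y, groups=groups6)):
    # Train on 5 groups, test on the held-out one: the 6-fold within-subject split.
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test trials")

# For cross-subject LOSO, pass groups=subject_ids over the pooled 40-subject data.
```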
## BCI Application
- **Applications**: speller
- **Environment**: lab

## Tags
- **Pathology**: healthy
- **Modality**: visual
- **Type**: perception

## Documentation
- **DOI**: 10.1038/s41597-025-06032-2
- **License**: CC BY 4.0
- **Investigators**: Heegyu Kim, Kyungho Won, Minkyu Ahn, Sung Chan Jun
- **Senior author**: Sung Chan Jun
- **Institution**: Gwangju Institute of Science and Technology
- **Department**: School of Electrical Engineering and Computer Science, GIST
- **Country**: KR
- **Repository**: Figshare
- **Data URL**: https://doi.org/10.6084/m9.figshare.28806815.v2
- **Publication year**: 2025
- **Ethics approval**: GIST IRB, No. 20211201-HR-64-02-04
- **Keywords**: SSVEP, BCI, beta range, visual fatigue, 40-class speller, JFPM, EEG

## References
- Kim, H., Won, K., Ahn, M., & Jun, S. C. (2025). A 40-class SSVEP speller dataset: beta range stimulation for low-fatigue BCI applications. Scientific Data, 12, 1751. https://doi.org/10.1038/s41597-025-06032-2
- Appelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896
- Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8

---
Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks)
https://github.com/NeuroTechX/moabb

## Storage & Ingestion
- **Backend**: nemar; base `s3://nemar/nm000127`
- **Raw key**: dataset_description.json; dependency keys: README.md, participants.json, participants.tsv
- **Ingestion fingerprint**: 95eb262c6e78430c0b841d66039a4445b40c18261da7dc3745c76237e97bc341
- **Timestamps**: digested 2026-04-30T14:08:35Z; dataset modified 2026-04-29T22:38:12Z; creation date not recorded
- **Stats computed**: 2026-05-01T13:49:34Z

## Automated Tagging
- **Model**: openai/gpt-5.2 (tagged 2026-04-07T09:32:40Z; config hash 3557b68bca409f28, metadata hash 8fcc750d2079d0a2)
- **Tags (confidence)**: Pathology = Healthy (0.7); Modality = Visual (0.9); Type = Perception (0.8)
- **Few-shot analysis**: The closest few-shot example by stimulus and cognitive framing is the schizophrenia visual motion discrimination dataset (Pathology = Schizophrenia/Psychosis, Modality = Visual, Type = Perception). Although the population differs, it establishes the convention that visually driven, stimulus-evoked EEG paradigms aimed at decoding visual stimulus properties are labeled Visual + Perception rather than Motor; the motor-imagery example shows Type = Motor is reserved for movement or imagery as the research focus, which an SSVEP speller driven by visual flicker is not.
- **Metadata analysis**: Population: "Health status: healthy", 40 subjects. Stimulus/modality: "Stimulus type: JFPM visual flicker", "Stimulus modalities: visual", "Primary modality: visual". Paradigm/task: "Paradigm: ssvep" and "Task type: speller" / "Applications: speller", indicating a visual-evoked BCI spelling paradigm based on steady-state visually evoked potentials.
- **Paper abstract analysis**: No usable paper information; the metadata cites a DOI but no abstract text was provided.
- **Evidence alignment check**: Pathology: "Health status: healthy" aligns with the convention that non-clinical cohorts are labeled Healthy. Modality: "JFPM visual flicker" and "Primary modality: visual" align with visual-stimulus datasets labeled Visual. Type: "Paradigm: ssvep" and "Task type: speller" map, under the few-shot convention, stimulus-driven sensory paradigms to Perception rather than Motor. No conflicts detected; no overrides needed.
- **Decision summary**: Pathology top-2: (1) Healthy, supported by the explicit health status; (2) Unknown, relevant only if health information were absent. Final: Healthy. Modality top-2: (1) Visual, supported by the flicker-stimulus quotes; (2) Multisensory/Other, unsupported (no non-visual stimulus described). Final: Visual. Type top-2: (1) Perception: SSVEP is an evoked response to visual flicker frequencies, i.e. stimulus-driven recognition and decoding; (2) Attention: plausible because SSVEP spellers require attending to a target, but the primary construct is stimulus-evoked frequency tagging for spelling, not an attention manipulation. Final: Perception. Confidence: Pathology rests on one explicit quote; Modality on two or more; Type on two paradigm/task quotes plus strong alignment with the few-shot Perception convention.

## Naming
- **Canonical name**: none assigned; name source: author_year → Kim2025_SSVEP
- **Name confidence**: 0.66 (suggested 2026-04-14T10:18:35Z by openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic fallback)
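Finally, a minimal sketch for consuming this record programmatically, assuming a locally saved copy of the raw JSON envelope noted at the top (`{"success": true, "database": "eegdash", "data": {...}}`). The filename is hypothetical; the field names are exactly those documented above:

```python
# Pull the key fields out of a saved eegdash record (stdlib only).
import json

with open("nm000127.json") as f:              # hypothetical local copy of the response
    envelope = json.load(f)

assert envelope["success"] and envelope["database"] == "eegdash"
rec = envelope["data"]

print(rec["name"])                             # Kim2025 - 40-class beta-range ...
print(rec["demographics"]["subjects_count"])   # 40
print(rec["size_bytes"] / 2**30, "GiB")        # ~8.07 GiB
# Per-file stats are stored as {val, count} histograms:
for entry in rec["sfreq_counts"]:
    print(f'{entry["count"]} recordings at {entry["val"]} Hz')  # 240 at 1024.0 Hz
```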