{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c6a","dataset_id":"nm000129","associated_paper_doi":null,"authors":["Bingchuan Liu","Xiaoshan Huang","Yijun Wang","Xiaogang Chen","Xiaorong Gao"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":70,"ages":[26,20,25,22,25,24,24,18,27,28,22,17,23,10,23,20,43,20,21,30,10,22,28,22,24,22,22,28,49,26,20,23,16,36,28,31,64,20,20,21,24,22,24,24,26,24,22,27,25,19,24,28,17,46,23,23,28,23,24,27,38,25,20,28,25,22,22,30,28,21],"age_min":10,"age_max":64,"age_mean":25.12857142857143,"species":null,"sex_distribution":{"f":28,"m":42},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000129","osf_url":null,"github_url":null,"paper_url":null},"funding":["National Key Research and Development Program of China (No. 2017YFB1002505)","Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB32040200)","Key Research and Development Program of Guangdong Province (No. 2018B030339001)","National Natural Science Foundation of China (Grant No. 61431007)"],"ingestion_fingerprint":"9d76d2e7a8b6eeb8d9a844dca6a62129e353f61bc518274d9f22a23df3eb836a","license":"Non-commercial research use","n_contributing_labs":null,"name":"Liu2020 – BETA SSVEP benchmark dataset","readme":"# BETA SSVEP benchmark dataset\nBETA SSVEP benchmark dataset.\n## Dataset Overview\n- **Code**: Liu2020BETA\n- **Paradigm**: ssvep\n- **DOI**: 10.3389/fnins.2020.00627\n- **Subjects**: 70\n- **Sessions per subject**: 1\n- **Events**: 8.6=1, 8.8=2, 9=3, 9.2=4, 9.4=5, 9.6=6, 9.8=7, 10=8, 10.2=9, 10.4=10, 10.6=11, 10.8=12, 11=13, 11.2=14, 11.4=15, 11.6=16, 11.8=17, 12=18, 12.2=19, 12.4=20, 12.6=21, 12.8=22, 13=23, 13.2=24, 13.4=25, 13.6=26, 13.8=27, 14=28, 14.2=29, 14.4=30, 14.6=31, 14.8=32, 15=33, 15.2=34, 15.4=35, 15.6=36, 15.8=37, 8=38, 8.2=39, 8.4=40\n- **Trial interval**: [0, 3.0] s\n- **File format**: MAT\n## Acquisition\n- **Sampling rate**: 250.0 Hz\n- **Number of channels**: 64\n- **Channel types**: eeg=64\n- **Channel names**: Fp1, Fpz, Fp2, AF3, AF4, F7, F5, F3, F1, Fz, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCz, FC2, FC4, FC6, FT8, T7, C5, C3, C1, Cz, C2, C4, C6, T8, M1, TP7, CP5, CP3, CP1, CPz, CP2, CP4, CP6, TP8, M2, P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO7, PO5, PO3, POz, PO4, PO6, PO8, CB1, O1, Oz, O2, CB2\n- **Montage**: standard_1005\n- **Hardware**: Synamps2 (Neuroscan)\n- **Reference**: Cz\n- **Line frequency**: 50.0 Hz\n- **Impedance threshold**: 10 kOhm\n## Participants\n- **Number of subjects**: 70\n- **Health status**: healthy\n- **Age**: mean=25.14, std=7.97, min=9, max=64\n- **Gender distribution**: male=42, female=28\n- **BCI experience**: mixed\n## Experimental Protocol\n- **Paradigm**: ssvep\n- **Task type**: cued-spelling\n- **Number of classes**: 40\n- **Class labels**: 8.6, 8.8, 9, 9.2, 9.4, 9.6, 9.8, 10, 10.2, 10.4, 10.6, 10.8, 11, 11.2, 11.4, 11.6, 11.8, 12, 12.2, 12.4, 12.6, 12.8, 13, 13.2, 13.4, 13.6, 13.8, 14, 14.2, 14.4, 14.6, 14.8, 15, 15.2, 15.4, 15.6, 15.8, 8, 8.2, 8.4\n- **Trial duration**: 3.0 s\n- **Feedback type**: visual\n- **Stimulus type**: JFPM visual flicker\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Training/test split**: False\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  8.6\n    ├─ 
Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8_6\n  8.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8_8\n  9\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9\n  9.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9_2\n  9.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9_4\n  9.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9_6\n  9.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9_8\n  10\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10\n  10.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_2\n  10.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_4\n  10.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_6\n  10.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_8\n  11\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11\n  11.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11_2\n  11.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11_4\n  11.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11_6\n  11.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11_8\n  12\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12\n  12.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_2\n  12.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_4\n  12.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_6\n  12.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_8\n  13\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13\n  13.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13_2\n  13.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13_4\n  13.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13_6\n  13.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13_8\n  14\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14\n  14.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14_2\n  14.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14_4\n  14.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14_6\n  14.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14_8\n  15\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/15\n  15.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ 
Visual-presentation\n    └─ Label/15_2\n  15.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/15_4\n  15.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/15_6\n  15.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/15_8\n  8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8\n  8.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8_2\n  8.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8_4\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: ssvep\n- **Stimulus frequencies**: [8.0, 8.2, 8.4, 8.6, 8.8, 9.0, 9.2, 9.4, 9.6, 9.8, 10.0, 10.2, 10.4, 10.6, 10.8, 11.0, 11.2, 11.4, 11.6, 11.8, 12.0, 12.2, 12.4, 12.6, 12.8, 13.0, 13.2, 13.4, 13.6, 13.8, 14.0, 14.2, 14.4, 14.6, 14.8, 15.0, 15.2, 15.4, 15.6, 15.8] Hz\n- **Frequency resolution**: 0.2 Hz\n## Data Structure\n- **Trials**: 160\n- **Blocks per session**: 4\n## Preprocessing\n- **Data state**: epoched\n- **Notch filter**: 50 Hz\n- **Filter type**: zero-phase FIR\n- **Downsampled to**: 250.0 Hz\n## Signal Processing\n- **Classifiers**: TRCA, msTRCA, FBCCA, CCA\n- **Feature extraction**: CCA, TRCA, FBCCA\n- **Frequency bands**: bandpass=[3.0, 100.0] Hz\n- **Spatial filters**: CCA, TRCA\n## Cross-Validation\n- **Method**: leave-one-block-out\n- **Folds**: 4\n- **Evaluation type**: within_subject\n## BCI Application\n- **Applications**: speller\n- **Environment**: classroom\n- **Online feedback**: True\n## Tags\n- **Pathology**: healthy\n- **Modality**: visual\n- **Type**: perception\n## Documentation\n- **DOI**: 10.3389/fnins.2020.00627\n- **License**: Non-commercial research use\n- **Investigators**: Bingchuan Liu, Xiaoshan Huang, Yijun Wang, Xiaogang Chen, Xiaorong Gao\n- **Senior author**: Xiaorong Gao\n- **Institution**: Tsinghua University\n- **Department**: Department of Biomedical Engineering, Tsinghua University\n- **Country**: CN\n- **Repository**: Tsinghua BCI Lab\n- **Data URL**: http://bci.med.tsinghua.edu.cn/upload/liubingchuan/\n- **Publication year**: 2020\n- **Funding**: National Key Research and Development Program of China (No. 2017YFB1002505); Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB32040200); Key Research and Development Program of Guangdong Province (No. 2018B030339001); National Natural Science Foundation of China (Grant No. 61431007)\n- **Ethics approval**: Ethics Committee of Tsinghua University, No. 20190002\n- **Keywords**: SSVEP, BCI, EEG, benchmark, JFPM\n## References\nB. Liu, X. Huang, Y. Wang, X. Chen, and X. Gao, \"BETA: A Large Benchmark Database Toward SSVEP-BCI Application,\" Frontiers in Neuroscience, vol. 14, p. 627, 2020. DOI: 10.3389/fnins.2020.00627\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). 
EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.4.3 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":3012862406,"source":"nemar","storage":{"backend":"s3","base":"s3://nemar/nm000129","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["ssvep"],"timestamps":{"digested_at":"2026-04-06T13:13:14.196941+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-18T00:41:29Z"},"total_files":70,"computed_title":"Liu2020 – BETA SSVEP benchmark dataset","nchans_counts":[{"val":64,"count":70}],"sfreq_counts":[{"val":250.0,"count":70}],"stats_computed_at":"2026-04-04T21:29:34.904725+00:00","total_duration_s":46879.72,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"16b0b78f4b7ec9b7","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.85},"reasoning":{"few_shot_analysis":"Closest few-shot by paradigm/stimulus mapping is the visual discrimination example (Meta-rdk) labeled as Visual + Perception: it uses an explicitly visual stimulus and the study focus is stimulus-driven sensory/perceptual processing. This guides labeling SSVEP visual flicker paradigms (stimulus-evoked steady-state responses) as Modality=Visual and Type=Perception rather than Motor/Decision-making. Also consistent with the auditory ABR/music-speech example labeled Auditory + Perception: stimulus-evoked sensory responses map to Perception even when attention demands are minimal.","metadata_analysis":"Key extracted facts from the dataset metadata:\n- Population: explicitly healthy: \"Health status: healthy\" and also \"Tags\\n- **Pathology**: healthy\".\n- Stimulus modality: explicitly visual: \"Stimulus modalities: visual\", \"Primary modality: visual\", and \"Stimulus type: JFPM visual flicker\".\n- Paradigm/purpose: SSVEP BCI speller benchmark using visual flicker classes: \"Paradigm: ssvep\", \"Task type: cued-spelling\", and the HED annotations repeatedly mark each class as \"Visual-presentation\".\n- The dataset also self-tags the construct: \"Tags\\n- **Type**: perception\".","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\"; \"Tags - Pathology: healthy\".\n- Few-shot suggests: when no disorder recruitment is indicated, label as Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"Stimulus modalities: visual\"; \"Primary modality: visual\"; \"Stimulus type: JFPM visual flicker\".\n- Few-shot suggests: visually presented stimuli (visual discrimination, visual targets) map to Visual modality.\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"Paradigm: ssvep\" with \"Stimulus type: ... 
visual flicker\" and self-tag \"Type: perception\".\n- Few-shot suggests: stimulus-evoked sensory paradigms (visual discrimination; auditory ABR) map to Perception.\n- Alignment: ALIGN.","decision_summary":"Pathology top-2:\n1) Healthy (selected): supported by \"Health status: healthy\" and \"Tags - Pathology: healthy\" (plus repeated healthy tagging in the README section).\n2) Unknown (runner-up): only if health status were missing; not needed here.\nAlignment: aligns with few-shot convention for non-clinical cohorts.\nConfidence basis: 3 explicit supporting snippets: \"Health status: healthy\"; \"Tags - Pathology: healthy\"; \"Tags - Pathology: healthy\" (in the Tags block).\n\nModality top-2:\n1) Visual (selected): supported by \"Stimulus modalities: visual\", \"Primary modality: visual\", and \"Stimulus type: JFPM visual flicker\".\n2) Other (runner-up): would apply if stimulus channel were unclear; it is explicit here.\nAlignment: aligns with few-shot visual-stimulus datasets.\nConfidence basis: 3+ explicit quotes as above.\n\nType top-2:\n1) Perception (selected): SSVEP is a visually evoked steady-state response; supported by \"Paradigm: ssvep\", the HED labeling of each class as \"Visual-presentation\", and the dataset’s own tag \"Type: perception\".\n2) Attention (runner-up): SSVEP tasks often require attending to a target flicker, but the primary construct here is sensory/evoked response classification for a speller benchmark.\nAlignment: aligns with few-shot convention mapping stimulus-evoked sensory paradigms to Perception.\nConfidence basis: 3 explicit supporting snippets: \"Paradigm: ssvep\"; \"Stimulus type: JFPM visual flicker\"; \"Tags - Type: perception\"."}},"canonical_name":["BetaSSVEP","BETA_SSVEP","BETA"],"name_confidence":0.72,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Liu2020"}}