{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c6b","dataset_id":"nm000130","associated_paper_doi":null,"authors":["Bingchuan Liu","Yijun Wang","Xiaorong Gao","Xiaogang Chen"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":"10.82901/nemar.nm000130","datatypes":["eeg"],"demographics":{"subjects_count":100,"ages":[70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70,70],"age_min":70,"age_max":70,"age_mean":70.0,"species":null,"sex_distribution":{"m":33,"f":67},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000130","osf_url":null,"github_url":null,"paper_url":null},"funding":["National Natural Science Foundation of China (No. 62171473)","Doctoral Brain+X Seed Grant Program of Tsinghua University","Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB32040200)"],"ingestion_fingerprint":"e8a640d4f9c7218775b0e2dfef99386f28058acbb2bcb7b4e2d0ead70385af12","license":"CC BY 4.0","n_contributing_labs":null,"name":"Liu2022 – eldBETA SSVEP benchmark dataset for elderly population","readme":"[![DOI](https://img.shields.io/badge/DOI-10.82901%2Fnemar.nm000130-blue)](https://doi.org/10.82901/nemar.nm000130)\n# eldBETA SSVEP benchmark dataset for elderly population\neldBETA SSVEP benchmark dataset for elderly population.\n## Dataset Overview\n- **Code**: Liu2022EldBETA\n- **Paradigm**: ssvep\n- **DOI**: 10.1038/s41597-022-01372-9\n- **Subjects**: 100\n- **Sessions per subject**: 7\n- **Events**: 8=1, 9.5=2, 11=3, 8.5=4, 10=5, 11.5=6, 9=7, 10.5=8, 12=9\n- **Trial interval**: [0, 6.0] s\n- **File format**: GDF (BIDS)\n## Acquisition\n- **Sampling rate**: 1000.0 Hz\n- **Number of channels**: 64\n- **Channel types**: eeg=64\n- **Montage**: standard_1005\n- **Hardware**: Synamps2 (Neuroscan)\n- **Reference**: Cz\n- **Line frequency**: 50.0 Hz\n- **Impedance threshold**: 20 kOhm\n## Participants\n- **Number of subjects**: 100\n- **Health status**: healthy\n- **Age**: mean=63.17, std=6.05, min=51, max=81\n- **Gender distribution**: male=33, female=67\n## Experimental Protocol\n- **Paradigm**: ssvep\n- **Task type**: 9-target SSVEP speller\n- **Number of classes**: 9\n- **Class labels**: 8, 9.5, 11, 8.5, 10, 11.5, 9, 10.5, 12\n- **Trial duration**: 5.0 s\n- **Feedback type**: visual\n- **Stimulus type**: JFPM visual flicker\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: online\n- **Training/test split**: False\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8\n  9.5\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9_5\n  11\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11\n  8.5\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8_5\n  10\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10\n  11.5\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ 
Label/11_5\n  9\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9\n  10.5\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_5\n  12\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: ssvep\n- **Stimulus frequencies**: [8.0, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0] Hz\n- **Frequency resolution**: 0.5 Hz\n## Data Structure\n- **Trials per session**: 63 (7 blocks × 9 targets)\n- **Blocks per session**: 7\n## Signal Processing\n- **Classifiers**: TDCA, ms-eCCA, ensemble_msTRCA, ensemble_TRCA, Extended_CCA, ITCCA, L1MCCA, FBCCA, CVARS, tMSI, MEC, MSI, CCA\n- **Feature extraction**: TDCA, CCA, FBCCA, TRCA, ms-eCCA, msTRCA, Extended_CCA, ITCCA, L1MCCA, CVARS, tMSI, MEC, MSI\n- **Frequency bands**: bandpass=[6.0, 100.0] Hz\n- **Spatial filters**: TDCA, CCA, TRCA, ms-eCCA, msTRCA, Extended_CCA, ITCCA, L1MCCA, CVARS, MEC, MSI, tMSI\n## Cross-Validation\n- **Method**: leave-one-block-out\n- **Folds**: 7\n- **Evaluation type**: within_subject\n## BCI Application\n- **Applications**: speller\n- **Environment**: lab\n- **Online feedback**: True\n## Tags\n- **Pathology**: healthy\n- **Modality**: visual\n- **Type**: perception\n## Documentation\n- **DOI**: 10.1038/s41597-022-01372-9\n- **License**: CC BY 4.0\n- **Investigators**: Bingchuan Liu, Yijun Wang, Xiaorong Gao, Xiaogang Chen\n- **Senior author**: Xiaogang Chen\n- **Institution**: Tsinghua University\n- **Department**: Department of Biomedical Engineering, School of Medicine, Tsinghua University\n- **Country**: CN\n- **Repository**: Figshare\n- **Data URL**: https://doi.org/10.6084/m9.figshare.18032669\n- **Publication year**: 2022\n- **Funding**: National Natural Science Foundation of China (No. 62171473); Doctoral Brain+X Seed Grant Program of Tsinghua University; Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB32040200)\n- **Ethics approval**: Institutional Review Board of Tsinghua University, No. 20210032\n- **Keywords**: SSVEP, BCI, EEG, elderly, aging, benchmark, JFPM\n## References\nLiu, B., Wang, Y., Gao, X., & Chen, X. (2022). eldBETA: A large eldercare-oriented benchmark database of SSVEP-BCI for the aging population. Scientific Data, 9, 252. https://doi.org/10.1038/s41597-022-01372-9\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.4.3 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":"Xiaogang Chen","sessions":["0","1","2","3","4","5","6"],"size_bytes":18646359596,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000130","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["ssvep"],"timestamps":{"digested_at":"2026-04-30T14:08:36.327673+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-29T01:16:36Z"},"total_files":700,"computed_title":"Liu2022 – eldBETA SSVEP benchmark dataset for elderly population","nchans_counts":[{"val":64,"count":700}],"sfreq_counts":[{"val":1000.0,"count":700}],"stats_computed_at":"2026-05-01T13:49:34.644935+00:00","total_duration_s":72630.62,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"736cc2c42c8e3e3a","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"No few-shot example is explicitly SSVEP, but labeling conventions transfer from similar stimulus-driven paradigms. Example: “Meta-rdk: Preprocessed EEG data” is a visual discrimination task and is labeled Modality=Visual, Type=Perception; this supports mapping a visually driven evoked-response paradigm to Visual+Perception. Example: “Subcortical responses to music and speech…” uses auditory stimuli and is labeled Auditory+Perception, supporting that sensory-evoked response characterization is typically Type=Perception (not Motor/Decision-making) when the main manipulation is the stimulus and its neural response.","metadata_analysis":"Population facts: the README explicitly states “Health status: healthy” and describes an “elderly population” with “Age: mean=63.17… min=51, max=81”.\nStimulus/modality facts: the protocol states “Stimulus type: JFPM visual flicker”, “Stimulus modalities: visual”, and “Primary modality: visual”.\nTask/type facts: it is an SSVEP BCI paradigm: “Paradigm: ssvep” and “Task type: 9-target SSVEP speller” with frequency-tagged visual targets (“Stimulus frequencies: [8.0, 8.5, …, 12.0] Hz”).","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says participants are “healthy” (quote: “Health status: healthy”). Few-shot pattern suggests using explicit recruited clinical groups when present; here none are present, so this aligns with labeling as Healthy.\nModality: Metadata says stimuli are visual (quotes: “Stimulus type: JFPM visual flicker”; “Primary modality: visual”). Few-shot pattern maps stimulus channel to Modality (e.g., visual discrimination -> Visual). Aligns.\nType: Metadata indicates an SSVEP speller with visually flickering frequency-tagged stimuli (quotes: “Paradigm: ssvep”; “Task type: 9-target SSVEP speller”). Few-shot convention for sensory-evoked paradigms (visual discrimination, auditory ABR) maps to Type=Perception rather than Motor/Decision-making. Aligns.","decision_summary":"Pathology top-2: (1) Healthy — supported by “Health status: healthy” and general non-clinical benchmark framing; (2) Development — plausible only because it’s an age-focused cohort, but metadata indicates elderly adults, not childhood/adolescence. 
Winner: Healthy.\nModality top-2: (1) Visual — supported by “Stimulus type: JFPM visual flicker”, “Stimulus modalities: visual”, “Primary modality: visual”; (2) Multisensory — unlikely as no auditory/tactile channels described. Winner: Visual.\nType top-2: (1) Perception — SSVEP relies on processing frequency-tagged visual flicker targets (“Paradigm: ssvep”; “Stimulus frequencies …”); (2) Attention — plausible because SSVEP spellers require attentional selection, but the dataset is primarily a stimulus-evoked BCI benchmark rather than an explicit attention manipulation. Winner: Perception.\nConfidence justification: Pathology has 1 very explicit health-status quote; Modality has 3 explicit visual-stimulus quotes; Type has 2 explicit SSVEP/speller quotes plus strong few-shot alignment to sensory-evoked paradigms being Perception."}},"canonical_name":null,"name_confidence":0.72,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Liu2022"}}
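
The record above is a single minified JSON object. A minimal sketch of pulling the headline numbers out of it with the Python standard library, assuming the response has been saved as valid one-line JSON to a hypothetical file `record.json`:

```python
import json

# Load the API response captured above (hypothetical filename).
with open("record.json") as f:
    record = json.load(f)["data"]

print(record["dataset_id"])                      # nm000130
print(record["demographics"]["subjects_count"])  # 100
print(record["tasks"], record["sessions"])       # ['ssvep'], 7 sessions

# Per-file stats are stored as {"val": ..., "count": ...} pairs.
for entry in record["sfreq_counts"]:
    print(f"{entry['count']} files sampled at {entry['val']} Hz")

# total_duration_s covers all 700 files.
print(f"{record['total_duration_s'] / 3600:.1f} h of EEG in total")
```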
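The storage block points at `s3://nemar/nm000130`, with `raw_key` and `dep_keys` relative to that base. A sketch of anonymous access with boto3, assuming (not verified here) that the NEMAR bucket permits unsigned reads:

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Unsigned (anonymous) S3 client; an assumption about bucket policy.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List a few objects under the dataset prefix from the storage block.
resp = s3.list_objects_v2(Bucket="nemar", Prefix="nm000130/", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# raw_key resolved against the base prefix (assumed layout).
s3.download_file("nemar", "nm000130/dataset_description.json",
                 "dataset_description.json")
```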
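The README is MOABB-generated and lists the dataset code `Liu2022EldBETA`. Assuming a dataset class of that name is registered in your MOABB install (an assumption; check `moabb.datasets` for the actual class), trials could be fetched through MOABB's standard paradigm interface:

```python
# Hedged sketch: the class name Liu2022EldBETA is taken from the
# "Code" field in the README and may differ in your MOABB version.
from moabb.datasets import Liu2022EldBETA
from moabb.paradigms import SSVEP

dataset = Liu2022EldBETA()
paradigm = SSVEP(n_classes=9)  # the nine flicker frequencies

# X: (n_trials, n_channels, n_times); y: frequency labels as strings;
# metadata: one row per trial with subject/session/run columns.
X, y, metadata = paradigm.get_data(dataset, subjects=[1])
```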
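The Signal Processing section lists the standard SSVEP recognizers (CCA, FBCCA, TRCA variants, and so on). As a reference point, classic CCA detection correlates a trial against sine/cosine templates at each of the nine candidate frequencies and picks the best match. This is a generic textbook sketch using scikit-learn, not the benchmark's exact implementation; in practice the trial would first be band-passed (the record lists 6.0-100.0 Hz):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_detect(trial, freqs, sfreq, n_harmonics=3):
    """Classify one SSVEP trial (n_samples, n_channels) by its maximal
    canonical correlation with sine/cosine references per frequency."""
    t = np.arange(trial.shape[0]) / sfreq
    scores = []
    for f in freqs:
        # Reference: sin/cos pairs at the fundamental and its harmonics.
        ref = np.column_stack([
            fn(2 * np.pi * f * h * t)
            for h in range(1, n_harmonics + 1)
            for fn in (np.sin, np.cos)
        ])
        cca = CCA(n_components=1).fit(trial, ref)
        u, v = cca.transform(trial, ref)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return freqs[int(np.argmax(scores))]

# Candidate frequencies from the record: 8.0-12.0 Hz in 0.5 Hz steps.
freqs = np.arange(8.0, 12.5, 0.5)
# Stand-in trial: 5 s at 1000 Hz, 64 channels (noise, for shape only).
trial = np.random.default_rng(0).standard_normal((5000, 64))
print(cca_detect(trial, freqs, sfreq=1000.0))
```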
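Cross-validation in the record is leave-one-block-out over the 7 blocks of a session (63 trials = 7 blocks × 9 targets). With block indices as group labels, scikit-learn's `LeaveOneGroupOut` reproduces that 7-fold structure; a sketch with stand-in data:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

n_blocks, n_targets = 7, 9
y = np.tile(np.arange(n_targets), n_blocks)         # target label per trial
blocks = np.repeat(np.arange(n_blocks), n_targets)  # block index per trial
X = np.zeros((n_blocks * n_targets, 1))             # placeholder features

logo = LeaveOneGroupOut()  # one fold per held-out block -> 7 folds
for fold, (train_idx, test_idx) in enumerate(logo.split(X, y, groups=blocks)):
    # Train a recognizer (e.g., TRCA) on 6 blocks, test on the held-out one.
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test trials")
```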