{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c67","dataset_id":"nm000126","associated_paper_doi":null,"authors":["Yijun Wang","Xiaogang Chen","Xiaorong Gao","Shangkai Gao"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":34,"ages":[22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22],"age_min":22,"age_max":22,"age_mean":22.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000126","osf_url":null,"github_url":null,"paper_url":null},"funding":["National Natural Science Foundation of China (No. 61431007, No. 91220301, and No. 91320202)","National High-tech R&D Program (863) of China (No. 2012AA011601)","Recruitment Program for Young Professionals","Young Talents Lift Project of Chinese Association of Science and Technology","PUMC Youth Fund (No. 3332016101)"],"ingestion_fingerprint":"3bf95d792d4029436ca5909ec7abf3475106ec40a83aae9c51a8d74136cc2e92","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Wang2016 – SSVEP Wang 2016 dataset","readme":"# SSVEP Wang 2016 dataset\nSSVEP Wang 2016 dataset.\n## Dataset Overview\n- **Code**: Wang2016\n- **Paradigm**: ssvep\n- **DOI**: 10.1109/TNSRE.2016.2627556\n- **Subjects**: 34\n- **Sessions per subject**: 1\n- **Events**: 8=1, 9=2, 10=3, 11=4, 12=5, 13=6, 14=7, 15=8, 8.2=9, 9.2=10, 10.2=11, 11.2=12, 12.2=13, 13.2=14, 14.2=15, 15.2=16, 8.4=17, 9.4=18, 10.4=19, 11.4=20, 12.4=21, 13.4=22, 14.4=23, 15.4=24, 8.6=25, 9.6=26, 10.6=27, 11.6=28, 12.6=29, 13.6=30, 14.6=31, 15.6=32, 8.8=33, 9.8=34, 10.8=35, 11.8=36, 12.8=37, 13.8=38, 14.8=39, 15.8=40\n- **Trial interval**: [0.5, 5.5] s\n- **File format**: MATLAB MAT\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 250.0 Hz\n- **Number of channels**: 64\n- **Channel types**: eeg=64\n- **Channel names**: AF3, AF4, C1, C2, C3, C4, C5, C6, CB1, CB2, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, M1, M2, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO5, PO6, PO7, PO8, POz, Pz, T7, T8, TP7, TP8\n- **Montage**: standard_1005\n- **Hardware**: Synamps2 EEG system (Neuroscan, Inc.)\n- **Reference**: Cz\n- **Line frequency**: 50.0 Hz\n- **Online filters**: {'bandpass': [0.15, 200], 'notch': 50}\n- **Impedance threshold**: 10 kOhm\n## Participants\n- **Number of subjects**: 34\n- **Health status**: healthy\n- **Age**: mean=22.0, min=17, max=34\n- **Gender distribution**: female=17, male=18\n- **BCI experience**: 8 experienced, 27 naïve\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: ssvep\n- **Number of classes**: 40\n- **Class labels**: 8, 9, 10, 11, 12, 13, 14, 15, 8.2, 9.2, 10.2, 11.2, 12.2, 13.2, 14.2, 15.2, 8.4, 9.4, 10.4, 11.4, 12.4, 13.4, 14.4, 15.4, 8.6, 9.6, 10.6, 11.6, 12.6, 13.6, 14.6, 15.6, 8.8, 9.8, 10.8, 11.8, 12.8, 13.8, 14.8, 15.8\n- **Trial duration**: 6.0 s\n- **Study design**: Cue-guided target selecting task using a 40-target BCI speller with joint frequency and phase modulation (JFPM) approach\n- **Stimulus type**: visual flicker\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Instructions**: Subjects were asked to shift their gaze to the target as soon as possible after cue and 
## Acquisition\n- **Sampling rate**: 250.0 Hz\n- **Number of channels**: 64\n- **Channel types**: eeg=64\n- **Channel names**: AF3, AF4, C1, C2, C3, C4, C5, C6, CB1, CB2, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, M1, M2, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO5, PO6, PO7, PO8, POz, Pz, T7, T8, TP7, TP8\n- **Montage**: standard_1005\n- **Hardware**: Synamps2 EEG system (Neuroscan, Inc.)\n- **Reference**: Cz\n- **Line frequency**: 50.0 Hz\n- **Online filters**: bandpass 0.15–200 Hz, notch 50 Hz\n- **Impedance threshold**: 10 kΩ\n## Participants\n- **Number of subjects**: 34\n- **Health status**: healthy\n- **Age**: mean=22.0, min=17, max=34\n- **Gender distribution**: female=17, male=18\n- **BCI experience**: 8 experienced, 27 naïve\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: ssvep\n- **Number of classes**: 40\n- **Class labels**: 8, 9, 10, 11, 12, 13, 14, 15, 8.2, 9.2, 10.2, 11.2, 12.2, 13.2, 14.2, 15.2, 8.4, 9.4, 10.4, 11.4, 12.4, 13.4, 14.4, 15.4, 8.6, 9.6, 10.6, 11.6, 12.6, 13.6, 14.6, 15.6, 8.8, 9.8, 10.8, 11.8, 12.8, 13.8, 14.8, 15.8\n- **Trial duration**: 6.0 s\n- **Study design**: Cue-guided target selecting task using a 40-target BCI speller with a joint frequency and phase modulation (JFPM) approach\n- **Stimulus type**: visual flicker\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Instructions**: Subjects were asked to shift their gaze to the target as soon as possible after the cue and to avoid eye blinks during the 5-s stimulation period\n- **Stimulus presentation**: software=MATLAB Psychophysics Toolbox Ver. 3 (PTB-3), display=23.6-in LCD monitor (Acer GD245 HQ, response time: 2 ms), resolution=1920 × 1080 pixels at 60 Hz, viewing_distance=70 cm, stimulus_size=140 × 140 pixels (3.2° × 3.2°), character_size=32 × 32 pixels (0.7° × 0.7°), matrix_size=1510 × 1037 pixels (34° × 24°), matrix_layout=5 × 8 stimulus matrix, inter_stimulus_distance=50 pixels vertical and horizontal, method=sampled sinusoidal stimulation\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8\n  9\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9\n  10\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10\n  11\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11\n  12\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12\n  13\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13\n  14\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14\n  15\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/15\n  8.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8_2\n  9.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9_2\n  10.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_2\n  11.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11_2\n  12.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_2\n  13.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13_2\n  14.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14_2\n  15.2\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/15_2\n  8.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8_4\n  9.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9_4\n  10.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_4\n  11.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11_4\n  12.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_4\n  13.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13_4\n  14.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14_4\n  15.4\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/15_4\n  8.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8_6\n  9.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9_6\n  10.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_6\n  11.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11_6\n
  12.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_6\n  13.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13_6\n  14.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14_6\n  15.6\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/15_6\n  8.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8_8\n  9.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9_8\n  10.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_8\n  11.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11_8\n  12.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_8\n  13.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13_8\n  14.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14_8\n  15.8\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/15_8\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: ssvep\n- **Stimulus frequencies**: [8.0, 8.2, 8.4, 8.6, 8.8, 9.0, 9.2, 9.4, 9.6, 9.8, 10.0, 10.2, 10.4, 10.6, 10.8, 11.0, 11.2, 11.4, 11.6, 11.8, 12.0, 12.2, 12.4, 12.6, 12.8, 13.0, 13.2, 13.4, 13.6, 13.8, 14.0, 14.2, 14.4, 14.6, 14.8, 15.0, 15.2, 15.4, 15.6, 15.8] Hz\n- **Frequency resolution**: 0.2 Hz\n- **Number of targets**: 40\n- **Number of repetitions**: 6\n- **Cue duration**: 0.5 s\n## Data Structure\n- **Trials**: 240\n- **Trials per class**: 6 per target\n- **Blocks per session**: 6\n- **Trials context**: 40 trials per block, corresponding to all 40 characters in random order\n## Preprocessing\n- **Data state**: Raw epochs extracted from continuous EEG recordings according to stimulus onsets, downsampled to 250 Hz, no digital filters applied\n- **Preprocessing applied**: True\n- **Steps**: Epoch extraction according to stimulus onsets from the event channel; downsampling from 1000 Hz to 250 Hz; no digital filters applied\n- **Downsampled to**: 250.0 Hz\n- **Epoch window**: [-0.5, 5.5] s\n- **Notes**: Data epochs include 0.5 s before stimulus onset, 5 s of stimulation, and 0.5 s after stimulus offset. The upper-bound frequency of SSVEP harmonics is around 90 Hz.\n
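## Loading Example\nA minimal sketch of pulling these epochs through MOABB, which generated this card. Wang2016 and SSVEP are real MOABB class names, but keyword arguments and download behavior may differ across MOABB versions, so treat this as an illustration rather than a verified recipe:\n```python\n# Sketch: load the Wang2016 SSVEP epochs via MOABB (assumes MOABB is installed;\n# the first call may download several GB of data).\nfrom moabb.datasets import Wang2016\nfrom moabb.paradigms import SSVEP\n\ndataset = Wang2016()\nparadigm = SSVEP(n_classes=40)  # keep all 40 flicker frequencies, 8-15.8 Hz\n\n# X: (n_trials, n_channels, n_samples); y: frequency labels such as '10.2'\nX, y, metadata = paradigm.get_data(dataset=dataset, subjects=[1])\nprint(X.shape, sorted(set(y)))\n```\n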
## Signal Processing\n- **Classifiers**: CCA, FBCCA\n- **Feature extraction**: Canonical Correlation Analysis (CCA), Filter Bank CCA (FBCCA)\n- **Analyzed frequency band**: 7.0–90.0 Hz\n## Cross-Validation\n- **Method**: leave-one-block-out (six blocks)\n- **Folds**: 6\n- **Evaluation type**: within_subject\n
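## Worked Example: CCA Scoring and ITR\nTo ground the classifier and ITR figures reported below, the following is an illustrative sketch of standard reference-signal CCA scoring together with the Wolpaw ITR formula; the helper names are illustrative rather than taken from the original study:\n```python\nimport numpy as np\nfrom sklearn.cross_decomposition import CCA\n\ndef reference_signals(freq, n_samples, sfreq, n_harmonics=5):\n    # Sine/cosine references at the stimulus frequency and its harmonics.\n    t = np.arange(n_samples) / sfreq\n    refs = []\n    for h in range(1, n_harmonics + 1):\n        refs.append(np.sin(2 * np.pi * h * freq * t))\n        refs.append(np.cos(2 * np.pi * h * freq * t))\n    return np.column_stack(refs)\n\ndef cca_score(epoch, refs):\n    # epoch: (n_channels, n_samples); correlation of the first canonical pair.\n    x, y = CCA(n_components=1).fit_transform(epoch.T, refs)\n    return np.corrcoef(x[:, 0], y[:, 0])[0, 1]\n\ndef classify(epoch, freqs, sfreq):\n    # Predict the flicker frequency with the highest canonical correlation.\n    scores = [cca_score(epoch, reference_signals(f, epoch.shape[1], sfreq)) for f in freqs]\n    return freqs[int(np.argmax(scores))]\n\ndef itr_bits_per_min(n_classes, p, seconds_per_selection):\n    # Wolpaw ITR for accuracy 0 < p < 1; selection time includes gaze shifting.\n    bits = np.log2(n_classes) + p * np.log2(p) + (1 - p) * np.log2((1 - p) / (n_classes - 1))\n    return bits * 60.0 / seconds_per_selection\n```\nFBCCA extends this scheme by filtering each epoch into several sub-bands, computing the CCA correlation in each, and combining the weighted squared correlations before taking the argmax.\n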
## Performance (Original Study)\n- **ITR**: 117.75 bits/min\n- **Peak ITR (FBCCA, 0.55 s, gaze)**: 117.75 bits/min\n- **Peak ITR (FBCCA, 2 s, gaze)**: 68.99 bits/min\n- **Peak ITR (CCA, 0.55 s, gaze)**: 89.89 bits/min\n- **Peak ITR (CCA, 2 s, gaze)**: 56.03 bits/min\n- **Visual latency**: 136.91 ± 18.4 ms (mean ± SD)\n## BCI Application\n- **Applications**: speller\n- **Environment**: dimly lit soundproof room\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Visual\n- **Type**: Perception\n## Documentation\n- **Description**: A benchmark SSVEP dataset acquired with a 40-target BCI speller using a joint frequency and phase modulation (JFPM) approach\n- **DOI**: 10.1109/TNSRE.2016.2627556\n- **License**: CC-BY-4.0\n- **Investigators**: Yijun Wang, Xiaogang Chen, Xiaorong Gao, Shangkai Gao\n- **Senior author**: Shangkai Gao\n- **Contact**: wangyj@semi.ac.cn; chenxg@bme.cams.cn; gxrdea@tsinghua.edu.cn; gsk-dea@tsinghua.edu.cn\n- **Institution**: Tsinghua University\n- **Department**: Department of Biomedical Engineering, Tsinghua University\n- **Address**: Beijing, China\n- **Country**: CN\n- **Repository**: BNCI Horizon 2020\n- **Data URL**: http://bci.med.tsinghua.edu.cn/download.html\n- **Publication year**: 2016\n- **Funding**: National Natural Science Foundation of China (No. 61431007, No. 91220301, and No. 91320202); National High-tech R&D Program (863) of China (No. 2012AA011601); Recruitment Program for Young Professionals; Young Talents Lift Project of Chinese Association of Science and Technology; PUMC Youth Fund (No. 3332016101)\n- **Ethics approval**: Research Ethics Committee of Tsinghua University\n- **Keywords**: Brain–computer interface (BCI), electroencephalogram (EEG), joint frequency and phase modulation (JFPM), public data set, steady-state visual evoked potential (SSVEP)\n## External Links\n- **Source**: http://bci.med.tsinghua.edu.cn/download.html\n- **BNCI Horizon**: https://bnci-horizon-2020.eu/database/data-sets\n## Abstract\nThis paper presents a benchmark steady-state visual evoked potential (SSVEP) dataset acquired with a 40-target brain–computer interface (BCI) speller. The dataset consists of 64-channel electroencephalogram (EEG) data from 35 healthy subjects (8 experienced and 27 naïve) while they performed a cue-guided target selecting task. The virtual keyboard of the speller was composed of 40 visual flickers, which were coded using a joint frequency and phase modulation (JFPM) approach. The stimulation frequencies ranged from 8 Hz to 15.8 Hz with an interval of 0.2 Hz. The phase difference between two adjacent frequencies was 0.5π. For each subject, the data included six blocks of 40 trials corresponding to all 40 flickers indicated by a visual cue in a random order. The stimulation duration in each trial was five seconds.\n## Methodology\nThe study used a cue-guided target selecting task with a 40-target BCI speller. Stimuli were presented on a 23.6-in LCD monitor at 60 Hz using the sampled sinusoidal stimulation method. Each trial started with a 0.5-s target cue, followed by 5 s of concurrent flickering of all stimuli, and ended with a 0.5-s blank screen. The experiment included six blocks per subject, with 40 trials per block in random order. EEG data were recorded with a Synamps2 system at 1000 Hz from 64 electrodes referenced to Cz. Data were preprocessed by extracting epochs according to stimulus onsets and downsampling to 250 Hz. The JFPM approach encoded the 40 characters using frequencies from 8 to 15.8 Hz (0.2 Hz interval) and phases from 0 to 19.5π (0.5π interval). Performance was evaluated using CCA and FBCCA methods with leave-one-block-out cross-validation.\n## References\nWang, Y., Chen, X., Gao, X., & Gao, S. (2016). A benchmark dataset for SSVEP-based brain–computer interfaces. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(10), 1746–1752. https://doi.org/10.1109/TNSRE.2016.2627556\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.4.3 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":3351768158,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000126","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["ssvep"],"timestamps":{"digested_at":"2026-04-30T14:08:34.881567+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-29T22:38:18Z"},"total_files":34,"computed_title":"Wang2016 – SSVEP Wang 2016 dataset","nchans_counts":[{"val":64,"count":34}],"sfreq_counts":[{"val":250.0,"count":34}],"stats_computed_at":"2026-05-01T13:49:34.644850+00:00","total_duration_s":52223.864,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"e6cc8d2bb52b142b","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot convention is the visual discrimination task in the schizophrenia dataset (Meta-rdk), which is labeled Modality=Visual and Type=Perception because the core manipulation is visual stimulus processing/discrimination. This Wang2016 dataset is also driven by visually presented stimuli (SSVEP visual flicker) and is analyzed via evoked responses for stimulus-class identification (BCI target selection), which under the few-shot conventions maps best to Visual + Perception (rather than Motor).","metadata_analysis":"Key facts from provided metadata:\n- Population: explicitly healthy (\"Health status: healthy\"; also \"The dataset consists of ... 
data from 35 healthy subjects\")\n- Stimulus/modality: explicitly visual SSVEP flicker (\"Stimulus type: visual flicker\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\"; also HED tags repeatedly include \"Visual-presentation\")\n- Paradigm/purpose: SSVEP-based BCI speller / target selection (\"Paradigm: ssvep\"; \"Cue-guided target selecting task using a 40-target BCI speller\"; \"benchmark steady-state visual evoked potential (SSVEP) dataset acquired with a 40-target brain–computer interface (BCI) speller\")","paper_abstract_analysis":"Useful paper text is embedded in the dataset README under \"## Abstract\": it reiterates \"35 healthy subjects\" and that they performed a \"cue-guided target selecting task\" with \"40 visual flickers\" in an SSVEP BCI speller, supporting Healthy + Visual and a perception/visual-evoked focus.","evidence_alignment_check":"Pathology:\n1) Metadata says: \"Health status: healthy\" and \"data from 35 healthy subjects\".\n2) Few-shot pattern suggests: when participants are non-clinical volunteers, label Pathology=Healthy.\n3) ALIGN.\n\nModality:\n1) Metadata says: \"Stimulus type: visual flicker\", \"Stimulus modalities: visual\", and HED \"Visual-presentation\".\n2) Few-shot pattern suggests: visual stimulus-driven paradigms map to Modality=Visual.\n3) ALIGN.\n\nType:\n1) Metadata says: \"Paradigm: ssvep\" and describes a \"BCI speller\" with \"40 visual flickers\" for target selection.\n2) Few-shot pattern suggests: stimulus-driven sensory paradigms (e.g., visual discrimination) are labeled Type=Perception; motor labels are reserved for movement execution/imagery being the research focus.\n3) ALIGN (SSVEP is a visually evoked/perceptual physiology paradigm; although it is a BCI application, the dominant construct is visual evoked response discrimination).","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n- Healthy (winner): \"Health status: healthy\"; \"35 healthy subjects\".\n- Unknown (runner-up): would apply only if health status were not stated.\nDecision: Healthy. Alignment: aligned with few-shot conventions.\n\nModality:\n- Visual (winner): \"Stimulus type: visual flicker\"; \"Stimulus modalities: visual\"; HED \"Visual-presentation\".\n- Other (runner-up): only if stimulus channel were unclear or mixed.\nDecision: Visual. Alignment: aligned.\n\nType:\n- Perception (winner): SSVEP = steady-state *visual evoked* responses; \"40 visual flickers\"; paradigm \"ssvep\" used for stimulus-class identification.\n- Attention (runner-up): gaze shifting to cued target could be framed as attentional selection, but the dataset is primarily an evoked-visual BCI benchmark.\nDecision: Perception. Alignment: aligned.\n\nConfidence justification (quotes/features): Pathology supported by 2 explicit statements (\"Health status: healthy\"; \"35 healthy subjects\"). Modality supported by 3+ explicit statements (\"visual flicker\"; \"Stimulus modalities: visual\"; repeated HED \"Visual-presentation\"). Type supported by multiple explicit paradigm statements (\"Paradigm: ssvep\"; \"steady-state visual evoked potential (SSVEP)\"; \"40 visual flickers\")."}},"canonical_name":null,"name_confidence":0.86,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Wang2016"}}