{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4c99","dataset_id":"nm000196","associated_paper_doi":null,"authors":["Jordy Thielen","Philip van den Broek","Jason Farquhar","Peter Desain"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":12,"ages":[24,24,24,24,24,24,24,24,24,24,24,24],"age_min":24,"age_max":24,"age_mean":24.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000196","osf_url":null,"github_url":null,"paper_url":null},"funding":["BrainGain Smart Mix Program of the Netherlands Ministry of Economic Affairs","Netherlands Ministry of Education, Culture and Science (SSM06011)"],"ingestion_fingerprint":"1053eb9c804e38ee222229b0fe9d650d721eca8b1a83d45f11faa68b83b2e3e1","license":"CC0-1.0","n_contributing_labs":null,"name":"c-VEP dataset from Thielen et al. (2015)","readme":"# c-VEP dataset from Thielen et al. (2015)\nc-VEP dataset from Thielen et al. (2015)\n## Dataset Overview\n- **Code**: Thielen2015\n- **Paradigm**: cvep\n- **DOI**: 10.34973/1ecz-1232\n- **Subjects**: 12\n- **Sessions per subject**: 1\n- **Events**: 1.0=101, 0.0=100\n- **Trial interval**: (0, 0.3) s\n- **Runs per session**: 3\n- **File format**: mat\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 2048.0 Hz\n- **Number of channels**: 64\n- **Channel types**: eeg=64\n- **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, Iz, O1, O2, Oz, P1, P10, P2, P3, P4, P5, P6, P7, P8, P9, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP7, TP8\n- **Montage**: standard_1020\n- **Hardware**: Biosemi ActiveTwo\n- **Reference**: CMS/DRL\n- **Sensor type**: EEG\n- **Line frequency**: 50.0 Hz\n- **Electrode type**: active\n## Participants\n- **Number of subjects**: 12\n- **Health status**: patients\n- **Clinical population**: Healthy\n- **Age**: mean=24.0, std=2.3\n- **Gender distribution**: male=4, female=8\n- **BCI experience**: naive\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: cvep\n- **Number of classes**: 2\n- **Class labels**: 1.0, 0.0\n- **Trial duration**: 4.2 s\n- **Study design**: 6x6 matrix speller BCI using modulated Gold codes for visual stimulation; participants focused on target symbols while cells flashed according to pseudo-random bit-sequences\n- **Feedback type**: visual\n- **Stimulus type**: pseudo-random noise-code\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: online\n- **Training/test split**: False\n- **Instructions**: participants visually attended cells containing target symbols during stimulation\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  1.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_1_0\n  0.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_0_0\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: cvep\n- **Code type**: modulated Gold codes\n- **Code length**: 126\n- **Number of targets**: 36\n## Data Structure\n- **Trials**: 108\n- **Trials context**: 108 total per subject: 3 fixed-length copy-spelling runs x 36 
trials per run, each trial 4.2 seconds (4 code cycles)\n## Preprocessing\n- **Data state**: preprocessed\n- **Preprocessing applied**: True\n- **Steps**: downsampling from 2048 Hz to 360 Hz, linear de-trending, common average referencing, spectral filtering\n- **Highpass filter**: 5 Hz\n- **Lowpass filter**: 100 Hz\n- **Bandpass filter**: {'band1': [5, 48], 'band2': [52, 100]}\n- **Re-reference**: car\n- **Downsampled to**: 360.0 Hz\n## Signal Processing\n- **Classifiers**: template matching, CCA\n- **Feature extraction**: correlation\n- **Spatial filters**: Canonical Correlation Analysis\n## Cross-Validation\n- **Method**: training-testing split\n- **Evaluation type**: within-subject\n## Performance (Original Study)\n- **Accuracy (fixed length)**: 86.0 %\n- **ITR (fixed length)**: 38.12 bits/min\n- **SPM (fixed length)**: 6.93 symbols/min\n- **Accuracy (early stopping)**: 86.0 %\n- **ITR (early stopping)**: 48.37 bits/min\n- **SPM (early stopping)**: 8.99 symbols/min\n## BCI Application\n- **Applications**: speller\n- **Environment**: laboratory\n- **Online feedback**: True\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Visual\n- **Type**: Research\n## Documentation\n- **DOI**: 10.1371/journal.pone.0133797\n- **License**: CC0-1.0\n- **Investigators**: Jordy Thielen, Philip van den Broek, Jason Farquhar, Peter Desain\n- **Senior author**: Peter Desain\n- **Contact**: jordy.thielen@gmail.com; info@donders.ru.nl\n- **Institution**: Radboud University Nijmegen\n- **Department**: Donders Center for Cognition\n- **Country**: NL\n- **Repository**: GitHub\n- **Data URL**: https://public.data.ru.nl/dcc/DSC_2018.00047_553_v3\n- **Publication year**: 2015\n- **Funding**: BrainGain Smart Mix Program of the Netherlands Ministry of Economic Affairs; Netherlands Ministry of Education, Culture and Science (SSM06011)\n- **Ethics approval**: Ethical Committee of the Faculty of Social Sciences at the Radboud University Nijmegen\n- **Keywords**: Brain-Computer Interface, BCI, Broad-Band Visually Evoked Potentials, BBVEP, Gold codes, reconvolution, speller, visual stimulation\n## Abstract\nBrain-Computer Interfaces (BCIs) allow users to control devices and communicate by using brain activity only. BCIs based on broad-band visual stimulation can outperform BCIs using other stimulation paradigms. Visual stimulation with pseudo-random bit-sequences evokes specific Broad-Band Visually Evoked Potentials (BBVEPs) that can be reliably used in BCI for high-speed communication in speller applications. In this study, we report a novel paradigm for a BBVEP-based BCI that utilizes a generative framework to predict responses to broad-band stimulation sequences: we designed a BBVEP-based BCI using modulated Gold codes to mark cells in a visual speller. We defined a linear generative model that decomposes full responses into overlapping single-flash responses. These single-flash responses are used to predict responses to novel stimulation sequences, which in turn serve as templates for classification. The linear generative model explains on average 50% and up to 66% of the variance of responses to both seen and unseen sequences. In an online experiment, 12 participants tested a 6 × 6 matrix speller BCI. On average, an online accuracy of 86% was reached with trial lengths of 3.21 seconds. This corresponds to an Information Transfer Rate of 48 bits per minute (approximately 9 symbols per minute). This study indicates the potential to model and predict responses to broad-band stimulation. 
These predicted responses are proven to be well-suited as templates for a BBVEP-based BCI, thereby enabling communication and control by brain activity only.\n## Methodology\nThe study implements a novel BBVEP-based BCI using modulated Gold codes with a reconvolution approach for template generation. The reconvolution model decomposes responses into single-flash responses (short and long pulses) and predicts responses to unseen sequences. Two sets of Gold codes were used: set V for training (65 sequences) and set U for testing (65 sequences). Each sequence had 126 bits with a duration of 1.05 s. The classifier uses template matching with correlation, combined with Canonical Correlation Analysis for spatial filtering. Subset optimization (Platinum subset) selects the most distinguishable codes, and layout optimization arranges codes on the 6x6 grid to minimize cross-talk. An early stopping algorithm was implemented to reduce trial duration. Online experiments were conducted with 12 participants using a synchronous BCI paradigm.\n## References\nThielen, J., Farquhar, J., & Desain, P. W. M. (2023). Broad-Band Visually Evoked Potentials: Re(con)volution in Brain-Computer Interfacing (Version 2) [Data set]. Radboud University. https://doi.org/10.34973/1ecz-1232\nThielen, J., van den Broek, P., Farquhar, J., & Desain, P. (2015). Broad-band visually evoked potentials: Re(con)volution in brain-computer interfacing. PLOS ONE, 10(7), e0133797. https://doi.org/10.1371/journal.pone.0133797\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":3761962571,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000196","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["cvep"],"timestamps":{"digested_at":"2026-04-30T14:08:57.036325+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-24T00:47:27Z"},"total_files":36,"computed_title":"c-VEP dataset from Thielen et al. (2015)","nchans_counts":[{"val":64,"count":36}],"sfreq_counts":[{"val":2048.0,"count":36}],"stats_computed_at":"2026-05-01T13:49:34.645556+00:00","total_duration_s":9415.68017578125,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"e4a443e05f6cc056","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.85,"modality":0.95,"type":0.7},"reasoning":{"few_shot_analysis":"Most similar few-shot examples by stimulus modality are the Visual/Perception datasets (e.g., Meta-rdk visual discrimination labeled as Modality=Visual and Type=Perception). 
This supports labeling this dataset's Modality as Visual because it is driven by visual stimulation. However, unlike the visual discrimination example (left/right motion decision), this dataset is a BCI speller where performance depends on selectively attending to a target cell during flashing sequences; in EEGDash conventions, paradigms requiring target selection via attentional focus often map better to Type=Attention than pure sensory discrimination. No few-shot example is a direct c-VEP/SSVEP speller, so Type must be chosen by metadata facts and the closest convention (visual task purpose: perception vs attention).","metadata_analysis":"Key population facts: (1) \"Clinical population: Healthy\" and (2) \"Age: mean=24.0\" with \"Subjects: 12\".\n\nKey paradigm/stimulus facts: (1) \"Study design: 6x6 matrix speller BCI using modulated Gold codes for visual stimulation\" and (2) \"Stimulus modalities: visual\" / \"Primary modality: visual\". Additional task-demand fact: \"Instructions: participants visually attended cells containing target symbols during stimulation\". These indicate a visually driven, attention-to-target BCI speller based on visually evoked potentials (c-VEP/BBVEP).","paper_abstract_analysis":"The included abstract reinforces the goal and stimulus basis: \"Visual stimulation with pseudo-random bit-sequences evokes specific Broad-Band Visually Evoked Potentials (BBVEPs)\" and describes that \"12 participants tested a 6 × 6 matrix speller BCI\". This supports Modality=Visual and a task requiring selective attention to target symbols for communication/control.","evidence_alignment_check":"Pathology: Metadata SAYS \"Clinical population: Healthy\" (despite also listing \"Health status: patients\"). Few-shot pattern SUGGESTS using explicit recruited diagnosis/condition when stated; here the explicit condition is Healthy. ALIGN overall (treat 'patients' as a generic field error given the explicit 'Clinical population: Healthy').\n\nModality: Metadata SAYS \"Stimulus modalities: visual\" and describes \"visual stimulation\" with a 6x6 speller. Few-shot pattern SUGGESTS labeling by stimulus channel (e.g., visual discrimination -> Visual). ALIGN.\n\nType: Metadata SAYS participants \"visually attended\" target symbols in a speller BCI and the paradigm uses visually evoked potentials for classification/communication. Few-shot pattern SUGGESTS Visual discrimination tasks map to Perception, while tasks centered on selecting targets among competing stimuli via attentional focus often map to Attention. PARTIAL ALIGN/AMBIGUOUS (Perception vs Attention); decision made by primary cognitive construct (selective attention to target during flashing sequences) rather than sensory discrimination.","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n1) Healthy (WIN) — explicit: \"Clinical population: Healthy\"; participants described as \"BCI experience: naive\" with normal demographics.\n2) Unknown — would only apply if population were unclear; but it is explicitly stated as Healthy.\nAlignment: aligns with few-shot convention (explicit population statement dominates). Confidence=0.85 based on clear explicit population line, minor conflicting wording \"Health status: patients\".\n\nModality:\n1) Visual (WIN) — explicit: \"Stimulus modalities: visual\", \"Primary modality: visual\", and \"6x6 matrix speller... visual stimulation\".\n2) Other — only if stimulus were non-sensory or mixed; not supported.\nAlignment: strong alignment with few-shot visual-task convention. 
Confidence=0.95 (3+ explicit visual-stimulus statements).\n\nType:\n1) Attention (WIN) — explicit instruction: \"participants visually attended cells containing target symbols during stimulation\"; speller BCI performance depends on selective attention to a target among many flashing cells.\n2) Perception (RUNNER-UP) — strong perceptual/evoked-potential framing: \"Visually Evoked Potentials (BBVEPs)\" and \"Visual stimulation with pseudo-random bit-sequences evokes... BBVEPs\".\nAlignment: ambiguous; few-shot visual discrimination example would lean Perception, but this paradigm’s core construct is target selection via attention in a speller matrix. Confidence=0.7 (one direct attention quote plus contextual paradigm inference)."}},"canonical_name":null,"name_confidence":0.83,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Thielen2015"}}
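The record above is the raw JSON envelope returned for dataset nm000196 by the eegdash API. As a minimal, standard-library-only sketch of consuming it, the snippet below pulls the acquisition facts out of the `data` object; the file name `nm000196.json` is a hypothetical stand-in for however the response body was saved, since the record does not describe the transport.

```python
import json

# Hypothetical local copy of the API response shown above.
with open("nm000196.json") as f:
    record = json.load(f)["data"]

assert record["dataset_id"] == "nm000196"
demo = record["demographics"]
print(record["computed_title"])
print(f'{demo["subjects_count"]} subjects, mean age {demo["age_mean"]}')
# Per-file stats: all 36 files share 64 channels at 2048 Hz.
for entry in record["nchans_counts"]:
    print(f'{entry["count"]} files with {entry["val"]} channels')
print(f'total duration: {record["total_duration_s"] / 3600:.2f} h')
```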
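The readme footer says the card was generated by MOABB 1.5.0 with dataset code `Thielen2015` and paradigm `cvep`, so the data should be loadable through MOABB directly. A sketch under that assumption follows; the `Thielen2015` and `CVEP` class names match recent MOABB releases but should be checked against the installed version.

```python
# Assumes MOABB >= 1.0, where c-VEP support was introduced.
from moabb.datasets import Thielen2015
from moabb.paradigms import CVEP

dataset = Thielen2015()
paradigm = CVEP()  # epochs the continuous EEG around the 1.0/0.0 flash events

# X: epoched EEG, labels: flash states ("1.0"/"0.0"), meta: subject/session/run
X, labels, meta = paradigm.get_data(dataset=dataset, subjects=[1])
print(X.shape, set(labels))
```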
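The Preprocessing section lists the chain applied in the original study: downsampling from 2048 Hz to 360 Hz, linear de-trending, common average referencing, and spectral filtering with pass-bands [5, 48] and [52, 100] Hz, i.e. a 5-100 Hz band with the 50 Hz mains region removed. Below is a plausible MNE-Python rendering of that chain, in the order the readme states it; the notch reading of the two bands and the step placement are assumptions.

```python
import mne
from scipy.signal import detrend

def preprocess(raw: mne.io.BaseRaw) -> mne.io.BaseRaw:
    """One reading of the preprocessing chain recorded in the readme.

    `raw` is assumed to be the 64-channel BioSemi recording at 2048 Hz,
    e.g. as returned by the MOABB dataset object; loading is not shown.
    """
    raw = raw.copy().load_data()
    # "downsampling from 2048 Hz to 360 Hz"
    raw.resample(360.0)
    # "linear de-trending", applied per channel
    raw.apply_function(lambda x: detrend(x, type="linear"), picks="eeg")
    # "common average referencing" (re-reference: car)
    raw.set_eeg_reference("average")
    # band1=[5, 48] and band2=[52, 100] read as a 5-100 Hz band-pass with
    # the 50 Hz line-frequency region notched out
    raw.filter(l_freq=5.0, h_freq=100.0)
    raw.notch_filter(freqs=50.0, notch_widths=4.0)
    return raw
```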
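On the stimulation side, the record specifies 126-bit modulated Gold codes at 1.05 s per cycle (a 120 Hz bit rate), 36 targets on the 6x6 grid, and training/testing sets of 65 sequences each. The 65 matches the size of a complete Gold family for degree 6 (2^6 + 1 = 65 codes of length 63, doubled to 126 by modulation). The sketch below builds such a family; the feedback taps and the modulation step are illustrative choices, not necessarily the paper's.

```python
import numpy as np

def lfsr_mseq(taps: list[int], length: int = 63) -> np.ndarray:
    """Maximal-length sequence from a 6-bit Fibonacci LFSR.

    `taps` are 1-based register positions XOR-ed into the feedback; with a
    primitive degree-6 polynomial the period is 2**6 - 1 = 63.
    """
    state = [1] * 6
    out = []
    for _ in range(length):
        out.append(state[-1])
        feedback = 0
        for tap in taps:
            feedback ^= state[tap - 1]
        state = [feedback] + state[:-1]
    return np.array(out, dtype=int)

# Illustrative degree-6 polynomial pair (not necessarily the paper's pair).
u = lfsr_mseq([6, 1])
v = lfsr_mseq([6, 5, 2, 1])

# A full Gold family: u, v, and u XOR-ed with every cyclic shift of v.
gold = [u, v] + [(u + np.roll(v, k)) % 2 for k in range(63)]

def modulate(code: np.ndarray) -> np.ndarray:
    """Double each 63-bit code to 126 bits by XOR with a double-rate clock,
    one plausible reading of "modulated" (each bit b becomes the pair b, 1-b).
    """
    return np.ravel(np.column_stack([code, 1 - code]))

codes = [modulate(c) for c in gold]
print(len(codes), len(codes[0]))  # 65 codes of 126 bits; 36 fill the grid
```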
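The Methodology describes the reconvolution model: a full response is treated as a sum of overlapping single-flash responses (separate "short" and "long" pulse shapes), fitted on training data and then reconvolved with unseen code sequences to predict classification templates. A compact least-squares sketch of that idea follows; the event trains, response length, and synthetic data here are placeholders.

```python
import numpy as np

def design_matrix(events: dict[str, np.ndarray], resp_len: int) -> np.ndarray:
    """Stack shifted copies of each event train, one block per event type.

    `events` maps an event name ("short"/"long" flash) to a 0/1 onset train;
    `resp_len` is the assumed pulse-response length in samples.
    """
    n = len(next(iter(events.values())))
    blocks = []
    for train in events.values():
        block = np.zeros((n, resp_len))
        for shift in range(resp_len):
            block[shift:, shift] = train[: n - shift]
        blocks.append(block)
    return np.hstack(blocks)

rng = np.random.default_rng(0)
n_samples, resp_len = 1260, 50
events = {
    "short": (rng.random(n_samples) < 0.05).astype(float),
    "long": (rng.random(n_samples) < 0.02).astype(float),
}
M = design_matrix(events, resp_len)

# Fit the per-event pulse responses to a measured (here synthetic) response.
y = rng.standard_normal(n_samples)
w, *_ = np.linalg.lstsq(M, y, rcond=None)

# Reconvolution: the predicted response to ANY code is its design matrix
# times the learned pulse responses -- these predictions are the templates.
y_hat = M @ w
print(f"explained variance: {1 - np.var(y - y_hat) / np.var(y):.2f}")
```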
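Classification is described as template matching by correlation with Canonical Correlation Analysis as the spatial filter. One simple reading, sketched below with scikit-learn, fits a one-component CCA between the multichannel trial and each predicted template and picks the template with the highest canonical correlation; the paper's exact estimator may differ (e.g. a single CCA shared across templates).

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def classify_trial(trial: np.ndarray, templates: np.ndarray) -> int:
    """Return the index of the best-matching template.

    trial: (n_channels, n_samples) EEG; templates: (n_classes, n_samples)
    predicted responses from the reconvolution model.
    """
    scores = []
    for template in templates:
        cca = CCA(n_components=1)
        x, y = cca.fit_transform(trial.T, template[:, None])
        scores.append(np.corrcoef(x.ravel(), y.ravel())[0, 1])
    return int(np.argmax(scores))

# Synthetic check: a trial carrying template 7 plus noise on 64 channels.
rng = np.random.default_rng(1)
templates = rng.standard_normal((36, 512))
trial = 0.1 * rng.standard_normal((64, 512)) + templates[7]
print(classify_trial(trial, templates))  # 7, given the high SNR here
```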
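The record only states that "an early stopping algorithm was implemented to reduce trial duration" (fixed-length trials run 4.2 s; the abstract reports 3.21 s on average with stopping). The criterion itself is not given, so the sketch below uses a hypothetical margin rule: stop as soon as the best template's score beats the runner-up by a fixed margin.

```python
import numpy as np

def early_stop(score_history: list[np.ndarray], margin: float = 0.15):
    """Return (step, decision) under a hypothetical margin criterion.

    `score_history[t]` holds the per-template correlations using the data
    seen up to step t; the actual stopping rule of the study may differ.
    """
    for t, scores in enumerate(score_history):
        top_two = np.sort(scores)[-2:]
        if top_two[1] - top_two[0] >= margin:
            return t, int(np.argmax(scores))  # confident: stop early
    return len(score_history) - 1, int(np.argmax(score_history[-1]))

# Correlations that sharpen over time: undecided at step 0, decided at 1.
history = [np.array([0.30, 0.28, 0.25]), np.array([0.55, 0.31, 0.22])]
print(early_stop(history))  # (1, 0)
```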
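The performance figures (86.0 % accuracy; ITRs of 38.12 and 48.37 bits/min; 6.93 and 8.99 symbols/min) are consistent with the standard Wolpaw definition of information transfer rate, assuming that is what the study used (the record does not say). With N = 36 classes and P = 0.86, the formula gives about 3.87 bits per selection, so the reported rates imply effective selection times of roughly 6.1 s (fixed length) and 4.8 s (early stopping), i.e. stimulation time plus per-trial overhead.

```python
from math import log2

def wolpaw_bits_per_selection(n_classes: int, accuracy: float) -> float:
    """Wolpaw ITR in bits per selection."""
    p, n = accuracy, n_classes
    if p >= 1.0:
        return log2(n)
    return log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))

bits = wolpaw_bits_per_selection(36, 0.86)
print(f"{bits:.2f} bits/selection")           # ~3.87
# 60 * bits / T reproduces the reported rates for T ~ 6.1 s and ~ 4.8 s:
for t in (6.1, 4.8):
    print(f"{60 * bits / t:.1f} bits/min at {t} s/selection")
```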