{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cab","dataset_id":"nm000214","associated_paper_doi":null,"authors":["J Thielen","P Marsman","J Farquhar","P Desain"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":30,"ages":[25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25],"age_min":25,"age_max":25,"age_mean":25.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000214","osf_url":null,"github_url":null,"paper_url":null},"funding":["NWO/TTW Takeoff Grant No. 14054","International ALS Association and Dutch ALS Foundation Grant Nos. ATC20610 and 2017-57"],"ingestion_fingerprint":"24d4aaa58159908057d87c1879ca919839144ae7475a713782227d40fb2a8a33","license":"CC0-1.0","n_contributing_labs":null,"name":"c-VEP dataset from Thielen et al. (2021)","readme":"# c-VEP dataset from Thielen et al. (2021)\nc-VEP dataset from Thielen et al. (2021)\n## Dataset Overview\n- **Code**: Thielen2021\n- **Paradigm**: cvep\n- **DOI**: 10.34973/9txv-z787\n- **Subjects**: 30\n- **Sessions per subject**: 1\n- **Events**: 1.0=101, 0.0=100\n- **Trial interval**: (0, 0.3) s\n- **Runs per session**: 5\n- **File format**: gdf\n- **Contributing labs**: MindAffect, Radboud University\n## Acquisition\n- **Sampling rate**: 512.0 Hz\n- **Number of channels**: 8\n- **Channel types**: eeg=8\n- **Channel names**: Fpz, Iz, O1, O2, Oz, POz, T7, T8\n- **Montage**: custom\n- **Hardware**: Biosemi ActiveTwo\n- **Reference**: CMS/DRL\n- **Sensor type**: sintered Ag/AgCl active electrodes\n- **Line frequency**: 50.0 Hz\n## Participants\n- **Number of subjects**: 30\n- **Health status**: healthy\n- **Age**: mean=25.0, min=19, max=62\n- **Gender distribution**: female=17, male=13\n## Experimental Protocol\n- **Paradigm**: cvep\n- **Number of classes**: 2\n- **Class labels**: 1.0, 0.0\n- **Trial duration**: 31.5 s\n- **Study design**: Code-modulated visual evoked potentials BCI task where participants fixated on target cells in a calculator grid (offline) or keyboard layout (online) while all cells flashed with unique pseudo-random Gold code modulated bit-sequences\n- **Feedback type**: none\n- **Stimulus type**: visual\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Training/test split**: False\n- **Instructions**: Participants maintained fixation at the target cell which was cued in green for 1 s before trial onset. 
No feedback was given after trials in the offline experiment.\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  1.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_1_0\n  0.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_0_0\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: cvep\n- **Code type**: modulated Gold codes\n- **Code length**: 126\n- **Number of targets**: 20\n## Data Structure\n- **Trials**: 100\n- **Blocks per session**: 5\n- **Trials context**: per_subject (5 blocks × 20 trials each)\n## Preprocessing\n- **Data state**: raw\n- **Preprocessing applied**: False\n## Signal Processing\n- **Classifiers**: template-matching, reconvolution, CCA\n- **Feature extraction**: encoding model, event responses, spatio-temporal\n- **Spatial filters**: CCA\n## Cross-Validation\n- **Method**: cross-validation\n- **Folds**: 5\n- **Evaluation type**: within_session, transfer_learning, zero_training\n## Performance (Original Study)\n- **High Communication Rates**: achieved in online spelling task\n## BCI Application\n- **Applications**: speller\n- **Environment**: indoor\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Visual\n- **Type**: Research\n## Documentation\n- **DOI**: 10.1088/1741-2552/abecef\n- **Associated paper DOI**: 10.1088/1741-2552/ab4057\n- **License**: CC0-1.0\n- **Investigators**: J Thielen, P Marsman, J Farquhar, P Desain\n- **Senior author**: P Desain\n- **Contact**: jordy.thielen@donders.ru.nl\n- **Institution**: Radboud University\n- **Department**: Donders Institute for Brain, Cognition and Behaviour\n- **Country**: NL\n- **Repository**: Radboud\n- **Data URL**: https://doi.org/10.34973/9txv-z787\n- **Publication year**: 2021\n- **Funding**: NWO/TTW Takeoff Grant No. 14054; International ALS Association and Dutch ALS Foundation Grant Nos. ATC20610 and 2017-57\n- **Ethics approval**: Approved by the local ethical committee of the Faculty of Social Sciences of Radboud University\n- **Keywords**: brain–computer interface (BCI), electroencephalography (EEG), code-modulated visual evoked potentials (cVEPs), reconvolution, zero training, spread spectrum communication\n## External Links\n- **Source**: https://doi.org/10.34973/9txv-z787\n## Abstract\nObjective. Typically, a brain–computer interface (BCI) is calibrated using user- and session-specific data because of the individual idiosyncrasies and the non-stationary signal properties of the electroencephalogram (EEG). Therefore, it is normal for BCIs to undergo a time-consuming passive training stage that prevents users from directly operating them. In this study, we systematically reduce the training data set in a stepwise fashion, to ultimately arrive at a calibration-free method for a code-modulated visually evoked potential (cVEP)-based BCI to fully eliminate the tedious training stage. Approach. In an extensive offline analysis, we compare our sophisticated encoding model with a traditional event-related potential (ERP) technique. We calibrate the encoding model in a standard way, with data limited to a single class while generalizing to all others and without any data. In addition, we investigate the feasibility of the zero-training cVEP BCI in an online setting. Main results. 
By adopting the encoding model, the training data can be reduced substantially, while maintaining both the classification performance and the explained variance of the ERP method. Moreover, with data from only one class or even no data at all, it still shows excellent performance. In addition, the zero-training cVEP BCI achieved high communication rates in an online spelling task, proving its feasibility for practical use. Significance. To date, this is the fastest zero-training cVEP BCI in the field, allowing high communication speeds without calibration while using only a few non-invasive water-based EEG electrodes. This allows us to skip the training stage altogether and spend all the valuable time on direct operation. This minimizes the session time and opens up new exciting directions for practical plug-and-play BCI. Fundamentally, these results validate that the adopted neural encoding model compresses data into event responses without the loss of explanatory power compared to using full ERPs as a template.\n## Methodology\nThe study compared four training regimes: (1) e-train: traditional ERP template-matching with data from all classes, (2) n-train: encoding model (reconvolution) with data from all n classes, (3) 1-train: encoding model with data from only one class while generating templates for all sequences, (4) 0-train: zero-training encoding model requiring no calibration data. Offline experiment: 30 participants completed 5 blocks of 20 trials each (100 trials total), with 31.5 s trials using a 4×5 calculator grid (n=20 symbols). Stimuli were luminance-modulated pseudo-random Gold codes (126-bit sequences, 2.1 s duration) presented on an iPad Pro at 60 Hz. Online experiment: 11 participants (9 analyzed) used a keyboard layout (n=29 symbols) with a dynamic stopping rule for spelling tasks. EEG was recorded at 512 Hz from 8 electrodes, preprocessed with 2-30 Hz Butterworth filtering, and downsampled to 120 Hz. Classification used template-matching with a reconvolution encoding model that decomposes responses to sequences into linear sums of individual event responses.\n## Notes\nAdded in MOABB version 0.6.0.\n## References\nThielen, J., Marsman, P., Farquhar, J., & Desain, P. (2023). From full calibration to zero training for a code-modulated visual evoked potentials brain computer interface (Version 3) [Dataset]. Radboud University. DOI: https://doi.org/10.34973/9txv-z787\nThielen, J., Marsman, P., Farquhar, J., & Desain, P. (2021). From full calibration to zero training for a code-modulated visual evoked potentials for brain–computer interface. Journal of Neural Engineering, 18(5), 056007. DOI: https://doi.org/10.1088/1741-2552/abecef\nAhmadi, S., Borhanazad, M., Tump, D., Farquhar, J., & Desain, P. (2019). Low channel count montages using sensor tying for VEP-based BCI. Journal of Neural Engineering, 16(6), 066038. DOI: https://doi.org/10.1088/1741-2552/ab4057\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. 
Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":1581367866,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000214","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["cvep"],"timestamps":{"digested_at":"2026-04-30T14:09:06.705254+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-24T05:35:55Z"},"total_files":150,"computed_title":"c-VEP dataset from Thielen et al. (2021)","nchans_counts":[{"val":8,"count":150}],"sfreq_counts":[{"val":512.0,"count":150}],"stats_computed_at":"2026-05-01T13:49:34.645768+00:00","total_duration_s":99953.017578125,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"1ac57b24890c8e3b","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.85,"modality":0.95,"type":0.75},"reasoning":{"few_shot_analysis":"Most similar few-shot by paradigm/stimulus is the Visual/Perception example (Meta-rdk) which uses a visually driven discrimination task and is labeled Modality=Visual, Type=Perception. While the current dataset is a cVEP BCI (not dot-motion discrimination), both are fundamentally visual-evoked/stimulus-driven EEG paradigms rather than motor execution. Another relevant convention is the auditory ABR/music-vs-speech example labeled Type=Perception for stimulus-evoked sensory responses; similarly, cVEPs are stimulus-evoked visual responses.","metadata_analysis":"Key metadata facts:\n- Population: \"Health status: healthy\" and also \"Pathology: Healthy\".\n- Stimulus channel: \"Stimulus type: visual\", \"Stimulus modalities: visual\", and \"Primary modality: visual\".\n- Paradigm/task aim: \"Study design: Code-modulated visual evoked potentials BCI task where participants fixated on target cells... while all cells flashed\" and \"cVEP-based BCI\" / \"online spelling task\".\nThese indicate healthy participants receiving visual stimulation to elicit visual evoked potentials for BCI control.","paper_abstract_analysis":"The included abstract reinforces the purpose as cVEP-evoked responses for BCI calibration/training reduction: \"code-modulated visually evoked potential (cVEP)-based BCI\" and \"zero-training cVEP BCI achieved high communication rates in an online spelling task\". This supports a sensory-evoked (visual) paradigm rather than a clinical or intervention study.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\" (and \"Pathology: Healthy\").\n- Few-shot pattern suggests: When explicitly healthy controls are recruited, label Pathology=Healthy (seen across multiple few-shots labeled Healthy).\n- ALIGN.\n\nModality:\n- Metadata says: \"Stimulus type: visual\", \"Stimulus modalities: visual\", \"Primary modality: visual\".\n- Few-shot pattern suggests: Visual stimuli/visual evoked paradigms map to Modality=Visual.\n- ALIGN.\n\nType:\n- Metadata says: \"Code-modulated visual evoked potentials BCI task\" and participants \"fixated on target cells... 
while all cells flashed\" (evoked visual responses used for classification).\n- Few-shot pattern suggests: Stimulus-driven sensory/evoked-potential paradigms map to Type=Perception (e.g., visual discrimination task; auditory ABR/music-speech evoked responses).\n- ALIGN (though there is some secondary attentional selection—fixating/attending to a target—but the core measured construct is visually evoked responses for BCI classification).","decision_summary":"Top-2 candidates (with head-to-head comparison):\n\nPathology:\n1) Healthy (WINNER)\n- Evidence: \"Health status: healthy\"; also \"Pathology: Healthy\"; \"Subjects: 30\" with no disorder mentioned.\n2) Unknown (runner-up)\n- Would apply only if health status were not stated.\nDecision: Healthy clearly supported by explicit metadata. Confidence based on 2+ direct quotes.\n\nModality:\n1) Visual (WINNER)\n- Evidence: \"Stimulus type: visual\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\"; plus paradigm is \"visual evoked potentials\".\n2) Other (runner-up)\n- Would apply if modality were mixed/unclear; not the case here.\nDecision: Visual strongly supported by multiple explicit statements. High confidence (3+ quotes).\n\nType:\n1) Perception (WINNER)\n- Evidence: \"Code-modulated visual evoked potentials\"; \"participants fixated on target cells... while all cells flashed\"; abstract emphasizes \"cVEP-based BCI\" relying on visually evoked responses.\n2) Attention (runner-up)\n- Evidence: requirement to select/maintain focus on a cued target cell (\"maintained fixation at the target cell which was cued in green\").\nDecision: Perception is better because the primary measured phenomenon is stimulus-evoked visual potentials (cVEP) used for classification/BCI, while attention is supportive/operational. Confidence reflects clear task description but some plausible overlap with Attention."}},"canonical_name":null,"name_confidence":0.75,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Thielen2021"}}
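For downstream tooling, the record above can be consumed directly as JSON. The sketch below (Python, standard library only) extracts a few of the fields visible in this record; the local filename `nm000214.json` is a placeholder for however you obtained a copy of the record, not part of any eegdash API.
```
import json

# Load a local copy of the record shown above (filename is hypothetical).
with open("nm000214.json") as f:
    record = json.load(f)

data = record["data"]
print(data["dataset_id"], "-", data["computed_title"])       # nm000214 - c-VEP dataset from Thielen et al. (2021)
print("subjects:", data["demographics"]["subjects_count"])   # 30
print("files:", data["total_files"],
      "size (GB):", round(data["size_bytes"] / 1e9, 2))      # 150, ~1.58
print("storage base:", data["storage"]["base"])              # s3://nemar/nm000214

# Per-file channel counts and sampling rates are stored as {val, count} pairs.
for entry in data["sfreq_counts"]:
    print(f"{entry['count']} recordings at {entry['val']} Hz")
```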
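The README footer states it was generated by MOABB, and the record's `author_year` field ("Thielen2021") matches the name MOABB uses for this c-VEP dataset. A minimal loading sketch, assuming MOABB ≥ 1.0 with its `Thielen2021` dataset class and `CVEP` paradigm; verify both names against your installed version before relying on this.
```
# Sketch assuming MOABB >= 1.0; class names are an assumption based on the record's
# "author_year" field and the "Generated by MOABB" footer above.
from moabb.datasets import Thielen2021
from moabb.paradigms import CVEP

dataset = Thielen2021()   # downloads the GDF recordings on first use
paradigm = CVEP()         # epochs around the code-bit events (0-0.3 s per the record above)

# Restrict to one subject to keep the first download small.
X, labels, meta = paradigm.get_data(dataset=dataset, subjects=[1])
print(X.shape)            # (n_epochs, n_channels=8, n_samples)
print(set(labels))        # the two classes listed in the event table ("1.0", "0.0")
```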
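The Methodology paragraph describes the original preprocessing: a 2-30 Hz Butterworth band-pass followed by downsampling from 512 Hz to 120 Hz. A rough MNE-Python equivalent is sketched below; the file path is a placeholder and the filter order is an assumption, since the record does not state it.
```
import mne

# Placeholder path to one of the dataset's GDF recordings (actual BIDS filenames will differ).
raw = mne.io.read_raw_gdf("sub-01_task-cvep_run-1_eeg.gdf", preload=True)

# 2-30 Hz Butterworth band-pass (order 4 is an assumption; the record does not give it),
# then downsample to 120 Hz as described in the Methodology section above.
raw.filter(l_freq=2.0, h_freq=30.0, method="iir",
           iir_params=dict(order=4, ftype="butter", output="sos"))
raw.resample(120.0)

# The event codes listed above (1.0 -> 101, 0.0 -> 100) arrive as annotations in the GDF file.
events, event_id = mne.events_from_annotations(raw)
print(event_id)
```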