{"success":true,"database":"eegdash","data":{"_id":"69d16e06897a7725c66f4ce3","dataset_id":"nm000345","associated_paper_doi":null,"authors":["Kalou Cabrera Castillos","Simon Ladouce","Ludovic Darmet","Frédéric Dehais"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.1016/j.neuroimage.2023.120446","datatypes":["eeg"],"demographics":{"subjects_count":12,"ages":[30,30,30,30,30,30,30,30,30,30,30,30],"age_min":30,"age_max":30,"age_mean":30.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000345","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"1d3e67b8f222058a7db943b05e528558c5090d5a950162314f9e4534bfa15c0e","license":"CC-BY-4.0","n_contributing_labs":null,"name":"CastillosBurstVEP40","readme":"CastillosBurstVEP40\n===================\nc-VEP and Burst-VEP dataset from Castillos et al. 
(2023)\nDataset Overview\n----------------\n  Code: CastillosBurstVEP40\n  Paradigm: cvep\n  DOI: https://doi.org/10.1016/j.neuroimage.2023.120446\n  Subjects: 12\n  Sessions per subject: 1\n  Events: 0=100, 1=101\n  Trial interval: (0, 0.25) s\n  File format: EEGLAB .set\n  Number of contributing labs: 1\nAcquisition\n-----------\n  Sampling rate: 500.0 Hz\n  Number of channels: 32\n  Channel types: eeg=32\n  Channel names: C3, C4, CP1, CP2, CP5, CP6, Cz, F10, F3, F4, F7, F8, F9, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, O1, O2, Oz, P10, P3, P4, P7, P8, P9, Pz, T7, T8\n  Montage: standard_1020\n  Hardware: BrainProducts LiveAmp 32\n  Reference: FCz\n  Ground: FPz\n  Sensor type: eeg\n  Line frequency: 50.0 Hz\n  Online filters: {'line_noise': 'IIR cut-band filter between 49.9 and 50.1 Hz of order 16'}\n  Impedance threshold: 25.0 kOhm\n  Cap manufacturer: BrainProducts\n  Cap model: Acticap\n  Electrode type: active\nParticipants\n------------\n  Number of subjects: 12\n  Health status: healthy\n  Age: mean=30.6, std=7.1\n  Gender distribution: female=4, male=8\n  Species: human\nExperimental Protocol\n---------------------\n  Paradigm: cvep\n  Task type: reactive BCI\n  Number of classes: 2\n  Class labels: 0, 1\n  Trial duration: 2.2 s\n  Tasks: attend to cued target\n  Study design: factorial design\n  Study domain: brain-computer interface\n  Feedback type: none\n  Stimulus type: aperiodic visual flashes\n  Stimulus modalities: visual\n  Primary modality: visual\n  Synchronicity: synchronous\n  Mode: offline\n  Training/test split: False\n  Instructions: Participants were instructed to focus on c-VEP targets cued sequentially\n  Stimulus presentation: screen=Dell P2419HC, 1920 × 1080 pixels, 265 cd/m2, 60 Hz\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_0\n  1\n    ├─ Sensory-event\n    
├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_1\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: cvep\n  Stimulus frequencies: [2.0, 3.0, 4.0] Hz\n  Code type: burst\n  Number of targets: 4\n  Cue duration: 0.5 s\nData Structure\n--------------\n  Trials: 60\n  Blocks per session: 15\n  Trials context: 15 blocks x 4 trials per block = 60 trials per subject for burst c-VEP at 40% amplitude\nPreprocessing\n-------------\n  Data state: raw\nSignal Processing\n-----------------\n  Classifier: CNN (convolutional neural network)\n  Feature extraction: EEG2Code bitwise decoding\nCross-Validation\n----------------\n  Evaluation type: offline\nPerformance (Original Study)\n----------------------------\n  Accuracy: 95.6%\n  Burst 100% accuracy (17.6 s calibration): 90.5%\n  Burst 100% accuracy (52.8 s calibration): 95.6%\n  M-sequence 100% accuracy (17.6 s calibration): 71.4%\n  M-sequence 100% accuracy (52.8 s calibration): 85.0%\n  Burst 40% accuracy: 94.2%\n  Mean selection time: 1.5 s\nBCI Application\n---------------\n  Applications: brain-computer interface\n  Environment: laboratory\n  Online feedback: False\nTags\n----\n  Pathology: Healthy\n  Modality: EEG\n  Type: reactive BCI, c-VEP\nDocumentation\n-------------\n  Description: Burst c-VEP based BCI study optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience. 
The study introduces an innovative variant of code-VEP called 'Burst c-VEP' involving short bursts of aperiodic visual flashes at 2-4 flashes per second.\n  DOI: 10.1016/j.neuroimage.2023.120446\n  Associated paper DOI: 10.1016/j.neuroimage.2023.120446\n  License: CC-BY-4.0\n  Investigators: Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais\n  Senior author: Frédéric Dehais\n  Contact: kalou.cabrera-castillos@isae-supaero.fr\n  Institution: Institut Supérieur de l'Aéronautique et de l'Espace (ISAE-SUPAERO)\n  Department: Human Factors and Neuroergonomics\n  Address: 10 Av. Edouard Belin, Toulouse, 31400, France\n  Country: FR\n  Repository: Zenodo\n  Data URL: https://zenodo.org/record/8255618\n  Publication year: 2023\n  Ethics approval: University of Toulouse ethics committee (CER approval number 2020-334); Declaration of Helsinki\n  Keywords: Code-VEP, Reactive BCI, CNN, Amplitude depth reduction, Visual comfort\nExternal Links\n--------------\n  Source: https://zenodo.org/record/8255618\nAbstract\n--------\nThe utilization of aperiodic flickering visual stimuli under the form of code-modulated Visual Evoked Potentials (c-VEP) represents a pivotal advancement in the field of reactive Brain–Computer Interface (rBCI). A major advantage of the c-VEP approach is that the training of the model is independent of the number and complexity of targets, which helps reduce calibration time. Nevertheless, the existing designs of c-VEP stimuli can be further improved in terms of visual user experience but also to achieve a higher signal-to-noise ratio, while shortening the selection time and calibration process. In this study, we introduce an innovative variant of code-VEP, referred to as 'Burst c-VEP'. This original approach involves the presentation of short bursts of aperiodic visual flashes at a deliberately slow rate, typically ranging from two to four flashes per second. 
The rationale behind this design is to leverage the sensitivity of the primary visual cortex to transient changes in low-level stimuli features to reliably elicit distinctive series of visual evoked potentials. In comparison to other types of faster-paced code sequences, burst c-VEP exhibit favorable properties to achieve high bitwise decoding performance using convolutional neural networks (CNN), which yields potential to attain faster selection time with the need for less calibration data. Furthermore, our investigation focuses on reducing the perceptual saliency of c-VEP through the attenuation of visual stimuli contrast and intensity to significantly improve users' visual comfort. The proposed solutions were tested through an offline 4-classes c-VEP protocol involving 12 participants. Following a factorial design, participants were instructed to focus on c-VEP targets whose pattern (burst and maximum-length sequences) and amplitude (100% or 40% amplitude depth modulations) were manipulated across experimental conditions. Firstly, the full amplitude burst c-VEP sequences exhibited higher accuracy, ranging from 90.5% (with 17.6 s of calibration data) to 95.6% (with 52.8 s of calibration data), compared to its m-sequence counterpart (71.4% to 85.0%). The mean selection time for both types of codes (1.5 s) compared favorably to reports from previous studies. Secondly, our findings revealed that lowering the intensity of the stimuli only slightly decreased the accuracy of the burst code sequences to 94.2% while leading to substantial improvements in terms of user experience. Taken together, these results demonstrate the high potential of the proposed burst codes to advance reactive BCI both in terms of performance and usability. The collected dataset, along with the proposed CNN architecture implementation, are shared through open-access repositories.\nMethodology\n-----------\nFactorial experimental design with 12 participants. 
Four conditions: burst or m-sequence codes × 100% or 40% amplitude depth. Participants attended to cued targets presented as aperiodic visual flashes. Burst codes: 50 ms flashes at 2-4 Hz with a 200 ms minimum inter-burst interval. M-sequences: pseudo-random binary sequences at ~10 Hz. EEG recorded at 500 Hz using a 32-channel BrainProducts LiveAmp. Analysis on occipital/parietal electrodes. CNN-based bitwise decoding (improved EEG2Code architecture). Each participant completed 15 blocks of 4 trials per condition (60 trials per class, 240 total trials). Trial structure: 700 ms ITI, 500 ms cue, 2200 ms stimulation. Display: Dell P2419HC 60 Hz LCD. Luminance: medium grey background (124 lux), 100% condition (168 lux), 40% condition (142 lux). Preprocessing: average re-reference, 50 Hz notch filter (IIR order 16), epoching 0-2.2 s, baseline removal. Subjective assessments of visual comfort, tiredness, and intrusiveness collected.\nReferences\n----------\nKalou Cabrera Castillos. (2023). 4-class code-VEP EEG data [Data set]. Zenodo (dataset). DOI: https://doi.org/10.5281/zenodo.8255618\nKalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais. (2023). Burst c-VEP based BCI: Optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience. NeuroImage, 284, 120446. ISSN 1053-8119. DOI: https://doi.org/10.1016/j.neuroimage.2023.120446\nNotes\n.. versionadded:: 1.1.0\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). 
EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":151168121,"source":"openneuro","storage":{"backend":"s3","base":"s3://openneuro.org/nm000345","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["cvep"],"timestamps":{"digested_at":"2026-04-22T12:52:30.086090+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":12,"computed_title":"CastillosBurstVEP40","nchans_counts":[{"val":32,"count":12}],"sfreq_counts":[{"val":500.0,"count":12}],"stats_computed_at":"2026-04-22T23:16:00.314692+00:00","total_duration_s":3031.9759999999997,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"fe071db6b8f3eeb8","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.75},"reasoning":{"few_shot_analysis":"Closest few-shot convention match is the Visual Perception schizophrenia dataset (Meta-rdk) which is labeled Modality=Visual and Type=Perception based on an explicit \"visual discrimination task\" with visual motion-dot stimuli. This guides mapping visually presented stimuli to Modality=Visual and perceptual-evoked paradigms to Perception/Attention types. Another relevant convention is the motor imagery BCI dataset (EEG Motor Movement/Imagery) labeled Type=Motor because the research purpose is movement/imagery; by contrast, the current dataset is a reactive visual-evoked BCI (c-VEP), not motor imagery. 
Therefore, Type should not be Motor; the best competitors are Attention vs Perception depending on whether target-focused selection is treated as attentional control versus pure sensory evoked response.","metadata_analysis":"Key metadata facts:\n1) Healthy population: \"Health status: healthy\" and \"Subjects: 12\".\n2) Visual stimulus modality: \"Stimulus type: aperiodic visual flashes\" and \"Stimulus modalities: visual\" / \"Primary modality: visual\".\n3) Task/cognitive demand: \"Task type: reactive BCI\" and \"Tasks: attend to cued target\" plus \"Participants were instructed to focus on c-VEP targets cued sequentially\".\nThese indicate a visual evoked potential (code-VEP / burst c-VEP) paradigm with explicit target-focused attention for BCI decoding.","paper_abstract_analysis":"The included abstract reinforces the same interpretation: it describes \"code-modulated Visual Evoked Potentials (c-VEP)\" and \"short bursts of aperiodic visual flashes\" used to improve reactive BCI decoding and user comfort, consistent with a visual-evoked, attention-to-target paradigm. No contradiction with metadata.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\".\n- Few-shot pattern suggests: when no disorder is targeted, label as Healthy.\n- ALIGN.\n\nModality:\n- Metadata says: \"Stimulus type: aperiodic visual flashes\" and \"Stimulus modalities: visual\".\n- Few-shot pattern suggests: visual stimuli/tasks map to Modality=Visual.\n- ALIGN.\n\nType:\n- Metadata says: \"Tasks: attend to cued target\" and \"Participants were instructed to focus on c-VEP targets\" (reactive BCI selection).\n- Few-shot pattern suggests: visual discrimination/evoked paradigms often map to Perception; tasks emphasizing target selection can map to Attention.\n- PARTIAL ALIGN (both plausible). 
Choose Attention because the explicit instruction is to selectively attend/focus on the cued target for BCI control, not merely passive perception.","decision_summary":"Top-2 candidates (with head-to-head comparison):\n\n1) Pathology:\n- Candidate A: Healthy\n  Evidence: \"Health status: healthy\"; also \"Subjects: 12\" with no clinical recruitment described.\n- Candidate B: Unknown\n  Evidence: none beyond potential incompleteness.\n  Decision: Healthy wins (explicit health-status statement).\n  Confidence basis: 1 very explicit quote plus consistent absence of clinical terms.\n\n2) Modality:\n- Candidate A: Visual\n  Evidence: \"Stimulus type: aperiodic visual flashes\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\".\n- Candidate B: Multisensory\n  Evidence: none (no auditory/tactile stimuli described).\n  Decision: Visual wins (multiple explicit statements).\n  Confidence basis: 3 explicit quotes.\n\n3) Type:\n- Candidate A: Attention\n  Evidence: \"Tasks: attend to cued target\"; \"Participants were instructed to focus on c-VEP targets cued sequentially\"; reactive BCI requires selective attention to a target to generate discriminable VEP codes.\n- Candidate B: Perception\n  Evidence: c-VEP is fundamentally a visual evoked potential paradigm using low-level flashes (sensory-evoked responses).\n  Decision: Attention wins because the task is explicitly target-focused (cued target selection) rather than passive stimulus processing.\n  Confidence basis: 2 explicit quotes supporting attention; Perception remains a close runner-up."}},"canonical_name":null,"name_confidence":0.66,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Castillos2023_CastillosBurstVEP40"}}