{"success":true,"database":"eegdash","data":{"_id":"69d16e06897a7725c66f4ce2","dataset_id":"nm000344","associated_paper_doi":null,"authors":["Kalou Cabrera Castillos","Simon Ladouce","Ludovic Darmet","Frédéric Dehais"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.1016/j.neuroimage.2023.120446","datatypes":["eeg"],"demographics":{"subjects_count":12,"ages":[30,30,30,30,30,30,30,30,30,30,30,30],"age_min":30,"age_max":30,"age_mean":30.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000344","osf_url":null,"github_url":null,"paper_url":null},"funding":["AID (Powerbrain project), France","AXA Research Fund Chair for Neuroergonomics, France","Chair for Neuroadaptive Technology, Artificial and Natural Intelligence Toulouse Institute (ANITI), France"],"ingestion_fingerprint":"20b70df1e3d4f32deebaff9e234419228e52f43118ba487bc1dbd56264a742ca","license":"CC-BY-4.0","n_contributing_labs":null,"name":"CastillosBurstVEP100","readme":"CastillosBurstVEP100\n====================\nc-VEP and Burst-VEP dataset from Castillos et al. 
(2023)\nDataset Overview\n----------------\n  Code: CastillosBurstVEP100\n  Paradigm: cvep\n  DOI: https://doi.org/10.1016/j.neuroimage.2023.120446\n  Subjects: 12\n  Sessions per subject: 1\n  Events: 0=100, 1=101\n  Trial interval: (0, 0.25) s\n  File format: EEGLAB .set\n  Number of contributing labs: 1\nAcquisition\n-----------\n  Sampling rate: 500.0 Hz\n  Number of channels: 32\n  Channel types: eeg=32\n  Channel names: C3, C4, CP1, CP2, CP5, CP6, Cz, F10, F3, F4, F7, F8, F9, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, O1, O2, Oz, P10, P3, P4, P7, P8, P9, Pz, T7, T8\n  Montage: standard_1020\n  Hardware: BrainProducts LiveAmp 32\n  Reference: FCz\n  Ground: FPz\n  Sensor type: eeg\n  Line frequency: 50.0 Hz\n  Online filters: {'notch': {'freq': 50.0, 'bandwidth': 0.2, 'order': 16, 'type': 'IIR cut-band'}}\n  Impedance threshold: 25.0 kOhm\n  Cap manufacturer: BrainProducts\n  Cap model: Acticap\n  Electrode type: active\nParticipants\n------------\n  Number of subjects: 12\n  Health status: healthy\n  Age: mean=30.6, std=7.1\n  Gender distribution: female=4, male=8\n  Species: human\nExperimental Protocol\n---------------------\n  Paradigm: cvep\n  Task type: target selection\n  Number of classes: 2\n  Class labels: 0, 1\n  Trial duration: 2.2 s\n  Tasks: visual attention, target selection\n  Study design: factorial within-subject\n  Study domain: BCI performance and user experience\n  Feedback type: none\n  Stimulus type: visual\n  Stimulus modalities: visual\n  Primary modality: visual\n  Synchronicity: synchronous\n  Mode: offline\n  Training/test split: False\n  Instructions: Focus on cued targets sequentially in random order\n  Stimulus presentation: software=PsychoPy, monitor=Dell P2419HC, resolution=1920x1080, refresh_rate_hz=60\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_0\n  1\n 
   ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_1\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: cvep\n  Code type: burst\n  Number of targets: 4\n  Cue duration: 0.5 s\nData Structure\n--------------\n  Trials: 60\n  Blocks per session: 15\n  Trials context: 15 blocks x 4 trials per block = 60 trials per subject for burst c-VEP at 100% amplitude\nPreprocessing\n-------------\n  Data state: raw\nSignal Processing\n-----------------\n  Classifiers: Convolutional Neural Network (CNN), Pearson correlation\n  Feature extraction: CNN spatial filtering (8x1 kernel, 16 filters), CNN temporal filtering (1x32 kernel with dilation 2, 8 filters), CNN 2D convolution (5x5 kernel, 4 filters), sliding windows (250ms, 2ms stride)\n  Frequency bands: analyzed=[0.1, 40.0] Hz\n  Spatial filters: CNN 8x1 spatial convolution (16 filters)\nCross-Validation\n----------------\n  Method: sequential train/test split\n  Evaluation type: offline classification, iterative calibration (1-6 blocks)\nPerformance (Original Study)\n----------------------------\n  Accuracy: 95.6%\n  ITR: 67.49 bits/min\n  Selection time: 1.5 s\n  CNN training time: 15.0 s\n  Burst 40% accuracy: 94.2%\n  m-sequence 100% accuracy: 85.0%\nBCI Application\n---------------\n  Applications: reactive BCI\n  Environment: controlled laboratory\n  Online feedback: False\nTags\n----\n  Pathology: Healthy\n  Modality: EEG\n  Type: reactive BCI, c-VEP, visual evoked potentials\nDocumentation\n-------------\n  Description: Burst c-VEP based BCI study comparing novel burst code sequences to traditional m-sequences at two amplitude depths (100% and 40%) to optimize classification performance, minimize calibration data, and improve user experience\n  DOI: 10.1016/j.neuroimage.2023.120446\n  Associated paper DOI: 10.1016/j.neuroimage.2023.120446\n  License: CC-BY-4.0\n  Investigators: Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric 
Dehais\n  Senior author: Frédéric Dehais\n  Contact: kalou.cabrera-castillos@isae-supaero.fr\n  Institution: Institut Supérieur de l'Aéronautique et de l'Espace (ISAE-SUPAERO)\n  Department: Human Factors and Neuroergonomics\n  Address: 10 Av. Edouard Belin, Toulouse, 31400, France\n  Country: FR\n  Repository: Zenodo\n  Data URL: https://zenodo.org/record/8255618\n  Publication year: 2023\n  Funding: AID (Powerbrain project), France; AXA Research Fund Chair for Neuroergonomics, France; Chair for Neuroadaptive Technology, Artificial and Natural Intelligence Toulouse Institute (ANITI), France\n  Ethics approval: University of Toulouse ethics committee (CER approval number 2020-334); Declaration of Helsinki\n  Acknowledgements: This work was funded by AID (Powerbrain project), France, the AXA Research Fund Chair for Neuroergonomics, France, and the Chair for Neuroadaptive Technology, Artificial and Natural Intelligence Toulouse Institute (ANITI), France.\n  Keywords: Code-VEP, Reactive BCI, CNN, Amplitude depth reduction, Visual comfort\nExternal Links\n--------------\n  Source: https://zenodo.org/record/8255618\n  Github: https://github.com/neuroergoISAE/burst_codes\nAbstract\n--------\nThe utilization of aperiodic flickering visual stimuli in the form of code-modulated Visual Evoked Potentials (c-VEP) represents a pivotal advancement in the field of reactive Brain–Computer Interfaces (rBCI). This study introduces Burst c-VEP, an innovative variant involving short bursts of aperiodic visual flashes at 2-4 flashes per second. The proposed burst c-VEP sequences exhibited higher accuracy (90.5%-95.6%) compared to their m-sequence counterparts (71.4%-85.0%), with a mean selection time of 1.5 s. Reducing stimulus intensity to 40% amplitude depth only slightly decreased accuracy to 94.2% while substantially improving user experience. 
The collected dataset and CNN architecture implementation are shared through open-access repositories.\nMethodology\n-----------\nTwelve healthy participants completed an offline 4-class c-VEP protocol using a factorial design. EEG was recorded at 500 Hz using a BrainProducts LiveAmp 32-channel system. Participants focused on cued targets with factorial manipulation of pattern type (burst vs m-sequence) and amplitude depth (100% vs 40%). Visual stimuli were presented on a 60 Hz Dell monitor. Burst codes consisted of brief flashes (~50ms) with a minimum 200ms inter-burst interval, while m-sequences used a Fibonacci-type LFSR with segmented 132-frame subsequences. A CNN architecture with spatial (8x1, 16 filters), temporal (1x32, 8 filters), and 2D convolution (5x5, 4 filters) layers decoded EEG using 250ms sliding windows with a 2ms stride. Calibration data ranged from 1-6 blocks (8.8-52.8s). Classification used sequential train/test splits with Pearson correlation for target selection. VEP analysis examined amplitude, latency, and inter-trial coherence. Statistical analyses used 2×2 repeated measures ANOVA.\nReferences\n----------\nKalou Cabrera Castillos. (2023). 4-class code-VEP EEG data [Data set]. Zenodo (dataset). DOI: https://doi.org/10.5281/zenodo.8255618\nKalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais. (2023). Burst c-VEP Based BCI: Optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience. NeuroImage, 284, 120446. ISSN 1053-8119. DOI: https://doi.org/10.1016/j.neuroimage.2023.120446\nNotes\n-----\n.. versionadded:: 1.1.0\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. 
https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":157314831,"source":"openneuro","storage":{"backend":"s3","base":"s3://openneuro.org/nm000344","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["cvep"],"timestamps":{"digested_at":"2026-04-22T12:52:29.956442+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":12,"computed_title":"CastillosBurstVEP100","nchans_counts":[{"val":32,"count":12}],"sfreq_counts":[{"val":500.0,"count":12}],"stats_computed_at":"2026-04-22T23:16:00.314680+00:00","total_duration_s":3157.9759999999997,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"d0a7f6255fe16575","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot conventions are the Visual/Motor BCI-style example (\"EEG Motor Movement/Imagery Dataset\") showing that Type should reflect the primary cognitive construct (e.g., Motor) rather than the mere presence of visual cues, and the schizophrenia visual discrimination example showing Visual stimuli → Modality=Visual and perceptual/selection paradigms can map to Perception/Attention depending on aim. 
For this dataset, the paradigm is a visual evoked potential reactive BCI (c-VEP) with explicit 'visual attention, target selection', so by convention Type should emphasize Attention (attending to a cued target) rather than generic Visual modality.","metadata_analysis":"Key explicit metadata facts:\n- Population/health: \"Health status: healthy\" and also under Tags: \"Pathology: Healthy\".\n- Stimulus modality: \"Stimulus type: visual\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\".\n- Task/construct: \"Tasks: visual attention, target selection\" and \"Task type: target selection\" within a \"Paradigm: cvep\" (code-modulated visual evoked potentials / reactive BCI).","paper_abstract_analysis":"The included abstract in the README reinforces that this is a reactive BCI using code-modulated visual flashes: \"code-modulated Visual Evoked Potentials (c-VEP)\" and participants focus on targets, consistent with visual-attention-based target selection rather than clinical intervention.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\"; Tags: \"Pathology: Healthy\".\n- Few-shot pattern suggests: when explicitly healthy/non-clinical, use Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"Stimulus type: visual\"; \"Primary modality: visual\"; c-VEP / VEP described.\n- Few-shot pattern suggests: visual stimulation paradigms → Visual.\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"Tasks: visual attention, target selection\" and \"Task type: target selection\" in a reactive BCI c-VEP paradigm.\n- Few-shot pattern suggests: for BCI-like paradigms, Type should reflect the main construct (e.g., attention to a target stimulus rather than response mechanics); visual discrimination maps to Perception, but explicit 'visual attention/target selection' pushes toward Attention.\n- Alignment: Mostly ALIGN (no conflict); only mild ambiguity between Attention vs Perception due to VEP/perceptual nature, resolved by explicit 
'visual attention' phrasing.","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n1) Healthy (WIN) — explicit: \"Health status: healthy\"; Tags: \"Pathology: Healthy\".\n2) Unknown (runner-up) — would apply only if health were unspecified (not the case).\nFinal: Healthy. Evidence alignment: aligned.\nConfidence justification: multiple explicit mentions of healthy status.\n\nModality:\n1) Visual (WIN) — explicit: \"Stimulus type: visual\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\".\n2) Other (runner-up) — only if non-standard modality were dominant (not supported).\nFinal: Visual. Evidence alignment: aligned.\nConfidence justification: 3 explicit modality statements.\n\nType:\n1) Attention (WIN) — explicit: \"Tasks: visual attention, target selection\"; \"Task type: target selection\"; instructions: \"Focus on cued targets\".\n2) Perception (runner-up) — plausible because c-VEP/VEP are evoked by visual stimulation, but the study goal is target selection via attention in reactive BCI.\nFinal: Attention. Evidence alignment: aligned (resolved ambiguity using explicit 'visual attention' wording).\nConfidence justification: 2 explicit task/construct statements plus clear paradigm context (c-VEP reactive BCI)."}},"canonical_name":null,"name_confidence":0.66,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Castillos2023_CastillosBurstVEP100"}}