{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c86","dataset_id":"nm000163","associated_paper_doi":null,"authors":["Kalou Cabrera Castillos","Simon Ladouce","Ludovic Darmet","Frédéric Dehais"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":12,"ages":[30,30,30,30,30,30,30,30,30,30,30,30],"age_min":30,"age_max":30,"age_mean":30.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000163","osf_url":null,"github_url":null,"paper_url":null},"funding":["AID (Powerbrain project), France","AXA Research Fund Chair for Neuroergonomics, France","Chair for Neuroadaptive Technology, Artificial and Natural Intelligence Toulouse Institute (ANITI), France"],"ingestion_fingerprint":"0d688fbc35ba903e62eefbb04dce5d0098d92cb91ff4d490a0c66565cfe3e88a","license":"CC-BY-4.0","n_contributing_labs":null,"name":"c-VEP and Burst-VEP dataset from Castillos et al. (2023)","readme":"# c-VEP and Burst-VEP dataset from Castillos et al. (2023)\nc-VEP and Burst-VEP dataset from Castillos et al. (2023)\n## Dataset Overview\n- **Code**: CastillosBurstVEP100\n- **Paradigm**: cvep\n- **DOI**: https://doi.org/10.1016/j.neuroimage.2023.120446\n- **Subjects**: 12\n- **Sessions per subject**: 1\n- **Events**: 0=100, 1=101\n- **Trial interval**: (0, 0.25) s\n- **File format**: EEGLAB .set\n- **Number of contributing labs**: 1\n## Acquisition\n- **Sampling rate**: 500.0 Hz\n- **Number of channels**: 32\n- **Channel types**: eeg=32\n- **Channel names**: C3, C4, CP1, CP2, CP5, CP6, Cz, F10, F3, F4, F7, F8, F9, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, O1, O2, Oz, P10, P3, P4, P7, P8, P9, Pz, T7, T8\n- **Montage**: standard_1020\n- **Hardware**: BrainProducts LiveAmp 32\n- **Reference**: FCz\n- **Ground**: FPz\n- **Sensor type**: eeg\n- **Line frequency**: 50.0 Hz\n- **Online filters**: {'notch': {'freq': 50.0, 'bandwidth': 0.2, 'order': 16, 'type': 'IIR cut-band'}}\n- **Impedance threshold**: 25.0 kOhm\n- **Cap manufacturer**: BrainProducts\n- **Cap model**: Acticap\n- **Electrode type**: active\n## Participants\n- **Number of subjects**: 12\n- **Health status**: healthy\n- **Age**: mean=30.6, std=7.1\n- **Gender distribution**: female=4, male=8\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: cvep\n- **Task type**: target selection\n- **Number of classes**: 2\n- **Class labels**: 0, 1\n- **Trial duration**: 2.2 s\n- **Tasks**: visual attention, target selection\n- **Study design**: factorial within-subject\n- **Study domain**: BCI performance and user experience\n- **Feedback type**: none\n- **Stimulus type**: visual\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Training/test split**: False\n- **Instructions**: Focus on cued targets sequentially in random order\n- **Stimulus presentation**: software=PsychoPy, monitor=Dell P2419HC, resolution=1920x1080, refresh_rate_hz=60\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_0\n  1\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_1\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: cvep\n- **Code type**: burst\n- 
**Number of targets**: 4\n- **Cue duration**: 0.5 s\n## Data Structure\n- **Trials**: 60\n- **Blocks per session**: 15\n- **Trials context**: 15 blocks x 4 trials per block = 60 trials per subject for burst c-VEP at 100% amplitude\n## Preprocessing\n- **Data state**: raw\n## Signal Processing\n- **Classifiers**: Convolutional Neural Network (CNN), Pearson correlation\n- **Feature extraction**: CNN spatial filtering (8x1 kernel, 16 filters), CNN temporal filtering (1x32 kernel with dilation 2, 8 filters), CNN 2D convolution (5x5 kernel, 4 filters), sliding windows (250ms, 2ms stride)\n- **Frequency bands**: analyzed=[0.1, 40.0] Hz\n- **Spatial filters**: CNN 8x1 spatial convolution (16 filters)\n## Cross-Validation\n- **Method**: sequential train/test split\n- **Evaluation type**: offline classification, iterative calibration (1-6 blocks)\n## Performance (Original Study)\n- **Accuracy**: 95.6%\n- **ITR**: 67.49 bits/min\n- **Selection time**: 1.5 s\n- **CNN training time**: 15.0 s\n- **Burst 40% accuracy**: 94.2%\n- **m-sequence 100% accuracy**: 85.0%\n## BCI Application\n- **Applications**: reactive BCI\n- **Environment**: controlled laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: EEG\n- **Type**: reactive BCI, c-VEP, visual evoked potentials\n## Documentation\n- **Description**: Burst c-VEP based BCI study comparing novel burst code sequences to traditional m-sequences at two amplitude depths (100% and 40%) to optimize classification performance, minimize calibration data, and improve user experience\n- **DOI**: 10.1016/j.neuroimage.2023.120446\n- **Associated paper DOI**: 10.1016/j.neuroimage.2023.120446\n- **License**: CC-BY-4.0\n- **Investigators**: Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais\n- **Senior author**: Frédéric Dehais\n- **Contact**: kalou.cabrera-castillos@isae-supaero.fr\n- **Institution**: Institut Supérieur de l'Aéronautique et de l'Espace (ISAE-SUPAERO)\n- **Department**: Human Factors and Neuroergonomics\n- **Address**: 10 Av. Edouard Belin, Toulouse, 31400, France\n- **Country**: FR\n- **Repository**: Zenodo\n- **Data URL**: https://zenodo.org/record/8255618\n- **Publication year**: 2023\n- **Funding**: AID (Powerbrain project), France; AXA Research Fund Chair for Neuroergonomics, France; Chair for Neuroadaptive Technology, Artificial and Natural Intelligence Toulouse Institute (ANITI), France\n- **Ethics approval**: University of Toulouse ethics committee (CER approval number 2020-334); Declaration of Helsinki\n- **Acknowledgements**: This work was funded by AID (Powerbrain project), France, the AXA Research Fund Chair for Neuroergonomics, France and Chair for Neuroadaptive Technology, Artificial and Natural Intelligence Toulouse Institute (ANITI), France.\n- **Keywords**: Code-VEP, Reactive BCI, CNN, Amplitude depth reduction, Visual comfort\n## External Links\n- **Source**: https://zenodo.org/record/8255618\n- **Github**: https://github.com/neuroergoISAE/burst_codes\n## Abstract\nThe utilization of aperiodic flickering visual stimuli under the form of code-modulated Visual Evoked Potentials (c-VEP) represents a pivotal advancement in the field of reactive Brain–Computer Interface (rBCI). This study introduces Burst c-VEP, an innovative variant involving short bursts of aperiodic visual flashes at 2-4 flashes per second. The proposed burst c-VEP sequences exhibited higher accuracy (90.5%-95.6%) compared to m-sequence counterparts (71.4%-85.0%) with mean selection time of 1.5s. 
Reducing stimulus intensity to 40% amplitude depth only slightly decreased accuracy to 94.2% while substantially improving user experience. The collected dataset and CNN architecture implementation are shared through open-access repositories.\n## Methodology\nTwelve healthy participants completed an offline 4-class c-VEP protocol using a factorial design. EEG was recorded at 500 Hz using a BrainProducts LiveAmp 32-channel system. Participants focused on cued targets with factorial manipulation of pattern type (burst vs m-sequence) and amplitude depth (100% vs 40%). Visual stimuli were presented on a 60 Hz Dell monitor. Burst codes consisted of brief flashes (~50ms) with minimum 200ms inter-burst interval, while m-sequences used Fibonacci-type LFSR with segmented 132-frame subsequences. A CNN architecture with spatial (8x1, 16 filters), temporal (1x32, 8 filters), and 2D convolution (5x5, 4 filters) layers decoded EEG using 250ms sliding windows with 2ms stride. Calibration data ranged from 1-6 blocks (8.8-52.8s). Classification used sequential train/test splits with Pearson correlation for target selection. VEP analysis examined amplitude, latency, and inter-trial coherence. Statistical analyses used 2×2 repeated measures ANOVA.\n## References\nKalou Cabrera Castillos. (2023). 4-class code-VEP EEG data [Data set]. Zenodo (dataset). DOI: https://doi.org/10.5281/zenodo.8255618\nKalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais. (2023). Burst c-VEP Based BCI: Optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience. NeuroImage, 284, 120446. ISSN 1053-8119. DOI: https://doi.org/10.1016/j.neuroimage.2023.120446\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":167883135,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000163","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["cvep"],"timestamps":{"digested_at":"2026-04-30T14:08:45.603518+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-02T21:18:32Z"},"total_files":12,"computed_title":"c-VEP and Burst-VEP dataset from Castillos et al. 
(2023)","nchans_counts":[{"val":32,"count":12}],"sfreq_counts":[{"val":500.0,"count":12}],"stats_computed_at":"2026-05-01T13:49:34.645298+00:00","total_duration_s":3161.9480000000003,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"87d253e659a48ca5","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.7},"reasoning":{"few_shot_analysis":"Closest conventions in few-shot examples:\n- The motor imagery benchmark example (“EEG Motor Movement/Imagery Dataset”) maps explicit movement/imagery paradigms to Type=“Motor”, showing that Type should reflect the primary cognitive construct rather than general “BCI”.\n- The schizophrenia visual discrimination example (“Meta-rdk… visual discrimination task… moving dots”) maps a visual stimulus discrimination paradigm to Modality=“Visual” and Type=“Perception”, illustrating that stimulus channel drives Modality and that perceptual/evoked-response paradigms often map to Perception unless the instructions emphasize attentional control.\nApplying these conventions here: this is a visual evoked potential (c-VEP) reactive BCI where participants “focus on cued targets”, which is more aligned with attentional selection than sensory discrimination per se, so Type leans toward “Attention” rather than “Perception”.","metadata_analysis":"Key quoted metadata facts:\n- Population: “Health status: healthy” and also “Subjects: 12” under Participants.\n- Stimulus modality: “Stimulus type: visual”, “Stimulus modalities: visual”, and “Primary modality: visual”.\n- Task/construct: “Tasks: visual attention, target selection” and “Instructions: Focus on cued targets sequentially in random order”.\n- Paradigm framing: “Study domain: BCI performance and user experience” and tags: “reactive BCI, c-VEP, visual evoked potentials”.","paper_abstract_analysis":"Useful paper information is included in the dataset’s embedded abstract/methods. It reinforces a reactive BCI c-VEP paradigm with visual flashes and attention to cued targets: “code-modulated Visual Evoked Potentials (c-VEP)… short bursts of aperiodic visual flashes” and “Participants focused on cued targets…”. 
This supports Modality=Visual and Type leaning to Attention.","evidence_alignment_check":"Pathology:\n- Metadata says: “Health status: healthy”.\n- Few-shot pattern suggests: datasets with explicitly healthy participants map to Pathology=“Healthy” (e.g., multiple few-shot datasets labeled Healthy).\n- ALIGN.\n\nModality:\n- Metadata says: “Stimulus type: visual” / “Primary modality: visual”.\n- Few-shot pattern suggests: visual stimulus paradigms map to Modality=“Visual” (e.g., schizophrenia moving-dots task labeled Visual; motor imagery task uses visual cues but is still labeled Visual for modality there).\n- ALIGN.\n\nType:\n- Metadata says: “Tasks: visual attention, target selection” and “Instructions: Focus on cued targets…”, and it is a “reactive BCI” using c-VEP.\n- Few-shot pattern suggests: (a) perceptual discrimination tasks often map to “Perception”; (b) when the core construct is control/selection of target stimuli, it can map to “Attention” (e.g., DPX cognitive control task labeled Attention).\n- PARTIAL ALIGN with both Attention and Perception plausible; no direct few-shot c-VEP example, so we choose based on the explicit ‘visual attention/target selection’ wording.","decision_summary":"Top-2 candidates per category with head-to-head selection:\n\nPathology:\n1) Healthy — Evidence: “Health status: healthy”.\n2) Unknown — only if health status were absent (not the case).\nSelected: Healthy. Alignment: aligns with few-shot healthy-control conventions.\n\nModality:\n1) Visual — Evidence: “Stimulus type: visual”, “Stimulus modalities: visual”, “Primary modality: visual”.\n2) Multisensory — no supporting evidence; no auditory/tactile channels described.\nSelected: Visual. Alignment: aligns with few-shot visual-stimulus convention.\n\nType:\n1) Attention — Evidence: “Tasks: visual attention, target selection”; “Instructions: Focus on cued targets…”. Reactive BCI target selection is essentially attentional selection to one of multiple flickering/coded targets.\n2) Perception — Evidence: “c-VEP… visual evoked potentials” and “visual flashes” could be framed as sensory/evoked-response measurement.\nSelected: Attention (stronger because metadata explicitly labels the task as visual attention/target selection, not discrimination or perceptual judgment)."}},"canonical_name":null,"name_confidence":0.66,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Castillos2023_VEP"}}