{"success":true,"database":"eegdash","data":{"_id":"69d16e06897a7725c66f4ce4","dataset_id":"nm000346","associated_paper_doi":null,"authors":["Kalou Cabrera Castillos","Simon Ladouce","Ludovic Darmet","Frédéric Dehais"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.1016/j.neuroimage.2023.120446","datatypes":["eeg"],"demographics":{"subjects_count":12,"ages":[30,30,30,30,30,30,30,30,30,30,30,30],"age_min":30,"age_max":30,"age_mean":30.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000346","osf_url":null,"github_url":null,"paper_url":null},"funding":["AID (Powerbrain project), France","AXA Research Fund Chair for Neuroergonomics, France","Chair for Neuroadaptive Technology, Artificial and Natural Intelligence Toulouse Institute (ANITI), France"],"ingestion_fingerprint":"8e6e33dedfb44d123c0178dc72afc1c6d1c2b51033b0ffeaaa71973480f88e54","license":"CC-BY-4.0","n_contributing_labs":null,"name":"CastillosCVEP100","readme":"CastillosCVEP100\n================\nc-VEP and Burst-VEP dataset from Castillos et al. (2023)\nDataset Overview\n----------------\n  Code: CastillosCVEP100\n  Paradigm: cvep\n  DOI: https://doi.org/10.1016/j.neuroimage.2023.120446\n  Subjects: 12\n  Sessions per subject: 1\n  Events: 0=100, 1=101\n  Trial interval: (0, 0.25) s\n  File format: EEGLAB .set\nAcquisition\n-----------\n  Sampling rate: 500.0 Hz\n  Number of channels: 32\n  Channel types: eeg=32\n  Channel names: C3, C4, CP1, CP2, CP5, CP6, Cz, F10, F3, F4, F7, F8, F9, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, O1, O2, Oz, P10, P3, P4, P7, P8, P9, Pz, T7, T8\n  Montage: standard_1020\n  Hardware: BrainProducts LiveAmp\n  Reference: FCz\n  Ground: FPz\n  Sensor type: EEG\n  Line frequency: 50.0 Hz\n  Impedance threshold: 25.0 kOhm\n  Cap manufacturer: BrainProducts\n  Cap model: Acticap\n  Electrode type: active\nParticipants\n------------\n  Number of subjects: 12\n  Health status: healthy\n  Age: mean=30.6, std=7.1\n  Gender distribution: female=4, male=8\n  Species: human\nExperimental Protocol\n---------------------\n  Paradigm: cvep\n  Task type: visual attention\n  Number of classes: 2\n  Class labels: 0, 1\n  Trial duration: 2.2 s\n  Study design: factorial design (code type × amplitude depth)\n  Study domain: BCI performance and user experience\n  Feedback type: none\n  Stimulus type: visual flashing\n  Stimulus modalities: visual\n  Primary modality: visual\n  Synchronicity: synchronous\n  Mode: offline\n  Training/test split: False\n  Instructions: focus on four targets that were cued sequentially in a random order for 0.5 s, followed by a 2.2 s stimulation phase, before a 0.7 s inter-trial period\n  Stimulus presentation: display=Dell P2419HC LCD monitor, resolution=1920×1080 pixels, refresh_rate=60 Hz, brightness=265 cd/m², stimulus_size=150 pixels, background_luminance=124 lux (50% screen luminance), on_state_100=168 lux (100% amplitude depth), on_state_40=142 lux (40% amplitude depth), cue_duration=0.5 s, stimulation_duration=2.2 s, inter_trial_interval=0.7 s\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_0\n  1\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_1\nParadigm-Specific 
Parameters\n----------------------------\n  Detected paradigm: cvep\n  Code type: m-sequence (maximum-length sequence)\n  Code length: 132\n  Number of targets: 4\nData Structure\n--------------\n  Trials: 60\n  Blocks per session: 15\n  Trials context: 15 blocks × 4 trials (one per target) × 4 conditions (burst/mseq × 100%/40%)\nPreprocessing\n-------------\n  Data state: raw\nSignal Processing\n-----------------\n  Classifiers: Convolutional Neural Network (CNN)\n  Feature extraction: Sliding windows (250 ms, 2 ms stride), Standard deviation normalization\n  Spatial filters: 16 spatial filters via 1D spatial convolution (8×1 kernel)\nCross-Validation\n----------------\n  Method: sequential train/test split\n  Evaluation type: offline classification\nPerformance (Original Study)\n----------------------------\n  Accuracy: 85.0%\n  ITR: 48.7 bits/min\n  Selection time: 1.5 s\n  CNN training time (6 blocks): 40.0 s\n  Calibration data (6 blocks): 52.8 s\nBCI Application\n---------------\n  Applications: reactive BCI\n  Environment: laboratory\n  Online feedback: False\nTags\n----\n  Pathology: Healthy\n  Modality: EEG\n  Type: reactive BCI, visual evoked potentials\nDocumentation\n-------------\n  Description: 4-class code-VEP BCI dataset comparing burst c-VEP and m-sequence stimulation at two amplitude depths (100% and 40%) to optimize performance and user experience\n  DOI: 10.1016/j.neuroimage.2023.120446\n  Associated paper DOI: 10.1016/j.neuroimage.2023.120446\n  License: CC-BY-4.0\n  Investigators: Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais\n  Senior author: Frédéric Dehais\n  Contact: kalou.cabrera-castillos@isae-supaero.fr\n  Institution: Institut Supérieur de l'Aéronautique et de l'Espace (ISAE-SUPAERO)\n  Department: Human Factors and Neuroergonomics\n  Address: 10 Av. Edouard Belin, Toulouse, 31400, France\n  Country: FR\n  Repository: Zenodo\n  Data URL: https://zenodo.org/record/8255618\n  Publication year: 2023\n  Funding: AID (Powerbrain project), France; AXA Research Fund Chair for Neuroergonomics, France; Chair for Neuroadaptive Technology, Artificial and Natural Intelligence Toulouse Institute (ANITI), France\n  Ethics approval: Ethics committee of the University of Toulouse (CER approval number 2020-334); Declaration of Helsinki\n  Keywords: Code-VEP, Reactive BCI, CNN, Amplitude depth reduction, Visual comfort\nExternal Links\n--------------\n  Source: https://zenodo.org/record/8255618\n  GitHub code: https://github.com/neuroergoISAE/burst_codes\n  Paper: https://doi.org/10.1016/j.neuroimage.2023.120446\nAbstract\n--------\nThe utilization of aperiodic flickering visual stimuli under the form of code-modulated Visual Evoked Potentials (c-VEP) represents a pivotal advancement in the field of reactive Brain–Computer Interface (rBCI). This study introduces an innovative variant of code-VEP, referred to as 'Burst c-VEP', involving the presentation of short bursts of aperiodic visual flashes at a deliberately slow rate, typically ranging from two to four flashes per second. The proposed solutions were tested through an offline 4-classes c-VEP protocol involving 12 participants. The full amplitude burst c-VEP sequences exhibited higher accuracy, ranging from 90.5% (with 17.6 s of calibration data) to 95.6% (with 52.8 s of calibration data), compared to its m-sequence counterpart (71.4% to 85.0%). The mean selection time for both types of codes (1.5 s) compared favorably to reports from previous studies. 
Lowering the intensity of the stimuli only slightly decreased the accuracy of the burst code sequences to 94.2% while leading to substantial improvements in terms of user experience.\nMethodology\n-----------\nFactorial experimental design with 12 healthy participants. EEG recorded with BrainProducts LiveAmp 32-channel system at 500 Hz. Four conditions tested: burst c-VEP and m-sequence c-VEP, each at 100% and 40% amplitude depth. Participants focused on cued targets (4 classes) in 15 blocks of 4 trials per condition. CNN-based decoding with 250 ms sliding windows. Subjective ratings collected for visual comfort, mental tiredness, and intrusiveness. VEP analysis included amplitude, latency, and inter-trial coherence metrics.\nReferences\n----------\nKalou Cabrera Castillos. (2023). 4-class code-VEP EEG data [Data set]. Zenodo. DOI: https://doi.org/10.5281/zenodo.8255618\nKalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais. Burst c-VEP Based BCI: Optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience, NeuroImage, Volume 284, 2023, 120446, ISSN 1053-8119. DOI: https://doi.org/10.1016/j.neuroimage.2023.120446\nNotes\n.. versionadded:: 1.1.0\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":157895570,"source":"openneuro","storage":{"backend":"s3","base":"s3://openneuro.org/nm000346","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["cvep"],"timestamps":{"digested_at":"2026-04-22T12:52:30.205193+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":12,"computed_title":"CastillosCVEP100","nchans_counts":[{"val":32,"count":12}],"sfreq_counts":[{"val":500.0,"count":12}],"stats_computed_at":"2026-04-22T23:16:00.314703+00:00","total_duration_s":3168.9759999999997,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"ac4b50025436bac6","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Closest few-shot matches are the healthy visual task examples that show how to map stimulus channel and task aim to labels. (1) The schizophrenia visual discrimination dataset is labeled Modality=Visual and Type=Perception based on explicit \"visual discrimination task\" with moving dots; this guides selecting Visual when the stimulus is clearly visual. 
(2) The EEG motor movement/imagery dataset shows that even when a screen is used, Type should track the research focus (Motor there); this guides treating the present dataset as a BCI/target-focus paradigm rather than generic \"visual display\". There is no direct code-VEP BCI example in the few-shot set, so Type must rely more on this dataset’s explicit wording (e.g., \"visual attention\", focus on targets).","metadata_analysis":"Pathology/population facts: \"Health status: healthy\"; \"Pathology: Healthy\" (in Tags); \"Factorial experimental design with 12 healthy participants.\" Stimulus/modality facts: \"Stimulus type: visual flashing\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\". Task/construct facts: \"Task type: visual attention\"; \"Instructions: focus on four targets that were cued sequentially\"; \"Study domain: BCI performance and user experience\" and \"Applications: reactive BCI\".","paper_abstract_analysis":"The included abstract reinforces that this is a visual-evoked-potential BCI paradigm: \"code-modulated Visual Evoked Potentials (c-VEP)\" and \"offline 4-classes c-VEP protocol\" with \"aperiodic visual flashes\". This supports Visual modality and an attention-to-targets BCI decoding purpose.","evidence_alignment_check":"Pathology: Metadata says \"Health status: healthy\" / \"12 healthy participants\" / tag \"Pathology: Healthy\". Few-shot pattern suggests using Healthy when no disorder recruitment is present. ALIGN.\nModality: Metadata says \"Stimulus type: visual flashing\" and \"Primary modality: visual\". Few-shot pattern suggests modality follows stimulus channel (e.g., visual discrimination task -> Visual). ALIGN.\nType: Metadata says \"Task type: visual attention\" and instructs participants to \"focus on four targets\" in a reactive BCI (code-VEP) context. Few-shot pattern has visual discrimination -> Perception, but also indicates Type should reflect the main construct studied; here the dataset explicitly frames the task as attention/target focus for BCI performance. PARTIAL CONFLICT (Perception vs Attention), resolved in favor of Attention due to explicit metadata wording.","decision_summary":"Top-2 candidates per category:\n1) Pathology: (A) Healthy vs (B) Unknown. Healthy wins because of explicit statements: \"Health status: healthy\", \"12 healthy participants\", and tag \"Pathology: Healthy\". Evidence alignment: aligned with few-shot convention. \n2) Modality: (A) Visual vs (B) Other. Visual wins due to explicit stimulus descriptions: \"Stimulus type: visual flashing\", \"Stimulus modalities: visual\", \"Primary modality: visual\". Evidence alignment: aligned with few-shot mapping of visual stimuli to Visual.\n3) Type: (A) Attention vs (B) Perception. Attention wins because the dataset explicitly defines \"Task type: visual attention\" and instructs participants to \"focus on four targets\" (selective attention to a cued target) in a reactive BCI context (\"Study domain: BCI performance\"). Perception remains plausible because it is a VEP paradigm, but the stronger, explicit construct label in metadata is attention. Confidence reflects remaining ambiguity between Attention and Perception."}},"canonical_name":null,"name_confidence":0.74,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Castillos2023_CastillosCVEP100"}}
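A minimal sketch of how the fields in this record can be read, assuming the response above has been saved verbatim to a local JSON file. The field names (data, dataset_id, storage, nchans_counts, total_duration_s, readme) are taken from the response itself; the file name and the use of Python's standard-library json module are assumptions for illustration, not the eegdash client API.

import json

# Hypothetical local copy of the API response shown above.
with open("eegdash_nm000346.json", encoding="utf-8") as f:
    response = json.load(f)

record = response["data"]                     # the dataset document itself

print(record["dataset_id"])                   # nm000346
print(record["computed_title"])               # CastillosCVEP100
print(record["storage"]["base"])              # s3://openneuro.org/nm000346

# Per the nchans_counts/sfreq_counts fields, all 12 recordings have 32 channels at 500 Hz.
n_recordings = sum(entry["count"] for entry in record["nchans_counts"])
hours = record["total_duration_s"] / 3600.0
print(f"{n_recordings} recordings, about {hours:.2f} h of EEG in total")

# The MOABB-generated README is stored verbatim as a single string.
print(record["readme"].splitlines()[0])       # CastillosCVEP100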