{"success":true,"database":"eegdash","data":{"_id":"69d16e06897a7725c66f4ce0","dataset_id":"nm000342","associated_paper_doi":null,"authors":["Kalou Cabrera Castillos","Simon Ladouce","Ludovic Darmet","Frédéric Dehais"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.1016/j.neuroimage.2023.120446","datatypes":["eeg"],"demographics":{"subjects_count":12,"ages":[30,30,30,30,30,30,30,30,30,30,30,30],"age_min":30,"age_max":30,"age_mean":30.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000342","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"36e24b59573a4d45f71f916fb25ba215202ad5efbc88f13f5736a6c67df6c739","license":"CC-BY-4.0","n_contributing_labs":null,"name":"CastillosCVEP40","readme":"CastillosCVEP40\n===============\nc-VEP and Burst-VEP dataset from Castillos et al. (2023)\nDataset Overview\n----------------\n  Code: CastillosCVEP40\n  Paradigm: cvep\n  DOI: https://doi.org/10.1016/j.neuroimage.2023.120446\n  Subjects: 12\n  Sessions per subject: 1\n  Events: 0=100, 1=101\n  Trial interval: (0, 0.25) s\n  File format: EEGLAB .set\n  Number of contributing labs: 1\nAcquisition\n-----------\n  Sampling rate: 500.0 Hz\n  Number of channels: 32\n  Channel types: eeg=32\n  Channel names: C3, C4, CP1, CP2, CP5, CP6, Cz, F10, F3, F4, F7, F8, F9, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, O1, O2, Oz, P10, P3, P4, P7, P8, P9, Pz, T7, T8\n  Montage: standard_1020\n  Hardware: BrainProducts LiveAmp 32\n  Reference: FCz\n  Ground: FPz\n  Sensor type: EEG\n  Line frequency: 50.0 Hz\n  Online filters: {'line_noise_filter': 'IIR cut-band filter 49.9-50.1 Hz, order 16'}\n  Impedance threshold: 25.0 kOhm\n  Cap manufacturer: BrainProducts\n  Cap model: Acticap\n  Electrode type: active\nParticipants\n------------\n  Number of subjects: 12\n  Health status: healthy\n  Age: mean=30.6, std=7.1\n  Gender distribution: female=4, male=8\n  Species: human\nExperimental Protocol\n---------------------\n  Paradigm: cvep\n  Task type: reactive BCI\n  Number of classes: 2\n  Class labels: 0, 1\n  Trial duration: 2.2 s\n  Tasks: visual_attention\n  Study design: factorial design\n  Study domain: brain-computer interface\n  Feedback type: none\n  Stimulus type: visual flicker\n  Stimulus modalities: visual\n  Primary modality: visual\n  Synchronicity: synchronous\n  Mode: offline\n  Training/test split: False\n  Instructions: focus on targets that were cued sequentially in a random order for 0.5 s, followed by a 2.2 s stimulation phase\n  Stimulus presentation: cue_duration=500 ms, stimulation_duration=2200 ms, inter_trial_interval=700 ms, cue_type=red-bordered square around target stimulus, display=Dell P2419HC, 1920×1080 pixels, 265 cd/m², 60 Hz\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_0\n  1\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_1\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: cvep\n  Code type: m-sequence\n  Number of targets: 4\n  Cue duration: 0.5 s\nData Structure\n--------------\n  Trials: 60\n  Blocks per session: 15\n  Trials context: 15 blocks x 4 trials per block = 60 trials per subject for m-sequence c-VEP at 
40% amplitude\nPreprocessing\n-------------\n  Data state: raw\nSignal Processing\n-----------------\n  Classifiers: CNN (Convolutional Neural Network)\n  Feature extraction: sliding windows, bitwise decoding\nCross-Validation\n----------------\n  Evaluation type: offline\nPerformance (Original Study)\n----------------------------\n  Accuracy: 95.6%\n  Burst 100% accuracy (17.6 s calibration): 90.5%\n  Burst 100% accuracy (52.8 s calibration): 95.6%\n  Burst 40% accuracy: 94.2%\n  m-sequence 100% accuracy (17.6 s calibration): 71.4%\n  m-sequence 100% accuracy (52.8 s calibration): 85.0%\n  Mean selection time: 1.5 s\nBCI Application\n---------------\n  Applications: reactive BCI\n  Environment: laboratory\n  Online feedback: False\nTags\n----\n  Pathology: Healthy\n  Modality: EEG\n  Type: reactive, code-VEP, visual\nDocumentation\n-------------\n  Description: Burst c-VEP Based BCI: Optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience\n  DOI: 10.1016/j.neuroimage.2023.120446\n  Associated paper DOI: 10.1016/j.neuroimage.2023.120446\n  License: CC-BY-4.0\n  Investigators: Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais\n  Senior author: Frédéric Dehais\n  Contact: kalou.cabrera-castillos@isae-supaero.fr\n  Institution: Institut Supérieur de l'Aéronautique et de l'Espace (ISAE-SUPAERO)\n  Department: Human Factors and Neuroergonomics\n  Address: 10 Av. Edouard Belin, Toulouse, 31400, France\n  Country: FR\n  Repository: Zenodo\n  Data URL: https://zenodo.org/record/8255618\n  Publication year: 2023\n  Ethics approval: University of Toulouse CER approval number 2020-334\n  Keywords: Code-VEP, Reactive BCI, CNN, Amplitude depth reduction, Visual comfort\nExternal Links\n--------------\n  Source: https://zenodo.org/record/8255618\nAbstract\n--------\nThe use of aperiodic flickering visual stimuli in the form of code-modulated Visual Evoked Potentials (c-VEP) represents a pivotal advancement in the field of reactive Brain–Computer Interfaces (rBCI). This study introduces an innovative variant of code-VEP, referred to as 'Burst c-VEP', involving the presentation of short bursts of aperiodic visual flashes at a deliberately slow rate (2-4 flashes per second). The study tested an offline 4-class c-VEP protocol involving 12 participants, with a factorial design manipulating pattern (burst vs. m-sequence) and amplitude (100% vs. 40% modulation depth). Full-amplitude burst c-VEP sequences exhibited higher accuracy (90.5% with 17.6 s of calibration, rising to 95.6% with 52.8 s) than m-sequences (71.4% to 85.0%). Mean selection time was 1.5 s. Lowering the modulation depth to 40% slightly decreased accuracy to 94.2% while substantially improving user experience.\nMethodology\n-----------\nFactorial experimental design with 12 participants. Four conditions: burst vs m-sequence × 100% vs 40% amplitude depth. Participants were seated comfortably and presented with 15 blocks of 4 trials for each condition. Each trial: 0.5 s cue (red-bordered square), 2.2 s stimulation, 0.7 s inter-trial interval. Four disc targets (150 pixels) on a Dell monitor (60 Hz). Background: medium grey (50% max luminance, 124 lux). 100% condition: modulation to brightest white (168 lux). 40% condition: 40% of the grey-to-white range (142 lux). EEG recorded with a BrainProducts LiveAmp (32 channels, 500 Hz), impedance < 25 kΩ. Analysis on a channel subset: O1, O2, Oz, Pz, P3, P4, P8, P9. Preprocessing: average re-reference, IIR notch filter (49.9-50.1 Hz, order 16), epoching (0-2.2 s), baseline removal. 
Classification: CNN architecture with sliding windows for bitwise decoding.\nReferences\n----------\nKalou Cabrera Castillos (2023). 4-class code-VEP EEG data [Data set]. Zenodo. https://doi.org/10.5281/zenodo.8255618\nKalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, & Frédéric Dehais (2023). Burst c-VEP Based BCI: Optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience. NeuroImage, 284, 120446. ISSN 1053-8119. https://doi.org/10.1016/j.neuroimage.2023.120446\nNotes\n-----\n.. versionadded:: 1.1.0\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":152349409,"source":"nemar","storage":{"backend":"s3","base":"s3://openneuro.org/nm000342","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["cvep"],"timestamps":{"digested_at":"2026-04-22T12:52:29.570102+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":12,"computed_title":"CastillosCVEP40","nchans_counts":[{"val":32,"count":12}],"sfreq_counts":[{"val":500.0,"count":12}],"stats_computed_at":"2026-04-22T23:16:00.314655+00:00","total_duration_s":3055.9759999999997,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"49dcd79301a27981","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot examples by paradigm/style are the BCI datasets, especially the motor imagery BCI example (\"EEG Motor Movement/Imagery Dataset\"), which maps a BCI paradigm to a cognitive Type focused on the targeted construct (Motor when motor execution/imagery is the research focus). Here, the paradigm is a reactive BCI based on code-modulated visual evoked potentials (c-VEP), so by the same convention we should label by the dominant cognitive construct in the task. Unlike the schizophrenia visual discrimination example (labeled Perception), this dataset is not primarily about perceptual discrimination performance but about attending to a cued visual flicker target to drive a BCI, which aligns better with an Attention label. 
The few-shot examples also reinforce using the stimulus channel for Modality (e.g., auditory oddball -> Auditory), guiding a Visual modality here.","metadata_analysis":"Key population facts: \"Health status: healthy\" and \"Number of subjects: 12\" under Participants, plus the explicit tag \"Pathology: Healthy\".\nKey stimulus/modality facts: \"Stimulus type: visual flicker\", \"Stimulus modalities: visual\", and \"Primary modality: visual\".\nKey task/construct facts: \"Task type: reactive BCI\", \"Tasks: visual_attention\", and the instruction \"focus on targets that were cued sequentially ... followed by a 2.2 s stimulation phase\". The paradigm description also states \"c-VEP and Burst-VEP dataset\" and \"Study domain: brain-computer interface\" indicating a VEP-based attention-to-target paradigm.","paper_abstract_analysis":"The included abstract supports a reactive code-VEP BCI framing rather than basic perception: it describes \"code-modulated Visual Evoked Potentials (c-VEP)\" and an offline multi-class c-VEP protocol aimed at optimizing stimulus design and classification performance (e.g., \"enhanced classification with minimal calibration data\"). This aligns with an attention-to-target VEP/BCI use case.","evidence_alignment_check":"Pathology: Metadata says participants are healthy (\"Health status: healthy\"; tag \"Pathology: Healthy\"), and few-shot conventions assign Healthy when no clinical recruitment is stated. ALIGN.\nModality: Metadata explicitly states visual stimulation (\"Stimulus type: visual flicker\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\"). Few-shot convention uses stimulus channel to set Modality. ALIGN.\nType: Metadata explicitly labels the task as attention-oriented (\"Tasks: visual_attention\"; instruction to \"focus on targets\"), while few-shot convention is to label by the primary construct (e.g., motor imagery BCI -> Motor). For VEP-based reactive BCI, the construct is selective attention to a visual target; this supports Attention. Potential alternative Perception is suggested by the presence of VEPs/visual stimulation, but the task emphasis is target-focused attention and BCI control. Mostly ALIGN with few-shot style (construct-focused), with mild ambiguity between Attention vs Perception resolved by the explicit \"visual_attention\" task label.","decision_summary":"Pathology top-2: (1) Healthy vs (2) Unknown. Healthy wins due to explicit metadata: \"Health status: healthy\", tag \"Pathology: Healthy\", and participant section stating healthy cohort. Confidence supported by 3 explicit cues.\nModality top-2: (1) Visual vs (2) Multisensory. Visual wins due to explicit metadata: \"Stimulus type: visual flicker\", \"Stimulus modalities: visual\", \"Primary modality: visual\". Confidence supported by 3 explicit cues.\nType top-2: (1) Attention vs (2) Perception. Attention wins because the dataset explicitly specifies \"Tasks: visual_attention\" and instructs participants to \"focus on targets that were cued\" in a reactive BCI setting; Perception is plausible because it is a visual evoked potential paradigm, but the primary construct is attentional selection for BCI control rather than perceptual discrimination. 
Confidence based on 2+ explicit task/goal statements plus supporting abstract framing."}},"canonical_name":null,"name_confidence":0.72,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Castillos2023_CastillosCVEP40"}}
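
The record above is a single JSON document from the eegdash API. Below is a minimal sketch of how a client might parse it and sanity-check the figures reported under Data Structure (15 blocks x 4 trials = 60 trials per subject) and total_duration_s. The local filename record.json is hypothetical; every field name is taken directly from the record::

    import json

    # Load a local copy of the record shown above (hypothetical filename).
    with open("record.json") as f:
        record = json.load(f)["data"]

    demo = record["demographics"]
    print(record["dataset_id"], record["name"], record["license"])
    print(demo["subjects_count"], "subjects,",
          record["nchans_counts"][0]["val"], "channels at",
          record["sfreq_counts"][0]["val"], "Hz")

    # Data Structure section: 15 blocks x 4 trials = 60 trials per subject.
    blocks, trials_per_block = 15, 4
    assert blocks * trials_per_block == 60

    # ~3056 s of EEG across 12 subjects -> roughly 255 s per subject.
    per_subject = record["total_duration_s"] / demo["subjects_count"]
    print(round(per_subject, 1), "s of recording per subject")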
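The storage block (backend s3, base s3://openneuro.org/nm000342, raw_key dataset_description.json) indicates the files sit under OpenNeuro's S3 bucket. A sketch of an anonymous download with boto3, assuming the bucket and this accession's keys are publicly readable (OpenNeuro's bucket normally is, but that is an assumption for this record)::

    import json

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # storage.base is "s3://openneuro.org/nm000342": bucket + key prefix.
    bucket, prefix = "openneuro.org", "nm000342"

    # Anonymous (unsigned) client; public readability of this accession
    # is an assumption, not something the record guarantees.
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    # Fetch the raw_key named in the record's storage block.
    obj = s3.get_object(Bucket=bucket, Key=prefix + "/dataset_description.json")
    print(json.loads(obj["Body"].read()))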
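The Methodology text pins down a concrete preprocessing chain: average re-reference, IIR notch at 49.9-50.1 Hz, 0-2.2 s epochs, and an occipito-parietal channel subset. A sketch of that chain with MNE-Python, assuming the hypothetical path sub-01_task-cvep_eeg.set points at one of the dataset's EEGLAB .set files and that events are recoverable from annotations; MNE's generic notch_filter stands in for the paper's order-16 cut-band filter::

    import mne

    # Hypothetical path to one of the dataset's EEGLAB .set files.
    raw = mne.io.read_raw_eeglab("sub-01_task-cvep_eeg.set", preload=True)

    # Average re-reference, then suppress 50 Hz line noise (the original
    # study used an IIR cut-band filter at 49.9-50.1 Hz, order 16).
    raw.set_eeg_reference("average")
    raw.notch_filter(freqs=50.0, method="iir")

    # Epoch the 2.2 s stimulation phase; per the Dataset Overview,
    # events 0/1 map to trigger codes 100/101.
    events, event_id = mne.events_from_annotations(raw)
    epochs = mne.Epochs(raw, events, event_id=event_id,
                        tmin=0.0, tmax=2.2, baseline=None, preload=True)

    # Restrict to the occipito-parietal subset used in the original analysis.
    # (baseline=None above: the study's exact baseline-removal step is not
    # specified beyond "baseline removal", so it is left to the reader.)
    epochs.pick(["O1", "O2", "Oz", "Pz", "P3", "P4", "P8", "P9"])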