{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cc7","dataset_id":"nm000260","associated_paper_doi":null,"authors":["G.F.P. Van Veen","A. Barachant","A. Andreev","G. Cattan","P. Rodrigues","M. Congedo"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.5281/zenodo.2649006","datatypes":["eeg"],"demographics":{"subjects_count":23,"ages":[24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24],"age_min":24,"age_max":24,"age_mean":24.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000260","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"c1b786cee2f31a1b46bc620b601fd92aea151f8afa4881b9fcfaee0891947771","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Van Veen, Barachant & Andreev 2012 — Building Brain Invaders: EEG data of an experimental validation (BI2012)","readme":"BrainInvaders2012\n=================\nP300 dataset BI2012 from a \"Brain Invaders\" experiment.\nDataset Overview\n----------------\n  Code: BrainInvaders2012\n  Paradigm: p300\n  DOI: https://doi.org/10.5281/zenodo.2649006\n  Subjects: 25\n  Sessions per subject: 1\n  Events: Target=2, NonTarget=1\n  Trial interval: [0, 1] s\n  Runs per session: 2\n  Session IDs: 0\n  File format: mat, csv\n  Contributing labs: GIPSA-lab\nAcquisition\n-----------\n  Sampling rate: 128.0 Hz\n  Number of channels: 16\n  Channel types: eeg=16\n  Channel names: F7, F3, F4, F8, T7, C3, Cz, C4, T8, P7, P3, Pz, P4, P8, O1, O2\n  Montage: standard_1020\n  Hardware: NeXus-32 (MindMedia/TMSi)\n  Software: OpenVibe\n  Reference: hardware common average reference\n  Ground: FZ\n  Sensor type: EEG\n  Line frequency: 50.0 Hz\n  Electrode type: wet\n  Electrode material: Silver/Silver Chloride\nParticipants\n------------\n  Number of subjects: 25\n  Health 
status: healthy\n  Age: mean=24.4, std=2.76, min=21, max=31\n  BCI experience: half played games occasionally (around 4.5 hours a week)\n  Species: human\nExperimental Protocol\n---------------------\n  Paradigm: p300\n  Task type: brain_invaders\n  Number of classes: 2\n  Class labels: Target, NonTarget\n  Study design: longitudinal and transversal design with training-test mode of operation\n  Feedback type: visual (game interface)\n  Stimulus type: visual flashes of alien groups\n  Stimulus modalities: visual\n  Primary modality: visual\n  Synchronicity: synchronous\n  Mode: both\n  Training/test split: True\n  Instructions: limit eye blinks, head movements and face muscular contractions; silently count the number of Target flashes\n  Stimulus presentation: repetition_structure=12 flashes per repetition (2 Target, 10 non-Target), flash_groups=12 groups of 6 aliens (36 total aliens), target_ratio=1:5 (Target to non-Target), screen_distance=75 to 115 cm\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: p300\n  Number of repetitions: 8\nData Structure\n--------------\n  Trials: {'Target': 128, 'non-Target': 640}\n  Trials per class: Target=128, non-Target=640\n  Trials context: per session (Training session); variable in Online session depending on user performance\nPreprocessing\n-------------\n  Data state: raw EEG with software tagging (note: tagging introduces jitter and latency)\n  Preprocessing applied: False\n  Notes: Software tagging introduces jitter and latency that artificially shift the ERP onset. Strong drift over time results in higher jitter. 
When latency is not corrected, ERPs can only be compared if they were acquired under the same experimental conditions.\nSignal Processing\n-----------------\n  Classifiers: xDAWN, Riemannian\n  Feature extraction: Covariance/Riemannian, xDAWN\n  Spatial filters: xDAWN\nPerformance (Original Study)\n----------------------------\nBCI Application\n---------------\n  Applications: gaming\n  Environment: laboratory\n  Online feedback: True\nTags\n----\n  Pathology: Healthy\n  Modality: Visual\n  Type: Perception\nDocumentation\n-------------\n  Description: EEG recordings of 25 subjects testing the Brain Invaders, a visual P300 Brain-Computer Interface inspired by the famous vintage video game Space Invaders\n  DOI: 10.5281/zenodo.2649006\n  Associated paper DOI: 10.5281/zenodo.2649006\n  License: CC-BY-4.0\n  Investigators: G.F.P. Van Veen, A. Barachant, A. Andreev, G. Cattan, P. Rodrigues, M. Congedo\n  Senior author: M. Congedo\n  Institution: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP\n  Address: GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, France\n  Country: FR\n  Repository: Zenodo\n  Data URL: https://doi.org/10.5281/zenodo.2649006\n  Publication year: 2019\n  Acknowledgements: All subjects were volunteers recruited by means of flyers and of the mailing list of the University of Grenoble-Alpes. All participants provided written informed consent confirming the notification of the experimental process, the data management procedures and the right to withdraw from the experiment at any moment.\n  Keywords: Electroencephalography (EEG), P300, Brain-Computer Interface, Experiment\nAbstract\n--------\nWe describe the experimental procedures for a dataset that we have made publicly available at https://doi.org/10.5281/zenodo.2649006 in mat and csv formats. 
This dataset contains electroencephalographic (EEG) recordings of 25 subjects testing the Brain Invaders (1), a visual P300 Brain-Computer Interface inspired by the famous vintage video game Space Invaders (Taito, Tokyo, Japan). The visual P300 is an event-related potential elicited by a visual stimulation, peaking 240-600 ms after stimulus onset. EEG data were recorded by 16 electrodes in an experiment that took place in the GIPSA-lab, Grenoble, France, in 2012 (2,3). Python code for manipulating the data is available at https://github.com/plcrodrigues/py.BI.EEG.2012-GIPSA. The ID of this dataset is BI.EEG.2012-GIPSA.\nMethodology\n-----------\nThe visual P300 is an event-related potential (ERP) elicited by a visual stimulation, peaking 240-600 ms after stimulus onset. The experiment features a training-test mode of operation and both a longitudinal and transversal design. Training session: Target alien chosen randomly at each repetition, 8 Targets total, 8 repetitions each, resulting in 128 Target trials and 640 non-Target flashes. Online session: consisted of three levels with different distractor configurations, minimum 3.5 minutes per level, counter-balanced order across subjects. Interface: 36 aliens flashing in 12 groups of 6, each repetition has 12 flashes (2 Target, 10 non-Target). P300 peak latency: 240-600 ms post-stimulus.\nReferences\n----------\nVan Veen, G., Barachant, A., Andreev, A., Cattan, G., Rodrigues, P. C., & Congedo, M. (2019). Building Brain Invaders: EEG data of an experimental validation. arXiv preprint arXiv:1905.05182.\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. 
R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":172777937,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000260","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:09:45.540077+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-11T21:08:26Z"},"total_files":46,"computed_title":"Van Veen, Barachant & Andreev 2012 — Building Brain Invaders: EEG data of an experimental validation (BI2012)","nchans_counts":[{"val":17,"count":46}],"sfreq_counts":[{"val":128.0,"count":46}],"stats_computed_at":"2026-05-01T13:49:34.646383+00:00","total_duration_s":25640.640625,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"ef51ea3c6b0d8c47","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.75},"reasoning":{"few_shot_analysis":"Most similar few-shot examples are the oddball-style paradigms that rely on target vs non-target event classes and ERP extraction. In the Parkinson’s “Cross-modal Oddball Task” example, the task is explicitly an oddball with visual+auditory pre-cues and is labeled with the stimulus modality (Multisensory) and a task-purpose label that reflects the cognitive-control/oddball nature (though the final Type there is driven by clinical focus). 
This guides mapping BrainInvaders’ Target/NonTarget P300 paradigm to an attention/oddball-like Type rather than motor/learning. For modality, the schizophrenia “visual discrimination task” example shows that when stimuli are visual displays, Modality is labeled Visual; BrainInvaders similarly uses visual flashing stimuli.","metadata_analysis":"Key metadata facts:\n1) Population/health: \"Health status: healthy\" and also \"Tags\\n----\\n  Pathology: Healthy\".\n2) Paradigm/task: \"P300 dataset BI2012 from a \\\"Brain Invaders\\\" experiment.\" plus \"Paradigm: p300\" and \"Events: Target=2, NonTarget=1\".\n3) Stimulus modality: \"Stimulus type: visual flashes of alien groups\" and \"Stimulus modalities: visual\" / \"Primary modality: visual\".\n4) Task demand consistent with attention: \"Instructions: ... silently count the number of Target flashes\".","paper_abstract_analysis":"The dataset’s included abstract reinforces the ERP/oddball framing: it defines the P300 as \"an event-related potential elicited by a visual stimulation\" and describes subjects \"testing the Brain Invaders ... visual P300 Brain-Computer Interface\". 
No contradictory clinical or modality information is present.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\"; \"Pathology: Healthy\".\n- Few-shot pattern suggests: non-clinical university/lab volunteers without diagnosis -> Healthy.\n- ALIGN.\n\nModality:\n- Metadata says: \"Stimulus type: visual flashes of alien groups\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\".\n- Few-shot pattern suggests: visual screen-based stimuli -> Visual.\n- ALIGN.\n\nType:\n- Metadata says: \"Paradigm: p300\", \"Events: Target=2, NonTarget=1\", and instruction to \"silently count the number of Target flashes\" (classic target-detection/oddball attention manipulation).\n- Few-shot pattern suggests: oddball/target-detection ERP tasks are typically categorized under attention/perceptual processing rather than motor/learning.\n- ALIGN (with mild ambiguity between Attention vs Perception because both fit ERP target-detection tasks).","decision_summary":"Top-2 candidates with head-to-head selection:\n\nPathology candidates:\n1) Healthy (SELECTED)\n- Evidence: \"Health status: healthy\"; \"Pathology: Healthy\"; volunteers recruited at university (\"All subjects were volunteers recruited...\").\n2) Unknown\n- Would apply only if health status not stated; here it is explicit.\nDecision: Healthy clearly dominates.\nConfidence notes: supported by 2 explicit metadata statements.\n\nModality candidates:\n1) Visual (SELECTED)\n- Evidence: \"Stimulus type: visual flashes of alien groups\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\".\n2) Multisensory\n- Not supported; no auditory/tactile stimuli described.\nDecision: Visual clearly dominates.\nConfidence notes: supported by 3 explicit modality quotes.\n\nType candidates:\n1) Attention (SELECTED)\n- Evidence: target vs non-target oddball structure (\"Events: Target=2, NonTarget=1\"), P300 paradigm (\"Paradigm: p300\"), and explicit attentional instruction (\"silently count the 
number of Target flashes\").\n2) Perception\n- Plausible because it is visually evoked ERP and involves visual stimulus processing (\"visual flashes\"), but the core manipulation is target detection/oddball attention.\nDecision: Attention slightly stronger than Perception due to explicit counting/target-detection instruction.\nConfidence notes: multiple explicit task/ERP quotes support Attention, but Perception remains a close runner-up."}},"canonical_name":null,"name_confidence":0.9,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"BrainInvaders2012"}}