{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cc8","dataset_id":"nm000264","associated_paper_doi":null,"authors":["E. Vaineau","A. Barachant","A. Andreev","P. Rodrigues","G. Cattan","M. Congedo"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.5281/zenodo.1494163","datatypes":["eeg"],"demographics":{"subjects_count":24,"ages":[25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25],"age_min":25,"age_max":25,"age_mean":25.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000264","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"ba4df428ca0ed84f28d6750709cfb8fea19afa3cad32b26101629233cb121121","license":"CC-BY-1.0","n_contributing_labs":null,"name":"Vaineau, Barachant & Andreev 2013 — Brain Invaders Adaptive versus Non-Adaptive P300 Brain-Computer Interface dataset (BI2013a)","readme":"BrainInvaders2013a\n==================\nP300 dataset BI2013a from a \"Brain Invaders\" experiment.\nDataset Overview\n----------------\n  Code: BrainInvaders2013a\n  Paradigm: p300\n  DOI: https://doi.org/10.5281/zenodo.2669187\n  Subjects: 24\n  Sessions per subject: 8\n  Events: Target=33285, NonTarget=33286\n  Trial interval: [0, 1] s\n  Runs per session: 2\n  File format: mat, csv, gdf\n  Contributing labs: GIPSA-lab\nAcquisition\n-----------\n  Sampling rate: 512.0 Hz\n  Number of channels: 16\n  Channel types: eeg=16\n  Channel names: Fp1, Fp2, F5, AFz, F6, T7, Cz, T8, P7, P3, Pz, P4, P8, O1, Oz, O2\n  Montage: standard_1020\n  Hardware: g.USBamp (g.tec, Schiedlberg, Austria)\n  Software: OpenVibe\n  Reference: left earlobe\n  Ground: FZ\n  Sensor type: wet Silver/Silver Chloride electrodes\n  Line frequency: 50.0 Hz\n  Online filters: no digital filter applied\n  Cap manufacturer: g.tec\n  Cap model: 
g.GAMMAcap\n  Electrode type: wet\n  Electrode material: Silver/Silver Chloride\nParticipants\n------------\n  Number of subjects: 24\n  Health status: healthy\n  Age: mean=25.96, std=4.46, min=20.0, max=30.0\n  Gender distribution: male=12, female=12\n  BCI experience: volunteers recruited via flyers and university mailing list\n  Species: human\nExperimental Protocol\n---------------------\n  Paradigm: p300\n  Task type: visual P300 BCI\n  Number of classes: 2\n  Class labels: Target, NonTarget\n  Study design: compare P300-based BCI with and without adaptive calibration using Riemannian geometry; randomised order of runs (adaptive vs non-adaptive)\n  Feedback type: visual (Brain Invaders video game interface)\n  Stimulus type: visual flashes\n  Stimulus modalities: visual\n  Primary modality: visual\n  Mode: both\n  Training/test split: True\n  Instructions: destroy targets in Brain Invaders BCI video game\n  Stimulus presentation: distance_from_screen=75 to 115 cm, screen=ViewSonic 22 inch, flash_groups=36 symbols distributed in 12 groups\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: p300\nData Structure\n--------------\n  Trials: {'Training_Target': 80, 'Training_non-Target': 400, 'Online': 'variable (depends on user performance)'}\n  Trials context: per_phase\nPreprocessing\n-------------\n  Data state: raw EEG with software tagging via USB (note: tagging introduces jitter and latency)\n  Preprocessing applied: False\n  Notes: Tags sent by application to amplifier through USB port and recorded as supplementary channel; tagging process identical in all experimental conditions\nSignal 
Processing\n-----------------\n  Classifiers: xDAWN, Riemannian, RMDM (Riemannian Minimum Distance to Mean)\n  Feature extraction: Covariance/Riemannian, xDAWN, common spatiotemporal pattern\nCross-Validation\n----------------\n  Evaluation type: cross_session\nPerformance (Original Study)\n----------------------------\n  Balanced Accuracy: used due to unbalanced classes (1:5 ratio Target to non-Target)\nBCI Application\n---------------\n  Applications: gaming\n  Environment: small room (4 square meters) with one-way glass window for experimenter observation\n  Online feedback: True\nTags\n----\n  Pathology: Healthy\n  Modality: Visual\n  Type: Perception\nDocumentation\n-------------\n  Description: EEG recordings of 24 subjects doing a visual P300 Brain-Computer Interface experiment comparing adaptive vs non-adaptive calibration using Riemannian geometry\n  DOI: 10.5281/zenodo.1494163\n  Associated paper DOI: 10.5281/zenodo.2649006\n  License: CC-BY-1.0\n  Investigators: E. Vaineau, A. Barachant, A. Andreev, P. Rodrigues, G. Cattan, M. Congedo\n  Senior author: M. Congedo\n  Institution: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP\n  Address: GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, France\n  Country: FR\n  Repository: Zenodo\n  Data URL: https://doi.org/10.5281/zenodo.1494163\n  Publication year: 2019\n  Ethics approval: Approved by the Ethical Committee of the University of Grenoble Alpes (Comité d'Ethique pour la Recherche Non-Interventionnelle)\n  Keywords: Electroencephalography (EEG), P300, Brain-Computer Interface, Experiment, Adaptive, Calibration\nAbstract\n--------\nThis dataset contains electroencephalographic (EEG) recordings of 24 subjects doing a visual P300 Brain-Computer Interface experiment on PC. The visual P300 is an event-related potential elicited by visual stimulation, peaking 240-600 ms after stimulus onset. 
The experiment was designed to compare the use of a P300-based brain-computer interface with and without adaptive calibration using Riemannian geometry. EEG data were recorded using 16 electrodes during an experiment at GIPSA-lab, Grenoble, France, in 2013.\nMethodology\n-----------\nSubjects participated in sessions with two runs (Non-Adaptive and Adaptive, randomised order). Each run had Training (calibration) and Online phases. In Non-Adaptive mode, the Training data were used to calibrate the MDM classifier for the Online phase. In Adaptive mode, the classifier was initialized with generic class geometric means from a previous experiment and was continuously adapted using a Riemannian method. Brain Invaders interface: 36 symbols in 12 groups, one repetition = 12 flashes (2 Target, 10 non-Target). Training phase: 80 Target and 400 non-Target flashes (fixed). Online phase: variable repetitions based on performance to destroy targets. Subjects were blind to the mode of operation.\nReferences\n----------\nVaineau, E., Barachant, A., Andreev, A., Rodrigues, P. C., Cattan, G. & Congedo, M. (2019). Brain invaders adaptive versus non-adaptive P300 brain-computer interface dataset. arXiv preprint arXiv:1904.09111.\nBarachant A, Congedo M (2014) A Plug & Play P300 BCI using Information Geometry. arXiv:1409.0107.\nCongedo M, Goyat M, Tarrin N, Ionescu G, Varnet L, Rivet B, Phlypo R, Jrad N, Acquadro M, Jutten C (2011) “Brain Invaders”: a prototype of an open-source P300-based video game working with the OpenViBE platform. Proc. IBCI Conf., Graz, Austria, 280-283.\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. 
J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1","2","3","4","5","6","7"],"size_bytes":1842010007,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000264","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:09:45.784752+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-11T21:08:00Z"},"total_files":292,"computed_title":"Vaineau, Barachant & Andreev 2013 — Brain Invaders Adaptive versus Non-Adaptive P300 Brain-Computer Interface dataset (BI2013a)","nchans_counts":[{"val":16,"count":292}],"sfreq_counts":[{"val":512.0,"count":292}],"stats_computed_at":"2026-05-01T13:49:34.646396+00:00","total_duration_s":74278.4296875,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"6b238a5f6bc86b41","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.85,"modality":0.9,"type":0.75},"reasoning":{"few_shot_analysis":"No few-shot example is an exact match for a P300-BCI (Brain Invaders) paradigm, but the closest conventions come from the oddball-style target vs non-target datasets. For example, the Cross-modal Oddball Task (Parkinson’s) uses an oddball structure with salient/rare events, and the DPX cognitive control task (TBI) is labeled as Attention when the construct is target monitoring and control. 
A P300 paradigm similarly relies on attending to rare targets among frequent non-targets, which by convention maps more naturally to an Attention construct than to Motor or Resting-state.","metadata_analysis":"Key facts from the dataset metadata:\n- Population/pathology: explicitly healthy: \"Health status: healthy\" and also \"Tags\\n----\\n  Pathology: Healthy\".\n- Stimulus modality: explicitly visual: \"Task type: visual P300 BCI\", \"Stimulus type: visual flashes\", and \"Primary modality: visual\".\n- Paradigm/construct: classic target detection: \"Paradigm: p300\" with \"Events: Target=33285, NonTarget=33286\" and \"Class labels: Target, NonTarget\"; instructions require attending to targets: \"Instructions: destroy targets in Brain Invaders BCI video game\"; and it is an ERP P300 design: \"The visual P300 is an event-related potential elicited by visual stimulation\".","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\"; \"Tags ... Pathology: Healthy\".\n- Few-shot pattern suggests: when explicitly healthy controls are recruited (e.g., multiple healthy datasets), label as Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"Stimulus type: visual flashes\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\".\n- Few-shot pattern suggests: visual stimulus paradigms map to Visual (e.g., visual discrimination task, visual gambling/learning tasks).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"Paradigm: p300\" with \"Target\" vs \"NonTarget\" flashes and a \"visual P300 BCI\" task where participants must \"destroy targets\".\n- Few-shot pattern suggests: oddball/target-detection paradigms are typically treated as attention-demanding (target monitoring/selection), though some few-shot oddball datasets are labeled under Clinical/Intervention when the clinical cohort is the main focus.\n- Alignment: PARTIAL. 
Metadata even includes a self-tag \"Type: Perception\", but the task mechanics (rare target detection among non-target flashes) more directly match Attention as the cognitive construct. No explicit contradiction (both are plausible), but construct-level mapping favors Attention.","decision_summary":"Top-2 candidate labels with head-to-head comparison:\n\nPathology:\n1) Healthy — Supported by \"Health status: healthy\" and \"Tags ... Pathology: Healthy\" and participants described as \"volunteers\".\n2) Unknown — Only if health status were missing (it is not).\nWinner: Healthy (explicit recruitment/health statement). Confidence justified by 2+ explicit quotes.\n\nModality:\n1) Visual — Supported by \"Stimulus type: visual flashes\", \"Stimulus modalities: visual\", and \"Primary modality: visual\".\n2) Multisensory — Only if another sensory channel were present (none indicated).\nWinner: Visual. Confidence justified by 3 explicit quotes.\n\nType:\n1) Attention — Supported by P300 target vs non-target structure: \"Events: Target..., NonTarget...\", \"Class labels: Target, NonTarget\", and goal-directed target monitoring: \"Instructions: destroy targets\".\n2) Perception — Plausible because stimuli are visual flashes and metadata includes \"Tags ... Type: Perception\".\nWinner: Attention, because the primary construct in P300 paradigms is selective attention/target detection rather than general sensory perception. Confidence moderated because metadata’s own tag suggests Perception as a competing interpretation."}},"canonical_name":null,"name_confidence":0.93,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"BrainInvaders2013"}}