{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cad","dataset_id":"nm000216","associated_paper_doi":null,"authors":["Louis Korczowski","Martine Cederhout","Anton Andreev","Grégoire Cattan","Pedro Luiz Coelho Rodrigues","Violette Gautheret","Marco Congedo"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":43,"ages":[23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23],"age_min":23,"age_max":23,"age_mean":23.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000216","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"d198ca01cc5b3ddba48f3c1140b9e25e963b2f37cb5d58865b72a773e4632c74","license":"CC-BY-4.0","n_contributing_labs":null,"name":"P300 dataset BI2015a from a \"Brain Invaders\" experiment","readme":"# P300 dataset BI2015a from a \"Brain Invaders\" experiment\nP300 dataset BI2015a from a \"Brain Invaders\" experiment.\n## Dataset Overview\n- **Code**: BrainInvaders2015a\n- **Paradigm**: p300\n- **DOI**: https://doi.org/10.5281/zenodo.3266929\n- **Subjects**: 43\n- **Sessions per subject**: 3\n- **Events**: Target=2, NonTarget=1\n- **Trial interval**: [0, 1] s\n- **File format**: mat and csv\n## Acquisition\n- **Sampling rate**: 512.0 Hz\n- **Number of channels**: 32\n- **Channel types**: eeg=32\n- **Channel names**: Fp1, Fp2, AFz, F7, F3, F4, F8, FC5, FC1, FC2, FC6, T7, C3, Cz, C4, T8, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, PO7, O1, Oz, O2, PO8, PO9, PO10\n- **Montage**: 10-10\n- **Hardware**: g.USBamp (g.tec, Schiedlberg, Austria)\n- **Software**: OpenVibe\n- **Reference**: right earlobe\n- **Ground**: Fz\n- **Sensor type**: wet electrodes\n- **Line frequency**: 50.0 Hz\n- **Online 
filters**: no digital filter applied\n- **Cap manufacturer**: g.tec\n- **Cap model**: g.GAMMAcap\n- **Electrode type**: wet\n- **Electrode material**: Silver/Silver Chloride\n## Participants\n- **Number of subjects**: 43\n- **Health status**: healthy\n- **Age**: mean=23.7, std=3.19\n- **Gender distribution**: male=31, female=12\n- **BCI experience**: mostly students and young researchers\n## Experimental Protocol\n- **Paradigm**: p300\n- **Task type**: target detection\n- **Number of classes**: 2\n- **Class labels**: Target, NonTarget\n- **Study design**: calibration-less P300-based BCI with modulation of flash duration; three game sessions (9 levels each) with different flash durations (110ms, 80ms, 50ms); resting state and eyes closed recorded before and after sessions; subjects instructed to limit eye blinks, head movements and face muscular contractions\n- **Feedback type**: visual (game interface with real-time adaptive Riemannian RMDM classifier)\n- **Stimulus type**: oddball paradigm on grid of 36 symbols (1 Target, 35 Non-Target) flashed pseudo-randomly\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: online\n- **Training/test split**: False\n- **Instructions**: destroy target symbol within 8 attempts; aliens move slowly and regularly according to predefined path to maintain attention\n- **Stimulus presentation**: SoftwareName=OpenViBE\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: p300\n- **Number of targets**: 1\n- **Number of repetitions**: 12\n## Data Structure\n- **Trials**: variable per subject (up to 8 attempts per level, 9 levels per session, 3 
sessions)\n- **Blocks per session**: 3\n- **Trials context**: 9 levels per session with variable duration (average ~5 minutes per session, max 10 minutes)\n## Preprocessing\n- **Data state**: raw EEG with synchronized USB tagging (reduced jitter using USB digital-to-analog converter)\n- **Preprocessing applied**: False\n- **Notes**: no digital filter applied during acquisition; tags synchronized with EEG signals to reduce jitter; consistent tagging latency across Brain Invaders databases\n## Signal Processing\n- **Classifiers**: Riemannian Minimum Distance to Mean (RMDM), adaptive\n- **Feature extraction**: Covariance/Riemannian\n## Cross-Validation\n- **Evaluation type**: cross_session\n## BCI Application\n- **Applications**: gaming\n- **Environment**: small room (4 square meters) with 24 inch screen\n- **Online feedback**: True\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Visual\n- **Type**: Perception\n## Documentation\n- **DOI**: 10.5281/zenodo.3266930\n- **Associated paper DOI**: hal-02172347\n- **License**: CC-BY-4.0\n- **Investigators**: Louis Korczowski, Martine Cederhout, Anton Andreev, Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Violette Gautheret, Marco Congedo\n- **Senior author**: Marco Congedo\n- **Institution**: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP\n- **Address**: GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, France\n- **Country**: FR\n- **Repository**: Zenodo\n- **Data URL**: https://doi.org/10.5281/zenodo.3266930\n- **Publication year**: 2019\n- **Ethics approval**: Ethical Committee of the University of Grenoble Alpes (Comité d'Ethique pour la Recherche Non-Interventionnelle)\n- **How to acknowledge**: Korczowski, L., Cederhout, M., Andreev, A., Cattan, G., Rodrigues, P.L.C., Gautheret, V., Congedo, M. (2019). Brain Invaders calibration-less P300-based BCI with modulation of flash duration Dataset (bi2015a). 
Technical Report, GIPSA-lab.\n- **Keywords**: Electroencephalography (EEG), P300, Brain-Computer Interface, Experiment\n## Abstract\nThis dataset contains electroencephalographic (EEG) recordings of 50 subjects playing a visual P300 Brain-Computer Interface (BCI) videogame named Brain Invaders. The interface uses the oddball paradigm on a grid of 36 symbols (1 Target, 35 Non-Target) that are flashed pseudo-randomly to elicit the P300 response. EEG data were recorded using 32 active wet electrodes under three conditions: flash duration 50ms, 80ms or 110ms. The experiment took place at GIPSA-lab, Grenoble, France, in 2015.\n## Methodology\nThe experiment was designed to study the influence of the flash duration on a calibration-less P300-based BCI system with wet electrodes, and served as a screening session for potential candidates for a broader multi-user BCI study. The visual P300 is an event-related potential (ERP) elicited by an expected but unpredictable target visual stimulus (oddball paradigm), with peak amplitude 240-600 ms after stimulus onset. During the experiment, the output of a real-time adaptive Riemannian Minimum Distance to Mean (RMDM) classifier was used to assess the participants' commands. This scheme allows calibration-free classification. Before and after the three game sessions, around one minute of resting-state and eyes-closed conditions was recorded. The interface of Brain Invaders is composed of 36 aliens. In the Brain Invaders P300 paradigm, a repetition is composed of 12 flashes of pseudo-random groups of six symbols, chosen such that after each repetition every symbol has flashed exactly twice. A game session comprised nine levels, each consisting of a unique and predefined configuration of the 36 symbols of the interface. 
Aliens moved slowly and regularly along a predefined path, keeping the distance between adjacent aliens constant to maintain the player's attention throughout the experiment.\n## References\nKorczowski, L., Cederhout, M., Andreev, A., Cattan, G., Rodrigues, P. L. C., Gautheret, V., & Congedo, M. (2019). Brain Invaders calibration-less P300-based BCI with modulation of flash duration Dataset (BI2015a). https://hal.archives-ouvertes.fr/hal-02172347\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1","2"],"size_bytes":2080767476,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000216","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:09:07.366417+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-24T05:48:11Z"},"total_files":129,"computed_title":"P300 dataset BI2015a from a \"Brain Invaders\" experiment","nchans_counts":[{"val":32,"count":129}],"sfreq_counts":[{"val":512.0,"count":129}],"stats_computed_at":"2026-05-01T13:49:34.645792+00:00","total_duration_s":41990.935546875,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"f0362e31af184154","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot by paradigm family (oddball / target vs non-target detection): the Parkinson’s “Cross-modal Oddball Task” example shows how oddball-style target detection is labeled with Modality based on stimulus channels (it uses both visual+auditory → “Multisensory”) and Pathology based on recruitment (PD → “Parkinson’s”). For Type, few-shots show that when the dataset’s main goal is eliciting/characterizing sensory-evoked responses to stimuli (e.g., “Subcortical responses to music and speech…”), Type is mapped to “Perception”. 
This guides labeling the Brain Invaders P300 (visual oddball target vs non-target) as Visual modality and Perception type, with Healthy participants.","metadata_analysis":"Key metadata facts:\n- Population: “Health status: healthy” and “Participants… mostly students and young researchers” and “Number of subjects: 43”.\n- Stimulus modality & paradigm: “Paradigm: p300”, “Task type: target detection”, and “Stimulus type: oddball paradigm on grid of 36 symbols (1 Target, 35 Non-Target) flashed pseudo-randomly”. Also explicitly: “Stimulus modalities: visual” and “Primary modality: visual”.\n- Research framing: “visual P300 Brain-Computer Interface (BCI) videogame named Brain Invaders” and “visual P300 is an event-related potential (ERP) elicited by… target visual stimulation (oddball paradigm)”.\n- There is also a brief rest recording (“resting state and eyes closed recorded before and after sessions”), but the dataset is primarily a task-based P300/oddball BCI dataset.","paper_abstract_analysis":"No useful paper abstract beyond the dataset’s own abstract/methodology text provided in the metadata.","evidence_alignment_check":"Pathology:\n- Metadata says: “Health status: healthy”.\n- Few-shot pattern suggests: recruitment diagnosis rules dominate (e.g., PD oddball → Parkinson’s; schizophrenia visual task → Schizophrenia/Psychosis). 
For this dataset, no clinical recruitment is described.\n- ALIGNMENT: aligns → Healthy.\n\nModality:\n- Metadata says: “Stimulus modalities: visual” and “Primary modality: visual” and “grid of 36 symbols… flashed”.\n- Few-shot pattern suggests: modality is the stimulus channel (e.g., music/speech dataset → Auditory; braille letters → Tactile; cross-modal oddball → Multisensory).\n- ALIGNMENT: aligns → Visual.\n\nType:\n- Metadata says: “Paradigm: p300”, “Task type: target detection”, and describes an “oddball paradigm… flashed… to elicit the P300 response”.\n- Few-shot pattern suggests: stimulus-driven detection/discrimination tasks are labeled “Perception” (e.g., auditory stimuli response characterization → Perception; visual discrimination task → Perception). Oddball tasks can sometimes be construed as “Attention”, but the dataset description emphasizes P300 elicitation/BCI target detection rather than sustained attention as the primary construct.\n- ALIGNMENT: mostly aligns with Perception; minor ambiguity vs Attention due to “target detection” framing.","decision_summary":"Top-2 candidates per category with head-to-head selection:\n\nPathology:\n1) Healthy — Evidence: “Health status: healthy”; participants described as “mostly students and young researchers”; no patient group mentioned.\n2) Unknown — Would apply if health status were not stated.\nWinner: Healthy. (Alignment: aligned)\nConfidence notes: supported by multiple explicit population statements.\n\nModality:\n1) Visual — Evidence: “Stimulus modalities: visual”; “Primary modality: visual”; “symbols… flashed pseudo-randomly”.\n2) Multisensory — Only if multiple stimulus channels were present; none indicated beyond visual feedback.\nWinner: Visual. 
(Alignment: aligned)\nConfidence notes: multiple explicit modality statements.\n\nType:\n1) Perception — Evidence: “Task type: target detection”; “oddball paradigm… flashed… to elicit the P300 response”; P300 is a stimulus-evoked ERP to target vs non-target visual events.\n2) Attention — Plausible because oddball/target detection involves attentional selection, and instructions mention maintaining attention (“to maintain attention”).\nWinner: Perception (stronger because the primary described aim is eliciting/using the visual P300 via oddball target detection for BCI operation rather than studying attention as the main construct). (Alignment: mostly aligned; minor ambiguity vs Attention)\nConfidence notes: direct task/paradigm quotes support Perception, but some attentional language keeps confidence below 0.9."}},"canonical_name":null,"name_confidence":0.83,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Korczowski2015_P300"}}