{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cac","dataset_id":"nm000215","associated_paper_doi":null,"authors":["Louis Korczowski","Ekaterina Ostaschenko","Anton Andreev","Grégoire Cattan","Pedro Luiz Coelho Rodrigues","Violette Gautheret","Marco Congedo"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":38,"ages":[24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24],"age_min":24,"age_max":24,"age_mean":24.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000215","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"6b2f286b8398d2e89076d828eb8e9eeb80f7d1eb4d47b22a38cc5c2efd9ddf96","license":"CC-BY-4.0","n_contributing_labs":null,"name":"P300 dataset BI2014b from a \"Brain Invaders\" experiment","readme":"# P300 dataset BI2014b from a \"Brain Invaders\" experiment\nP300 dataset BI2014b from a \"Brain Invaders\" experiment.\n## Dataset Overview\n- **Code**: BrainInvaders2014b\n- **Paradigm**: p300\n- **DOI**: https://doi.org/10.5281/zenodo.3267301\n- **Subjects**: 38\n- **Sessions per subject**: 1\n- **Events**: Target=2, NonTarget=1\n- **Trial interval**: [0, 1] s\n- **File format**: mat and csv\n## Acquisition\n- **Sampling rate**: 512.0 Hz\n- **Number of channels**: 32\n- **Channel types**: eeg=32\n- **Channel names**: Fp1, Fp2, AFz, F7, F3, F4, F8, FC5, FC1, FC2, FC6, T7, C3, Cz, C4, T8, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, PO7, O1, Oz, O2, PO8, PO9, PO10\n- **Montage**: standard_1010\n- **Hardware**: g.USBamp (g.tec, Schiedlberg, Austria)\n- **Software**: OpenVibe\n- **Reference**: right earlobe\n- **Ground**: Fz\n- **Sensor type**: wet electrodes\n- **Line frequency**: 50.0 Hz\n- **Cap 
manufacturer**: g.tec\n- **Cap model**: g.GAMMAcap\n- **Electrode type**: wet\n- **Electrode material**: Ag/AgCl\n## Participants\n- **Number of subjects**: 38\n- **Health status**: healthy\n- **Age**: mean=24.1, std=3.09\n- **Gender distribution**: male=24, female=14\n- **BCI experience**: not naïve users - selected on the basis of their individual score during a preliminary session of Brain Invaders\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: p300\n- **Task type**: oddball\n- **Number of classes**: 2\n- **Class labels**: Target, NonTarget\n- **Study design**: multi-user/hyperscanning experiment with three randomized conditions (Solo1, Solo2, Collaboration). Subjects played in pairs. Solo conditions used a control design in which the non-playing participant focused on an unanimated cross to prevent stimulus observation while EEG was recorded (to correct for fake inter-brain synchrony).\n- **Study domain**: inter-brain synchrony in collaborative BCI\n- **Feedback type**: visual\n- **Stimulus type**: visual flashes\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: online\n- **Training/test split**: False\n- **Instructions**: destroy the target alien symbol as fast as possible. Up to eight attempts per level. 
If all attempts missed, the level restarted.\n- **Stimulus presentation**: repetition_structure=12 flashes per repetition of pseudo-random groups of 6 symbols, such that each symbol flashes exactly twice per repetition, target_ratio=1:5 (Target vs Non-Target), flash_groups=6 rows and 6 columns (pseudo-random groups, not physical arrangement), animation=aliens slowly and regularly moved according to a predefined path with constant inter-distance\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: p300\n- **Number of targets**: 1\n## Data Structure\n- **Trials**: variable per session (9 levels, up to 8 attempts per level)\n- **Blocks per session**: 9\n- **Block duration**: variable, average ~33 seconds per level (~5 minutes total for 9 levels)\n- **Trials context**: 9 levels per game session, each with a unique predefined spatial configuration of 36 aliens. Up to 8 attempts to destroy the target per level.\n## Preprocessing\n- **Data state**: raw EEG with no digital filter applied, experimental tags synchronized using a USB analog-to-digital converter to reduce jitter\n- **Preprocessing applied**: False\n- **Notes**: Experimental tags produced by Brain Invaders 2 were synchronized with EEG signals using a USB analog-to-digital converter connected to the g.USBamp trigger channel. 
This tagging procedure ensures consistent tagging latency and jitter.\n## Signal Processing\n- **Classifiers**: RMDM (Riemannian Minimum Distance to Mean), Riemannian\n- **Feature extraction**: Covariance/Riemannian\n## Cross-Validation\n- **Evaluation type**: cross_session\n## Performance (Original Study)\n- **Classifier**: real-time adaptive RMDM classifier (calibration-free procedure)\n## BCI Application\n- **Applications**: gaming\n- **Environment**: small room with 24-inch screen, subjects sitting side by side at ~125 cm distance, experimenter in adjacent room with one-way glass window\n- **Online feedback**: True\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Visual\n- **Type**: Perception\n## Documentation\n- **Description**: EEG recordings of 38 subjects playing in pairs on the multi-user version of the Brain Invaders P300-based BCI. Contains three conditions: Solo1, Solo2, and Collaboration.\n- **DOI**: 10.5281/zenodo.3267301\n- **Associated paper DOI**: hal-02173958\n- **License**: CC-BY-4.0\n- **Investigators**: Louis Korczowski, Ekaterina Ostaschenko, Anton Andreev, Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Violette Gautheret, Marco Congedo\n- **Senior author**: Marco Congedo\n- **Institution**: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP\n- **Address**: GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, France\n- **Country**: FR\n- **Repository**: Zenodo\n- **Data URL**: https://doi.org/10.5281/zenodo.3267301\n- **Publication year**: 2019\n- **Ethics approval**: Ethical Committee of the University of Grenoble Alpes (Comité d'Ethique pour la Recherche Non-Interventionnelle)\n- **Acknowledgements**: At the end of the experiment, two cinema tickets were offered to each subject, for a total value of 15 euros per subject.\n- **Keywords**: Electroencephalography (EEG), P300, Brain-Computer Interface (BCI), Experiment, Collaboration, Multi-User, Hyperscanning\n## Abstract\nWe describe the experimental procedures for a dataset 
containing electroencephalographic (EEG) recordings of 38 subjects playing in pairs on the multi-user version of a visual P300-based Brain-Computer Interface (BCI) named Brain Invaders. The interface uses the oddball paradigm on a grid of 36 symbols (1 Target, 35 Non-Target) that are flashed pseudo-randomly to elicit a P300 response. EEG data were recorded using 32 active wet electrodes per subject (total: 64 electrodes) during three randomized conditions (Solo1, Solo2, Collaboration). The experiment took place at GIPSA-lab, Grenoble, France, in 2014.\n## Methodology\nMulti-user hyperscanning P300 BCI experiment designed to study inter-brain synchrony. Participants played Brain Invaders 2 in three conditions: Solo1 (player1 plays, player2 watches a cross), Solo2 (roles reversed), and Collaboration (4 game sessions with both players). Each game session consisted of 9 levels with predefined alien configurations. A repetition used 12 flashes of pseudo-random groups of 6 symbols, ensuring each symbol flashed twice per repetition (1:5 Target:Non-Target ratio). A real-time adaptive RMDM classifier provided online feedback. The control condition (non-playing participant) allowed correction for fake inter-brain synchrony.\n## References\nKorczowski, L., Ostaschenko, E., Andreev, A., Cattan, G., Rodrigues, P. L. C., Gautheret, V., & Congedo, M. (2019). Brain Invaders Solo versus Collaboration: Multi-User P300-Based Brain-Computer Interface Dataset (BI2014b). https://hal.archives-ouvertes.fr/hal-02173958\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. 
J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":421326813,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000215","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:09:07.106070+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-24T05:39:21Z"},"total_files":38,"computed_title":"P300 dataset BI2014b from a \"Brain Invaders\" experiment","nchans_counts":[{"val":32,"count":38}],"sfreq_counts":[{"val":512.0,"count":38}],"stats_computed_at":"2026-05-01T13:49:34.645780+00:00","total_duration_s":8505.23828125,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"58146a9225865bce","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.9,"modality":0.95,"type":0.8},"reasoning":{"few_shot_analysis":"Closest few-shot conventions: (1) The DPX cognitive control dataset (TBI) is labeled Type=Attention, showing that when the task is about detecting/responding to task-relevant cues/targets under control demands, the catalog uses the Attention label. (2) The schizophrenia visual discrimination example is labeled Type=Perception, showing that simple sensory discrimination maps to Perception. 
For the present dataset, the paradigm is explicitly P300/oddball target detection (a classic attentional ERP/BCI paradigm), which aligns more with the Attention convention than pure Perception.","metadata_analysis":"Key facts from metadata: (1) Population is normative: \"Health status: healthy\" and also \"Subjects: 38\" with no clinical recruitment indicated. (2) Paradigm and task: \"Paradigm: p300\" and \"Task type: oddball\" with \"Events: Target=2, NonTarget=1\". (3) Stimulus channel is visual: \"Stimulus type: visual flashes\", \"Stimulus modalities: visual\", and \"Primary modality: visual\". (4) Task goal requires attending to a target: \"destroy the target alien symbol as fast as possible\" and the abstract notes \"oddball paradigm on a grid of 36 symbols (1 Target, 35 Non-Target) that are flashed ... to elicit a P300 response\".","paper_abstract_analysis":"No separate paper abstract was provided beyond the dataset's included Abstract text. That abstract reinforces the attentional oddball/P300 framing: \"visual P300-based Brain-Computer Interface (BCI)\" and \"oddball paradigm ... (1 Target, 35 Non-Target)\".","evidence_alignment_check":"Pathology: Metadata says \"Health status: healthy\" (and the readme tags also list \"Pathology: Healthy\"); few-shot pattern for non-clinical student/volunteer cohorts maps to Healthy; ALIGN.\nModality: Metadata says \"Stimulus type: visual flashes\" and \"Primary modality: visual\"; few-shot pattern maps visual-dot discrimination and other screen-based paradigms to Visual; ALIGN.\nType: Metadata says \"Paradigm: p300\" and \"Task type: oddball\" with explicit Target vs NonTarget detection to elicit P300; few-shot patterns suggest two plausible mappings: Perception for discrimination tasks (schizophrenia moving-dots example) vs Attention for cue/target monitoring/control tasks (DPX example). 
Given explicit P300/oddball target-detection framing, the Attention mapping better fits; mostly ALIGN (no explicit conflict).","decision_summary":"Top-2 candidates:\n- Pathology: Healthy (winner) vs Unknown (runner-up). Winner evidence: \"Health status: healthy\"; also readme \"Participants\" section lists healthy demographics; and tags include \"Pathology: Healthy\".\n- Modality: Visual (winner) vs Multisensory (runner-up). Winner evidence: \"Stimulus type: visual flashes\", \"Stimulus modalities: visual\", \"Primary modality: visual\".\n- Type: Attention (winner) vs Perception (runner-up). Attention evidence: \"Paradigm: p300\", \"Task type: oddball\", \"Events: Target... NonTarget...\", and abstract describing an \"oddball paradigm\" to elicit \"P300\" (classically attention-related). Perception is plausible because it is visual target detection, but the P300/oddball emphasis and BCI target selection aligns more strongly with Attention.\nConfidence justification: Pathology and Modality have multiple explicit metadata statements. Type has multiple explicit task/paradigm statements, but Attention vs Perception remains a close conceptual alternative, lowering confidence slightly."}},"canonical_name":null,"name_confidence":0.82,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Korczowski2014_P300"}}