{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cb9","dataset_id":"nm000236","associated_paper_doi":null,"authors":["Grégoire Cattan","Anton Andreev","Pedro Luiz Coelho Rodrigues","Marco Congedo"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":21,"ages":[26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26],"age_min":26,"age_max":26,"age_mean":26.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000236","osf_url":null,"github_url":null,"paper_url":null},"funding":["IHMTEK Company (Interaction Homme-Machine Technologie)"],"ingestion_fingerprint":"aabc8f58f468a3d658c18425f42c94d04aedaa2b754144eb92a494b5104bd789","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Dataset of an EEG-based BCI experiment in Virtual Reality using P300","readme":"# Dataset of an EEG-based BCI experiment in Virtual Reality using P300\nDataset of an EEG-based BCI experiment in Virtual Reality using P300.\n## Dataset Overview\n- **Code**: Cattan2019-VR\n- **Paradigm**: p300\n- **DOI**: https://doi.org/10.5281/zenodo.2605204\n- **Subjects**: 21\n- **Sessions per subject**: 1\n- **Events**: Target=2, NonTarget=1\n- **Trial interval**: [0, 1.0] s\n- **Runs per session**: 60\n- **Session IDs**: PC, VR\n- **File format**: mat, csv\n- **Contributing labs**: GIPSA-lab\n## Acquisition\n- **Sampling rate**: 512.0 Hz\n- **Number of channels**: 16\n- **Channel types**: eeg=16\n- **Channel names**: Fp1, Fp2, Fc5, Fz, Fc6, T7, Cz, T8, P7, P3, Pz, P4, P8, O1, Oz, O2\n- **Montage**: 10-10\n- **Hardware**: g.USBamp (g.tec, Schiedlberg, Austria)\n- **Software**: OpenVibe\n- **Reference**: right earlobe\n- **Ground**: AFZ\n- **Sensor type**: wet electrodes\n- **Line frequency**: 50.0 Hz\n- **Online filters**: no digital filter applied\n- **Cap manufacturer**: EasyCap\n- **Cap model**: EC20\n## Participants\n- **Number of subjects**: 21\n- **Health status**: healthy\n- **Age**: mean=26.38, std=5.78, min=19.0, max=44.0\n- **Gender distribution**: male=14, female=7\n- **BCI experience**: varied gaming experience: some played video games occasionally, some played First Person Shooters; varied VR experience from none to repetitive\n## Experimental Protocol\n- **Paradigm**: p300\n- **Number of classes**: 2\n- **Class labels**: Target, NonTarget\n- **Study design**: randomized session order (PC vs VR); limit eye blinks, head movements and face muscular contractions\n- **Feedback type**: visual\n- **Stimulus type**: flashing white crosses in 6x6 matrix\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Mode**: offline\n- **Training/test split**: False\n- **Instructions**: focus on a red-squared target symbol while groups of six symbols flash\n- **Stimulus presentation**: description=6x6 matrix of white crosses; groups of 6 symbols flash; each symbol flashes exactly 2 times per repetition, platform=Unity engine exported to PC and VR\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: p300\n- **Number of targets**: 1\n- 
**Number of repetitions**: 12\n## Data Structure\n- **Trials**: {'target': 120, 'non_target': 600}\n- **Trials per class**: target=120, non_target=600\n- **Blocks per session**: 12\n- **Trials context**: per session: 12 blocks × 5 repetitions × 12 flashes per repetition (2 target, 10 non-target)\n## Preprocessing\n- **Data state**: raw EEG with software tagging via USB (note: tagging introduces jitter and latency; mean ~38 ms on PC, ~117 ms in VR)\n- **Preprocessing applied**: False\n- **Notes**: mean tagging latency of ~38 ms on PC and ~117 ms in VR due to the different hardware/software setups; these latencies should be used to correct ERPs\n## Signal Processing\n- **Classifiers**: xDAWN, Riemannian\n- **Feature extraction**: Covariance/Riemannian, xDAWN\n## Cross-Validation\n- **Evaluation type**: cross_session\n## BCI Application\n- **Applications**: speller\n- **Environment**: PC and Virtual Reality (VRElegiant HMD with Huawei Ascend Mate 7 smartphone)\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Visual\n- **Type**: Perception\n## Documentation\n- **Description**: EEG recordings of 21 subjects doing a visual P300 experiment on PC and VR to compare BCI performance and user experience\n- **DOI**: 10.5281/zenodo.2605204\n- **Associated paper (HAL)**: hal-02078533v3\n- **License**: CC-BY-4.0\n- **Investigators**: Grégoire Cattan, Anton Andreev, Pedro Luiz Coelho Rodrigues, Marco Congedo\n- **Senior author**: Marco Congedo\n- **Institution**: GIPSA-lab\n- **Department**: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP\n- **Address**: GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, France\n- **Country**: FR\n- **Repository**: Zenodo\n- **Data URL**: https://doi.org/10.5281/zenodo.2605204\n- **Publication year**: 2019\n- **Funding**: IHMTEK Company (Interaction Homme-Machine Technologie)\n- **Ethics approval**: Ethical Committee of the University of Grenoble Alpes (Comité d'Ethique pour la Recherche Non-Interventionnelle)\n- **Acknowledgements**: promoted by the IHMTEK Company\n- **Keywords**: Electroencephalography (EEG), P300, Brain-Computer Interface (BCI), Virtual Reality (VR), experiment\n## Abstract\nDataset contains electroencephalographic recordings on 21 subjects doing a visual P300 experiment on PC and VR. The visual P300 is an event-related potential elicited by visual stimulation, peaking 240–600 ms after stimulus onset. The experiment compares a P300-based BCI on PC vs a VR headset (passive HMD with smartphone) in terms of physiological, subjective and performance aspects. EEG recorded with 16 electrodes. Experiment conducted at GIPSA-lab in 2018.\n## Methodology\nTwo randomized sessions (PC and VR). Each session: 12 blocks of 5 repetitions. Each repetition: 12 flashes of groups of 6 symbols, ensuring each symbol flashes exactly 2 times. The target thus flashes twice per repetition; non-targets flash 10 times. Random feedback given after each repetition (70% expected accuracy). P300 interface: 6x6 matrix of white flashing crosses with red-squared target. VR used passive HMD (VRElegiant) with Huawei Mate 7 smartphone. IMU deactivated to prevent drift. Unity engine used for identical visual stimulation across PC and VR.\n## References\nG. Cattan, A. Andreev, P. L. C. Rodrigues, and M. Congedo (2019). Dataset of an EEG-based BCI experiment in Virtual Reality and on a Personal Computer. Research Report, GIPSA-lab; IHMTEK. https://doi.org/10.5281/zenodo.2605204\n
Appelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0VR","1PC"],"size_bytes":391425366,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000236","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:09:22.433391+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-25T18:25:40Z"},"total_files":2520,"computed_title":"Dataset of an EEG-based BCI experiment in Virtual Reality using P300","nchans_counts":[{"val":16,"count":2520}],"sfreq_counts":[{"val":512.0,"count":2520}],"stats_computed_at":"2026-05-01T13:49:34.645957+00:00","total_duration_s":14757.078125,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"a36af31997032f50","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot by paradigm structure is the Cross-modal Oddball Task example (Parkinson's; Multisensory; Clinical/Intervention), because both use a target vs non-target structure typical of oddball/P300 designs (rare targets among frequent non-targets). That example guides the convention that oddball-like paradigms are categorized by the cognitive construct (often attention to targets), while pathology is determined by recruited population (PD there; healthy here). The DPX Cog Ctl Task example (TBI; Visual; Attention) also supports mapping a visually cued, target/response-driven paradigm to the Type label 'Attention' when the core requirement is maintaining attentional focus to detect task-relevant cues/targets.","metadata_analysis":"Key population facts: (1) \"Health status: healthy\" and (2) \"Dataset contains electroencephalographic recordings on 21 subjects doing a visual P300 experiment\". Key stimulus/modality facts: (1) \"Stimulus type: flashing white crosses in 6x6 matrix\" and (2) \"Stimulus modalities: visual\" / \"Primary modality: visual\". 
Key task/type facts: (1) \"Paradigm: p300\" with \"Events: Target=2, NonTarget=1\" and (2) \"Instructions: focus on a red-squared target symbol while groups of six symbols flash\", indicating a classic attentional oddball/P300 speller setup.","paper_abstract_analysis":"The included abstract in the README reinforces the paradigm purpose as a P300 ERP/BCI comparison: \"Dataset contains electroencephalographic recordings on 21 subjects doing a visual P300 experiment on PC and VR\" and describes the P300 as an ERP \"elicited by visual stimulation\"; this supports a visual attention-to-target interpretation rather than motor or resting-state.","evidence_alignment_check":"Pathology: Metadata says participants are healthy (\"Health status: healthy\"); few-shot patterns do not override this and align with using 'Healthy' when no disorder recruitment is present. Modality: Metadata explicitly states visual stimuli (\"Stimulus modalities: visual\", \"flashing white crosses\"); few-shot conventions align (visual stimulus -> Visual). Type: Metadata describes a P300 target/non-target speller with instructions to attend a target (\"focus on a red-squared target symbol\", \"Target, NonTarget\"); few-shot oddball-like example suggests attentional target detection is central. This aligns best with 'Attention' rather than 'Perception' given the primary demand is selective attention to rare targets in a BCI oddball paradigm.","decision_summary":"Top-2 candidates per category:\n- Pathology: (1) Healthy vs (2) Unknown. Evidence for Healthy: \"Health status: healthy\"; \"Dataset contains ... recordings on 21 subjects\" with no clinical recruitment described. -> Select Healthy (alignment: aligned).\n- Modality: (1) Visual vs (2) Multisensory. Evidence for Visual: \"Stimulus type: flashing white crosses in 6x6 matrix\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\". -> Select Visual (alignment: aligned).\n- Type: (1) Attention vs (2) Perception. Evidence for Attention: P300 oddball structure \"Events: Target=2, NonTarget=1\" and attentional instruction \"focus on a red-squared target symbol while groups of six symbols flash\" (classic target-detection/oddball attention). Evidence for Perception: P300 is an ERP elicited by visual stimulation and could be seen as stimulus processing, but the task goal is not sensory discrimination; it is target detection for BCI/speller performance. -> Select Attention (alignment: aligned).\nConfidence justification: Pathology and Modality have multiple explicit metadata statements. Type has clear task-structure and instruction quotes supporting attention, but 'Perception' remains a plausible runner-up because the paradigm centers on a visually evoked ERP."}},"canonical_name":null,"name_confidence":0.86,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Cattan2019_P300"}}
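Since the README footer says it was generated by MOABB, the dataset is presumably reachable through MOABB's P300 paradigm under the code name shown above (Cattan2019-VR). A minimal loading sketch under that assumption; the class name `Cattan2019_VR` and its availability depend on the installed MOABB version (older releases exposed this dataset as `VirtualReality`), so treat the import as an assumption rather than a confirmed API.

```python
# Sketch: loading this dataset through MOABB, assuming it is exposed as
# Cattan2019_VR (per the "Code: Cattan2019-VR" entry in the README).
from moabb.datasets import Cattan2019_VR
from moabb.paradigms import P300

dataset = Cattan2019_VR()
paradigm = P300()

# X: (n_trials, n_channels, n_samples) epochs; y: Target/NonTarget labels;
# meta: per-trial subject/session/run info (useful for cross-session splits).
X, y, meta = paradigm.get_data(dataset=dataset, subjects=[1])
```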
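The preprocessing notes state that the software-tagging latencies (mean ~38 ms on PC, ~117 ms in VR) should be used to correct ERPs. A minimal sketch of that correction with MNE-Python, assuming events come from a stim channel and that the recorded tag lags the true stimulus onset (so events are shifted earlier); the file path and `find_events` usage are illustrative, while the event codes (Target=2, NonTarget=1), the [0, 1.0] s trial window, and the latency values come from the metadata above.

```python
# Sketch: compensate for software-tagging latency before epoching.
# Latencies come from the dataset notes; the direction of the shift
# (events moved EARLIER because tags lag the stimulus) is an assumption
# consistent with "these latencies should be used to correct ERPs".
import mne

TAG_LATENCY_S = {"PC": 0.038, "VR": 0.117}  # mean tagging latency per session

def latency_corrected_events(raw: mne.io.BaseRaw, session: str):
    """Find stim events and shift their onsets back by the tagging latency."""
    events = mne.find_events(raw)  # assumes a stim channel in the recording
    shift = int(round(TAG_LATENCY_S[session] * raw.info["sfreq"]))  # 512 Hz here
    events[:, 0] -= shift
    return events

# Hypothetical usage on one converted recording:
# raw = mne.io.read_raw_fif("sub-01_ses-VR_eeg.fif", preload=True)
# events = latency_corrected_events(raw, "VR")
# epochs = mne.Epochs(raw, events, event_id={"NonTarget": 1, "Target": 2},
#                     tmin=0.0, tmax=1.0, baseline=None, preload=True)
```

Note that only the mean latencies are documented; the jitter around them cannot be removed this way, so single-trial timing remains approximate.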
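The Signal Processing entries name xDAWN and Riemannian/covariance features. One common realization of that pairing is pyriemann's `XdawnCovariances` feeding a minimum-distance-to-mean classifier; the sketch below is illustrative, not the authors' exact configuration: the `nfilter` value, the shrinkage estimator, and the split scheme are assumptions, with the grouped split standing in for the metadata's cross_session evaluation.

```python
# Sketch: "xDAWN + Riemannian" read as an XdawnCovariances -> MDM pipeline.
# X: epochs array of shape (n_trials, 16 channels, n_samples);
# y: "Target"/"NonTarget" labels; session_labels: one session id per trial.
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from pyriemann.estimation import XdawnCovariances
from pyriemann.classification import MDM

clf = make_pipeline(
    XdawnCovariances(nfilter=4, estimator="lwf"),  # xDAWN filters + shrunk covariances
    MDM(metric="riemann"),                         # minimum distance to mean on the manifold
)

# Cross-session evaluation (per the metadata): hold one session out via groups.
# scores = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=2),
#                          groups=session_labels, scoring="balanced_accuracy")
```

Balanced accuracy is used in the commented example because the class ratio is 120 target to 600 non-target trials, which makes plain accuracy misleading.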