{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4c92","dataset_id":"nm000189","associated_paper_doi":null,"authors":["Martijn Schreuder","Thomas Rost","Michael Tangermann"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":10,"ages":[34,34,34,34,34,34,34,34,34,34],"age_min":34,"age_max":34,"age_mean":34.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000189","osf_url":null,"github_url":null,"paper_url":null},"funding":["European ICT Programme Project FP7-224631","European ICT Programme Project FP7-216886","Deutsche Forschungsgemeinschaft (DFG MU 987/3-2)","Bundesministerium fur Bildung und Forschung (BMBF FKZ 01IB001A, 01GQ0850)","FP7-ICT PASCAL2 Network of Excellence ICT-216886"],"ingestion_fingerprint":"d10cbc662eed3a5add1bf163385ca686dc54bedda6bc76ca1c254e4c4f208c5a","license":"CC-BY-NC-ND-4.0","n_contributing_labs":null,"name":"BNCI 2015-003 P300 dataset","readme":"# BNCI 2015-003 P300 dataset\nBNCI 2015-003 P300 dataset.\n## Dataset Overview\n- **Code**: BNCI2015-003\n- **Paradigm**: p300\n- **DOI**: 10.1016/j.neulet.2009.06.045\n- **Subjects**: 10\n- **Sessions per subject**: 1\n- **Events**: Target=2, NonTarget=1\n- **Trial interval**: [0, 0.8] s\n- **Runs per session**: 2\n- **Session IDs**: Session 1, Session 2\n- **File format**: gdf\n- **Data preprocessed**: True\n- **Number of contributing labs**: 1\n## Acquisition\n- **Sampling rate**: 256.0 Hz\n- **Number of channels**: 8\n- **Channel types**: eeg=8\n- **Channel names**: Fz, Cz, P3, Pz, P4, PO7, Oz, PO8\n- **Montage**: standard_1005\n- **Hardware**: BrainAmp\n- **Software**: Matlab\n- **Reference**: nose\n- **Sensor type**: Ag/AgCl electrodes\n- **Line frequency**: 50.0 Hz\n- **Online filters**: hardware analog band-pass filter between 0.1 
and 250 Hz\n- **Impedance threshold**: 15.0 kOhm\n- **Cap manufacturer**: Brain Products\n- **Electrode type**: Ag/AgCl\n- **Electrode material**: silver/silver chloride\n- **Auxiliary channels**: EOG (2 ch, bipolar)\n## Participants\n- **Number of subjects**: 10\n- **Health status**: healthy\n- **Clinical population**: Healthy\n- **Age**: mean=34.1, std=11.4, min=20, max=57\n- **BCI experience**: naive\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: p300\n- **Task type**: auditory_oddball\n- **Number of classes**: 2\n- **Class labels**: Target, NonTarget\n- **Tasks**: spelling, auditory_attention\n- **Study design**: Auditory Multi-class Spatial ERP (AMUSE) paradigm using spatial auditory cues from six speaker locations in azimuth plane. Two-step hex-o-spell-like interface for character selection. Subjects mentally count target stimuli from one of six spatial directions.\n- **Study domain**: communication\n- **Feedback type**: auditory\n- **Stimulus type**: spatial_auditory\n- **Stimulus modalities**: auditory\n- **Primary modality**: auditory\n- **Synchronicity**: synchronous\n- **Mode**: online\n- **Training/test split**: True\n- **Instructions**: Focus attention to one target direction and mentally count the number of appearances\n- **Stimulus presentation**: soa_ms=175, stimulus_duration_ms=40, stimulus_intensity_db=58, speaker_arrangement=6 speakers at ear height, evenly distributed in circle with 60° distance, radius 65 cm\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: p300\n- **Number of targets**: 6\n- **Stimulus onset asynchrony**: 175.0 ms\n## Data Structure\n- **Trials**: 48\n- **Trials per class**: 
calibration_per_direction=8\n- **Trials context**: calibration_phase\n## Preprocessing\n- **Data state**: filtered\n- **Preprocessing applied**: True\n- **Steps**: low-pass filter, downsampling, baselining\n- **Highpass filter**: 0.1 Hz\n- **Lowpass filter**: 40.0 Hz\n- **Bandpass filter**: {'low_cutoff_hz': 0.1, 'high_cutoff_hz': 40.0}\n- **Filter type**: analog hardware filter for acquisition; low-pass for online\n- **Artifact methods**: variance criterion, peak-to-peak difference criterion\n- **Re-reference**: nose\n- **Downsampled to**: 100.0 Hz\n- **Epoch window**: [-0.15, None]\n- **Notes**: For online use, the signal was low-pass filtered below 40 Hz and downsampled to 100 Hz. Data baselined using 150 ms pre-stimulus data as reference.\n## Signal Processing\n- **Classifiers**: LDA, linear binary classifier\n- **Feature extraction**: spatio-temporal features, r2 coefficient, interval averaging\n- **Spatial filters**: shrinkage regularization (Ledoit-Wolf)\n## Cross-Validation\n- **Method**: online\n- **Evaluation type**: online\n## Performance (Original Study)\n- **Accuracy**: 77.4%\n- **Itr**: 2.84 bits/min\n- **Char Per Min Session1**: 0.59\n- **Char Per Min Session2 Max**: 1.41\n- **Char Per Min Session2 Avg**: 0.94\n- **Itr Session2 Avg**: 5.26\n- **Itr Session2 Max**: 7.55\n- **Success Rate Session1**: 76.0\n## BCI Application\n- **Applications**: speller, communication\n- **Environment**: laboratory\n- **Online feedback**: True\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Auditory\n- **Type**: ERP, P300\n## Documentation\n- **Description**: Auditory BCI speller using spatial cues (AMUSE paradigm) allowing purely auditory communication interface\n- **DOI**: 10.1016/j.neulet.2009.06.045\n- **Associated paper DOI**: 10.3389/fnins.2011.00112\n- **License**: CC-BY-NC-ND-4.0\n- **Investigators**: Martijn Schreuder, Thomas Rost, Michael Tangermann\n- **Senior author**: Michael Tangermann\n- **Contact**: schreuder@tu-berlin.de\n- **Institution**: Berlin 
Institute of Technology\n- **Department**: Machine Learning Laboratory\n- **Address**: Machine Learning Laboratory, Berlin Institute of Technology, FR6-9, Franklinstraße 28/29, 10587 Berlin, Germany\n- **Country**: Germany\n- **Repository**: BNCI Horizon\n- **Publication year**: 2011\n- **Funding**: European ICT Programme Project FP7-224631; European ICT Programme Project FP7-216886; Deutsche Forschungsgemeinschaft (DFG MU 987/3-2); Bundesministerium fur Bildung und Forschung (BMBF FKZ 01IB001A, 01GQ0850); FP7-ICT PASCAL2 Network of Excellence ICT-216886\n- **Ethics approval**: Ethics Committee of the Charité University Hospital\n- **Acknowledgements**: Thomas Denck, David List and Larissa Queda for help with experiments. Klaus-Robert Müller and Benjamin Blankertz for fruitful discussions.\n- **Keywords**: brain-computer interface, directional hearing, auditory event-related potentials, P300, N200, dynamic subtrials\n## External Links\n- **Source**: http://www.frontiersin.org/neuroprosthetics/10.3389/fnins.2011.00112/abstract\n## Abstract\nThis online study introduces an auditory spelling interface that eliminates the necessity for visual representation. In up to two sessions, a group of healthy subjects (N=21) was asked to use a text entry application, utilizing the spatial cues of the AMUSE paradigm (Auditory Multi-class Spatial ERP). The speller relies on the auditory sense both for stimulation and the core feedback. Without prior BCI experience, 76% of the participants were able to write a full sentence during the first session. By exploiting the advantages of a newly introduced dynamic stopping method, a maximum writing speed of 1.41 char/min (7.55 bits/min) could be reached during the second session (average: 0.94 char/min, 5.26 bits/min).\n## Methodology\nParticipants surrounded by six speakers at ear height in circle (60° spacing, 65 cm radius). 
Each direction associated with unique combination of tone (base frequency + harmonics) and band-pass filtered noise. Two-step hex-o-spell interface for character selection. Session 1: calibration (48 trials, 8 per direction, 15 iterations each) followed by online spelling with 15 fixed iterations. Session 2: calibration followed by online spelling with dynamic stopping method (4-15 iterations). Spatio-temporal feature extraction using r2 coefficient and interval selection (2-4 intervals for early and late components, 112-224 features total). Linear binary classifier with shrinkage regularization (Ledoit-Wolf). Decision making based on median classifier scores across iterations.\n## References\nSchreuder, M., Rost, T., & Tangermann, M. (2011). Listen, you are writing! Speeding up online spelling with a dynamic auditory BCI. Frontiers in Neuroscience, 5, 112. https://doi.org/10.3389/fnins.2011.00112\nNotes\n.. note::\n``BNCI2015_003`` was previously named ``BNCI2015003``. ``BNCI2015003`` will be removed in version 1.1.\n.. versionadded:: 0.4.0\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":22825131,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000189","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:08:53.051731+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-30T13:04:33Z"},"total_files":20,"computed_title":"BNCI 2015-003 P300 dataset","nchans_counts":[{"val":8,"count":40}],"sfreq_counts":[{"val":256.0,"count":40}],"stats_computed_at":"2026-05-01T13:49:34.645456+00:00","total_duration_s":6726.2421875,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"1a90be416c83feeb","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Attention"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot paradigm-wise is the “Cross-modal Oddball Task” example (oddball with standard vs oddball/target events). That example illustrates the convention that oddball-style ERP datasets are labeled by the dominant stimulus modality (there: multisensory due to simultaneous visual+auditory pre-cues) and the research aim/type is not the button response but the cognitive construct emphasized (there: clinical biomarkers in PD → Clinical/Intervention). Another relevant convention comes from the “EEG: DPX Cog Ctl Task in Acute Mild TBI” example where a target/non-target, control-demanding paradigm is categorized as Type=Attention. 
For BNCI 2015-003, the oddball/P300 speller requires sustained/selective attention to the target direction (mentally counting targets), aligning better with Attention than Perception under these conventions.","metadata_analysis":"Key population facts: (1) “Clinical population: Healthy” and (2) the included abstract states “a group of healthy subjects”. Key modality/task facts: (1) “Primary modality: auditory”, (2) “Task type: auditory_oddball”, and (3) “spatial auditory cues from six speaker locations in azimuth plane.” Key cognitive-aim facts: (1) “Instructions: Focus attention to one target direction and mentally count the number of appearances” and (2) “Tasks: spelling, auditory_attention”.","paper_abstract_analysis":"The included abstract reinforces a healthy volunteer sample (“a group of healthy subjects”) and clarifies the purpose as an “auditory spelling interface” using auditory spatial cues (consistent with an attention-demanding P300/oddball BCI speller).","evidence_alignment_check":"Pathology — Metadata says: “Clinical population: Healthy” and abstract says “healthy subjects”. Few-shot pattern suggests that explicit recruitment diagnoses override any other cues; here explicit metadata indicates healthy participants. ALIGN.\nModality — Metadata says: “Primary modality: auditory”, “Stimulus modalities: auditory”, and “Task type: auditory_oddball” with “six speaker locations”. Few-shot pattern maps modality to stimulus channel (e.g., oddball example labeled multisensory because both auditory+visual pre-cues). Here stimuli are auditory. ALIGN.\nType — Metadata says: “Focus attention to one target direction and mentally count” and lists “auditory_attention”. Few-shot pattern suggests oddball/target paradigms often map to Attention when selective attention to targets is central (as in the DPX cognitive control/attention example). 
ALIGN (stronger than Perception because emphasis is on attentional selection/target detection rather than sensory discrimination).","decision_summary":"Top-2 Pathology candidates: (a) Healthy — supported by “Clinical population: Healthy” and “healthy subjects”; (b) Unknown — only if ignoring explicit lines. Winner: Healthy (explicit statements).\nTop-2 Modality candidates: (a) Auditory — supported by “Primary modality: auditory”, “Stimulus modalities: auditory”, “auditory_oddball”, “six speaker locations”; (b) Multisensory — possible only if there were visual stimuli, but none are described. Winner: Auditory.\nTop-2 Type candidates: (a) Attention — supported by “Focus attention to one target direction”, “mentally count target stimuli”, and task tag “auditory_attention”; (b) Perception — plausible due to target vs non-target detection, but less aligned with stated instructions/aim. Winner: Attention.\nConfidence justification: Pathology has 2 explicit quotes; Modality has 3+ explicit quotes; Type has 2 explicit quotes plus a close few-shot oddball/attention convention match."}},"canonical_name":null,"name_confidence":0.74,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Schreuder2015_P300"}}