{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cb7","dataset_id":"nm000234","associated_paper_doi":null,"authors":["Martijn Schreuder","Benjamin Blankertz","Michael Tangermann"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":21,"ages":[30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30],"age_min":30,"age_max":30,"age_mean":30.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000234","osf_url":null,"github_url":null,"paper_url":null},"funding":["European ICT Programme Project FP7-224631","European ICT Programme Project FP7-216886","Deutsche Forschungsgemeinschaft (DFG) MU 987/3-1","Bundesministerium für Bildung und Forschung (BMBF) FKZ 01IB001A","Bundesministerium für Bildung und Forschung (BMBF) FKZ 01GQ0850","FP7-ICT PASCAL2 Network of Excellence ICT-216886"],"ingestion_fingerprint":"f986941244df77ca372f082be9f93bbc1fcb0b97e80c49d6ac8746a152fad51a","license":"CC-BY-NC-ND-4.0","n_contributing_labs":null,"name":"BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset","readme":"# BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset\nBNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset.\n## Dataset Overview\n- **Code**: BNCI2015-009\n- **Paradigm**: p300\n- **DOI**: 10.3389/fnins.2011.00112\n- **Subjects**: 21\n- **Sessions per subject**: 1\n- **Events**: Target=1, NonTarget=2\n- **Trial interval**: [0, 0.8] s\n- **Runs per session**: 2\n- **File format**: gdf\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 250.0 Hz\n- **Number of channels**: 60\n- **Channel types**: eeg=60, eog=2\n- **Montage**: 10-20\n- **Hardware**: Brain Products 128-channel amplifier\n- **Software**: Matlab\n- **Reference**: nose\n- **Sensor type**: Ag/AgCl electrodes\n- **Line frequency**: 50.0 Hz\n- **Online filters**: 0.1-250 Hz analog bandpass\n- **Auxiliary channels**: EOG (2 ch, bipolar)\n## Participants\n- **Number of subjects**: 21\n- **Health status**: patients\n- **Clinical population**: Healthy\n- **Age**: mean=30.3, min=22, max=55\n- **Gender distribution**: male=6, female=4\n- **Handedness**: unknown\n- **BCI experience**: mixed\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: p300\n- **Task type**: oddball\n- **Number of classes**: 2\n- **Class labels**: Target, NonTarget\n- **Trial duration**: 0.8 s\n- **Tasks**: spatial_auditory_oddball\n- **Study design**: Offline auditory oddball task using spatial location of auditory stimuli as discriminating cue. Frontal five speakers used (speakers 1,2,3,7,8) with 45 degree spacing. Three conditions tested: C300 (300ms ISI), C175 (175ms ISI), C300s (300ms ISI, single speaker). Each stimulus was unique 40ms complex sound from bandpass filtered white noise with tone overlay.\n- **Study domain**: BCI\n- **Feedback type**: none\n- **Stimulus type**: auditory_spatial\n- **Stimulus modalities**: auditory\n- **Primary modality**: auditory\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Training/test split**: False\n- **Instructions**: Subjects asked to mentally count target stimulations or respond by keypress (condition Cr). Minimize eye movements and muscle contractions. 
Target direction was indicated prior to each block, both visually and by presenting a stimulus from that location.\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: p300\n- **Number of targets**: 5\n- **Number of repetitions**: 15\n- **Inter-stimulus interval**: 300.0 ms\n## Data Structure\n- **Trials**: varied by condition\n- **Blocks per session**: 50\n- **Trials context**: BCI experiments: C300 (50 trials × 75 subtrials = 3750 subtrials), C175 (40 trials × 75 subtrials = 3000 subtrials), C300s (20 trials × 75 subtrials = 1500 subtrials). Physiological experiments: C1000 (32 trials × 80 subtrials = 2560 subtrials), Cr (576-768 subtrials)\n## Preprocessing\n- **Data state**: filtered\n- **Preprocessing applied**: True\n- **Steps**: bandpass filter, notch filter, downsampling, artifact rejection\n- **Highpass filter**: 0.1 Hz\n- **Lowpass filter**: 250.0 Hz\n- **Bandpass filter**: {'low_cutoff_hz': 0.1, 'high_cutoff_hz': 250.0}\n- **Notch filter**: [50] Hz\n- **Filter type**: Chebyshev II order 8 (for visual inspection: 30 Hz pass, 42 Hz stop, 50 dB damping)\n- **Artifact methods**: threshold-based artifact rejection\n- **Re-reference**: nose\n- **Downsampled to**: 100.0 Hz\n- **Epoch window**: [-0.15, 0.8] s\n- **Notes**: Raw data acquired at 1000 Hz. For visual inspection: low-pass filtered with an order-8 Chebyshev II filter (30 Hz pass, 42 Hz stop, 50 dB damping) applied forward and backward to minimize phase shifts, then downsampled to 100 Hz. For classification: the same filter applied causally (forward only) for online portability. 
Artifact rejection used a simple threshold method: subtrials with a deflection >70 µV on the ocular channels relative to baseline were rejected.\n## Signal Processing\n- **Classifiers**: LDA\n- **Feature extraction**: ROC-separability-index\n- **Frequency bands**: analyzed=[0.1, 250.0] Hz\n## Cross-Validation\n- **Method**: cross-validation\n- **Evaluation type**: offline\n## Performance (Original Study)\n- **Accuracy**: 90.0%\n- **ITR**: 17.39 bits/min\n- **Best Subject ITR**: 25.2 bits/min\n- **Best Subject Accuracy**: 100.0%\n- **C300s Accuracy**: 70.0%\n## BCI Application\n- **Applications**: speller, communication\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Auditory\n- **Type**: P300\n## Documentation\n- **Description**: A new auditory multi-class brain-computer interface paradigm using spatial hearing as an informative cue\n- **DOI**: 10.1371/journal.pone.0009813\n- **Associated paper DOI**: 10.3389/fnins.2011.00112\n- **License**: CC-BY-NC-ND-4.0\n- **Investigators**: Martijn Schreuder, Benjamin Blankertz, Michael Tangermann\n- **Senior author**: Michael Tangermann\n- **Contact**: martijn@cs.tu-berlin.de\n- **Institution**: Berlin Institute of Technology\n- **Department**: Machine Learning Department\n- **Address**: Berlin, Germany\n- **Country**: Germany\n- **Repository**: BNCI Horizon\n- **Publication year**: 2010\n- **Funding**: European ICT Programme Project FP7-224631; European ICT Programme Project FP7-216886; Deutsche Forschungsgemeinschaft (DFG) MU 987/3-1; Bundesministerium für Bildung und Forschung (BMBF) FKZ 01IB001A; Bundesministerium für Bildung und Forschung (BMBF) FKZ 01GQ0850; FP7-ICT PASCAL2 Network of Excellence ICT-216886\n- **Ethics approval**: Ethics Committee of the Charité University Hospital (number EA4/073/09)\n- **Keywords**: auditory BCI, P300, spatial hearing, multi-class, oddball paradigm\n## References\nSchreuder, M., Rost, T., & Tangermann, M. (2011). Listen, you are writing! Speeding up online spelling with a dynamic auditory BCI. Frontiers in Neuroscience, 5, 112. https://doi.org/10.3389/fnins.2011.00112\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":4917017449,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000234","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:09:21.789607+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-25T18:15:02Z"},"total_files":42,"computed_title":"BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset","nchans_counts":[{"val":60,"count":42}],"sfreq_counts":[{"val":250.0,"count":42}],"stats_computed_at":"2026-05-01T13:49:34.645931+00:00","total_duration_s":108644.832,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"fae50416b0648d19","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Attention"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.85},"reasoning":{"few_shot_analysis":"Most similar few-shot by paradigm is the **Cross-modal Oddball Task** example (Parkinson’s; task='oddball' with pre-cues) which demonstrates the convention that oddball/P300-style target-vs-nontarget paradigms are categorized by their dominant stimulus modality (there: Multisensory due to visual+auditory) and by a cognitive-control/target-detection framing rather than motor output. Another relevant example is **EEG: Three-Stim Auditory Oddball and Rest in Acute and Chronic TBI**, which shares the auditory oddball structure and supports choosing **Auditory** for Modality when tones/oddball stimuli are presented. For Type, these few-shots show that oddball tasks can map to higher-level constructs like Attention/Decision-making depending on study emphasis; here, the metadata emphasizes P300/spatial oddball BCI target detection, which aligns best with Attention under EEGDash conventions.","metadata_analysis":"Key metadata facts: (1) Population: the readme explicitly states \"**Clinical population: Healthy**\" (despite also listing \"Health status: patients\"). (2) Stimulus channel: \"**Stimulus modalities: auditory**\" and \"**Stimulus type: auditory_spatial**\" with \"**Frontal five speakers used...**\" indicating spatial auditory stimulation. (3) Paradigm/purpose: \"**Paradigm: p300**\" and \"**Task type: oddball**\" with \"**Events: Target=1, NonTarget=2**\" and instruction \"**Subjects asked to mentally count target stimulations**\"—classic P300/oddball target detection used in BCI.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata SAYS \"Clinical population: Healthy\"; few-shot pattern SUGGESTS using explicit diagnoses when present (e.g., Parkinson’s/TBI) and otherwise Healthy. ALIGN (choose Healthy).\nModality: Metadata SAYS \"Stimulus modalities: auditory\" and describes spatial speaker presentation; few-shot pattern SUGGESTS labeling by dominant stimulus channel (auditory for auditory oddball, multisensory only when multiple are primary). ALIGN (choose Auditory).\nType: Metadata SAYS \"Paradigm: p300\" and \"Task type: oddball\" with target counting—primary construct is target detection/attentional orienting. 
Few-shot pattern SUGGESTS oddball tasks often map to Attention/related control constructs (though one TBI oddball example was labeled Decision-making, likely due to that dataset’s broader cognitive/clinical framing). PARTIAL CONFLICT in that oddball could be Decision-making vs Attention; metadata emphasis is P300 target detection/BCI rather than value-based choices, so Attention wins.","decision_summary":"Top-2 candidates per category:\n\nPathology:\n1) Healthy (WIN) — explicit quote: \"Clinical population: Healthy\"; also demographic-only description with no disorder recruitment.\n2) Unknown/Other (runner-up) — minor ambiguity from \"Health status: patients\" but contradicted by explicit clinical population field.\nAlignment: Mostly aligned; explicit metadata supports Healthy.\nConfidence evidence: 1 strong explicit quote (plus lack of any diagnosis) => 0.8.\n\nModality:\n1) Auditory (WIN) — quotes: \"Stimulus modalities: auditory\"; \"Stimulus type: auditory_spatial\"; speaker-based spatial sound description.\n2) Multisensory (runner-up) — target direction indicated \"visually\" prior to block, but discriminating cue is explicitly spatial auditory stimulation.\nAlignment: Aligned with few-shot auditory/cross-modal conventions.\nConfidence evidence: 2+ explicit quotes supporting auditory dominance => 0.9.\n\nType:\n1) Attention (WIN) — quotes: \"Paradigm: p300\"; \"Task type: oddball\"; \"Subjects asked to mentally count target stimulations\" (classic attentional target detection/P300).\n2) Perception (runner-up) — could be framed as auditory spatial discrimination, but P300 oddball target detection/BCI emphasis suggests attentional selection more than pure sensory psychophysics.\nAlignment: Mostly aligned; few-shot oddball tasks map to attention/control constructs unless clearly decision/value learning.\nConfidence evidence: 3 explicit task/paradigm quotes supporting P300/oddball target detection => 0.85."}},"canonical_name":null,"name_confidence":0.99,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Schreuder2015_ERP"}}
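A minimal sketch of how a client might consume a record like the one above follows. It assumes only the response shape shown here (the `data` envelope with `demographics`, `nchans_counts`, `sfreq_counts`, and `total_duration_s`); the `record.json` filename is hypothetical, and nothing below reflects an official eegdash client API.
```
import json

# Parse the eegdash response shown above, saved to a local file
# (the filename is an assumption for this sketch).
with open("record.json", encoding="utf-8") as f:
    response = json.load(f)

if not response["success"]:
    raise RuntimeError("query failed")
data = response["data"]

# Identity and provenance fields.
print(data["dataset_id"], "-", data["computed_title"])
print("source:", data["source"], "| license:", data["license"])

# Channel and sampling-rate tallies are stored as {val, count} pairs,
# one entry per distinct value across the dataset's files.
for entry in data["nchans_counts"]:
    print(f'{entry["count"]} files with {entry["val"]} channels')
for entry in data["sfreq_counts"]:
    print(f'{entry["count"]} files at {entry["val"]} Hz')

# Total recording time, converted from seconds to hours.
hours = data["total_duration_s"] / 3600
print(f'{hours:.1f} h of EEG across {data["total_files"]} files')

# The structured demographics can disagree with the embedded README
# (here the ages array is uniform while the README reports mean=30.3,
# min=22, max=55), so cross-checking both sources is worthwhile.
demo = data["demographics"]
print(f'{demo["subjects_count"]} subjects, ages {demo["age_min"]}-{demo["age_max"]}')
```
Under these assumptions the sketch would report 42 files at 250.0 Hz with 60 channels each, and roughly 30.2 h of data (108644.832 s / 3600 ≈ 30.2).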