{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4c96","dataset_id":"nm000193","associated_paper_doi":null,"authors":["Simon Kojima","Shin'ichiro Kanoh"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":11,"ages":[22,22,22,22,22,22,22,22,22,22,22],"age_min":22,"age_max":22,"age_mean":22.0,"species":null,"sex_distribution":{"m":10,"f":1},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000193","osf_url":null,"github_url":null,"paper_url":null},"funding":["JSPS KAKENHI Grant Number JP23K11811"],"ingestion_fingerprint":"3835fa459d374462364851d772e9d8dd8ef836a60589207b3356b7bef8a98c56","license":"CC0-1.0","n_contributing_labs":null,"name":"Kojima et al. 2024 (Dataset A) — An auditory brain-computer interface based on selective attention to multiple tone streams","readme":"# Class for Kojima2024A dataset management. P300 dataset\nClass for Kojima2024A dataset management. P300 dataset.\n## Dataset Overview\n- **Code**: Kojima2024A\n- **Paradigm**: p300\n- **DOI**: 10.7910/DVN/MQOVEY\n- **Subjects**: 11\n- **Sessions per subject**: 1\n- **Events**: Target=1, NonTarget=0\n- **Trial interval**: [-0.5, 1.2] s\n- **Runs per session**: 6\n- **File format**: BrainVision\n- **Number of contributing labs**: 1\n## Acquisition\n- **Sampling rate**: 1000.0 Hz\n- **Number of channels**: 64\n- **Channel types**: eeg=64, eog=2\n- **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT10, FT7, FT8, FT9, Fp1, Fp2, Fz, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP10, TP7, TP8, TP9, hEOG, vEOG\n- **Montage**: standard_1020\n- **Hardware**: Brain Amp DC (Brain Products GmbH, Germany) and MR plus (Brain Products GmbH, Germany)\n- **Reference**: right earlobe\n- **Ground**: left earlobe\n- **Sensor type**: eeg\n- **Line frequency**: 50.0 Hz\n- **Online filters**: {'bandpass': '0.1 Hz to 100 Hz'}\n- **Cap manufacturer**: EASYCAP GmbH\n- **Electrode material**: Ag-AgCl\n- **Auxiliary channels**: EOG (2 ch, vertical, horizontal)\n## Participants\n- **Number of subjects**: 11\n- **Health status**: healthy\n- **Age**: mean=22.5, min=22.0, max=23.0\n- **Gender distribution**: male=10, female=1\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: p300\n- **Task type**: auditory selective attention\n- **Number of classes**: 2\n- **Class labels**: Target, NonTarget\n- **Tasks**: attend to Stream 1, attend to Stream 2, attend to Stream 3\n- **Study design**: within-subject\n- **Study domain**: auditory BCI\n- **Feedback type**: none\n- **Stimulus type**: auditory musical tones\n- **Stimulus modalities**: auditory\n- **Primary modality**: auditory\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Training/test split**: False\n- **Instructions**: Subjects were requested to attend to one of three streams and to count the number of target stimuli in the attended stream\n- **Stimulus presentation**: method=Digital signal processor (System3, Tucker-Davis Technologies, USA) and headphones (HDA200, Sennheiser), ear=right ear only, tone_generator=Software synthesizer (Piano tones Grand Piano 1 SE from SampleTank3, IK multimedia Production, Italy)\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: 
\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Auditory-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Auditory-presentation\n    └─ Non-target\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: p300\n- **Number of targets**: 3\n- **Stimulus onset asynchrony**: 180.0 ms\n## Data Structure\n- **Blocks per session**: 6\n- **Block duration**: 300.0 s\n- **Trials context**: Each task block had 3 runs (5 minutes each). Subjects counted target stimuli in Streams 1, 2, and 3 on the 1st, 2nd, and 3rd runs, respectively. The task block was repeated twice.\n## Preprocessing\n- **Data state**: raw\n- **Preprocessing applied**: False\n## Signal Processing\n- **Classifiers**: Logistic Regression, Minimum Distance to Mean (MDM)\n- **Feature extraction**: xDAWN spatial filtering, Riemannian geometry covariance matrices\n- **Frequency bands**: analyzed=[1.0, 40.0] Hz\n- **Spatial filters**: xDAWN\n## Cross-Validation\n- **Method**: 10-fold cross-validation\n- **Folds**: 10\n- **Evaluation type**: within-subject
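\n## Example: Classification Pipeline (Illustrative)\nA sketch of the xDAWN + Riemannian-geometry pipeline with 10-fold cross-validation described above, using pyriemann and scikit-learn. The tangent-space plus logistic-regression combination is one reading of the listed classifiers; nfilter, the covariance estimator, and the synthetic stand-in data are illustrative, not the study's settings.\n```python\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import StratifiedKFold, cross_val_score\nfrom sklearn.pipeline import make_pipeline\nfrom pyriemann.estimation import XdawnCovariances\nfrom pyriemann.tangentspace import TangentSpace\n\n# Synthetic stand-in with the documented shape; in practice use\n# X = epochs.get_data() and y from the Target=1 / NonTarget=0 codes.\nrng = np.random.default_rng(0)\nX = rng.standard_normal((120, 64, 1700))  # trials x channels x samples (1.7 s at 1000 Hz)\ny = rng.integers(0, 2, size=120)\n\n# xDAWN-filtered covariances, tangent-space projection, logistic regression.\nclf = make_pipeline(\n    XdawnCovariances(nfilter=4, estimator='lwf'),\n    TangentSpace(metric='riemann'),\n    LogisticRegression(max_iter=1000),\n)\ncv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)\nscores = cross_val_score(clf, X, y, cv=cv, scoring='matthews_corrcoef')\nprint(f'MCC: {scores.mean():.3f} +/- {scores.std():.3f}')\n```\nThe MDM alternative named above would replace the last two pipeline steps with pyriemann's MDM classifier applied directly to the covariances; which variant produced the reported figures is not specified in this record.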
\n## Performance (Original Study)\n- **Description**: Classification accuracy over 80% for 5 subjects, over 75% for 9 subjects\n- **Metric**: MCC (Matthews correlation coefficient)\n## BCI Application\n- **Applications**: communication\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: auditory\n- **Type**: EEG, P300, BCI\n## Documentation\n- **Description**: A 3-class auditory BCI using three tone sequences based on auditory stream segregation. Musical tones were presented to subjects' right ear, and subjects attended to one of three streams while counting target stimuli. P300 activity was elicited by target stimuli in the attended stream.\n- **DOI**: 10.1371/journal.pone.0303565\n- **Associated paper DOI**: 10.1371/journal.pone.0303565\n- **License**: CC0-1.0\n- **Investigators**: Simon Kojima, Shin'ichiro Kanoh\n- **Senior author**: Shin'ichiro Kanoh\n- **Contact**: nb21106@shibaura-it.ac.jp\n- **Institution**: Shibaura Institute of Technology\n- **Department**: Graduate School of Engineering and Science; College of Engineering\n- **Address**: Koto-ku, Tokyo, Japan\n- **Country**: JP\n- **Repository**: Harvard Dataverse\n- **Data URL**: https://doi.org/10.7910/DVN/MQOVEY\n- **Publication year**: 2024\n- **Funding**: JSPS KAKENHI Grant Number JP23K11811\n- **Ethics approval**: Review Board on Bioengineering Research Ethics of Shibaura Institute of Technology; Declaration of Helsinki\n- **Keywords**: auditory BCI, P300, auditory stream segregation, selective attention, oddball paradigm, Riemannian geometry\n## External Links\n- **Source**: https://doi.org/10.7910/DVN/MQOVEY\n- **Paper**: https://doi.org/10.1371/journal.pone.0303565\n## Abstract\nIn this study, we attempted to improve brain-computer interface (BCI) systems by means of auditory stream segregation, in which alternately presented tones are perceived as sequences of different tones (streams). A 3-class BCI using three tone sequences, which were perceived as three different tone streams, was investigated and evaluated. Each presented musical tone was generated by a software synthesizer. Eleven subjects took part in the experiment. Stimuli were presented to each subject's right ear. Subjects were requested to attend to one of three streams and to count the number of target stimuli in the attended stream. In addition, 64-channel electroencephalogram (EEG) and two-channel electrooculogram (EOG) signals were recorded from participants with a sampling frequency of 1000 Hz. The measured EEG data were classified based on Riemannian geometry to detect the object of the subject's selective attention. P300 activity was elicited by the target stimuli in the segregated tone streams. In five out of eleven subjects, P300 activity was elicited only by the target stimuli included in the attended stream. In a 10-fold cross-validation test, a classification accuracy over 80% for five subjects and over 75% for nine subjects was achieved. For subjects whose accuracy was lower than 75%, either the P300 was also elicited for nonattended streams or the amplitude of P300 was small. It was concluded that the number of classes in BCI systems based on auditory stream segregation can be increased to three, and that these classes can be detected through a single ear without the aid of any visual modality.\n## Methodology\nMusical tones generated by a digital audio workstation were used as auditory stimuli. Piano tones from a MIDI sound source were presented through a digital signal processor and headphones to participants' right ear only. Three tone streams were created using auditory stream segregation, each consisting of standard (90% probability) and deviant (10% probability) tones. The duration of each tone was 150 ms, with a stimulus onset asynchrony of 180 ms. The 64-channel EEG and 2-channel EOG signals were recorded at 1000 Hz. Each experiment consisted of two task blocks with three runs each (5 minutes per run). Subjects counted target stimuli in different streams across runs. Data analysis involved bandpass filtering (0.1-40 Hz for ERP analysis, 1-40 Hz for classification), baseline correction, artifact rejection (±100 μV for EEG, ±500 μV for EOG), xDAWN spatial filtering, and classification using Riemannian geometry with covariance matrices and logistic regression. Performance was evaluated using 10-fold cross-validation with accuracy and Matthews correlation coefficient (MCC) metrics.
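\n## Example: Preprocessing (Illustrative)\nA sketch of the preprocessing chain just described, again assuming an MNE-Python implementation; the study's actual analysis code is not part of this record. Note that MNE's reject thresholds are peak-to-peak amplitudes, so the values below only approximate the ±100 μV / ±500 μV absolute criteria, and the baseline interval is assumed.\n```python\nimport mne\n\n# Continuing from the loading sketch above (raw, events, event_id).\n# Band-pass filter: 1-40 Hz for classification (0.1-40 Hz was used for ERPs).\nraw.filter(l_freq=1.0, h_freq=40.0, picks=['eeg', 'eog'])\n\n# Epochs over the documented trial interval, with baseline correction and\n# amplitude-based rejection (200/1000 uV peak-to-peak ~ +/-100/+/-500 uV).\nreject = dict(eeg=200e-6, eog=1000e-6)\nepochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.5, tmax=1.2,\n                    baseline=(-0.5, 0.0), reject=reject, preload=True)\n\n# Quick P300 check: target minus non-target evoked response.\n# Event names are hypothetical; match them to the annotations in the files.\nevoked_diff = mne.combine_evoked(\n    [epochs['Target'].average(), epochs['NonTarget'].average()],\n    weights=[1, -1],\n)\n```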
\n## References\nKojima, S. (2024). Replication Data for: An auditory brain-computer interface based on selective attention to multiple tone streams. Harvard Dataverse, V1. https://doi.org/10.7910/DVN/MQOVEY\nKojima, S., & Kanoh, S. (2024). An auditory brain-computer interface based on selective attention to multiple tone streams. PLoS ONE, 19(5), e0303565. https://doi.org/10.1371/journal.pone.0303565\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":4016730515,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000193","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:08:54.649612+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-11T20:56:52Z"},"total_files":66,"computed_title":"Kojima et al. 2024 (Dataset A) — An auditory brain-computer interface based on selective attention to multiple tone streams","nchans_counts":[{"val":64,"count":66}],"sfreq_counts":[{"val":1000.0,"count":66}],"stats_computed_at":"2026-05-01T13:49:34.645515+00:00","total_duration_s":20871.134000000002,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"a5c53bbe230d667c","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Attention"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot by paradigm is the \"Cross-modal Oddball Task\" example (Parkinson's; Multisensory; Clinical/Intervention) which uses an oddball structure and cue-evoked ERP logic. While that example is clinical and multisensory, it establishes the convention that oddball-style target vs non-target paradigms map the stimulus channel to Modality and the cognitive goal often to control/attention constructs rather than motor output. For auditory stimulus modality, the \"Subcortical responses to music and speech...\" example (Healthy; Auditory; Perception) shows that when the core is auditory stimulation/encoding it maps to Auditory modality; however, the present dataset is explicitly \"auditory selective attention\" and BCI target detection, aligning better with an Attention type than pure Perception. The \"EEG: DPX Cog Ctl Task\" example (TBI; Visual; Attention) guides mapping when the construct is attention/control rather than basic sensation.","metadata_analysis":"Pathology/population facts: the README explicitly states \"Health status: healthy\" and also \"Eleven subjects took part in the experiment.\" Task/stimulus facts: it states \"Task type: auditory selective attention\" and \"Stimulus type: auditory musical tones\" with \"Stimulus modalities: auditory\" / \"Primary modality: auditory.\" It also states the oddball/P300 target structure: \"Events: Target=1, NonTarget=0\" and \"P300 activity was elicited by the target stimuli in the attended stream\" plus the instruction: \"Subjects were requested to attend to one of three streams and to count the number of target stimuli in the attended stream.\"","paper_abstract_analysis":"The included abstract reinforces the task goal as selective attention-based BCI: \"detect the object of the subject's selective attention\" and \"P300 activity was elicited by the target stimuli in the segregated tone streams.\" This supports labeling Type as Attention rather than Perception.","evidence_alignment_check":"Pathology: Metadata says \"Health status: healthy\" (and tags list \"Pathology: Healthy\"). Few-shot pattern suggests using explicit recruited diagnosis when present; here it is explicitly healthy. 
ALIGN.\nModality: Metadata says \"Stimulus type: auditory musical tones\" and \"Primary modality: auditory\" (tones via headphones). Few-shot convention maps modality to stimulus channel; aligns with Auditory. ALIGN.\nType: Metadata says \"Task type: auditory selective attention\" and instructions to \"attend to one of three streams\" to count targets; abstract says the aim is to \"detect the object of the subject's selective attention\" via P300. Few-shot oddball conventions could alternatively suggest Perception (target detection) for some oddball datasets, but here the explicit construct is selective attention/BCI control. Mostly ALIGN with Attention; mild tension only with the generic oddball→perception possibility, resolved by explicit wording.","decision_summary":"Pathology top-2: (1) Healthy vs (2) Unknown. Healthy wins because metadata explicitly states \"Health status: healthy\" and participants are normal young adults (\"Eleven subjects...\"). Alignment: aligned with few-shot rules prioritizing explicit population facts. Confidence evidence: \"Health status: healthy\"; \"Eleven subjects took part\"; tags include \"Pathology: Healthy\".\nModality top-2: (1) Auditory vs (2) Multisensory. Auditory wins because stimuli are tones presented via headphones: \"Stimulus type: auditory musical tones\", \"Stimulus modalities: auditory\", \"Primary modality: auditory\", and \"Stimuli were presented to each user's right ear.\" Alignment: matches few-shot modality convention. \nType top-2: (1) Attention vs (2) Perception. Attention wins because the task is explicitly selective attention: \"Task type: auditory selective attention\"; instruction \"attend to one of three streams\"; abstract aim \"detect the object of the subject's selective attention\". Perception is a plausible runner-up due to oddball-like target/non-target detection (\"Events: Target=1, NonTarget=0\"; P300 elicited by targets), but the primary construct emphasized is attention/selection for BCI. Confidence evidence: the three explicit selective-attention quotes above."}},"canonical_name":null,"name_confidence":0.72,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Kojima2024A_P300"}}