{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4ca4","dataset_id":"nm000207","associated_paper_doi":null,"authors":["Simon Kojima","Shin'ichiro Kanoh"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":15,"ages":[22,22,22,22,22,22,22,22,22,22,22,22,22,22,22],"age_min":22,"age_max":22,"age_mean":22.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000207","osf_url":null,"github_url":null,"paper_url":null},"funding":["JSPS KAKENHI (Grant Number JP23K11811 to Shin'ichiro Kanoh)"],"ingestion_fingerprint":"04e3b51e6d870e70f845cd301feb08ff3983487e79a66f9c9be8d34dba020433","license":"CC0-1.0","n_contributing_labs":null,"name":"Kojima et al. 2024 (Dataset B) — Four-class ASME BCI: investigation of the feasibility and comparison of two strategies for multiclassing","readme":"# Class for Kojima2024B dataset management. P300 dataset\n
## Dataset Overview\n- **Code**: Kojima2024B\n- **Paradigm**: p300\n- **DOI**: 10.7910/DVN/1UJDV6\n- **Subjects**: 15\n- **Sessions per subject**: 1\n- **Events**: Target=[111, 112, 113, 114], NonTarget=[101, 102, 103, 104]\n- **Trial interval**: [-0.5, 1.2] s\n- **Runs per session**: 12\n- **File format**: BrainVision\n- **Number of contributing labs**: 1\n## Acquisition\n- **Sampling rate**: 1000.0 Hz\n- **Number of channels**: 64\n- **Channel types**: eeg=64, eog=2\n- **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT10, FT7, FT8, FT9, Fp1, Fp2, Fz, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP10, TP7, TP8, TP9, hEOG, vEOG\n- **Montage**: standard_1020\n- **Hardware**: BrainAmp\n- **Reference**: right mastoid\n- **Ground**: left mastoid\n- **Sensor type**: EEG\n- **Line frequency**: 50.0 Hz\n- **Cap manufacturer**: EasyCap\n- **Electrode type**: passive Ag/AgCl\n- **Electrode material**: Ag/AgCl\n- **Auxiliary channels**: EOG (2 ch, vertical, horizontal)\n## Participants\n- **Number of subjects**: 15\n- **Health status**: healthy\n- **Age**: mean=22.8, min=21.0, max=24.0\n- **Gender distribution**: male=13, female=2\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: p300\n- **Task type**: auditory stream segregation with oddball\n- **Number of classes**: 2\n- **Class labels**: Target, NonTarget\n- **Trial duration**: 90.0 s\n- **Tasks**: ASME-4stream, ASME-2stream\n- **Study design**: within-subject comparison\n- **Study domain**: auditory BCI\n- **Feedback type**: none\n- **Stimulus type**: auditory tones\n- **Stimulus modalities**: auditory\n- **Primary modality**: auditory\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Training/test split**: False\n- **Instructions**: focus selectively on deviant stimuli in one of the streams and count target deviant 
stimuli\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Auditory-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Auditory-presentation\n    └─ Non-target\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: p300\n- **Number of targets**: 4\n- **Number of repetitions**: 15\n- **Stimulus onset asynchrony**: {'ASME-4stream_overall': 150.0, 'ASME-2stream_overall': 300.0, 'within_stream': 600.0} ms\n## Data Structure\n- **Trials**: {'ASME-4stream': '600 stimuli per trial (4 trials per run, 6 runs)', 'ASME-2stream': '300 stimuli per trial (4 trials per run, 6 runs)'}\n- **Blocks per session**: 12\n- **Block duration**: 90.0 s\n- **Trials context**: 12 runs alternating between ASME-4stream and ASME-2stream, 4 trials per run\n## Preprocessing\n- **Data state**: raw\n- **Preprocessing applied**: False\n## Signal Processing\n- **Classifiers**: Linear Discriminant Analysis (LDA), shrinkage-LDA\n- **Feature extraction**: mean amplitudes in 10 intervals (0.1s non-overlapping, 0-1.0s)\n- **Frequency bands**: analyzed=[0.1, 8.0] Hz\n## Cross-Validation\n- **Method**: 3-fold chronological cross-validation (BCI simulation); 4-fold chronological cross-validation (binary classification)\n- **Evaluation type**: offline simulation\n## Performance (Original Study)\n- **ASME-4stream Accuracy**: 0.83\n- **ASME-2stream Accuracy**: 0.86\n## BCI Application\n- **Applications**: communication\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: auditory\n- **Type**: ERP, P300\n## Documentation\n- **Description**: Four-class ASME BCI investigation comparing two strategies for multiclassing: ASME-4stream (four streams with single target stimulus each) vs ASME-2stream (two streams with two target stimuli each)\n- **DOI**: 10.3389/fnhum.2024.1461960\n- 
**Associated paper DOI**: 10.3389/fnhum.2024.1461960\n- **License**: CC0-1.0\n- **Investigators**: Simon Kojima, Shin'ichiro Kanoh\n- **Senior author**: Shin'ichiro Kanoh\n- **Contact**: simon.kojima@ieee.org\n- **Institution**: Shibaura Institute of Technology\n- **Department**: Graduate School of Engineering and Science (Simon Kojima); College of Engineering (Shin'ichiro Kanoh)\n- **Address**: Tokyo, Japan\n- **Country**: JP\n- **Repository**: Harvard Dataverse\n- **Data URL**: https://doi.org/10.7910/DVN/1UJDV6\n- **Publication year**: 2024\n- **Funding**: JSPS KAKENHI (Grant Number JP23K11811 to Shin'ichiro Kanoh)\n- **Ethics approval**: Review Board on Bioengineering Research Ethics of the Shibaura Institute of Technology\n- **Keywords**: brain-computer interface, electroencephalogram, event-related potential, auditory scene analysis, stream segregation, machine learning, NASA-TLX\n## Abstract\nThe ASME (Auditory Stream segregation Multiclass ERP) paradigm is used for an auditory brain-computer interface (BCI). Two approaches for achieving four-class ASME were investigated: ASME-4stream (four streams with a single target stimulus each) and ASME-2stream (two streams with two target stimuli each). Fifteen healthy subjects participated. ERPs were analyzed, and binary classification and BCI simulation were conducted offline using linear discriminant analysis. Average accuracies were 0.83 (ASME-4stream) and 0.86 (ASME-2stream). The ASME-2stream paradigm showed shorter latency and larger amplitude of P300, higher binary classification accuracy, and lower workload. Both paradigms achieved sufficiently high accuracy (over 80%) for a practical auditory BCI.\n## Methodology\nSubjects performed 12 runs alternating between ASME-4stream and ASME-2stream paradigms. Each run contained 4 trials of ~90 s duration. ASME-4stream presented 4 streams (SOA=0.15s, 600 stimuli/trial, ratio 9:1 standard:deviant). 
ASME-2stream presented 2 streams with 2 deviant stimuli each (SOA=0.3s, 300 stimuli/trial, ratio 8:1:1). EEG recorded at 1000 Hz from 64 channels. EOG artifacts removed using ICA on 15 PCs. Data filtered (1-40 Hz for ERP, 0.1-8 Hz for classification), epoched (-0.1 to 1.2s), downsampled to 250 Hz. Classification used shrinkage-LDA with mean amplitudes from 10 intervals (0-1.0s) as features. Performance evaluated using 4-fold chronological cross-validation. Usability assessed via NASA-TLX questionnaire.\n## References\nKojima, S. (2024). Replication Data for: Four-class ASME BCI: investigation of the feasibility and comparison of two strategies for multiclassing. Harvard Dataverse, V1. DOI: https://doi.org/10.7910/DVN/1UJDV6\nKojima, S. & Kanoh, S. (2024). Four-class ASME BCI: investigation of the feasibility and comparison of two strategies for multiclassing. Frontiers in Human Neuroscience, 18, 1461960. DOI: https://doi.org/10.3389/fnhum.2024.1461960\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":14976719241,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000207","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:09:02.090894+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-11T20:58:38Z"},"total_files":180,"computed_title":"Kojima et al. 2024 (Dataset B) — Four-class ASME BCI: investigation of the feasibility and comparison of two strategies for multiclassing","nchans_counts":[{"val":64,"count":180}],"sfreq_counts":[{"val":1000.0,"count":180}],"stats_computed_at":"2026-05-01T13:49:34.645670+00:00","total_duration_s":77862.5,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"f0c32f701d4b9223","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Attention"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot by paradigm is the “Cross-modal Oddball Task” example (Parkinson’s; task is explicitly an oddball with standard vs oddball cues). This guides the convention that oddball/target-detection paradigms are typically labeled by the cognitive construct of selecting/detecting targets (often Attention) rather than by response mechanics. 
Another relevant convention is that when metadata explicitly states a clinical recruitment (e.g., Parkinson’s, TBI), Pathology follows that fact; here the metadata explicitly states “healthy”, so Pathology should be Healthy regardless of oddball/BCI paradigm.","metadata_analysis":"Key metadata facts:\n1) Population: “Health status: healthy” and “Fifteen healthy subjects participated.”\n2) Stimulus modality: “Stimulus type: auditory tones”, “Stimulus modalities: auditory”, and “Primary modality: auditory”.\n3) Cognitive demand / task goal: “Task type: auditory stream segregation with oddball” and “Instructions: focus selectively on deviant stimuli in one of the streams and count target deviant stimuli”.\nThese directly support Healthy + Auditory + an attention/target-selection construct (oddball P300).","paper_abstract_analysis":"The included abstract reiterates the key task-purpose facts: “The ASME (Auditory Stream segregation Multiclass ERP) paradigm is used for an auditory brain-computer interface (BCI)… Fifteen healthy subjects participated.” It also frames the paradigm around “P300” ERPs to targets, consistent with an oddball target-detection/selection (attention) construct rather than resting-state, motor, or affect.","evidence_alignment_check":"Pathology:\n- Metadata says: “Health status: healthy”; “Fifteen healthy subjects participated.”\n- Few-shot pattern suggests: when diagnosis is named, use that diagnosis; otherwise Healthy for normative cohorts.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: “Stimulus type: auditory tones”; “Primary modality: auditory”; “Stimulus modalities: auditory”.\n- Few-shot pattern suggests: modality is defined by stimulus channel (e.g., oddball with auditory cues -> Auditory; cross-modal oddball -> Multisensory).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: “Task type: auditory stream segregation with oddball” and “focus selectively on deviant stimuli… count target deviant stimuli” (classic P300 target-detection).\n- 
Few-shot pattern suggests: oddball/target-detection tasks map to an attention/target-selection construct (see Oddball example; although labeled Clinical/Intervention due to PD, the task itself is an oddball target-response paradigm).\n- Alignment: ALIGN (Attention is most consistent with selective focus and target detection).","decision_summary":"Top-2 candidates (with head-to-head comparison):\n\nPathology:\n1) Healthy (WINNER)\n- Evidence: “Health status: healthy”; “Fifteen healthy subjects participated.”\n2) Unknown (runner-up)\n- Would apply if no recruitment/health info were provided; not the case here.\nAlignment status: aligns with metadata and few-shot conventions.\n\nModality:\n1) Auditory (WINNER)\n- Evidence: “Stimulus type: auditory tones”; “Stimulus modalities: auditory”; “Primary modality: auditory”.\n2) Multisensory (runner-up)\n- Would require both auditory and visual/tactile stimuli; metadata specifies auditory as primary and stimulus type as auditory tones.\nAlignment status: aligns.\n\nType:\n1) Attention (WINNER)\n- Evidence: “auditory stream segregation with oddball”; “focus selectively on deviant stimuli… count target deviant stimuli”; explicit “Paradigm: p300” / target vs nontarget ERP.\n2) Perception (runner-up)\n- Possible because it involves auditory discrimination/segregation, but the primary instruction is selective attention to targets (P300 BCI target detection) rather than perceptual threshold/discrimination as the main aim.\nAlignment status: aligns; no override needed.\n\nConfidence justification:\n- Pathology 0.9: supported by multiple explicit statements (“Health status: healthy”, “Fifteen healthy subjects participated”, plus demographic summary).\n- Modality 0.9: multiple explicit modality lines (“Stimulus type: auditory tones”, “Stimulus modalities: auditory”, “Primary modality: auditory”).\n- Type 0.8: explicit oddball/P300 and selective-attention instruction strongly support Attention, though Perception is a plausible 
runner-up due to “stream segregation”."}},"canonical_name":null,"name_confidence":0.66,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Kojima2024B_P300"}}