{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cc6","dataset_id":"nm000259","associated_paper_doi":null,"authors":["William Speier","Corey Arnold","Aniket Deshpande","Nader Pouratian"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.1371/journal.pone.0175382","datatypes":["eeg"],"demographics":{"subjects_count":10,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000259","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"16b4cedb366544c021bb1cd76c67ba8a1700ad22a530b22e6d28b9a37d8c45c3","license":"CC0","n_contributing_labs":null,"name":"Speier et al. 2017 — A comparison of stimulus types in online classification of the P300 speller using language models","readme":"Speier2017\n==========\nP300 speller dataset from Speier et al 2017.\nDataset Overview\n----------------\n  Code: Speier2017\n  Paradigm: p300\n  DOI: 10.1371/journal.pone.0175382\n  Subjects: 10\n  Sessions per subject: 2\n  Events: Target=2, NonTarget=1\n  Trial interval: [0, 0.8] s\n  Runs per session: 3\n  File format: BCI2000\nAcquisition\n-----------\n  Sampling rate: 256.0 Hz\n  Number of channels: 32\n  Channel types: eeg=32\n  Channel names: Fz, FC1, FCz, FC2, FC4, FC6, C4, C6, CP4, CP6, FC3, FC5, C3, C5, CP3, CP5, CP1, P1, Cz, CPz, Pz, POz, CP2, P2, PO7, PO3, O1, Oz, Iz, O2, PO4, PO8\n  Montage: standard_1005\n  Hardware: g.tec amplifier\n  Reference: left ear\n  Ground: AFz\n  Line frequency: 60.0 Hz\nParticipants\n------------\n  Number of subjects: 10\n  Health status: healthy\n  Age: min=20, max=35\n  Species: human\nExperimental Protocol\n---------------------\n  Paradigm: p300\n  Number of classes: 2\n  Class labels: Target, NonTarget\n  Trial duration: 1.0 s\n  Study design: P300 
row-column speller; 2 stimulus conditions (Famous Faces, Inverting); 6x6 character matrix\n  Feedback type: visual\n  Stimulus type: flash / famous face overlay\n  Stimulus modalities: visual\n  Primary modality: visual\n  Mode: online\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: p300\n  Inter-stimulus interval: 25.0 ms\n  Stimulus onset asynchrony: 125.0 ms\nData Structure\n--------------\n  Trials: ~1200 flashes per training run (10 chars x 10 seq x 12)\n  Trials context: per_run\nTags\n----\n  Pathology: Healthy\n  Modality: ERP\n  Type: P300\nDocumentation\n-------------\n  DOI: 10.1371/journal.pone.0175382\n  License: CC0\n  Investigators: William Speier, Corey Arnold, Aniket Deshpande, Nader Pouratian\n  Institution: University of California, Los Angeles\n  Country: US\n  Data URL: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/PHHHB6\n  Publication year: 2017\nReferences\n----------\nSpeier, W., Deshpande, A., & Pouratian, N. (2017). A comparison of stimulus types in online classification of the P300 speller using language models. PLoS ONE, 12(4), e0175382. https://doi.org/10.1371/journal.pone.0175382\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. 
J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1"],"size_bytes":304322095,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000259","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:09:45.246092+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-11T21:14:09Z"},"total_files":60,"computed_title":"Speier et al. 2017 — A comparison of stimulus types in online classification of the P300 speller using language models","nchans_counts":[{"val":32,"count":60}],"sfreq_counts":[{"val":256.0,"count":60}],"stats_computed_at":"2026-05-01T13:49:34.646370+00:00","total_duration_s":12155.765625,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"81a6f2738178b8a8","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Closest few-shot convention matches are the oddball/target-vs-nontarget ERP paradigms. Example: the Cross-modal Oddball Task (Parkinson's; labeled Type=Clinical/Intervention due to pathology focus) describes an \"oddball task\" with standard vs oddball cues, aligning with P300-style target detection. Example: \"EEG: Three-Stim Auditory Oddball and Rest in Acute and Chronic TBI\" is also an oddball target/standard ERP design. 
These examples guide the mapping that target-vs-nontarget ERP paradigms are best captured as an attention/cognitive-control style construct (here: Attention) rather than Motor or Resting-state. For Modality, the few-shot examples show that stimulus channel (e.g., auditory tones vs visual stimuli) determines the Modality label; this dataset explicitly states visual stimulus presentation.","metadata_analysis":"Key stated facts:\n- Population: \"Health status: healthy\" and also \"Tags\\n----\\n  Pathology: Healthy\".\n- Paradigm/task: \"P300 speller dataset\" and \"Study design: P300 row-column speller\" with \"Events: Target=2, NonTarget=1\".\n- Stimulus/input channel: \"Stimulus modalities: visual\" and \"Primary modality: visual\" plus \"Stimulus type: flash / famous face overlay\".\nThese indicate a healthy cohort performing a visual P300 speller (target detection) task.","paper_abstract_analysis":"No useful paper information. (Only a DOI is provided; no abstract text included in the metadata.)","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\"; also \"Pathology: Healthy\" (Tags section).\n- Few-shot pattern suggests: use explicit recruitment condition when stated.\n- Alignment: ALIGN (explicit healthy cohort).\n\nModality:\n- Metadata says: \"Stimulus modalities: visual\" and \"Primary modality: visual\".\n- Few-shot pattern suggests: modality is based on stimulus channel (e.g., oddball tones -> Auditory; dot motion -> Visual).\n- Alignment: ALIGN (visual stimuli dominate).\n\nType:\n- Metadata says: \"Paradigm: p300\"; \"Events: Target... 
NonTarget\"; \"P300 row-column speller\".\n- Few-shot pattern suggests: target/oddball ERP paradigms are typically categorized under Attention (target detection/attentional selection) rather than Motor or Resting-state.\n- Alignment: ALIGN (attention-driven P300 target selection is the core construct).","decision_summary":"Top-2 comparative selection:\n\n1) Pathology\n- Candidate A: Healthy\n  Evidence: \"Health status: healthy\"; \"Pathology: Healthy\".\n- Candidate B: Unknown\n  Evidence: would apply only if health status were not provided.\nHead-to-head: Healthy wins due to explicit population statement.\n\n2) Modality\n- Candidate A: Visual\n  Evidence: \"Stimulus modalities: visual\"; \"Primary modality: visual\"; \"Stimulus type: flash / famous face overlay\".\n- Candidate B: Other\n  Evidence: could be considered if stimuli were not clearly sensory-specific.\nHead-to-head: Visual wins with explicit modality fields.\n\n3) Type\n- Candidate A: Attention\n  Evidence: P300 target detection structure: \"Events: Target... 
NonTarget\" and \"P300 row-column speller\" implies attentional selection to rare/goal targets.\n- Candidate B: Perception\n  Evidence: involves visual flashes/faces, but the primary aim is target detection/selection (P300) rather than sensory discrimination per se.\nHead-to-head: Attention wins because the defining construct of a P300 speller is attentional target selection (ERP-based BCI), not perceptual thresholding.\n\nConfidence justification:\n- Pathology 0.9: 2 explicit quotes directly stating healthy status/pathology.\n- Modality 0.9: multiple explicit modality lines (stimulus modalities + primary modality + stimulus type).\n- Type 0.8: explicit P300/Target-NonTarget paradigm text + strong few-shot analog to oddball/target-detection conventions, but no abstract text to further confirm study aim wording."}},"canonical_name":null,"name_confidence":0.9,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Speier2017"}}