{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cd8","dataset_id":"nm000326","associated_paper_doi":null,"authors":["Boyla Mainsah","Chance Fleeting","Thomas Balmat","Eric Sellers","Leslie Collins"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.13026/0byy-ry86","datatypes":["eeg"],"demographics":{"subjects_count":19,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000326","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"beddee69515d3603d8e989756bfb103787b6acffad9a89bb84a4897e45e87cfa","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Mainsah et al. 2025 — bigP3BCI: An Open, Diverse and Machine Learning Ready P300-based Brain-Computer Interface Dataset (Study C)","readme":"Mainsah2025-C\n=============\nBigP3BCI Study C — 6x6 checkerboard with ERN (19 healthy subjects).\nDataset Overview\n----------------\n  Code: Mainsah2025-C\n  Paradigm: p300\n  DOI: 10.13026/0byy-ry86\n  Subjects: 19\n  Sessions per subject: 1\n  Events: Target=2, NonTarget=1\n  Trial interval: [0, 1.0] s\nAcquisition\n-----------\n  Sampling rate: 256.0 Hz\n  Number of channels: 32\n  Channel types: eeg=32\n  Montage: standard_1020\n  Hardware: g.USBamp (g.tec)\n  Line frequency: 60.0 Hz\nParticipants\n------------\n  Number of subjects: 19\n  Health status: healthy\nExperimental Protocol\n---------------------\n  Paradigm: p300\n  Number of classes: 2\n  Class labels: Target, NonTarget\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ 
Visual-presentation\n    └─ Non-target\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: p300\nSignal Processing\n-----------------\n  Feature extraction: P300_ERP_detection\nCross-Validation\n----------------\n  Method: calibration-then-test\n  Evaluation type: within_subject\nBCI Application\n---------------\n  Applications: speller\n  Environment: laboratory\n  Online feedback: True\nTags\n----\n  Modality: visual\n  Type: perception\nDocumentation\n-------------\n  Description: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms.\n  DOI: 10.13026/0byy-ry86\n  License: CC-BY-4.0\n  Investigators: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins\n  Institution: Duke University; East Tennessee State University\n  Country: US\n  Repository: PhysioNet\n  Data URL: https://physionet.org/content/bigp3bci/1.0.0/\n  Publication year: 2025\nReferences\n----------\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":1321537007,"source":"nemar","storage":{"backend":"s3","base":"s3://openneuro.org/nm000326","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-22T12:52:26.304380+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":341,"computed_title":"Mainsah et al. 2025 — bigP3BCI: An Open, Diverse and Machine Learning Ready P300-based Brain-Computer Interface Dataset (Study C)","nchans_counts":[{"val":32,"count":341}],"sfreq_counts":[{"val":256.0000775610915,"count":284},{"val":256.0001218685495,"count":57}],"stats_computed_at":"2026-04-22T23:16:00.314558+00:00","total_duration_s":52843.65092272895,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"4b366fd512659188","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot paradigms are the oddball-style datasets: (1) “Cross-modal Oddball Task” (Parkinson’s) and (2) “EEG: Three-Stim Auditory Oddball and Rest in Acute and Chronic TBI”. These examples establish the convention that target vs non-target (oddball-like) event structures are labeled by stimulus modality (auditory vs multisensory) and a cognitive Type related to detecting/attending to rare targets (often mapped to Attention/Perception depending on emphasis). 
Our dataset is explicitly a P300 (oddball ERP) speller with Target/NonTarget events, so we follow the same convention: Visual modality, and a Type centered on target detection/attentional selection rather than motor output.","metadata_analysis":"Key metadata facts:\n- Population/clinical status: “BigP3BCI Study C — 6x6 checkerboard with ERN (19 healthy subjects).” and “Health status: healthy” and “Number of subjects: 19”.\n- Stimulus modality: HED annotations for both event types include “Visual-presentation” (under both Target and NonTarget), and the dataset includes “6x6 checkerboard”, plus “Tags … Modality: visual”.\n- Paradigm/purpose: “Paradigm: p300”, “Events: Target=2, NonTarget=1”, “Feature extraction: P300_ERP_detection”, and “BCI Application … Applications: speller” with “Online feedback: True”.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: “19 healthy subjects” / “Health status: healthy”.\n- Few-shot pattern suggests: oddball/P300 datasets can be either clinical (e.g., Parkinson’s) or healthy depending on recruitment.\n- Alignment: ALIGN (explicit healthy recruitment).\n\nModality:\n- Metadata says: HED includes “Visual-presentation” for Target/NonTarget and “Modality: visual”.\n- Few-shot pattern suggests: oddball modality is labeled by stimulus channel (auditory vs visual vs multisensory).\n- Alignment: ALIGN (clearly visual stimuli).\n\nType:\n- Metadata says: “Paradigm: p300”, “Events: Target=2, NonTarget=1”, and “Applications: speller” (P300-BCI target selection).\n- Few-shot pattern suggests: oddball/target detection paradigms are typically categorized as Attention or Perception depending on whether emphasis is selective attention/target detection vs sensory discrimination.\n- Alignment: ALIGN (P300 speller fundamentally relies on selective attention to target flashes).","decision_summary":"Top-2 candidates (with head-to-head selection):\n\nPathology:\n1) Healthy — Supported by 
“19 healthy subjects” and “Health status: healthy”.\n2) Unknown — would apply only if health status were not stated.\nWinner: Healthy. (Alignment: aligned)\n\nModality:\n1) Visual — Supported by HED “Visual-presentation”, “6x6 checkerboard”, and “Modality: visual”.\n2) Multisensory — possible in some oddball tasks, but no auditory/tactile stimuli indicated here.\nWinner: Visual. (Alignment: aligned)\n\nType:\n1) Attention — Supported by “Paradigm: p300” with “Target… NonTarget” (oddball target detection) and P300 speller use (“Applications: speller”, “Online feedback: True”), which depends on selective attention to targets.\n2) Perception — plausible because it is stimulus-driven ERP classification, but less specific than attentional selection for P300.\nWinner: Attention, because P300 oddball/speller performance is primarily driven by attentional selection of rare targets rather than perceptual discrimination per se. (Alignment: aligned)\n\nConfidence justification:\n- Pathology confidence is high due to multiple explicit statements of healthy recruitment (“19 healthy subjects”, “Health status: healthy”).\n- Modality confidence is high due to multiple explicit visual indicators (HED “Visual-presentation”, “6x6 checkerboard”, “Modality: visual”).\n- Type confidence is moderate-high: explicit P300/Target-vs-NonTarget + speller context strongly implies attention, but Perception remains a reasonable runner-up label."}},"canonical_name":null,"name_confidence":0.98,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Mainsah2025_C"}}