{"success":true,"database":"eegdash","data":{"_id":"69d16e06897a7725c66f4cde","dataset_id":"nm000340","associated_paper_doi":null,"authors":["Boyla Mainsah","Chance Fleeting","Thomas Balmat","Eric Sellers","Leslie Collins"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.13026/0byy-ry86","datatypes":["eeg"],"demographics":{"subjects_count":20,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":{"m":6,"f":14},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000340","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"2f21dd7a546b1edea72fb7f94b8ebb5d2fb739d79bbae33231f2a9b83a3d5060","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Mainsah et al. 2025 — bigP3BCI: An Open, Diverse and Machine Learning Ready P300-based Brain-Computer Interface Dataset (Study J)","readme":"Mainsah2025-J\n=============\nBigP3BCI Study J — 9x8 performance-based/row-column (20 healthy subjects).\nDataset Overview\n----------------\n  Code: Mainsah2025-J\n  Paradigm: p300\n  DOI: 10.13026/0byy-ry86\n  Subjects: 20\n  Sessions per subject: 1\n  Events: Target=2, NonTarget=1\n  Trial interval: [0, 1.0] s\nAcquisition\n-----------\n  Sampling rate: 256.0 Hz\n  Number of channels: 16\n  Channel types: eeg=16\n  Montage: standard_1020\n  Hardware: g.USBamp (g.tec)\n  Line frequency: 60.0 Hz\nParticipants\n------------\n  Number of subjects: 20\n  Health status: healthy\nExperimental Protocol\n---------------------\n  Paradigm: p300\n  Number of classes: 2\n  Class labels: Target, NonTarget\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ 
Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: p300\nSignal Processing\n-----------------\n  Feature extraction: P300_ERP_detection\nCross-Validation\n----------------\n  Method: calibration-then-test\n  Evaluation type: within_subject\nBCI Application\n---------------\n  Applications: speller\n  Environment: laboratory\n  Online feedback: True\nTags\n----\n  Modality: visual\n  Type: perception\nDocumentation\n-------------\n  Description: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms.\n  DOI: 10.13026/0byy-ry86\n  License: CC-BY-4.0\n  Investigators: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins\n  Institution: Duke University; East Tennessee State University\n  Country: US\n  Repository: PhysioNet\n  Data URL: https://physionet.org/content/bigp3bci/1.0.0/\n  Publication year: 2025\nReferences\n----------\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":454340040,"source":"nemar","storage":{"backend":"s3","base":"s3://openneuro.org/nm000340","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-22T12:52:28.712754+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":502,"computed_title":"Mainsah et al. 2025 — bigP3BCI: An Open, Diverse and Machine Learning Ready P300-based Brain-Computer Interface Dataset (Study J)","nchans_counts":[{"val":16,"count":502}],"sfreq_counts":[{"val":256.0,"count":502}],"stats_computed_at":"2026-04-22T23:16:00.314629+00:00","total_duration_s":35698.0390625,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"8127f337e46ce67d","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.7},"reasoning":{"few_shot_analysis":"Most similar few-shot paradigms are oddball/P300-like target vs non-target designs. In the Parkinson's cross-modal oddball example, the paradigm is explicitly described as an oddball task with standard vs oddball cues and a required response; it is labeled with a cognitive-control/clinical framing (Type: Clinical/Intervention driven by the clinical cohort). In the TBI three-stim auditory oddball example, the paradigm is also an oddball target/standard/novel setup. These examples guide the convention that target-vs-nontarget (oddball/P300) datasets are typically categorized by the attentional/target-detection construct (unless the dataset’s main purpose is clinical/intervention). 
Here, the cohort is healthy and the paradigm is a P300 speller (a classic attention-to-target ERP/BCI setting), so the few-shot convention suggests an attention/target-detection type label rather than a clinical one.","metadata_analysis":"Key quoted metadata facts:\n1) Population: \"20 healthy subjects\" and \"Health status: healthy\".\n2) Paradigm/task: \"Paradigm: p300\" and \"BCI Application — Applications: speller\".\n3) Event structure: \"Events: Target=2, NonTarget=1\".\n4) Stimulus modality: HED labels include \"Visual-presentation\" under both \"Target\" and \"NonTarget\"; also \"Tags — Modality: visual\".\n5) Potential conflicting in-metadata tag for Type: \"Tags — Type: perception\".","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: \"20 healthy subjects\" / \"Health status: healthy\".\n- Few-shot pattern suggests: oddball/P300 can appear in both healthy and clinical datasets; pathology should follow recruitment.\n- Alignment: ALIGN. Final uses metadata fact.\n\nModality:\n- Metadata says: HED annotations include \"Visual-presentation\" for both Target and NonTarget; also \"Tags — Modality: visual\".\n- Few-shot pattern suggests: oddball/speller paradigms are often visual (or multisensory in the PD example); modality should follow stimulus channel.\n- Alignment: ALIGN.\n\nType:\n- Metadata says (explicit tag): \"Tags — Type: perception\"; and paradigm indicates \"p300\" with \"Target\" vs \"NonTarget\".\n- Few-shot pattern suggests: oddball/P300 target-detection paradigms are commonly treated as attention/target detection constructs (unless primarily clinical).\n- Alignment: PARTIAL CONFLICT. The dataset includes an explicit tag 'Type: perception', but the task structure (P300 speller target detection) more strongly matches the catalog definition of Attention (selective attention to targets) rather than Perception (sensory discrimination) as the primary construct. 
This is an interpretation choice for cognitive-construct labeling; no clinical-population fact is being overridden.","decision_summary":"Top-2 candidate labels with head-to-head selection:\n\nPathology:\n- Candidate 1: Healthy\n  Evidence: \"20 healthy subjects\"; \"Health status: healthy\".\n- Candidate 2: Unknown\n  Evidence: would apply only if health status were not specified.\nDecision: Healthy wins (explicit recruitment/health status). Confidence=0.9 (2+ explicit quotes, unambiguous).\n\nModality:\n- Candidate 1: Visual\n  Evidence: HED for Target/NonTarget includes \"Visual-presentation\"; \"Tags — Modality: visual\"; P300 speller grid implied by \"row-column\" and \"speller\".\n- Candidate 2: Multisensory\n  Evidence: not supported here; no auditory/tactile channel mentioned.\nDecision: Visual wins. Confidence=0.9 (3+ explicit supporting phrases).\n\nType:\n- Candidate 1: Attention\n  Evidence: \"Paradigm: p300\"; \"Events: Target=2, NonTarget=1\" indicates an oddball-like target-detection paradigm; \"Applications: speller\" implies selective attention to the intended character.\n- Candidate 2: Perception\n  Evidence: explicit metadata tag \"Tags — Type: perception\"; visual stimulus presentations.\nDecision: Attention wins by construct definition for P300 target selection, despite the in-file tag 'perception'. Confidence=0.7 (supported by 1–2 strong task-structure quotes, but with an explicit competing tag inside metadata)."}},"canonical_name":null,"name_confidence":0.66,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Mainsah2025_J"}}
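A minimal sketch of consuming a record like the one above with only Python's standard library. The inline `RESPONSE` string is a trimmed copy of the response (the real record carries many more fields); field names are taken verbatim from the record itself, and the per-file channel/rate histograms (`nchans_counts`, `sfreq_counts`) are folded into plain dicts for inspection.

```python
import json

# A trimmed copy of the API response above; the full record has many more
# fields (readme, tags, storage, timestamps, ...), all under "data".
RESPONSE = """
{"success": true,
 "data": {"dataset_id": "nm000340",
          "total_files": 502,
          "total_duration_s": 35698.0390625,
          "nchans_counts": [{"val": 16, "count": 502}],
          "sfreq_counts": [{"val": 256.0, "count": 502}]}}
"""

data = json.loads(RESPONSE)["data"]

# Each entry pairs a value with the number of files carrying it; here all
# 502 recordings share one channel count (16) and one sampling rate (256 Hz).
nchans = {c["val"]: c["count"] for c in data["nchans_counts"]}
sfreqs = {c["val"]: c["count"] for c in data["sfreq_counts"]}

hours = data["total_duration_s"] / 3600  # total EEG duration, ~9.9 h
```

Checking that the histograms collapse to a single value is a quick sanity test before batch processing: a dataset with mixed sampling rates would need resampling before the recordings can be pooled.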