{"success":true,"database":"eegdash","data":{"_id":"69d16e06897a7725c66f4ce7","dataset_id":"nm000351","associated_paper_doi":null,"authors":["Boyla Mainsah","Chance Fleeting","Thomas Balmat","Eric Sellers","Leslie Collins"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.13026/0byy-ry86","datatypes":["eeg"],"demographics":{"subjects_count":19,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000351","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"1f42c701b9bafdcdbb885a278963198ee1b648417af8dd5b6705a66584815f2e","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Mainsah et al. 2025 — bigP3BCI: An Open, Diverse and Machine Learning Ready P300-based Brain-Computer Interface Dataset (Study P)","readme":"Mainsah2025-P\n=============\nBigP3BCI Study P — 9x8 predictive/non-predictive spelling (19 ALS subjects).\nDataset Overview\n----------------\n  Code: Mainsah2025-P\n  Paradigm: p300\n  DOI: 10.13026/0byy-ry86\n  Subjects: 19\n  Sessions per subject: 2\n  Events: Target=2, NonTarget=1\n  Trial interval: [0, 1.0] s\nAcquisition\n-----------\n  Sampling rate: 256.0 Hz\n  Number of channels: 32\n  Channel types: eeg=32\n  Montage: standard_1020\n  Hardware: g.USBamp (g.tec)\n  Line frequency: 60.0 Hz\nParticipants\n------------\n  Number of subjects: 19\n  Health status: healthy\nExperimental Protocol\n---------------------\n  Paradigm: p300\n  Number of classes: 2\n  Class labels: Target, NonTarget\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: p300\nSignal Processing\n-----------------\n  Feature extraction: P300_ERP_detection\nCross-Validation\n----------------\n  Method: calibration-then-test\n  Evaluation type: within_subject\nBCI Application\n---------------\n  Applications: speller\n  Environment: laboratory\n  Online feedback: True\nTags\n----\n  Modality: visual\n  Type: perception\nDocumentation\n-------------\n  Description: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms.\n  DOI: 10.13026/0byy-ry86\n  License: CC-BY-4.0\n  Investigators: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins\n  Institution: Duke University; East Tennessee State University\n  Country: US\n  Repository: PhysioNet\n  Data URL: https://physionet.org/content/bigp3bci/1.0.0/\n  Publication year: 2025\nReferences\n----------\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. 
J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1"],"size_bytes":1596517117,"source":"openneuro","storage":{"backend":"s3","base":"s3://openneuro.org/nm000351","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-22T12:52:31.292623+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":228,"computed_title":"Mainsah et al. 2025 — bigP3BCI: An Open, Diverse and Machine Learning Ready P300-based Brain-Computer Interface Dataset (Study P)","nchans_counts":[{"val":32,"count":228}],"sfreq_counts":[{"val":256.0000930697907,"count":152},{"val":256.00008203487505,"count":38},{"val":256.0,"count":22},{"val":256.0000983717175,"count":1},{"val":256.0001237687188,"count":1},{"val":256.00008694547637,"count":1},{"val":256.00011666197287,"count":1},{"val":256.00007259581656,"count":1},{"val":256.00009240629606,"count":1},{"val":256.0001196052653,"count":1},{"val":256.0000936228914,"count":1},{"val":256.00006037203326,"count":1},{"val":256.00011860780387,"count":1},{"val":256.00009466146327,"count":1},{"val":256.0001109005733,"count":1},{"val":256.0001141647201,"count":1},{"val":256.00009044741086,"count":1},{"val":256.00008676866054,"count":1},{"val":256.0000896869156,"count":1}],"stats_computed_at":"2026-04-22T23:16:00.314743+00:00","total_duration_s":63669.09190617447,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"33baf9cd34c8c7e1","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Other"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.75,"modality":0.9,"type":0.7},"reasoning":{"few_shot_analysis":"Most similar few-shot convention is the oddball-style datasets (e.g., the \"Cross-modal Oddball Task\" and the \"Three-Stim Auditory Oddball\" example): they use a Target vs NonTarget/Standard structure to elicit P3/P300 responses and are labeled by dominant stimulus modality plus a cognitive-purpose label (not by button pressing). A P300 speller is essentially a visual oddball/target-detection paradigm; unlike the motor imagery example (EEGMMIDB) it is not primarily about movement. 
This supports labeling Modality as Visual and Type closer to Attention (target detection) or Perception (stimulus discrimination); the few-shot style favors choosing the cognitive construct tied to the target-detection ERP rather than “BCI” as a type.","metadata_analysis":"Key facts from metadata:\n- Clinical population stated in title/readme: \"BigP3BCI Study P — 9x8 predictive/non-predictive spelling (19 ALS subjects).\"\n- Conflicting participant status line: \"Health status: healthy\" (but this contradicts the explicit ALS recruitment statement above).\n- Task/paradigm: \"Paradigm: p300\" and \"Events: Target=2, NonTarget=1\".\n- Stimulus modality explicitly annotated: HED shows \"Visual-presentation\" under both Target and NonTarget, and the readme tag says \"Modality: visual\".\n- BCI use-case: \"Applications: speller\" and \"Online feedback: True\".","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: \"(19 ALS subjects)\" and also says \"Health status: healthy\".\n- Few-shot pattern suggests: when a clinical population is explicitly named, use a clinical pathology label (e.g., PD examples labeled Parkinson's).\n- Alignment: CONFLICT within metadata (ALS vs healthy), but few-shot convention + the explicit recruitment phrase support using the clinical-population fact. Per override rule, explicit clinical population FACT takes precedence.\n\nModality:\n- Metadata says: HED annotations include \"Visual-presentation\" for Target and NonTarget; also \"Modality: visual\".\n- Few-shot pattern suggests: label modality by stimulus channel (e.g., auditory tasks labeled Auditory).\n- Alignment: ALIGNS (clearly Visual).\n\nType:\n- Metadata says: \"Paradigm: p300\", \"Events: Target=2, NonTarget=1\", and \"Applications: speller\" (a target-detection oddball-like paradigm).\n- Few-shot pattern suggests: oddball/target-detection paradigms are typically categorized by the cognitive construct of attention/target detection or perception/discrimination, not motor response.\n- Alignment: PARTIAL—metadata itself includes a tag \"Type: perception\", but the task structure (P300 target detection) is strongly attention-based. No explicit statement of the scientific aim beyond BCI spelling; thus Attention vs Perception remains a close call, resolved in favor of Attention based on the P300/target-detection convention.","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n1) Other — Evidence: \"(19 ALS subjects)\" indicates amyotrophic lateral sclerosis, which is not an allowed specific label; map to Other.\n2) Healthy — Evidence: \"Health status: healthy\".\nHead-to-head: Other wins because explicit recruitment fact \"ALS subjects\" outweighs the conflicting generic health-status line. 
(Conflict noted.)\nConfidence evidence: 1 strong explicit quote for ALS + 1 conflicting quote for healthy => moderate confidence.\n\nModality:\n1) Visual — Evidence: HED \"Visual-presentation\"; readme tag \"Modality: visual\".\n2) Multisensory — (weak) no supporting evidence.\nHead-to-head: Visual clearly wins.\nConfidence evidence: 2 explicit modality quotes.\n\nType:\n1) Attention — Evidence: classic P300 target-detection structure \"Paradigm: p300\" + \"Events: Target=2, NonTarget=1\"; speller relies on attending to the target.\n2) Perception — Evidence: readme tag \"Type: perception\".\nHead-to-head: Attention slightly stronger given P300 target-detection/oddball convention, but the dataset’s own tag makes this somewhat ambiguous.\nConfidence evidence: 2 explicit task-structure quotes, plus 1 competing tag supporting Perception => moderate confidence."}},"canonical_name":null,"name_confidence":0.72,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Mainsah2025_P"}}
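
A minimal consumption sketch (Python, standard library only), assuming the JSON record above has been saved verbatim to a file; the filename record.json and the helper name summarize are illustrative assumptions, while every field name is taken directly from the record:

import json

def summarize(path: str) -> None:
    # Load the captured API response; the payload sits under "data".
    with open(path, encoding="utf-8") as f:
        record = json.load(f)["data"]

    # Identity and size fields.
    print(record["dataset_id"], "-", record["name"])
    print("subjects:", record["demographics"]["subjects_count"])
    print(f"files: {record['total_files']}, "
          f"size: {record['size_bytes'] / 2**30:.2f} GiB")

    # total_duration_s is in seconds: 63669.09 s is roughly 17.7 h.
    print(f"duration: {record['total_duration_s'] / 3600:.1f} h")

    # sfreq_counts holds per-file nominal rates clustered just above
    # 256 Hz; the counts sum to total_files (228), so a count-weighted
    # mean gives the dataset-wide effective sampling rate.
    pairs = record["sfreq_counts"]
    n = sum(p["count"] for p in pairs)
    mean_sfreq = sum(p["val"] * p["count"] for p in pairs) / n
    print(f"mean sampling rate: {mean_sfreq:.4f} Hz across {n} files")

    # Storage pointers: raw_key and dep_keys are relative to storage.base.
    s3 = record["storage"]
    print("raw:", f"{s3['base']}/{s3['raw_key']}")

if __name__ == "__main__":
    summarize("record.json")

Run against this record, the sketch prints about 1.49 GiB, roughly 17.7 hours of EEG, and a weighted mean rate just above the nominal 256 Hz.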