{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cdb","dataset_id":"nm000336","associated_paper_doi":null,"authors":["Boyla Mainsah","Chance Fleeting","Thomas Balmat","Eric Sellers","Leslie Collins"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.13026/0byy-ry86","datatypes":["eeg"],"demographics":{"subjects_count":20,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000336","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"dbd6d43fd9f3111a7442a6eb943f7841c1a174ea387d2832fcaa490817ed070a","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Mainsah et al. 2025 — bigP3BCI: An Open, Diverse and Machine Learning Ready P300-based Brain-Computer Interface Dataset (Study R)","readme":"Mainsah2025-R\n=============\nBigP3BCI Study R — 9x8 multi-face paradigms (20 ALS subjects).\nDataset Overview\n----------------\n  Code: Mainsah2025-R\n  Paradigm: p300\n  DOI: 10.13026/0byy-ry86\n  Subjects: 20\n  Sessions per subject: 2\n  Events: Target=2, NonTarget=1\n  Trial interval: [0, 1.0] s\nAcquisition\n-----------\n  Sampling rate: 256.0 Hz\n  Number of channels: 32\n  Channel types: eeg=32\n  Montage: standard_1020\n  Hardware: g.USBamp (g.tec)\n  Line frequency: 60.0 Hz\nParticipants\n------------\n  Number of subjects: 20\n  Health status: healthy\nExperimental Protocol\n---------------------\n  Paradigm: p300\n  Number of classes: 2\n  Class labels: Target, NonTarget\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: p300\nSignal Processing\n-----------------\n  Feature extraction: P300_ERP_detection\nCross-Validation\n----------------\n  Method: calibration-then-test\n  Evaluation type: within_subject\nBCI Application\n---------------\n  Applications: speller\n  Environment: laboratory\n  Online feedback: True\nTags\n----\n  Modality: visual\n  Type: perception\nDocumentation\n-------------\n  Description: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms.\n  DOI: 10.13026/0byy-ry86\n  License: CC-BY-4.0\n  Investigators: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins\n  Institution: Duke University; East Tennessee State University\n  Country: US\n  Repository: PhysioNet\n  Data URL: https://physionet.org/content/bigp3bci/1.0.0/\n  Publication year: 2025\nReferences\n----------\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). 
EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1"],"size_bytes":2117431908,"source":"nemar","storage":{"backend":"s3","base":"s3://openneuro.org/nm000336","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-22T12:52:27.600031+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":480,"computed_title":"Mainsah et al. 2025 — bigP3BCI: An Open, Diverse and Machine Learning Ready P300-based Brain-Computer Interface Dataset (Study R)","nchans_counts":[{"val":32,"count":480}],"sfreq_counts":[{"val":256.0000579103764,"count":249},{"val":256.00011324306917,"count":132},{"val":256.00009140820043,"count":39},{"val":256.0000766323896,"count":27},{"val":256.000065968772,"count":18},{"val":256.0,"count":15}],"stats_computed_at":"2026-04-22T23:16:00.314585+00:00","total_duration_s":84694.10175496984,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"d4bac08c6dd3f5b6","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Other"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.7},"reasoning":{"few_shot_analysis":"Most similar few-shot paradigms are the oddball-style tasks: (1) the Cross-modal Oddball Task example (Parkinson's; labeled Modality=Multisensory, Type=Clinical/Intervention) and (2) the Three-Stim Auditory Oddball in TBI example (labeled Modality=Auditory, Type=Decision-making). These examples guide the convention that oddball/target-vs-nontarget paradigms are labeled by the stimulus modality (auditory/visual/multisensory) and the Type reflects the primary construct (often attention/cognitive control, sometimes decision-related depending on dataset framing). Our dataset is explicitly a P300 speller BCI (target vs nontarget flashes), which is closest to an oddball/target-detection framing, suggesting an Attention-leaning Type rather than Motor/Memory/etc.","metadata_analysis":"Key quoted metadata facts:\n1) Clinical population conflict: \"BigP3BCI Study R — 9x8 multi-face paradigms (20 ALS subjects).\" vs \"Health status: healthy\".\n2) Task/paradigm: \"Paradigm: p300\" and \"Events: Target=2, NonTarget=1\".\n3) Stimulus modality: HED annotations include \"Visual-presentation\" under both Target and NonTarget, and also \"Tags ---- Modality: visual\".\n4) Application framing: \"BCI Application ---- Applications: speller\" and \"Online feedback: True\".\nThese indicate a visual P300 (oddball-like) speller BCI dataset with a clinical group (ALS) despite an inconsistent 'health status' line.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: \"(20 ALS subjects)\" (explicit clinical population) but also says \"Health status: healthy\".\n- Few-shot pattern suggests: when a specific patient group is recruited (e.g., Parkinson's, TBI), Pathology should reflect that clinical population.\n- Alignment/Conflict: CONFLICT within metadata (ALS vs healthy). 
Resolution: use the explicit recruitment description \"20 ALS subjects\" as the controlling fact; since ALS is not an allowed specific pathology label here, map to \"Other\".\n\nModality:\n- Metadata says: HED tags include \"Visual-presentation\" and readme lists \"Modality: visual\".\n- Few-shot pattern suggests: modality follows stimulus channel (e.g., oddball examples labeled Auditory or Multisensory depending on cues).\n- Alignment/Conflict: ALIGN (clearly visual stimulation).\n\nType:\n- Metadata says: \"Paradigm: p300\", \"Events: Target... NonTarget...\", and \"Applications: speller\" with \"Online feedback: True\"; tags also state \"Type: perception\".\n- Few-shot pattern suggests: oddball/target-detection paradigms are often treated as attention/cognitive control constructs (e.g., oddball tasks used to probe cognitive control/attention), though some datasets may be labeled differently depending on framing.\n- Alignment/Conflict: Partial tension (metadata tag says perception; oddball convention often implies Attention). Because the task is fundamentally target selection among flashes (P300 attention to target), we select Attention as the stronger construct label, while noting Perception as the runner-up.","decision_summary":"Top-2 candidates and final selections:\n\nPathology:\n- Candidate 1: Other — supported by explicit recruitment phrase \"(20 ALS subjects)\" indicating a clinical population not present as a dedicated allowed label.\n- Candidate 2: Healthy — supported only by the conflicting line \"Health status: healthy\".\nHead-to-head: \"20 ALS subjects\" is the more specific recruitment fact; choose Other. (Conflict within metadata; explicit diagnosis recruitment wins.)\n\nModality:\n- Candidate 1: Visual — supported by \"Visual-presentation\" in HED annotations and \"Modality: visual\".\n- Candidate 2: Multisensory/Other — no supporting evidence (no auditory/tactile cues described).\nHead-to-head: Visual clearly dominates.\n\nType:\n- Candidate 1: Attention — supported by oddball-like \"Target\" vs \"NonTarget\" P300 speller structure (selective attention to target flashes) and BCI calibration/testing.\n- Candidate 2: Perception — supported by the dataset tag \"Type: perception\" and the general sensory-evoked ERP framing.\nHead-to-head: Attention is more specific to the cognitive operation in P300 spellers (attend to desired target among non-targets). Evidence is adequate but not extensive, so confidence is moderate.\n\nConfidence justification (quotes/features):\n- Pathology 0.8: strong explicit quote \"(20 ALS subjects)\" but internal conflict with \"Health status: healthy\" reduces certainty.\n- Modality 0.9: multiple explicit supports: \"Visual-presentation\" (HED) + \"Modality: visual\" + P300 speller grid paradigm context.\n- Type 0.7: explicit task structure \"Target\"/\"NonTarget\" + \"Paradigm: p300\" + speller BCI context, but competing metadata tag \"Type: perception\" keeps confidence moderate."}},"canonical_name":null,"name_confidence":0.72,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Mainsah2025_R"}}
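For readers who want to consume this record programmatically, here is a minimal Python sketch that parses the response above and prints the headline acquisition facts. It uses only the standard library; the local filename "nm000336.json" and the helper names are illustrative assumptions, not part of any eegdash client API.

```python
# Minimal sketch: summarize the eegdash record shown above.
# Assumes the response was saved locally as "nm000336.json" (hypothetical
# filename); the helper names below are illustrative, not a real client API.
import json

def weighted_mean(pairs):
    """Count-weighted mean over [{"val": ..., "count": ...}] entries."""
    total = sum(p["count"] for p in pairs)
    return sum(p["val"] * p["count"] for p in pairs) / total

with open("nm000336.json", encoding="utf-8") as f:
    record = json.load(f)["data"]

hours = record["total_duration_s"] / 3600
print(f"{record['dataset_id']}: {record['demographics']['subjects_count']} subjects, "
      f"{record['total_files']} files, {hours:.1f} h of EEG")
print(f"channels per file: {record['nchans_counts'][0]['val']}")
print(f"effective sampling rate: {weighted_mean(record['sfreq_counts']):.5f} Hz "
      f"(nominal 256 Hz)")
print(f"storage: {record['storage']['base']} ({record['storage']['backend']})")
```

The weighted mean is worth computing because "sfreq_counts" holds six slightly different measured rates just above 256 Hz across the 480 files, i.e. per-recording clock estimates rather than a single nominal value.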
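The "storage" block points at the raw BIDS tree on S3 (s3://openneuro.org/nm000336, with dataset_description.json as the root key and the README/participants files as dependencies). Below is a sketch of enumerating those objects with boto3, assuming the OpenNeuro bucket permits anonymous (unsigned) reads as its public datasets typically do.

```python
# Minimal sketch: list a few raw files under storage.base using anonymous
# S3 access. Bucket and prefix come from the "storage" field above;
# unsigned public read access is an assumption about the bucket policy.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
resp = s3.list_objects_v2(Bucket="openneuro.org", Prefix="nm000336/", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(f"{obj['Size']:>12}  {obj['Key']}")
```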