{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cd0","dataset_id":"nm000277","associated_paper_doi":null,"authors":["Boyla Mainsah","Chance Fleeting","Thomas Balmat","Eric Sellers","Leslie Collins"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.13026/0byy-ry86","datatypes":["eeg"],"demographics":{"subjects_count":20,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":{"f":15,"m":5},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000277","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"3537c0f620000d1d3f062f545c1097b35f33822bbded6bc4934af25bc92c8aea","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Mainsah et al. 2025 — bigP3BCI: An Open, Diverse and Machine Learning Ready P300-based Brain-Computer Interface Dataset (Study G)","readme":"Mainsah2025-G\n=============\nBigP3BCI Study G — 9x8 checkerboard/dynamic (20 healthy subjects).\nDataset Overview\n----------------\n  Code: Mainsah2025-G\n  Paradigm: p300\n  DOI: 10.13026/0byy-ry86\n  Subjects: 20\n  Sessions per subject: 1\n  Events: Target=2, NonTarget=1\n  Trial interval: [0, 1.0] s\nAcquisition\n-----------\n  Sampling rate: 256.0 Hz\n  Number of channels: 16\n  Channel types: eeg=16\n  Montage: standard_1020\n  Hardware: g.USBamp (g.tec)\n  Line frequency: 60.0 Hz\nParticipants\n------------\n  Number of subjects: 20\n  Health status: healthy\nExperimental Protocol\n---------------------\n  Paradigm: p300\n  Number of classes: 2\n  Class labels: Target, NonTarget\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: p300\nSignal Processing\n-----------------\n  Feature extraction: P300_ERP_detection\nCross-Validation\n----------------\n  Method: calibration-then-test\n  Evaluation type: within_subject\nBCI Application\n---------------\n  Applications: speller\n  Environment: laboratory\n  Online feedback: True\nTags\n----\n  Modality: visual\n  Type: perception\nDocumentation\n-------------\n  Description: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms.\n  DOI: 10.13026/0byy-ry86\n  License: CC-BY-4.0\n  Investigators: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins\n  Institution: Duke University; East Tennessee State University\n  Country: US\n  Repository: PhysioNet\n  Data URL: https://physionet.org/content/bigp3bci/1.0.0/\n  Publication year: 2025\nReferences\n----------\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. 
J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":349129303,"source":"nemar","storage":{"backend":"s3","base":"s3://openneuro.org/nm000277","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-22T12:52:20.886616+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":320,"computed_title":"Mainsah et al. 2025 — bigP3BCI: An Open, Diverse and Machine Learning Ready P300-based Brain-Computer Interface Dataset (Study G)","nchans_counts":[{"val":16,"count":320}],"sfreq_counts":[{"val":256.0,"count":320}],"stats_computed_at":"2026-04-22T23:16:00.314445+00:00","total_duration_s":27431.75,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"628f952afb51a0a2","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.85,"modality":0.95,"type":0.75},"reasoning":{"few_shot_analysis":"Most similar few-shot paradigms are the oddball-style datasets: (1) “Cross-modal Oddball Task.” (Parkinson’s) and (2) “EEG: Three-Stim Auditory Oddball and Rest in Acute and Chronic TBI” (TBI). These examples establish the convention that oddball/target-vs-nontarget ERP paradigms are categorized primarily by (a) the recruited clinical population for Pathology and (b) the stimulus channel for Modality, while Type reflects the main cognitive construct (often attentional/target detection or broader clinical focus when the cohort is clinical). This dataset is also a target/non-target P300 paradigm, but unlike the Parkinson’s/TBI examples it explicitly recruits “healthy subjects,” so Pathology should be Healthy (metadata fact override).","metadata_analysis":"Key facts from metadata:\n- Population: “(20 healthy subjects)” and “Health status: healthy”.\n- Paradigm/task: “Paradigm: p300”, “Events: Target=2, NonTarget=1”, and “Applications: speller” with “Online feedback: True”.\n- Stimulus modality: “9x8 checkerboard/dynamic” and HED annotations include “Visual-presentation” for both Target and NonTarget; also “Tags … Modality: visual”.\nThese indicate a visual P300 (oddball-like) BCI speller dataset in healthy participants.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: “(20 healthy subjects)” and “Health status: healthy”.\n- Few-shot pattern suggests: oddball datasets can be clinical (e.g., Parkinson’s, TBI) when recruited that way.\n- Alignment: ALIGN (this dataset is explicitly healthy).\n\nModality:\n- Metadata says: “9x8 checkerboard/dynamic” and HED includes “Visual-presentation”; also “Modality: visual”.\n- Few-shot pattern suggests: oddball modality follows the stimulus channel (auditory oddball → Auditory; cross-modal → Multisensory).\n- Alignment: ALIGN (visual stimuli → Visual).\n\nType:\n- Metadata says: “Paradigm: p300”, “Events: Target… NonTarget…”, “Applications: speller”, “P300_ERP_detection”. 
(Also includes a tag “Type: perception”, but this is an author tag rather than a description of the studied construct.)\n- Few-shot pattern suggests: oddball/target-detection ERP tasks are commonly typed by the dominant cognitive construct (often attentional target detection), unless the dataset’s primary goal is clinical characterization (then Clinical/Intervention).\n- Alignment: PARTIAL. Metadata’s own tag suggests “perception”, but the paradigm (P300 target vs non-target) more strongly matches selective attention/target detection conventions from oddball-style examples. No clinical cohort focus here, so Clinical/Intervention is not supported.","decision_summary":"Top-2 candidates with head-to-head selection:\n\nPathology:\n1) Healthy (WIN) — explicit: “(20 healthy subjects)”; “Health status: healthy”.\n2) Unknown (runner-up) — would apply only if health status were not stated.\nSelected Healthy. Confidence high due to 2 clear explicit statements.\n\nModality:\n1) Visual (WIN) — “9x8 checkerboard/dynamic”; HED “Visual-presentation”; “Modality: visual”.\n2) Multisensory (runner-up) — only if additional auditory/tactile cues were described (not present).\nSelected Visual. Confidence very high due to 3 explicit supporting snippets.\n\nType:\n1) Attention (WIN) — P300 Target/NonTarget paradigm implies attentional target detection: “Paradigm: p300”; “Events: Target… NonTarget…”; BCI speller context “Applications: speller” and “P300_ERP_detection”.\n2) Perception (runner-up) — supported mainly by the metadata tag “Type: perception” and the fact it is stimulus-driven detection.\nSelected Attention because the P300 oddball/speller construct is primarily selective attention to rare targets rather than sensory discrimination per se. Confidence moderate-high because evidence supports a P300 attention paradigm, but a competing explicit tag suggests Perception."}},"canonical_name":null,"name_confidence":0.62,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Mainsah2025_G"}}
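For readers who want to consume a record like this programmatically, the short sketch below parses the payload and derives the summary figures it already contains (recording duration, file count, channel count, sampling rate, size) and resolves the storage keys to S3 URIs. It is a minimal sketch grounded only in the fields shown above: the local filename "nm000277.json" and the convention of joining storage keys onto storage["base"] are illustrative assumptions, not part of any documented eegdash API.

# Minimal sketch: consuming one eegdash-style dataset record like the payload above.
# Assumptions (not stated in the record): the JSON has been saved locally as
# "nm000277.json", and S3 object keys are relative to storage["base"].
import json

with open("nm000277.json") as f:  # hypothetical local copy of the payload
    record = json.load(f)["data"]

# Summary values derived only from fields present in the record.
hours = record["total_duration_s"] / 3600            # 27431.75 s -> ~7.6 h of EEG
size_gb = record["size_bytes"] / 1e9                 # ~0.35 GB on disk
n_subjects = record["demographics"]["subjects_count"]
channels = record["nchans_counts"][0]["val"]          # 16 channels in all 320 files
sfreq = record["sfreq_counts"][0]["val"]               # 256.0 Hz in all 320 files

print(f"{record['dataset_id']}: {n_subjects} subjects, "
      f"{record['total_files']} files, {hours:.1f} h, {size_gb:.2f} GB, "
      f"{channels} ch @ {sfreq:g} Hz")

# Resolve storage keys to full S3 URIs (assuming keys are relative to `base`).
base = record["storage"]["base"].rstrip("/")
for key in [record["storage"]["raw_key"], *record["storage"]["dep_keys"]]:
    print(f"{base}/{key}")

With the values in this record, the summary line would read roughly "nm000277: 20 subjects, 320 files, 7.6 h, 0.35 GB, 16 ch @ 256 Hz", followed by four s3://openneuro.org/nm000277/... URIs for dataset_description.json, README, participants.json, and participants.tsv.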