{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cd2","dataset_id":"nm000303","associated_paper_doi":null,"authors":["Boyla Mainsah","Chance Fleeting","Thomas Balmat","Eric Sellers","Leslie Collins"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.13026/0byy-ry86","datatypes":["eeg"],"demographics":{"subjects_count":18,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000303","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"17ccb8d9a5acac7d00fb7beaf473b2f0f75f881c591f546c536823aa2990033b","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Mainsah et al. 2025 — bigP3BCI: An Open, Diverse and Machine Learning Ready P300-based Brain-Computer Interface Dataset (Study O)","readme":"Mainsah2025-O\n=============\nBigP3BCI Study O — 9x8 supervised/checkerboard (18 ALS subjects).\nDataset Overview\n----------------\n  Code: Mainsah2025-O\n  Paradigm: p300\n  DOI: 10.13026/0byy-ry86\n  Subjects: 18\n  Sessions per subject: 2\n  Events: Target=2, NonTarget=1\n  Trial interval: [0, 1.0] s\nAcquisition\n-----------\n  Sampling rate: 256.0 Hz\n  Number of channels: 32\n  Channel types: eeg=32\n  Montage: standard_1020\n  Hardware: g.USBamp (g.tec)\n  Line frequency: 60.0 Hz\nParticipants\n------------\n  Number of subjects: 18\n  Health status: healthy\nExperimental Protocol\n---------------------\n  Paradigm: p300\n  Number of classes: 2\n  Class labels: Target, NonTarget\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: p300\nSignal Processing\n-----------------\n  Feature extraction: P300_ERP_detection\nCross-Validation\n----------------\n  Method: calibration-then-test\n  Evaluation type: within_subject\nBCI Application\n---------------\n  Applications: speller\n  Environment: laboratory\n  Online feedback: True\nTags\n----\n  Modality: visual\n  Type: perception\nDocumentation\n-------------\n  Description: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms.\n  DOI: 10.13026/0byy-ry86\n  License: CC-BY-4.0\n  Investigators: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins\n  Institution: Duke University; East Tennessee State University\n  Country: US\n  Repository: PhysioNet\n  Data URL: https://physionet.org/content/bigp3bci/1.0.0/\n  Publication year: 2025\nReferences\n----------\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). 
EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1"],"size_bytes":1040184926,"source":"nemar","storage":{"backend":"s3","base":"s3://openneuro.org/nm000303","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-22T12:52:23.037131+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":347,"computed_title":"Mainsah et al. 2025 — bigP3BCI: An Open, Diverse and Machine Learning Ready P300-based Brain-Computer Interface Dataset (Study O)","nchans_counts":[{"val":32,"count":347}],"sfreq_counts":[{"val":256.0000930697907,"count":162},{"val":256.0,"count":101},{"val":256.00008203487505,"count":40},{"val":256.0001098418278,"count":12},{"val":256.00012071918457,"count":12},{"val":256.00010076264726,"count":7},{"val":256.0001184842897,"count":4},{"val":256.00010663894057,"count":3},{"val":256.00009694678226,"count":3},{"val":256.00008886963377,"count":2},{"val":256.00010131094785,"count":1}],"stats_computed_at":"2026-04-22T23:16:00.314473+00:00","total_duration_s":41392.632266581226,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"ddb686b1418efc1f","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Other"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.7,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Closest few-shot by paradigm is the oddball-style target vs non-target structure in the Parkinson's cross-modal oddball example (target detection with rare events) and the TBI auditory oddball example (standard/target/novel). These guide mapping a target-detection ERP paradigm to (i) stimulus Modality based on presented cues, and (ii) Type as a sensory target-detection/perceptual paradigm rather than Motor. For Type specifically, the schizophrenia visual discrimination example is a closer match in that it is a visually driven stimulus discrimination/detection paradigm labeled as Perception, aligning with P300 target detection.","metadata_analysis":"Key quoted metadata facts:\n1) Population/diagnosis conflict: \"BigP3BCI Study O — 9x8 supervised/checkerboard (18 ALS subjects).\" vs \"Health status: healthy\".\n2) Paradigm/task: \"Paradigm: p300\" and \"Events: Target=2, NonTarget=1\".\n3) Stimulus modality is visual: HED annotations include \"Visual-presentation\" under both \"Target\" and \"NonTarget\"; also \"Tags\" lists \"Modality: visual\".\n4) Application context: \"Applications: speller\" and \"Online feedback: True\" (BCI speller P300).","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: \"(18 ALS subjects)\" (explicit clinical population) but also says: \"Health status: healthy\".\n- Few-shot pattern suggests: clinical recruitment (e.g., Parkinson's, TBI, Epilepsy) should be labeled by the diagnosis; when diagnosis is outside allowed list, use \"Other\".\n- Alignment: CONFLICT within metadata itself (ALS vs healthy). 
Resolution: explicit diagnosis/recruitment phrase \"ALS subjects\" takes precedence over the generic/possibly erroneous \"Health status: healthy\"; ALS is not an allowed pathology label, so map to \"Other\".\n\nModality:\n- Metadata says: HED for Target/NonTarget includes \"Visual-presentation\"; also \"Modality: visual\" and \"9x8 ... checkerboard\" speller grid.\n- Few-shot pattern suggests: modality is determined by stimulus channel (e.g., Braille -> Tactile; oddball with audio -> Auditory). P300 speller grid -> Visual.\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"Paradigm: p300\", \"Events: Target=2, NonTarget=1\", and tag \"Type: perception\".\n- Few-shot pattern suggests: target-detection/discrimination paradigms map to Perception (as in the visual discrimination example), even when used for BCI/speller.\n- Alignment: ALIGN.","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n1) Other (WINNER) — Evidence: \"(18 ALS subjects)\" indicates a clinical population; ALS is not in the allowed list, so it maps to \"Other\". Conflict noted with \"Health status: healthy\" but diagnosis statement is more specific.\n2) Healthy (RUNNER-UP) — Evidence: explicit line \"Health status: healthy\".\nDecision: Choose Other due to explicit ALS recruitment; confidence moderated by internal metadata conflict.\n\nModality:\n1) Visual (WINNER) — Evidence: \"9x8 ... checkerboard\", HED includes \"Visual-presentation\" for both Target and NonTarget, and tag \"Modality: visual\".\n2) Other (RUNNER-UP) — Would apply only if stimulus modality were unclear; here it is explicit.\nDecision: Visual.\n\nType:\n1) Perception (WINNER) — Evidence: \"Paradigm: p300\", \"Events: Target=2, NonTarget=1\" (target detection), and tag \"Type: perception\"; few-shot oddball/discrimination conventions support Perception.\n2) Attention (RUNNER-UP) — P300 involves selective attention to targets; plausible but less directly stated than perception/target-detection framing in metadata.\nDecision: Perception.\n\nConfidence justification features:\n- Pathology: 2 explicit but conflicting quotes (ALS vs healthy) -> lower.\n- Modality: 3 explicit cues (checkerboard grid, HED Visual-presentation, tag Modality: visual) + strong paradigm fit -> high.\n- Type: 3 explicit cues (Paradigm p300, Target/NonTarget events, tag Type: perception) + few-shot-consistent mapping to perceptual target detection -> high."}},"canonical_name":null,"name_confidence":0.7,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Mainsah2025_O"}}
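
A minimal sketch of consuming this record downstream, assuming the response above has been saved to a local file (the query endpoint that produced it is not part of the record). Field names follow the JSON exactly.

```python
import json

# Parse a saved copy of the eegdash response above; the API endpoint
# is not given in the record, so a local file is assumed here.
with open("nm000303.json") as f:
    envelope = json.load(f)

assert envelope["success"] and envelope["database"] == "eegdash"
rec = envelope["data"]

print(rec["dataset_id"])                      # nm000303
print(rec["demographics"]["subjects_count"])  # 18
print(rec["sessions"])                        # ['0', '1']
print(rec["total_duration_s"] / 3600)         # ~11.5 hours of EEG
```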
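The sfreq_counts histogram records the nominal 256 Hz rate plus sub-millihertz per-file clock jitter across all 347 files (matching total_files). A quick consistency check against the parsed record from the previous sketch:

```python
# Aggregate the per-file sampling-rate histogram; the spread is well
# under 0.2 mHz, so rounding to the nominal 256 Hz is safe for most
# epoching pipelines.
counts = rec["sfreq_counts"]
n_files = sum(c["count"] for c in counts)
mean_sfreq = sum(c["val"] * c["count"] for c in counts) / n_files

assert n_files == rec["total_files"]  # 347
print(f"{mean_sfreq:.7f} Hz")         # ~256.0000668 Hz
```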
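The storage block points at OpenNeuro's S3 bucket (base s3://openneuro.org/nm000303, with raw_key and dep_keys relative to it). A sketch of fetching those files with boto3, assuming the bucket allows unsigned anonymous reads, which is OpenNeuro's policy at the time of writing but is not stated in the record itself:

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous client: no AWS credentials are sent (assumes the
# openneuro.org bucket is public-read).
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

bucket, prefix = "openneuro.org", "nm000303"
for key in ["dataset_description.json", "README", "participants.tsv"]:
    s3.download_file(bucket, f"{prefix}/{key}", key)
```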
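Because the record declares BIDS 1.9.0 with datatype eeg, task p300, and sessions "0" and "1", a locally downloaded copy should load with MNE-BIDS, the tool cited in the readme's references. The subject label below is a placeholder; the actual IDs live in participants.tsv.

```python
from mne_bids import BIDSPath, read_raw_bids

# Hypothetical subject label "01" and local root directory named
# after the dataset id; substitute real values after downloading.
bids_path = BIDSPath(root="nm000303", subject="01", session="0",
                     task="p300", datatype="eeg")
raw = read_raw_bids(bids_path=bids_path)
print(raw.info["sfreq"], len(raw.ch_names))  # ~256.0 Hz, 32 channels
```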