{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4c8e","dataset_id":"nm000176","associated_paper_doi":null,"authors":["Boyla Mainsah","Chance Fleeting","Thomas Balmat","Eric Sellers","Leslie Collins"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":5,"ages":[24,27,23,23,25],"age_min":23,"age_max":27,"age_mean":24.4,"species":null,"sex_distribution":{"f":4,"m":1},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000176","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"cb014c622a80a47364684dc8c355ec743451393c423e73a4772d8855f3a6f7c2","license":"CC-BY-4.0","n_contributing_labs":null,"name":"BigP3BCI Study K — 9x8 adaptive/checkerboard, 2 sessions (5 healthy subjects)","readme":"# BigP3BCI Study K — 9x8 adaptive/checkerboard, 2 sessions (5 healthy subjects)\nBigP3BCI Study K — 9x8 adaptive/checkerboard, 2 sessions (5 healthy subjects).\n## Dataset Overview\n- **Code**: Mainsah2025-K\n- **Paradigm**: p300\n- **DOI**: 10.13026/0byy-ry86\n- **Subjects**: 5\n- **Sessions per subject**: 2\n- **Events**: Target=2, NonTarget=1\n- **Trial interval**: [0, 1.0] s\n## Acquisition\n- **Sampling rate**: 256.0 Hz\n- **Number of channels**: 16\n- **Channel types**: eeg=16\n- **Montage**: standard_1020\n- **Hardware**: g.USBamp (g.tec)\n- **Line frequency**: 60.0 Hz\n## Participants\n- **Number of subjects**: 5\n- **Health status**: healthy\n## Experimental Protocol\n- **Paradigm**: p300\n- **Number of classes**: 2\n- **Class labels**: Target, NonTarget\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: p300\n## Signal Processing\n- **Feature extraction**: P300_ERP_detection\n## Cross-Validation\n- **Method**: calibration-then-test\n- **Evaluation type**: within_subject\n## BCI Application\n- **Applications**: speller\n- **Environment**: laboratory\n- **Online feedback**: True\n## Tags\n- **Modality**: visual\n- **Type**: perception\n## Documentation\n- **Description**: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms.\n- **DOI**: 10.13026/0byy-ry86\n- **License**: CC-BY-4.0\n- **Investigators**: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins\n- **Institution**: Duke University; East Tennessee State University\n- **Country**: US\n- **Repository**: PhysioNet\n- **Data URL**: https://physionet.org/content/bigp3bci/1.0.0/\n- **Publication year**: 2025\n## References\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). 
EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1"],"size_bytes":176432632,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000176","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:08:48.219278+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-02T21:18:40Z"},"total_files":128,"computed_title":"BigP3BCI Study K — 9x8 adaptive/checkerboard, 2 sessions (5 healthy subjects)","nchans_counts":[{"val":16,"count":128}],"sfreq_counts":[{"val":256.0,"count":128}],"stats_computed_at":"2026-05-01T13:49:34.645403+00:00","total_duration_s":12944.125,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"bbb831d2e3cf220a","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.75},"reasoning":{"few_shot_analysis":"Closest few-shot convention match is the Cross-modal Oddball Task example: it has an oddball/target-vs-nontarget structure and is labeled with stimulus modality based on the presented cues (visual+auditory) and Type tied to the primary research aim. Here, the Study K dataset is also a target vs non-target paradigm (P300/oddball-like), but unlike the Parkinson’s cohort example (where pathology drives Clinical/Intervention), this dataset explicitly recruits healthy participants. Few-shot examples therefore mainly guide (i) treating target/nontarget paradigms as attention/perception ERP tasks and (ii) setting Modality from stimulus channel (visual vs auditory), not from responses.","metadata_analysis":"Key extracted facts from provided metadata:\n- Healthy recruitment: title states \"(5 healthy subjects)\" and README lists \"Health status: healthy\" and \"Participants ... Health status: healthy\".\n- Paradigm/task: README states \"Paradigm: p300\" and \"BCI Application - Applications: speller\".\n- Event structure consistent with P300 oddball/speller: README lists \"Events: Target=2, NonTarget=1\" and HED annotations show \"Target ... Visual-presentation\" and \"NonTarget ... Visual-presentation\".\n- Visual stimulus modality is explicit: README tag \"Modality: visual\" and description mentions \"6x6 or 9x8 character grids\" / \"9x8 adaptive/checkerboard\".","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: \"(5 healthy subjects)\" and \"Health status: healthy\".\n- Few-shot pattern suggests: if explicitly clinical (e.g., Parkinson’s), use that; otherwise Healthy.\n- Alignment: ALIGN (both support Healthy).\n\nModality:\n- Metadata says: HED includes \"Visual-presentation\" for Target/NonTarget and tag \"Modality: visual\".\n- Few-shot pattern suggests: infer modality from stimulus channel (e.g., oddball example uses visual+auditory cues → Multisensory).\n- Alignment: ALIGN (visual-only stimulation here).\n\nType:\n- Metadata says: \"Paradigm: p300\", \"Events: Target... 
NonTarget...\", \"Feature extraction: P300_ERP_detection\", and even tag \"Type: perception\".\n- Few-shot pattern suggests: oddball/target-detection paradigms are typically categorized by the underlying cognitive construct (often Attention or Perception), while clinical-focus datasets may shift to Clinical/Intervention.\n- Alignment: PARTIAL (few-shot indicates Attention/Perception are both plausible for oddball-like tasks; metadata explicitly provides \"Type: perception\", which supports choosing Perception over Attention).","decision_summary":"Top-2 candidates per category and selection:\n\nPathology:\n1) Healthy (WIN) — evidence: \"(5 healthy subjects)\"; \"Health status: healthy\"; \"Participants ... Health status: healthy\".\n2) Unknown — would apply only if no recruitment info were present (not the case).\nFinal: Healthy. Confidence justified by 3 explicit metadata statements.\n\nModality:\n1) Visual (WIN) — evidence: \"9x8 adaptive/checkerboard\"; tag \"Modality: visual\"; HED for events includes \"Visual-presentation\" for both Target and NonTarget.\n2) Multisensory — would require auditory/tactile cues (none mentioned).\nFinal: Visual. Confidence justified by 3 explicit metadata indicators.\n\nType:\n1) Perception (WIN) — evidence: tag \"Type: perception\"; P300 target vs non-target detection (\"Events: Target... NonTarget...\"); \"Feature extraction: P300_ERP_detection\" indicating stimulus-evoked ERP detection.\n2) Attention — also plausible because P300 spellers rely on attending to a target item; however, explicit metadata tag and framing emphasize target detection/ERP classification rather than a broader attention manipulation.\nFinal: Perception. Confidence slightly reduced due to Perception vs Attention ambiguity despite explicit tag support."}},"canonical_name":null,"name_confidence":0.74,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Mainsah2025_BigP3BCI"}}