{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4c9d","dataset_id":"nm000200","associated_paper_doi":null,"authors":["Boyla Mainsah","Chance Fleeting","Thomas Balmat","Eric Sellers","Leslie Collins"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":13,"ages":[21],"age_min":21,"age_max":21,"age_mean":21.0,"species":null,"sex_distribution":{"m":7,"f":6},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000200","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"0c31ce0004775ebba12508d6620d96f7c4bc16626fab71b21b375fd9efdc7c63","license":"CC-BY-4.0","n_contributing_labs":null,"name":"BigP3BCI Study I — 9x8 checkerboard/performance-based (13 healthy subjects)","readme":"# BigP3BCI Study I — 9x8 checkerboard/performance-based (13 healthy subjects)\nBigP3BCI Study I — 9x8 checkerboard/performance-based (13 healthy subjects).\n## Dataset Overview\n- **Code**: Mainsah2025-I\n- **Paradigm**: p300\n- **DOI**: 10.13026/0byy-ry86\n- **Subjects**: 13\n- **Sessions per subject**: 1\n- **Events**: Target=2, NonTarget=1\n- **Trial interval**: [0, 1.0] s\n## Acquisition\n- **Sampling rate**: 256.0 Hz\n- **Number of channels**: 16\n- **Channel types**: eeg=16\n- **Montage**: standard_1020\n- **Hardware**: g.USBamp (g.tec)\n- **Line frequency**: 60.0 Hz\n## Participants\n- **Number of subjects**: 13\n- **Health status**: healthy\n## Experimental Protocol\n- **Paradigm**: p300\n- **Number of classes**: 2\n- **Class labels**: Target, NonTarget\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ 
Visual-presentation\n    └─ Non-target\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: p300\n## Signal Processing\n- **Feature extraction**: P300_ERP_detection\n## Cross-Validation\n- **Method**: calibration-then-test\n- **Evaluation type**: within_subject\n## BCI Application\n- **Applications**: speller\n- **Environment**: laboratory\n- **Online feedback**: True\n## Tags\n- **Modality**: visual\n- **Type**: perception\n## Documentation\n- **Description**: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms.\n- **DOI**: 10.13026/0byy-ry86\n- **License**: CC-BY-4.0\n- **Investigators**: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins\n- **Institution**: Duke University; East Tennessee State University\n- **Country**: US\n- **Repository**: PhysioNet\n- **Data URL**: https://physionet.org/content/bigp3bci/1.0.0/\n- **Publication year**: 2025\n## References\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":340172345,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000200","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:08:59.713953+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-24T01:31:41Z"},"total_files":265,"computed_title":"BigP3BCI Study I — 9x8 checkerboard/performance-based (13 healthy subjects)","nchans_counts":[{"val":16,"count":265}],"sfreq_counts":[{"val":256.0,"count":265}],"stats_computed_at":"2026-05-01T13:49:34.645610+00:00","total_duration_s":26651.46484375,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"aa7eab729ce4f77d","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot by paradigm is the “Cross-modal Oddball Task” example (Parkinson’s; oddball with standards/oddballs and a GO cue), which illustrates that oddball-style target vs non-target event structures are treated as a distinct paradigm class in labeling (and that modality is determined by stimulus channels, e.g., visual+auditory => Multisensory). Another helpful contrast is the schizophrenia example (“visual discrimination task” with left/right motion discrimination) labeled as Visual + Perception, showing that when the main aim is sensory discrimination it maps to Perception rather than Attention. 
For this dataset, the P300 speller uses a target vs non-target detection/selection (oddball-like) rather than sensory discrimination, which by convention most closely aligns with Attention (selective attention to a target).","metadata_analysis":"Key population facts: (1) title explicitly states “(13 healthy subjects)”; (2) README lists “**Health status**: healthy”; (3) README lists “**Number of subjects**: 13”.\n\nKey stimulus/task facts: (1) README: “**Paradigm**: p300” and “**Events**: Target=2, NonTarget=1”; (2) README HED annotations for both Target and NonTarget include “Visual-presentation”; (3) README: “**Applications**: speller” and “**Online feedback**: True” (BCI P300 speller context).","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology — Metadata says: “(13 healthy subjects)”, and “Health status: healthy”. Few-shot pattern suggests: when explicitly healthy, label Healthy. ALIGN.\n\nModality — Metadata says stimuli are visual: HED tags include “Visual-presentation” for Target/NonTarget; also dataset tag “Modality: visual”. Few-shot pattern suggests: stimulus channel determines modality; oddball example shows mapping based on cue modality. ALIGN.\n\nType — Metadata says: “Paradigm: p300”, “Events: Target, NonTarget”, and “Applications: speller” with “Online feedback: True”, indicating a P300-BCI target-detection/selection paradigm. Few-shot pattern suggests: sensory discrimination tasks map to Perception (schizophrenia motion discrimination example), whereas target/oddball-like paradigms more naturally map to Attention (selective attention to targets). 
Mostly ALIGN with choosing Attention over Perception; minor potential CONFLICT with the dataset’s own tag “Type: perception”, but this appears to be a generic tag and is weaker than the paradigm evidence for attentional target selection.","decision_summary":"Top-2 candidates per category:\n\nPathology:\n1) Healthy — Supported by “(13 healthy subjects)”, “Health status: healthy”, and “Number of subjects: 13”.\n2) Unknown — only if no population info were provided.\nDecision: Healthy (alignment clear; multiple explicit statements).\n\nModality:\n1) Visual — Supported by HED “Visual-presentation” for both Target and NonTarget; “9x8 checkerboard” speller grid in title; and tag “Modality: visual”.\n2) Multisensory — only if non-visual stimuli were present (not indicated).\nDecision: Visual (clear explicit evidence).\n\nType:\n1) Attention — Supported by “Paradigm: p300”, “Events: Target=2, NonTarget=1” (oddball-like target detection), and “Applications: speller” (selective attention to desired character).\n2) Perception — Possible because stimuli are visual and metadata tag says “Type: perception”, but less specific to the cognitive construct than the P300 target-selection framing.\nDecision: Attention. Confidence reflects strong paradigm evidence but acknowledges the competing internal tag “Type: perception”."}},"canonical_name":null,"name_confidence":0.74,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Mainsah2025_BigP3BCI_I"}}
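The record above is plain JSON and can be consumed with any JSON parser. A minimal sketch of pulling out the key acquisition fields, assuming the response has been fetched already (the `payload` literal below is a trimmed excerpt of the record, not the full payload; field names are taken verbatim from the response):

```python
import json

# Trimmed excerpt of the response above (not the full payload); field
# names match the actual record.
payload = """
{
  "success": true,
  "database": "eegdash",
  "data": {
    "dataset_id": "nm000200",
    "demographics": {"subjects_count": 13, "sex_distribution": {"m": 7, "f": 6}},
    "tasks": ["p300"],
    "nchans_counts": [{"val": 16, "count": 265}],
    "sfreq_counts": [{"val": 256.0, "count": 265}],
    "total_duration_s": 26651.46484375,
    "total_files": 265
  }
}
"""

resp = json.loads(payload)
assert resp["success"]
data = resp["data"]

# nchans_counts / sfreq_counts are per-file histograms of {val, count} pairs;
# a homogeneous dataset (as here: 265 files, all 16 ch @ 256 Hz) has one entry each.
nchans = data["nchans_counts"][0]["val"]
sfreq = data["sfreq_counts"][0]["val"]
hours = data["total_duration_s"] / 3600.0  # total recorded EEG in hours

print(data["dataset_id"], data["tasks"], nchans, sfreq, round(hours, 1))
```

Note that `nchans_counts` and `sfreq_counts` are lists, so a dataset with mixed montages or sampling rates would carry multiple entries; code consuming this schema should not assume a single element in the general case.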