{
  "success": true,
  "database": "eegdash",
  "data": {
    "_id": "69d16e05897a7725c66f4c8f",
    "dataset_id": "nm000186",
    "associated_paper_doi": null,
    "authors": ["Boyla Mainsah", "Chance Fleeting", "Thomas Balmat", "Eric Sellers", "Leslie Collins"],
    "bids_version": "1.9.0",
    "contact_info": null,
    "contributing_labs": null,
    "data_processed": false,
    "dataset_doi": null,
    "datatypes": ["eeg"],
    "demographics": {
      "subjects_count": 8,
      "ages": [],
      "age_min": null,
      "age_max": null,
      "age_mean": null,
      "species": null,
      "sex_distribution": {"f": 5, "m": 3},
      "handedness_distribution": null
    },
    "experimental_modalities": null,
    "external_links": {
      "source_url": "https://nemar.org/dataexplorer/detail/nm000186",
      "osf_url": null,
      "github_url": null,
      "paper_url": null
    },
    "funding": [],
    "ingestion_fingerprint": "6c96467e12fe3aac04e029247397499e144f067b2c219de6e6474c679bbe7eee",
    "license": "CC-BY-4.0",
    "n_contributing_labs": null,
    "name": "BigP3BCI Study E — 6x6 checkerboard (8 healthy subjects)",
    "readme": "# BigP3BCI Study E — 6x6 checkerboard (8 healthy subjects)\nBigP3BCI Study E — 6x6 checkerboard (8 healthy subjects).\n## Dataset Overview\n- **Code**: Mainsah2025-E\n- **Paradigm**: p300\n- **DOI**: 10.13026/0byy-ry86\n- **Subjects**: 8\n- **Sessions per subject**: 1\n- **Events**: Target=2, NonTarget=1\n- **Trial interval**: [0, 1.0] s\n## Acquisition\n- **Sampling rate**: 256.0 Hz\n- **Number of channels**: 16\n- **Channel types**: eeg=16\n- **Montage**: standard_1020\n- **Hardware**: g.USBamp (g.tec)\n- **Line frequency**: 60.0 Hz\n## Participants\n- **Number of subjects**: 8\n- **Health status**: healthy\n## Experimental Protocol\n- **Paradigm**: p300\n- **Number of classes**: 2\n- **Class labels**: Target, NonTarget\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: p300\n## Signal Processing\n- **Feature extraction**: P300_ERP_detection\n## Cross-Validation\n- **Method**: calibration-then-test\n- **Evaluation type**: within_subject\n## BCI Application\n- **Applications**: speller\n- **Environment**: laboratory\n- **Online feedback**: True\n## Tags\n- **Modality**: visual\n- **Type**: perception\n## Documentation\n- **Description**: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms.\n- **DOI**: 10.13026/0byy-ry86\n- **License**: CC-BY-4.0\n- **Investigators**: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins\n- **Institution**: Duke University; East Tennessee State University\n- **Country**: US\n- **Repository**: PhysioNet\n- **Data URL**: https://physionet.org/content/bigp3bci/1.0.0/\n- **Publication year**: 2025\n## References\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb",
    "recording_modality": ["eeg"],
    "senior_author": null,
    "sessions": ["0"],
    "size_bytes": 109759444,
    "source": "nemar",
    "storage": {
      "backend": "nemar",
      "base": "s3://nemar/nm000186",
      "raw_key": "dataset_description.json",
      "dep_keys": ["README.md", "participants.json", "participants.tsv"]
    },
    "study_design": null,
    "study_domain": null,
    "tasks": ["p300"],
    "timestamps": {
      "digested_at": "2026-04-30T14:08:51.607611+00:00",
      "dataset_created_at": null,
      "dataset_modified_at": "2026-04-02T21:18:41Z"
    },
    "total_files": 88,
    "computed_title": "BigP3BCI Study E — 6x6 checkerboard (8 healthy subjects)",
    "nchans_counts": [{"val": 16, "count": 88}],
    "sfreq_counts": [{"val": 256.0, "count": 88}],
    "stats_computed_at": "2026-05-01T13:49:34.645417+00:00",
    "total_duration_s": 8597.65625,
    "tagger_meta": {
      "config_hash": "3557b68bca409f28",
      "metadata_hash": "ea613dd0692ce82c",
      "model": "openai/gpt-5.2",
      "tagged_at": "2026-04-07T09:32:40.872789+00:00"
    },
    "tags": {
      "pathology": ["Healthy"],
      "modality": ["Visual"],
      "type": ["Attention"],
      "confidence": {"pathology": 0.9, "modality": 0.9, "type": 0.7},
      "reasoning": {
        "few_shot_analysis": "Closest few-shot conventions are the oddball-like datasets: (1) \"Cross-modal Oddball Task\" shows that oddball paradigms are categorized primarily by recruited clinical group (Parkinson’s) and stimulus modality (multisensory) rather than by response mechanics; (2) \"EEG: DPX Cog Ctl Task in Acute Mild TBI\" demonstrates that when the construct is target monitoring/cognitive control, Type can map to \"Attention\". While no few-shot is an exact P300 speller, these examples guide (a) using explicit population facts for Pathology, (b) using stimulus channel for Modality, and (c) preferring an attentional construct label for target-vs-nontarget detection paradigms.",
        "metadata_analysis": "Key metadata facts:\n- Population: \"Subjects: 8\" and \"Health status: healthy\".\n- Paradigm/task: \"Paradigm: p300\" and \"Tasks: [ 'p300' ]\".\n- Stimulus modality: HED annotations include \"Visual-presentation\" for both Target and NonTarget (\"Target ... Visual-presentation\"; \"NonTarget ... Visual-presentation\").\n- Task structure consistent with oddball/target detection: \"Events: Target=2, NonTarget=1\" and \"Class labels: Target, NonTarget\".\n- Application context: \"Applications: speller\" and \"Feature extraction: P300_ERP_detection\"; also \"Online feedback: True\".",
        "paper_abstract_analysis": "No useful paper information.",
        "evidence_alignment_check": "Pathology:\n- Metadata says: \"Health status: healthy\".\n- Few-shot pattern suggests: when explicitly healthy, label as Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: HED tags include \"Visual-presentation\" (for both Target and NonTarget) and dataset is \"6x6 checkerboard\".\n- Few-shot pattern suggests: label modality by stimulus channel (e.g., visual tasks -> Visual).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"Paradigm: p300\", \"Events: Target=2, NonTarget=1\", and \"Applications: speller\" (P300 ERP target detection paradigm).\n- Few-shot pattern suggests: target monitoring/oddball-like paradigms can map to attention-related constructs (e.g., DPX example labeled Attention).\n- Alignment: PARTIAL (metadata does not explicitly say 'attention', but task structure strongly implies attentional target selection). No conflict with any explicit metadata fact.",
        "decision_summary": "Top-2 candidates (with head-to-head comparison):\n\nPathology:\n1) Healthy — Evidence: \"Health status: healthy\"; \"8 healthy subjects\" (title).\n2) Unknown — would apply if no population info were given.\nWinner: Healthy (explicitly stated). Confidence 0.9 (multiple explicit mentions).\n\nModality:\n1) Visual — Evidence: \"6x6 checkerboard\"; HED includes \"Visual-presentation\"; readme tag: \"Modality: visual\".\n2) Multisensory — would require multiple stimulus channels; not supported.\nWinner: Visual. Confidence 0.9 (3+ explicit cues).\n\nType:\n1) Attention — Evidence: target vs non-target structure (\"Events: Target=2, NonTarget=1\"; \"Class labels: Target, NonTarget\") and P300 speller context (\"Paradigm: p300\"; \"Applications: speller\") implies selective attention to targets.\n2) Perception — supported by dataset tag \"Type: perception\" and that stimuli are visual.\nWinner: Attention (construct more directly tied to P300 oddball/speller target selection than generic perception). Confidence 0.7 because 'attention' is inferred from paradigm/structure rather than explicitly stated."
      }
    },
    "canonical_name": null,
    "name_confidence": 0.8,
    "name_meta": {
      "suggested_at": "2026-04-14T10:18:35.343Z",
      "model": "openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"
    },
    "name_source": "canonical",
    "author_year": "Mainsah2025_BigP3BCI_E"
  }
}