{
  "success": true,
  "database": "eegdash",
  "data": {
    "_id": "69d16e05897a7725c66f4ccf",
    "dataset_id": "nm000272",
    "associated_paper_doi": null,
    "authors": ["Michele Romani", "Devis Zanoni", "Elisabetta Farella", "Luca Turchet"],
    "bids_version": "1.9.0",
    "contact_info": null,
    "contributing_labs": null,
    "data_processed": false,
    "dataset_doi": "doi:10.48550/arXiv.2510.10169",
    "datatypes": ["eeg"],
    "demographics": {
      "subjects_count": 22,
      "ages": [],
      "age_min": null,
      "age_max": null,
      "age_mean": null,
      "species": null,
      "sex_distribution": null,
      "handedness_distribution": null
    },
    "experimental_modalities": null,
    "external_links": {
      "source_url": "https://openneuro.org/datasets/nm000272",
      "osf_url": null,
      "github_url": null,
      "paper_url": null
    },
    "funding": [],
    "ingestion_fingerprint": "75bb75d79d6d77d85df12bf59206e4cfafa7112224fd174993017af09edb98dc",
    "license": "CC-BY-4.0",
    "n_contributing_labs": null,
    "name": "Romani et al. 2025 — BrainForm: a Serious Game for BCI Training and Data Collection (P300 ERP, University of Trento)",
    "readme": null,
    "recording_modality": ["eeg"],
    "senior_author": null,
    "sessions": ["0cb", "0grain", "1cb", "1grain", "2cbExtra", "2grainExtra"],
    "size_bytes": 140716431,
    "source": "nemar",
    "storage": {
      "backend": "s3",
      "base": "s3://openneuro.org/nm000272",
      "raw_key": "dataset_description.json",
      "dep_keys": []
    },
    "study_design": null,
    "study_domain": null,
    "tasks": ["p300"],
    "timestamps": {
      "digested_at": "2026-04-22T12:52:20.035529+00:00",
      "dataset_created_at": null,
      "dataset_modified_at": null
    },
    "total_files": 120,
    "computed_title": "Romani et al. 2025 — BrainForm: a Serious Game for BCI Training and Data Collection (P300 ERP, University of Trento)",
    "nchans_counts": [{"val": 8, "count": 120}],
    "sfreq_counts": [{"val": 250.0, "count": 120}],
    "stats_computed_at": "2026-04-22T23:16:00.314430+00:00",
    "total_duration_s": 22601.487999999998,
    "tagger_meta": {
      "config_hash": "3557b68bca409f28",
      "metadata_hash": "55d88e7eb05a9edb",
      "model": "openai/gpt-5.2",
      "tagged_at": "2026-04-07T09:32:40.872789+00:00"
    },
    "tags": {
      "pathology": ["Unknown"],
      "modality": ["Visual"],
      "type": ["Attention"],
      "confidence": {
        "pathology": 0.55,
        "modality": 0.6,
        "type": 0.7
      },
      "reasoning": {
        "few_shot_analysis": "Closest few-shot conventions are the oddball/P300-style paradigms. Example: \"Cross-modal Oddball Task\" is labeled with Type=Clinical/Intervention because it explicitly recruits Parkinson’s disease patients, but it demonstrates the convention that oddball/P300 paradigms map to an attention/cognitive-control style Type label rather than Motor/Resting-state. Another relevant convention is the \"Three-Stim Auditory Oddball\" dataset showing that when the paradigm is explicitly auditory tones, Modality=Auditory; by analogy, if the paradigm is P300 without further details, Modality must be inferred cautiously (often visual P300/speller, but not guaranteed).",
        "metadata_analysis": "Available metadata is sparse. Key snippets:\n- Title: \"romani-bf2025-erp - NEMAR Dataset\"\n- Tasks list includes: \"p300\"\n- Participants: \"Subjects: 22\"\nFrom \"p300\" and \"erp\" in the title, the dataset likely contains an ERP P300 paradigm (commonly an oddball/target-detection task). There is no explicit mention of a clinical recruitment group/diagnosis, and no description of stimulus type (visual flashes/letters vs auditory tones).",
        "paper_abstract_analysis": "No useful paper information.",
        "evidence_alignment_check": "Pathology:\n- Metadata says: only \"Subjects: 22\" with no diagnosis/group labels.\n- Few-shot pattern suggests: when no clinical population is stated, label often defaults to Healthy, but this is not an explicit fact here.\n- Alignment: CONFLICT/INSUFFICIENT METADATA. Since no recruitment condition is stated, choose Unknown (metadata insufficiency outweighs pattern-based assumption).\n\nModality:\n- Metadata says: task name \"p300\" and dataset title includes \"erp\", but no stimulus channel specified.\n- Few-shot pattern suggests: P300/oddball can be Visual (e.g., visual discrimination/target detection) or Auditory (tone oddball); cross-modal examples show Modality follows stimulus channel.\n- Alignment: PARTIAL. We can only infer weakly; choose Visual as the most common P300 ERP implementation in EEG repositories (often visual P300/speller), but acknowledge ambiguity.\n\nType:\n- Metadata says: \"p300\" / \"erp\" implies a target-detection ERP paradigm.\n- Few-shot pattern suggests: oddball/P300 paradigms are typically categorized under Attention (target detection, oddball processing) rather than Perception or Motor.\n- Alignment: ALIGNS (paradigm-level match), though still inferred due to sparse task description.",
        "decision_summary": "Top-2 candidates and final selections:\n\nPathology:\n1) Unknown — supported by lack of any clinical recruitment info (\"Subjects: 22\" only; no diagnosis/groups given).\n2) Healthy — plausible default if typical ERP study with volunteers, but not explicitly stated.\nWinner: Unknown. Evidence alignment: insufficient metadata to assert Healthy.\nConfidence basis: only negative evidence (absence of diagnosis) → moderate-low.\n\nModality:\n1) Visual — plausible because many EEG \"p300\" tasks are visual P300/speller/visual oddball; supported only indirectly by \"p300\" and \"erp\".\n2) Auditory — equally plausible because P300 is also frequently elicited via auditory oddball.\nWinner: Visual (weak inference). Evidence alignment: ambiguous.\nConfidence basis: no explicit stimulus description; inference only.\n\nType:\n1) Attention — P300 commonly indexes attention/target detection; guided by oddball/P300 few-shot conventions.\n2) Perception — alternative if treated as sensory discrimination, but P300 framing more often emphasizes attentional target processing.\nWinner: Attention. Evidence alignment: paradigm-level match.\nConfidence basis: task name explicitly \"p300\" plus ERP framing in title, but no further details."
      }
    },
    "canonical_name": null,
    "name_confidence": 0.42,
    "name_meta": {
      "suggested_at": "2026-04-14T10:18:35.344Z",
      "model": "openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"
    },
    "name_source": "author_year",
    "author_year": "Romani2025_BF_ERP"
  }
}