{"success":true,"database":"eegdash","data":{"_id":"6953f4239276ef1ee07a32a0","dataset_id":"ds002312","associated_paper_doi":null,"authors":["Teon L Brooks","Laura Gwilliams","Alexandre Gramfort","Alec Marantz"],"bids_version":"1.2.0","contact_info":["Teon L. Brooks"],"contributing_labs":null,"data_processed":false,"dataset_doi":"10.18112/openneuro.ds002312.v1.0.0","datatypes":["meg"],"demographics":{"subjects_count":19,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds002312","osf_url":null,"github_url":null,"paper_url":null},"funding":["NSF DGE-1342536 (TB)","Abu  Dhabi  Institute Grant G1001 (AM)"],"ingestion_fingerprint":"1170190b95a62c5973db1df95b415b08c32575dc7ffda4e77019d5cdffd41add","license":"CC0","n_contributing_labs":null,"name":"OcularLDT","readme":null,"recording_modality":["meg"],"senior_author":"Alec Marantz","sessions":[],"size_bytes":36600686939,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["OcularLDT"],"timestamps":{"digested_at":"2026-04-22T12:25:26.393486+00:00","dataset_created_at":"2019-11-08T15:37:35.868Z","dataset_modified_at":"2019-11-08T23:29:46.000Z"},"total_files":23,"storage":{"backend":"s3","base":"s3://openneuro.org/ds002312","raw_key":"dataset_description.json","dep_keys":["CHANGES","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"36a9d5d434aa2e90","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.6,"modality":0.6,"type":0.55},"reasoning":{"few_shot_analysis":"No few-shot example directly matches an “OcularLDT” paradigm. The closest conventions are that datasets named for a specific task (e.g., “visual discrimination task” in the schizophrenia example; “Oddball” in the Parkinson’s example) are labeled by (a) the recruited clinical group for Pathology and (b) the stimulus channel for Modality, with Type reflecting the primary cognitive construct (e.g., discrimination/detection tasks → Perception; reward learning tasks → Learning). Here, the task name suggests an LDT (commonly ‘lexical decision task’), which by convention would most plausibly map to Visual modality (word/nonword stimuli) and a Perception-like construct (word-form recognition/reading), but this is an inference rather than an explicit fact.","metadata_analysis":"Available metadata is minimal. Key snippets: (1) Title/task label: \"OcularLDT\". (2) Dataset description: \"Name: OcularLDT\" (authors listed, no task details). (3) Participants: \"Subjects: 19\" with no diagnostic/group fields. There are no explicit mentions of patient populations, stimulus type (visual/auditory), or the cognitive construct beyond the task acronym.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata SAYS only \"Subjects: 19\" with no diagnosis/group info; few-shot pattern SUGGESTS that absent explicit clinical recruitment, label as Healthy. ALIGN (no conflict).\nModality: Metadata SAYS only \"OcularLDT\" (no explicit stimulus description); few-shot pattern SUGGESTS modality should follow stimulus channel, but here the channel is not stated—only inferred from common meaning of LDT and “ocular” implying visual/eye-related processing. 
PARTIAL ALIGN but largely INFERRED (weak evidence).\nType: Metadata SAYS only the task name \"OcularLDT\"; few-shot pattern SUGGESTS discrimination/recognition tasks map to Perception, but lexical decision could also be treated as Other (language) given the allowed label set. No direct alignment possible; decision is inference-based.","decision_summary":"Top-2 candidates per category:\n- Pathology: (1) Healthy vs (2) Unknown. Evidence: no clinical terms anywhere (\"Subjects: 19\"; only authors/DOI). Choose Healthy because most OpenNeuro task datasets without explicit diagnosis are normative cohorts. Alignment: aligns with few-shot convention. Confidence 0.6 (contextual inference; no direct ‘healthy’ statement).\n- Modality: (1) Visual vs (2) Unknown. Evidence: task name \"OcularLDT\" suggests ocular/lexical decision (commonly visually presented words), but no explicit stimulus description in metadata. Choose Visual by best inference. Confidence 0.6 (inference only).\n- Type: (1) Perception vs (2) Other. Evidence: LDT typically probes word recognition/reading/lexical access; within allowed labels, Perception is the closest to stimulus/recognition processing, but language is not an explicit Type option, making Other plausible. Choose Perception as closer to recognition/discrimination convention. Confidence 0.55 (multiple plausible labels; weak evidence)."}},"computed_title":"OcularLDT","nchans_counts":[{"val":257,"count":23}],"sfreq_counts":[{"val":1000.0,"count":15}],"stats_computed_at":"2026-04-22T23:16:00.221564+00:00","total_duration_s":25545.637,"canonical_name":null,"name_confidence":0.65,"name_meta":{"suggested_at":"2026-04-14T10:18:35.342Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Brooks2019"}}
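A minimal sketch of how a client might consume this record, assuming the response above has been saved locally as response.json (a hypothetical filename): it parses the payload with Python's standard json module and prints a few derived summaries (size in GiB, total recording duration in hours, dominant channel count and sampling rate, and the automated tags with their confidences). Only the standard library is used; no eegdash client API is assumed.

import json

# Load the API response shown above (the path is an assumption for illustration).
with open("response.json", encoding="utf-8") as f:
    payload = json.load(f)

record = payload["data"]

# Basic identity and provenance fields.
print(f"Dataset:   {record['dataset_id']} ({record['name']})")
print(f"Source:    {record['source']}, {record['external_links']['source_url']}")
print(f"License:   {record['license']}, BIDS {record['bids_version']}")
print(f"Subjects:  {record['demographics']['subjects_count']}")

# Derived size/duration summaries.
size_gib = record["size_bytes"] / 2**30    # 36,600,686,939 B is roughly 34.1 GiB
hours = record["total_duration_s"] / 3600  # 25,545.637 s is roughly 7.1 h
print(f"Size:      {size_gib:.1f} GiB across {record['total_files']} files")
print(f"Duration:  {hours:.1f} h of MEG recordings")

# Recording geometry: nchans_counts and sfreq_counts are lists of {val, count} pairs.
nchans = max(record["nchans_counts"], key=lambda d: d["count"])
sfreq = max(record["sfreq_counts"], key=lambda d: d["count"])
print(f"Channels:  {nchans['val']} (in {nchans['count']} recordings)")
print(f"Sampling:  {sfreq['val']} Hz (in {sfreq['count']} recordings)")

# Automated tags plus their per-category confidences.
tags = record["tags"]
for category in ("pathology", "modality", "type"):
    labels = ", ".join(tags[category])
    print(f"Tag {category:<10s} {labels} (confidence {tags['confidence'][category]})")

The sketch deliberately treats nchans_counts and sfreq_counts as histograms (value/count pairs) and picks the most frequent entry, since the record reports per-file statistics rather than a single value.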