{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3469","dataset_id":"ds006817","associated_paper_doi":null,"authors":["Benjamin Lowe (ben.lowe@mq.edu.au)","Naohide Yamamoto (naohide.yamamoto@qut.edu.au)","Jonathan Robinson (jonathan.robinson@monash.edu)","Patrick Johnston (dr.pat.johnston@icloud.com)"],"bids_version":"1.1.0","contact_info":["Benjamin G. Lowe"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds006817.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":34,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds006817","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"b13ece15093978b16c4380d817c0f70353b0d388e088685b1d80746add40d0c4","license":"CC0","n_contributing_labs":null,"name":"Visual Attribute-Specific Contextual Trajectory Paradigm 2.0","readme":"TBD upon publication.\nAssociated pre-print: https://doi.org/10.1101/2025.08.18.670829","recording_modality":["eeg"],"senior_author":"Patrick Johnston 
(dr.pat.johnston@icloud.com)","sessions":[],"size_bytes":10400423631,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["CTP"],"timestamps":{"digested_at":"2026-04-22T12:29:37.481757+00:00","dataset_created_at":"2025-10-20T23:53:08.416Z","dataset_modified_at":"2025-10-28T08:15:53.000Z"},"total_files":34,"storage":{"backend":"s3","base":"s3://openneuro.org/ds006817","raw_key":"dataset_description.json","dep_keys":["CHANGES","README.md","task-CTP_eeg.json","task-CTP_events.json"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"884c53cb63300fb1","model":"openai/gpt-5.2","tagged_at":"2026-01-20T19:09:21.989922+00:00"},"tags":{"pathology":["Unknown"],"modality":["Unknown"],"type":["Unknown"],"confidence":{"pathology":0.35,"modality":0.35,"type":0.35},"reasoning":{"few_shot_analysis":"Few-shot examples show the convention that labels are assigned from explicit participant/task/stimulus descriptions in the README (e.g., Parkinson’s oddball dataset: \"98 individuals with Parkinsons disease\"; Dementia resting dataset: \"EEG resting state-closed eyes recordings\"). In this target dataset, the README provides no such factual details (only a placeholder and a preprint link), so unlike the few-shot cases there is insufficient information to map to any specific Pathology/Modality/Type label. This guides using the 'Unknown' labels rather than inferring from patterns.","metadata_analysis":"The only provided metadata is the README, which contains no task/population/stimulus facts. Key quotes: (1) \"TBD upon publication.\" (2) \"Associated pre-print: https://doi.org/10.1101/2025.08.18.670829\". There are no statements about participants (e.g., patients/controls), experimental paradigm (e.g., oddball, motor imagery), or stimulus modality (visual/auditory/etc.).","paper_abstract_analysis":"No useful paper information. 
Only a preprint DOI link is provided, but no abstract/content is included in the metadata payload to extract task/population/stimulus facts.","evidence_alignment_check":"Pathology: Metadata says only \"TBD upon publication.\" (no diagnosis or recruitment criteria). Few-shot pattern suggests using explicit diagnostic phrases when present; here none exist. ALIGN (both lead to Unknown).\nModality: Metadata contains no stimulus description (only preprint link). Few-shot pattern relies on explicit stimulus channel (e.g., tones, visual dots, braille). Here none exist. ALIGN -> Unknown.\nType: Metadata contains no study aim/task description. Few-shot pattern maps task/aim (resting-state, sleep, learning, etc.) from explicit descriptions; here absent. ALIGN -> Unknown.","decision_summary":"Pathology top-2: (1) Unknown — supported by absence of any participant/diagnosis info (\"TBD upon publication.\"). (2) Healthy — plausible default in many datasets but unsupported here. Final: Unknown. Confidence 0.35 due to no factual evidence.\nModality top-2: (1) Unknown — no stimulus/task modality described (\"TBD upon publication.\"). (2) Resting State — common in EEG datasets but not supported. Final: Unknown. Confidence 0.35.\nType top-2: (1) Unknown — no paradigm/construct stated (only \"Associated pre-print...\"). (2) Clinical/Intervention — could be associated with a preprint but no facts. Final: Unknown. Confidence 0.35."}},"computed_title":"Visual Attribute-Specific Contextual Trajectory Paradigm 2.0","nchans_counts":[{"val":65,"count":34}],"sfreq_counts":[{"val":1024.0,"count":34}],"stats_computed_at":"2026-04-22T23:16:00.312009+00:00","total_duration_s":null,"canonical_name":null,"name_confidence":0.74,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Lowe2025"}}
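A minimal sketch of consuming a record shaped like the one above. The field names (`data.dataset_id`, `data.demographics.subjects_count`, `data.size_bytes`, `data.nchans_counts`, `data.sfreq_counts`) are taken from the payload itself; the endpoint and transport are not specified here, so this example parses a local JSON string standing in for the HTTP response body. The `summarize` helper is hypothetical, not part of any eegdash client.

```python
import json

# Minimal subset of the record above; in practice this string would be
# the body of an eegdash API response (transport assumed, not shown).
raw = """
{
  "success": true,
  "database": "eegdash",
  "data": {
    "dataset_id": "ds006817",
    "demographics": {"subjects_count": 34},
    "size_bytes": 10400423631,
    "nchans_counts": [{"val": 65, "count": 34}],
    "sfreq_counts": [{"val": 1024.0, "count": 34}]
  }
}
"""

def summarize(payload: str) -> dict:
    """Pull a few headline fields out of an eegdash-style dataset record."""
    doc = json.loads(payload)
    if not doc.get("success"):
        raise ValueError("API reported failure")
    d = doc["data"]
    return {
        "dataset_id": d["dataset_id"],
        "subjects": d["demographics"]["subjects_count"],
        "size_gb": round(d["size_bytes"] / 1e9, 1),
        # nchans_counts/sfreq_counts are histograms; here all 34 recordings
        # share one channel count and one sampling rate, so take the first bin
        "n_channels": d["nchans_counts"][0]["val"],
        "sfreq_hz": d["sfreq_counts"][0]["val"],
    }

print(summarize(raw))
# → {'dataset_id': 'ds006817', 'subjects': 34, 'size_gb': 10.4, 'n_channels': 65, 'sfreq_hz': 1024.0}
```

Guarding on `success` before touching `data` mirrors the envelope structure of the response: the top level carries status, everything dataset-specific lives under `data`.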