{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c4a","dataset_id":"ds007463","associated_paper_doi":null,"authors":["Morgan Fogarty","Sean M. Rafferty","Zachary E. Markow","Anthony C. O’Sullivan","Calamity F. Svoboda","Tessa George","Kelsey King","Dana Wilhelm","Kalyan Tripathy","Emily M. Mugler","Stephanie Naufel","Allen Yin","Jason W. Trobaugh","Adam T. Eggebrecht","Edward J. Richter","Joseph P. Culver"],"bids_version":"","contact_info":["Morgan Fogarty"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds007463.v1.1.1","datatypes":["fnirs"],"demographics":{"subjects_count":8,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds007463","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"14193f9bbf4bebd55a2f3d6766a54712d45ed14eedb78fc6c7ba4ed002980b39","license":"CC0","n_contributing_labs":null,"name":"Very-High-Density Diffuse Optical Tomography System Validation Dataset","readme":"This dataset consists of 8 participants completing functional localizer and movie-viewing tasks in both Very High Density Diffuse Optical Tomography (VHD-DOT) and fMRI. Sessions 1 and 2 for each subject include the VHD-DOT data in SNIRF format while sessions 3 or more include the fMRI data in NIFTI format.\nPreprocessed fMRI data used for comparisons to VHD-DOT are in the /derivatives folder and are in NIFTI format.\nMore information on this data can be found here:\nMorgan Fogarty, Sean M. Rafferty, Zachary E. Markow, Anthony C. O’Sullivan, Calamity F. Svoboda, Tessa George, Kelsey King, Dana Wilhelm, Kalyan Tripathy, Emily M. Mugler, Stephanie Naufel, Allen Yin, Jason W. Trobaugh, Adam T. Eggebrecht, Edward J. Richter, Joseph P. Culver; Functional brain mapping using whole-head very high-density diffuse optical tomography. Imaging Neuroscience 2025; 3 IMAG.a.54. doi: https://doi.org/10.1162/IMAG.a.54","recording_modality":["fnirs"],"senior_author":"Joseph P. 
Culver","sessions":["1","2"],"size_bytes":74413175764,"source":"openneuro","storage":{"backend":"s3","base":"s3://openneuro.org/ds007463","raw_key":"dataset_description.json","dep_keys":["CHANGES","README.md","datacite.yml"]},"study_design":null,"study_domain":null,"tasks":["AC001","AC002","GV001","GV002","GV003","HW001","HW002","HW003","MOT001","MOT002","MOV001","MOV002","MOV003","MOV004"],"timestamps":{"digested_at":"2026-04-22T12:30:17.928468+00:00","dataset_created_at":"2026-03-02T15:45:24.598Z","dataset_modified_at":"2026-03-06T16:43:09.000Z"},"total_files":88,"computed_title":"Very-High-Density Diffuse Optical Tomography System Validation Dataset","nchans_counts":[{"val":19086,"count":14},{"val":19528,"count":11},{"val":19426,"count":11},{"val":19620,"count":11},{"val":21518,"count":11},{"val":20874,"count":10},{"val":20218,"count":10},{"val":19908,"count":10}],"sfreq_counts":[{"val":7.8125,"count":88}],"stats_computed_at":"2026-04-22T23:16:00.312790+00:00","total_duration_s":68034.176,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"55dadb476e78565a","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.7,"modality":0.7,"type":0.7},"reasoning":{"few_shot_analysis":"Most similar few-shot examples by paradigm/stimulus structure are the healthy, stimulus-driven perception datasets: (1) the Visual discrimination task dataset labeled Visual + Perception (Meta-rdk) and (2) the auditory continuous stimulus dataset labeled Auditory + Perception (MusicvsSpeech). These examples indicate the convention that when participants primarily receive external sensory stimuli (e.g., visual discrimination or continuous listening), the Type is typically mapped to Perception (rather than Motor/Resting-state). For Modality, the convention is to label by the dominant stimulus channel (e.g., Visual for dot-motion; Auditory for music/speech). For Pathology, multiple examples show that when no disorder/diagnosis recruitment is stated, label as Healthy.","metadata_analysis":"Key metadata facts:\n- Population/health status: the README states only \"8 participants\" and does not mention any diagnosis or patient group: \"This dataset consists of 8 participants completing functional localizer and movie-viewing tasks\".\n- Modality/task content: the tasks are explicitly \"functional localizer and movie-viewing tasks\" and include task IDs like \"MOV001\", \"MOV002\", \"MOV003\", \"MOV004\" suggesting movie viewing; additionally, the study is described as \"Functional brain mapping\" (validation/mapping context): \"Functional brain mapping using whole-head very high-density diffuse optical tomography\".\n- Measurement modality is VHD-DOT and fMRI: \"completing ... 
tasks in both Very High Density Diffuse Optical Tomography (VHD-DOT) and fMRI\" (this helps interpret the tasks as stimulus-evoked functional mapping rather than rest).","paper_abstract_analysis":"No paper abstract content was provided in the metadata beyond the citation line; no additional task/stimulus details to further disambiguate modality (e.g., whether movies include audio) were available.","evidence_alignment_check":"Pathology:\n- Metadata says: only \"8 participants\" with no clinical descriptors (\"This dataset consists of 8 participants...\").\n- Few-shot pattern suggests: when no disorder recruitment is specified, label Healthy (seen across multiple healthy examples).\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"movie-viewing tasks\" and \"functional localizer\"; task list includes multiple \"MOV00X\" entries.\n- Few-shot pattern suggests: movie viewing/localizer paradigms are typically stimulus-driven; absent explicit auditory/tactile cues, label by the likely dominant channel (Visual), consistent with Visual + Perception examples.\n- Alignment: MOSTLY ALIGN, but some uncertainty remains because metadata does not explicitly state whether the movies had sound (could be multisensory).\n\nType:\n- Metadata says: tasks are \"functional localizer and movie-viewing\" and the cited work is \"Functional brain mapping\".\n- Few-shot pattern suggests: stimulus-driven paradigms (visual discrimination; continuous auditory stimulation) map to Perception when the aim is sensory/functional response characterization rather than motor or resting.\n- Alignment: ALIGN (Perception is the closest allowed construct for functional localizer + movie-viewing sensory mapping, though the dataset’s explicit purpose is also 'system validation').","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n1) Healthy — Evidence: no diagnosis/patient recruitment mentioned; \"8 participants\" only. Aligns with few-shot convention for non-clinical cohorts.\n2) Unknown — Would apply if participant health status were unclear, but lack of any clinical framing favors Healthy.\nFinal: Healthy. Confidence basis: one clear quote indicating only participants count and no clinical population.\n\nModality:\n1) Visual — Evidence: \"movie-viewing tasks\"; multiple task IDs \"MOV001\"-\"MOV004\"; functional localizers commonly visual. Few-shot convention labels modality by stimulus channel.\n2) Multisensory — Possible if movies included audio, but metadata never mentions sound/audio.\nFinal: Visual. Confidence basis: two metadata cues (movie-viewing + MOV task IDs) but no explicit statement about audio.\n\nType:\n1) Perception — Evidence: \"functional localizer and movie-viewing tasks\" plus \"Functional brain mapping\" implies stimulus-evoked sensory responses; few-shot visual/auditory stimulus datasets map to Perception.\n2) Other — Could reflect 'system validation' as an engineering aim, but the experimental content is still perception-like stimulus mapping.\nFinal: Perception. Confidence basis: one direct task description quote + strong few-shot analog (stimulus-driven perceptual paradigms)."}},"canonical_name":null,"name_confidence":0.78,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Fogarty2026_Very"}}
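A minimal sketch of how a consumer might read this record, assuming the response above has been saved locally as response.json (a hypothetical filename; no eegdash client API is assumed beyond the fields shown). It resolves the S3 object paths from "storage" and derives a few summary statistics from the recording fields:

import json

# Load the API response shown above (assumes it was saved as response.json;
# field names follow the record exactly, but no client library is assumed).
with open("response.json") as f:
    record = json.load(f)["data"]

# Basic identity and provenance.
print(record["dataset_id"], "-", record["name"])
print("license:", record["license"])
print("source:", record["external_links"]["source_url"])

# Storage: raw_key and dep_keys are paths relative to the S3 base.
base = record["storage"]["base"]
print(f"{base}/{record['storage']['raw_key']}")
for key in record["storage"]["dep_keys"]:
    print(f"{base}/{key}")

# Recording stats: channel counts vary per montage, so they are stored
# as value/count pairs; the sampling rate is uniform across files.
hours = record["total_duration_s"] / 3600
n_files = sum(c["count"] for c in record["nchans_counts"])
print(f"{n_files} files, {hours:.1f} h total, "
      f"sfreq {record['sfreq_counts'][0]['val']} Hz")

The histogram fields also give a cheap consistency check without touching S3: the per-value counts in nchans_counts sum to total_files (88), and sfreq_counts shows all 88 files sharing the 7.8125 Hz sampling rate.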