{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a342e","dataset_id":"ds005964","associated_paper_doi":null,"authors":["Robert Luke","Maureen Shader","David McAlpine"],"bids_version":"1.7.0","contact_info":[],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds005964.v1.0.0","datatypes":["fnirs"],"demographics":{"subjects_count":17,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005964","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"6fc06e5b9a81f07a40d5e0489e3da4b59f59244fde00b9beab1581bb06ca9cf4","license":"CC0","n_contributing_labs":null,"name":"FRESH Audio Dataset","readme":"﻿References\n----------\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). 
https://doi.org/10.21105/joss.01896\nIn preparation","recording_modality":["fnirs"],"senior_author":"David McAlpine","sessions":["01"],"size_bytes":65461433,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["FRESHAUDIO"],"timestamps":{"digested_at":"2026-04-22T12:29:01.448933+00:00","dataset_created_at":"2025-03-06T19:29:51.176Z","dataset_modified_at":"2025-03-06T19:37:17.000Z"},"total_files":17,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005964","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"c79a0e732b4e588c","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Unknown"],"modality":["Auditory"],"type":["Perception"],"confidence":{"pathology":0.4,"modality":0.7,"type":0.6},"reasoning":{"few_shot_analysis":"Most similar few-shot by stimulus channel is: (1) “Subcortical responses to music and speech…” which is labeled Modality=Auditory and Type=Perception based on listening to acoustic stimuli. This guides mapping an audio-listening dataset toward Auditory modality, and (if task is stimulus-driven rather than resting) toward Perception type. However, unlike that example, the current metadata does not explicitly describe the paradigm (e.g., clicks/music/speech, oddball, etc.), so Type is less certain. For Pathology, many few-shots label Healthy when participants are described as healthy/controls; here, participant health status is not stated, so we cannot use few-shot convention as a factual substitute.","metadata_analysis":"Key available metadata is very sparse. The dataset title indicates an audio focus: “FRESH Audio Dataset”. The only participant info is a count: “Subjects: 17”. The task name is non-descriptive but suggests an experimental task rather than pure rest: tasks: “FRESHAUDIO”. 
There are no explicit statements about diagnosis/clinical recruitment, and no description of stimuli beyond the word “Audio” in the title.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata SAYS only “Subjects: 17” with no diagnosis/healthy/control wording; few-shot pattern SUGGESTS many non-clinical experimental datasets are Healthy, but this would be an inference without explicit support. No direct conflict; evidence is insufficient, so assign Unknown.\nModality: Metadata SAYS “FRESH Audio Dataset” (explicit ‘Audio’); few-shot pattern SUGGESTS labeling such datasets as Auditory (e.g., music/speech ABR dataset). These ALIGN.\nType: Metadata SAYS only a task label “FRESHAUDIO” without paradigm details; few-shot pattern SUGGESTS audio-stimulus experiments often map to Perception when primarily sensory/encoding focused. This partially aligns but is weakly supported by metadata, so Type remains uncertain; choose Perception as the best-supported by the ‘Audio’ emphasis but with lower confidence.","decision_summary":"Top-2 candidates per category:\n- Pathology: (A) Unknown—supported by lack of any diagnostic/healthy wording (quotes: “Subjects: 17”); (B) Healthy—inferred from non-clinical framing but not stated. Winner: Unknown (metadata insufficient to assert Healthy). Evidence alignment: few-shot convention cannot override missing factual metadata.\n- Modality: (A) Auditory—supported by title “FRESH Audio Dataset”; (B) Unknown—if ‘Audio’ were incidental, but it is in the dataset name. Winner: Auditory. Evidence alignment: aligns with auditory few-shot conventions.\n- Type: (A) Perception—best fit given explicit “Audio” focus and task present (tasks: “FRESHAUDIO”); (B) Unknown—because no paradigm/cognitive construct is described. Winner: Perception, but weakly. 
Confidence reflects limited supporting quotes (mostly title only)."}},"computed_title":"FRESH Audio Dataset","nchans_counts":[{"val":66,"count":17}],"sfreq_counts":[{"val":5.208333333333333,"count":17}],"stats_computed_at":"2026-04-22T23:16:00.311199+00:00","total_duration_s":22117.248,"canonical_name":null,"name_confidence":0.55,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Luke2025"}}