{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a33aa","dataset_id":"ds004995","associated_paper_doi":null,"authors":["Denise Moerel","James Psihoyos","Thomas A. Carlson"],"bids_version":"1.0.2","contact_info":["Denise Moerel"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds004995.v1.0.2","datatypes":["eeg"],"demographics":{"subjects_count":20,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds004995","osf_url":null,"github_url":null,"paper_url":null},"funding":["ARC DP160101300 (TAC)","ARC DP200101787 (TAC)"],"ingestion_fingerprint":"a5de6afdd390698545561d63b958d2da307afac6c161cb3d0414440b01f150dc","license":"CC0","n_contributing_labs":null,"name":"The Time-Course of Food Representation in the Human Brain","readme":"The main folder contains the raw EEG data in standard bids format. See references.\nCode and figures: https://doi.org/10.17605/OSF.IO/PWC4K\nManuscript: https://doi.org/10.1101/2023.06.06.543985\nReferences:\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nNiso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. https://doi.org/10.1038/sdata.2018.110","recording_modality":["eeg"],"senior_author":"Thomas A. 
Carlson","sessions":[],"size_bytes":29637642698,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["food"],"timestamps":{"digested_at":"2026-04-22T12:27:10.768877+00:00","dataset_created_at":"2024-02-28T04:25:48.173Z","dataset_modified_at":"2024-03-24T05:23:36.000Z"},"total_files":20,"storage":{"backend":"s3","base":"s3://openneuro.org/ds004995","raw_key":"dataset_description.json","dep_keys":["CHANGES","README"]},"nemar_citation_count":1,"computed_title":"The Time-Course of Food Representation in the Human Brain","nchans_counts":[{"val":127,"count":20}],"sfreq_counts":[{"val":1000.0,"count":20}],"stats_computed_at":"2026-04-22T23:16:00.308797+00:00","tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.7,"modality":0.6,"type":0.6},"reasoning":{"few_shot_analysis":"Most similar few-shot conventions are the healthy + visual stimulus examples that label stimulus-driven category/feature decoding as Visual + Perception (e.g., the example \"Subcortical responses to music and speech...\" is stimulus-driven perception and is labeled Modality=Auditory, Type=Perception; by analogy, a stimulus-driven representation study with food categories would map to Modality=Visual (if images) and Type=Perception). Also consistent with the schizophrenia visual discrimination example labeled Modality=Visual, Type=Perception: even when the analysis focus is representational/decoding, the construct is still primarily perceptual processing rather than motor or resting-state.","metadata_analysis":"Available metadata is sparse but indicates a task-based EEG study about food representations, with no clinical recruitment mentioned. Key snippets: (1) Title: \"The Time-Course of Food Representation in the Human Brain\" (implies stimulus-evoked representational processing over time). (2) tasks: \"food\" (suggests an experimental condition involving food stimuli). 
(3) participants_overview: \"Subjects: 20\" (no patient/control groups or diagnoses listed). (4) readme: \"The main folder contains the raw EEG data in standard bids format.\" (confirms EEG task dataset but not stimulus modality).","paper_abstract_analysis":"No useful paper information. (Only manuscript/preprint link is provided; no abstract text included in the provided metadata.)","evidence_alignment_check":"Pathology: Metadata says no disorder is mentioned (\"Subjects: 20\"; no diagnosis terms anywhere). Few-shot pattern suggests that when no clinical population is specified, label as Healthy. ALIGN.\n\nModality: Metadata does not explicitly state the sensory modality; it only indicates a \"food\" task and \"Food Representation\" in the title. Few-shot conventions suggest that representational studies typically reflect the stimulus channel (e.g., music/speech -> Auditory; dot-motion -> Visual). Given typical food-representation EEG paradigms use visual food images, the few-shot pattern suggests Visual, but this is an inference rather than an explicit fact. PARTIAL ALIGN (inference required).\n\nType: Metadata emphasizes \"Representation\" and \"Time-Course\" rather than choice, learning, or intervention; this most closely matches perceptual/representational processing. Few-shot conventions map discrimination/feature-encoding tasks to Perception (e.g., visual discrimination; auditory encoding). 
ALIGN, but still somewhat inferential due to missing task description.","decision_summary":"Top-2 candidates per category with head-to-head selection:\n\nPathology:\n- Healthy: Supported by absence of any clinical recruitment info (\"Subjects: 20\"; no group/disease labels) and dataset framed as basic neuroscience (\"Food Representation in the Human Brain\").\n- Unknown: Possible because participants are not explicitly described as healthy controls.\nDecision: Healthy wins because EEGDash convention is to label non-clinical, non-disorder-focused studies as Healthy when no diagnosis is stated.\n\nModality:\n- Visual: Likely food stimuli are presented as pictures for representational time-course decoding; task label \"food\" and title imply stimulus-evoked representations.\n- Other: Possible if food stimuli were olfactory/gustatory or mixed, but there is no metadata support for those.\nDecision: Visual wins as the most plausible dominant stimulus channel given typical EEG food-representation paradigms; however confidence is limited because the modality is not explicitly stated.\n\nType:\n- Perception: Best matches \"Time-Course of ... 
Representation\" (stimulus-evoked representational dynamics).\n- Affect: Food can be affective/reward-related, but no metadata mentions emotion, reward, craving, liking, or valuation.\nDecision: Perception wins because the stated goal centers on representational processing over time, not affective evaluation.\n\nConfidence justification (quotes/features): Pathology relies on the explicit lack of clinical terms plus \"Subjects: 20\"; Modality and Type rely mainly on title \"Food Representation\" and task name \"food\" with conventional inference from similar stimulus-driven few-shot examples."}},"total_duration_s":null,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"ee0bf884e1a437a5","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"canonical_name":null,"name_confidence":0.63,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Moerel2024"}}
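A minimal sketch of how a record like the one above might be consumed with Python's standard `json` module. The field names (`data.dataset_id`, `size_bytes`, `nchans_counts`, `sfreq_counts`) are taken from the payload itself; the trimmed-down `payload` string below is an illustrative subset of the response, not the full record, and no eegdash client API is assumed.

```python
import json

# Illustrative subset of the eegdash response above (values copied from the record).
payload = json.loads("""
{"success": true,
 "database": "eegdash",
 "data": {"dataset_id": "ds004995",
          "name": "The Time-Course of Food Representation in the Human Brain",
          "size_bytes": 29637642698,
          "demographics": {"subjects_count": 20},
          "nchans_counts": [{"val": 127, "count": 20}],
          "sfreq_counts": [{"val": 1000.0, "count": 20}]}}
""")

record = payload["data"]

# Dataset size in gigabytes (decimal GB).
size_gb = record["size_bytes"] / 1e9

# Summarize the per-recording stats as {value: number_of_recordings}.
chans = {c["val"]: c["count"] for c in record["nchans_counts"]}
sfreqs = {s["val"]: s["count"] for s in record["sfreq_counts"]}

print(record["dataset_id"], round(size_gb, 1), "GB", chans, sfreqs)
```

Here every one of the 20 recordings reports 127 channels at 1000 Hz, so `chans` collapses to a single entry; a heterogeneous dataset would yield multiple `{val, count}` pairs, which is presumably why the schema stores a list rather than a scalar.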