{
  "success": true,
  "database": "eegdash",
  "data": {
    "_id": "69d16e04897a7725c66f4c4e",
    "dataset_id": "ds007521",
    "associated_paper_doi": null,
    "authors": ["Moerel, Denise", "Chenh, Cecilia", "Bowman, Sophie", "Carlson, Thomas"],
    "bids_version": "1.0.2",
    "contact_info": ["Denise Moerel"],
    "contributing_labs": null,
    "data_processed": true,
    "dataset_doi": "doi:10.18112/openneuro.ds007521.v1.0.1",
    "datatypes": ["eeg"],
    "demographics": {
      "subjects_count": 23,
      "ages": [22, 62, 21, 18, 21, 23, 21, 20, 21, 21, 22, 23, 21, 22, 20, 21, 20, 22, 22, 21, 22, 18, 21],
      "age_min": 18,
      "age_max": 62,
      "age_mean": 22.82608695652174,
      "species": null,
      "sex_distribution": {"f": 14, "m": 9},
      "handedness_distribution": {"r": 20, "l": 3}
    },
    "experimental_modalities": null,
    "external_links": {
      "source_url": "https://openneuro.org/datasets/ds007521",
      "osf_url": null,
      "github_url": null,
      "paper_url": null
    },
    "funding": ["ARC DP200101787 (TAC)"],
    "ingestion_fingerprint": "4c66a7318e84ada67e09f2565df26a6040a8d94b64b227c72e605d412bbb312d",
    "license": "CC0",
    "n_contributing_labs": null,
    "name": "The effect of hunger and state preferences on the neural processing of food images",
    "readme": "A preprint of the manuscript can be found on bioRxiv: doi.org/10.1101/2025.09.09.674354\nThe experiment and analysis code can be found via the Open Science Framework: doi.org/10.17605/OSF.IO/ZFD7P\nExperiment Details:\nHuman electroencephalography recordings from 23 participants, who did a letter task and calorie categorisation task. In the letter task, participants viewed rapid streams of overlaid food/non-food images and letters, pressing a button whenever they saw a vowel, while ignoring the images. This setup directed attention away from the visual objects, making them task-irrelevant. In contrast, the calorie categorisation task required participants to actively evaluate each food image and classify it as higher or lower in calories than bread, by pressing a button.\nExperiment length: 1 hour",
    "recording_modality": ["eeg"],
    "senior_author": "Carlson, Thomas",
    "sessions": ["1", "2"],
    "size_bytes": 31115320529,
    "source": "openneuro",
    "storage": {
      "backend": "s3",
      "base": "s3://openneuro.org/ds007521",
      "raw_key": "dataset_description.json",
      "dep_keys": ["CHANGES", "README.txt", "participants.json", "participants.tsv", "task-food_eeg.json", "task-food_events.json"]
    },
    "study_design": null,
    "study_domain": null,
    "tasks": ["food"],
    "timestamps": {
      "digested_at": "2026-04-22T12:30:21.148916+00:00",
      "dataset_created_at": "2026-03-16T02:50:30.616Z",
      "dataset_modified_at": "2026-03-18T02:09:41.000Z"
    },
    "total_files": 46,
    "computed_title": "The effect of hunger and state preferences on the neural processing of food images",
    "nchans_counts": [{"val": 64, "count": 46}],
    "sfreq_counts": [{"val": 100.0, "count": 46}],
    "stats_computed_at": "2026-04-22T23:16:00.312849+00:00",
    "total_duration_s": null,
    "tagger_meta": {
      "config_hash": "3557b68bca409f28",
      "metadata_hash": "8d19f16f81b14b6b",
      "model": "openai/gpt-5.2",
      "tagged_at": "2026-04-07T09:32:40.872789+00:00"
    },
    "tags": {
      "pathology": ["Healthy"],
      "modality": ["Visual"],
      "type": ["Attention"],
      "confidence": {"pathology": 0.6, "modality": 0.85, "type": 0.7},
      "reasoning": {
        "few_shot_analysis": "Most similar conventions in the few-shot set are the visual-task examples that map visual image-based paradigms to Visual modality and perceptual/attentional constructs to Type. For instance, the schizophrenia visual moving-dots discrimination dataset is labeled (Visual, Perception), showing that image-based visual discrimination tasks generally map to Perception. Separately, the DPX cognitive control task in TBI is labeled Type=Attention, illustrating that when the paradigm’s key manipulation is where attention is directed (and performance depends on attentional control), Attention is the appropriate Type label. This dataset similarly contrasts an image-irrelevant task (vowel detection) vs an image-relevant evaluation task, making an Attention vs Perception decision the key mapping question.",
        "metadata_analysis": "Key metadata facts:\n1) Population: \"Human electroencephalography recordings from 23 participants\" with no mention of a diagnosed clinical group.\n2) Visual stimuli and attention manipulation: \"participants viewed rapid streams of overlaid food/non-food images and letters\" and \"pressing a button whenever they saw a vowel, while ignoring the images. This setup directed attention away from the visual objects\".\n3) Image evaluation task: \"the calorie categorisation task required participants to actively evaluate each food image and classify it as higher or lower in calories than bread\".\nThese support Healthy (no recruitment pathology stated), Visual modality (images/letters), and a Type centered on attentional allocation (ignore vs evaluate).",
        "paper_abstract_analysis": "No useful paper information. (Only a preprint link is provided; no abstract text included in the metadata provided here.)",
        "evidence_alignment_check": "Pathology:\n- Metadata says: \"Human electroencephalography recordings from 23 participants\" (no disorder/diagnosis stated).\n- Few-shot pattern suggests: absent explicit clinical recruitment language, label as Healthy.\n- Alignment: ALIGN (no conflict).\n\nModality:\n- Metadata says: \"viewed rapid streams of overlaid food/non-food images and letters\" and \"actively evaluate each food image\".\n- Few-shot pattern suggests: image-based tasks map to Visual modality.\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"directed attention away from the visual objects\" (letter/vowel task while ignoring images) versus \"actively evaluate each food image and classify it\" (calorie categorization).\n- Few-shot pattern suggests: if the primary manipulation is attentional focus/ignoring vs attending, Type=Attention (as in the DPX example); if primarily visual discrimination, Type=Perception.\n- Alignment: PARTIAL—metadata supports both Attention and Perception/Decision-making elements; the attention-directing contrast is explicitly highlighted, so Attention is selected.",
        "decision_summary": "Top-2 candidates per category and final choice:\n\nPathology:\n1) Healthy — Evidence: no clinical recruitment/diagnosis mentioned (\"23 participants\" only).\n2) Unknown — Could be considered if health status is not stated explicitly.\nHead-to-head: Healthy wins because the dataset is framed as a general cognitive experiment (hunger/state preference manipulation) with no clinical cohort indicated.\nConfidence drivers: lack of explicit \"healthy\" wording keeps confidence moderate.\n\nModality:\n1) Visual — Evidence: \"food/non-food images and letters\"; \"evaluate each food image\".\n2) Multisensory — Only if additional non-visual stimuli were present (not indicated).\nHead-to-head: Visual clearly wins (stimuli are images/letters).\nConfidence drivers: multiple explicit stimulus quotes.\n\nType:\n1) Attention — Evidence: explicit attentional manipulation: \"directed attention away from the visual objects\" in the letter task vs image-relevant evaluation in the calorie task.\n2) Decision-making — Evidence: binary categorization: \"classify it as higher or lower in calories than bread\".\nHead-to-head: Attention wins because the paradigm is explicitly designed around whether visual objects are task-irrelevant vs task-relevant (attend/ignore), a hallmark of attentional processing studies.\nConfidence drivers: one strong explicit quote for attention plus a plausible competing Decision-making interpretation."
      }
    },
    "canonical_name": null,
    "name_confidence": 0.55,
    "name_meta": {
      "suggested_at": "2026-04-14T10:18:35.343Z",
      "model": "openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"
    },
    "name_source": "author_year",
    "author_year": "Moerel2026"
  }
}