{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a340e","dataset_id":"ds005672","associated_paper_doi":null,"authors":["Li Zhiyuan","Zhao Jiaxin"],"bids_version":"1.8.0","contact_info":["Anonymous User"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds005672.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":3,"ages":[24,23,24],"age_min":23,"age_max":24,"age_mean":23.666666666666668,"species":null,"sex_distribution":{"m":3},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005672","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"a4052a103b2f608d600e2cdbe2126a2348305a7891fdfed1092525f8e4724cd8","license":"CC0","n_contributing_labs":null,"name":"PerceiveImagine","readme":"Participants perceive the image for 6 seconds based on the prompt, then close their eyes and imagine the image they just saw for 6 seconds based on the prompt. After hearing the prompt sound, they enter the next loop","recording_modality":["eeg"],"senior_author":"Zhao Jiaxin","sessions":[],"size_bytes":4545640816,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["PerceiveImagine"],"timestamps":{"digested_at":"2026-04-22T12:28:40.727014+00:00","dataset_created_at":"2024-11-30T09:25:18.469Z","dataset_modified_at":"2024-12-04T15:03:18.000Z"},"total_files":3,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005672","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv","task-PerceiveImagine_events.json"]},"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"33886005cf6174e8","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Memory"],"confidence":{"pathology":0.7,"modality":0.7,"type":0.7},"reasoning":{"few_shot_analysis":"Closest few-shot conventions: (1) 'EEG Motor Movement/Imagery Dataset' shows that when the research focus is imagery, the Type label can reflect the cognitive domain of the imagery (they label Motor for motor imagery). By analogy, this dataset’s imagery is visual (imagining pictures), so a non-motor cognitive label like Memory is a better fit than Motor. (2) 'Meta-rdk: Preprocessed EEG data' is a visual discrimination/perception task labeled Visual/Perception; our dataset includes an explicit perception phase (viewing images), supporting Visual as the main Modality and Perception as a runner-up Type. (3) 'digit span task' is labeled Memory when the task is about holding/manipulating recently presented stimuli; similarly, imagining a just-seen image fits a short-term memory/mental imagery framing.","metadata_analysis":"Key metadata facts: (1) Task structure includes visual perception and subsequent imagination: \"Participants perceive the image for 6 seconds... then close their eyes and imagine the image they just saw for 6 seconds\". (2) There is an auditory cue but it appears to be instructional: \"After hearing the prompt sound, they enter the next loop\". (3) Participant demographics do not indicate any disorder and look like a small normative sample: \"Subjects: 3; Sex: {'m': 3}; Age range: 23-24\".","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says nothing about a diagnosis and only lists basic demographics (\"Subjects: 3... Age range: 23-24\"), suggesting a normative cohort. 
Few-shot pattern: many non-clinical small-N experiments are labeled Healthy. ALIGN.\n\nModality: Metadata says the central stimulus is an image (\"perceive the image\") and also mentions an auditory prompt (\"prompt sound\"). Few-shot pattern: when one stimulus channel is clearly primary (e.g., visual discrimination tasks), Modality is that channel (Visual), while Multisensory is used when both channels are integral (e.g., cross-modal oddball with simultaneous visual+auditory cues). Here the sound appears to be a cue, not the main content. ALIGN (choose Visual).\n\nType: Metadata describes two phases: perception of an image and imagining the same image (\"imagine the image they just saw\"). Few-shot pattern: pure sensory discrimination maps to Perception, while maintaining/recalling recently presented items maps to Memory (digit-span example). This dataset emphasizes imagery of a just-seen stimulus; that fits Memory/mental imagery more than Perception alone. Mostly ALIGN (imagery not explicitly labeled in allowed types, so we map to the closest convention: Memory).","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n- Healthy: Supported by absence of any clinical recruitment language and only demographics (\"Subjects: 3...\").\n- Unknown: Possible because recruitment criteria aren’t explicitly stated.\nHead-to-head: Healthy wins because the metadata is consistent with a typical non-clinical sample and contains no pathology terms.\n\nModality:\n- Visual: \"perceive the image\" indicates the dominant stimulus content is visual.\n- Multisensory: \"prompt sound\" adds an auditory component.\nHead-to-head: Visual wins because the auditory element is phrased as a prompt/cue, while the core stimulus is an image.\n\nType:\n- Memory: \"imagine the image they just saw\" implies maintaining/reconstructing a just-presented visual stimulus (mental imagery/short-term memory).\n- Perception: The task includes an explicit perception phase (\"perceive the image for 6 seconds\").\nHead-to-head: Memory wins because the paradigm’s distinguishing feature is the imagination/recall phase; Perception remains plausible as a component but seems secondary to the perceive-then-imagine design.\n\nConfidence justifications:\n- Pathology (0.7): One explicit demographic quote and no competing clinical evidence.\n- Modality (0.7): One explicit quote about images plus one quote about a prompt sound; requires inference about which is dominant.\n- Type (0.7): One explicit quote about imagining the just-seen image; mapping imagery to Memory is a reasonable but still inferential step given allowed labels."}},"nemar_citation_count":2,"computed_title":"PerceiveImagine","nchans_counts":[{"val":69,"count":2},{"val":65,"count":1}],"sfreq_counts":[{"val":1000.0,"count":3}],"stats_computed_at":"2026-04-22T23:16:00.310780+00:00","source_url":"https://openneuro.org/datasets/ds005672","total_duration_s":16507.8,"author_year":"Zhiyuan2024","canonical_name":null}}
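Below is a minimal Python sketch of how one might load and sanity-check a record like the one above. It uses only the standard library; the filename ds005672.json is hypothetical (any saved copy of the response works), and every field name is taken directly from the record itself.

```python
import json
from statistics import mean

# Load a saved copy of the eegdash response above.
# "ds005672.json" is a hypothetical local filename.
with open("ds005672.json") as f:
    response = json.load(f)

assert response["success"]
data = response["data"]

# Basic identifiers and provenance.
print(data["dataset_id"], "-", data["name"])            # ds005672 - PerceiveImagine
print("source:", data["external_links"]["source_url"])

# Demographics: the stored age_mean should match the raw ages list.
demo = data["demographics"]
assert abs(mean(demo["ages"]) - demo["age_mean"]) < 1e-9

# Derived figures from the stored raw values.
print(f"size: {data['size_bytes'] / 1e9:.2f} GB")            # ~4.55 GB
print(f"duration: {data['total_duration_s'] / 3600:.2f} h")  # ~4.59 h

# Tagger output: one label per dimension, with per-dimension confidence.
tags = data["tags"]
for dim in ("pathology", "modality", "type"):
    print(dim, tags[dim], "conf:", tags["confidence"][dim])
```

One structural note the sketch relies on: nchans_counts and sfreq_counts read as per-recording histograms (val/count pairs). Their counts sum to total_files = 3, consistent with two 69-channel and one 65-channel recordings, all sampled at 1000 Hz.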