{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3428","dataset_id":"ds005932","associated_paper_doi":null,"authors":["Phillip J. Holcomb","Jacklyn Jardel","Katherine J. Midgley","and Karen Emmorey"],"bids_version":"1.8.0","contact_info":["Jacklyn Jardel","Phillip Holcomb","Phillip Holcomb"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds005932.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":29,"ages":[29,20,22,23,24,21,25,20,21,20,19,20,23,22,27,29,35,35,29,20,25,26,31,30,20,26,25,22,19],"age_min":19,"age_max":35,"age_mean":24.413793103448278,"species":null,"sex_distribution":{"f":19,"m":10},"handedness_distribution":{"r":28,"l":1}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005932","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"583355835e6c2df177468e87c56419c00ef547f65873cb39968a15640d13e1f9","license":"CC0","n_contributing_labs":null,"name":"PWIe","readme":"Data collection took place at the NeuroCognition Laboratory (NCL) in San Diego, California under the supervision of Dr. Phillip Holcomb. This project followed the San Diego State University’s IRB guidelines.\nParticipants sat in a comfortable chair in a darkened sound attenuated room throughout the experiment and wore 32 head and face electrodes (left mastoid reference). They were given a gamepad for button pressing and wore a lightweight headset to record their verbal responses. They were instructed to watch the LCD video monitor that was at a viewing distance of 150cm. All stimuli were less than 2° of horizontal and vertical visual angle.\nParticipants were presented with 100 unique simple black on white to-be-named line drawings, with 50 pictures in the Semantic category and 50 in the Identity category. Each picture was presented twice, once preceded by an unrelated English distractor word and once by a related English distractor word (2000 ms duration). Prime \"distractor\" words were presented before the picture for 200 ms and were either semantically related, were the same name as the picture, or were unrelated to the picture. Participants were told to name each picture as quickly as possible in English. Their voice response was digitized online. The experiment was self-paced and participants pressed a button after each trial when ready to go on.\nEEG was sampled continuously at 500 Hz with a bandpass of DC to 200 Hz. Event markers were stored with the EEG data for later ERP averaging. The raw EEG data were imported into EEGLab and saved as .set files. A key to the event code structure is contained in the PWIe bdf files for each subject.                                                                                                                                                                                                                                                                                                                                                                                                                                           
\u0000","recording_modality":["eeg"],"senior_author":"and Karen Emmorey","sessions":[],"size_bytes":2446718661,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["PictureWordInterference"],"timestamps":{"digested_at":"2026-04-22T12:29:00.137046+00:00","dataset_created_at":"2025-02-19T01:35:23.604Z","dataset_modified_at":"2025-11-05T20:18:42.000Z"},"total_files":29,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005932","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv","task-PictureWordInterference_events.json"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"5a5d182f8723bb4d","model":"openai/gpt-5.2","tagged_at":"2026-01-20T18:44:29.380598+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Other"],"confidence":{"pathology":0.6,"modality":0.85,"type":0.7},"reasoning":{"few_shot_analysis":"Closest few-shot by stimulus/task structure is the schizophrenia-spectrum “visual discrimination task” example (moving dot stimuli on a screen, visual decisions), which is labeled Modality=Visual and Type=Perception. This guides the convention that screen-presented stimuli (pictures/words) map to Modality=Visual. However, unlike a pure perceptual discrimination task, the current dataset is a picture naming / picture-word interference paradigm (language production/semantic interference), which is not directly covered by the provided Type labels; this pushes Type toward Other rather than Perception. For Pathology, many few-shots explicitly state clinical recruitment (e.g., Parkinson’s, TBI, Dementia). Here, no disorder is mentioned, so by convention this defaults to Healthy/Unknown; given a standard lab ERP study description, Healthy is the best match but with reduced confidence due to missing explicit statement.","metadata_analysis":"Key task/stimulus facts from the provided README include: (1) visual presentation: “They were instructed to watch the LCD video monitor... All stimuli were less than 2° of horizontal and vertical visual angle.” (2) picture-word interference structure: “Participants were presented with 100 unique simple black on white to-be-named line drawings... Each picture was presented twice, once preceded by an unrelated English distractor word and once by a related English distractor word... Prime ‘distractor’ words were presented before the picture for 200 ms.” (3) language production response: “Participants were told to name each picture as quickly as possible in English. Their voice response was digitized online.” No clinical population is described anywhere in the snippet (no mention of patients/diagnoses).","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata SAYS no diagnosis/clinical recruitment is described (e.g., only “Participants sat... wore 32 head and face electrodes...”). Few-shot pattern SUGGESTS that when clinical groups are present they are explicitly named (e.g., “Parkinsons disease”, “mTBI”, “Alzheimer’s disease”); absence of such wording typically aligns with Healthy. ALIGN (default-healthy convention), but weak because metadata does not explicitly say “healthy”.\n\nModality: Metadata SAYS visual stimuli on a monitor: “watch the LCD video monitor” and “line drawings” plus visually presented “distractor word”. Few-shot pattern SUGGESTS screen-based stimuli map to Visual (e.g., visual discrimination task). 
ALIGN.\n\nType: Metadata SAYS the paradigm is picture naming with semantic/identity related distractor words: “to-be-named line drawings” and “semantically related... or unrelated” primes; primary aim appears to be semantic/lexical interference in naming (language production), not a basic perceptual discrimination. Few-shot pattern SUGGESTS labeling pure visual discrimination as Perception, and explicit cognitive constructs like Memory for digit span. Here, the best fit among allowed Types is ambiguous; language/semantic interference is not a named Type label, so choosing Other over Perception better matches the ‘construct studied’ rule. PARTIAL CONFLICT (Perception is plausible by stimulus nature, but construct is language/semantic interference); metadata-driven interpretation wins -> Other.","decision_summary":"Top-2 candidates — Pathology: (1) Healthy: supported by lack of any clinical recruitment language and standard lab ERP participant description (“Participants sat...”, IRB/lab setting). (2) Unknown: because no explicit “healthy controls” statement is provided. Final: Healthy. Confidence=0.6 (inference from absence of clinical terms; no explicit quote confirming health).\n\nTop-2 candidates — Modality: (1) Visual: “watch the LCD video monitor” and “line drawings” and visually presented “distractor word”. (2) Multisensory: because responses include spoken naming and button presses, but these are responses not stimulus channels. Final: Visual. Confidence=0.85 (multiple explicit stimulus quotes).\n\nTop-2 candidates — Type: (1) Other: language production / semantic interference focus (“name each picture”, “semantically related... or unrelated”). (2) Perception: could be argued due to visual object processing, but task goal is naming/lexical-semantic access rather than perceptual detection. Final: Other. Confidence=0.7 (clear task description but Type label is a coarse fit)."}},"computed_title":"PWIe","nchans_counts":[{"val":32,"count":29}],"sfreq_counts":[{"val":500.0,"count":29}],"stats_computed_at":"2026-04-22T23:16:00.311120+00:00","total_duration_s":35808.9,"canonical_name":null,"name_confidence":0.46,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Holcomb2025"}}