{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a33c8","dataset_id":"ds005279","associated_paper_doi":null,"authors":["Hsi T. Wei","Farhan B. Faisal","Tamara Beck","Claire Shao","Jed A. Meltzer"],"bids_version":"1.9.0","contact_info":["Tiana Wei"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds005279.v1.0.3","datatypes":["meg"],"demographics":{"subjects_count":30,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005279","osf_url":null,"github_url":null,"paper_url":null},"funding":["This research was supported by NSERC Discovery Grant RGPIN-2019-06515"],"ingestion_fingerprint":"c92ee20810b72da78fdc3ca838a022772efb41069d3df73c0fb959b1ec32b8af","license":"CC0","n_contributing_labs":null,"name":"Picture-Word Interference Dataset","readme":"This study was conducted at the Rotman Research Institute at Baycrest Hospital in Toronto, Canada.\nThis dataset contains 30 healthy young adults' MEG (CTF), sMRI, and behavioural data on a picture-word interference (PWI) task. Subjects were shown images of objects one by one and were instructed to retrieve the name of the pictures covertly and judge whether the name ends in a target sound given at the beginning of each task block, by pressing the yes or no buttons with their right hand. Whenever they see an image, they will often also hear a distractor word played through their earphone. The picture and word could be phonologically related, semantically related, or unrelated.\nThere were 3 runs of the PWI task for each participant. Each run contained 120 trials, containing an equal number of trials for each picture-word condition. Behaviourally, the reaction time and accuracy of their button-pressing response were recorded. Meanwhile, the MEG data was epoched to the picture onset and response onset for event-related analyses. Each subject obtained their own structural MRI for MEG source localization.\nCorresponding analysis code can be found under the code folder, with the \"analysis walkthrough\" documenting more detailed explanation of the analysis.","recording_modality":["meg"],"senior_author":"Jed A. Meltzer","sessions":["001"],"size_bytes":63222442220,"source":"openneuro","study_design":null,"study_domain":null,"tasks":[],"timestamps":{"digested_at":"2026-04-22T12:27:29.657194+00:00","dataset_created_at":"2024-06-24T16:17:02.918Z","dataset_modified_at":"2024-07-03T20:27:47.000Z"},"total_files":90,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005279","raw_key":"dataset_description.json","dep_keys":["CHANGES","README"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"ebaed604a1da890e","model":"openai/gpt-5.2","tagged_at":"2026-01-20T17:43:44.238345+00:00"},"tags":{"pathology":["Healthy"],"modality":["Multisensory"],"type":["Other"],"confidence":{"pathology":0.7,"modality":0.85,"type":0.6},"reasoning":{"few_shot_analysis":"Closest few-shot convention match for Modality is the \"Cross-modal Oddball Task\" example, which labels simultaneous visual and auditory cues as \"Multisensory\". This dataset similarly combines visual pictures with an auditory distractor word on (many) trials, so the same modality convention applies. For Pathology, many few-shot examples explicitly map \"healthy\" participant descriptions to the \"Healthy\" label. 
For Type, there is no direct few-shot analog for picture-word interference / language production; the closest Type examples involve attention/control (e.g., DPX cognitive control) but are not language paradigms, so Type must rely more on metadata facts and the label set limits (likely \"Other\" vs \"Attention\").","metadata_analysis":"Key population and paradigm facts from README: (1) Population: \"This dataset contains 30 healthy young adults' MEG (CTF), sMRI, and behavioural data\". (2) Multisensory stimuli: \"Subjects were shown images of objects\" and \"they will often also hear a distractor word played through their earphone.\" (3) Task/cognitive aim: participants \"retrieve the name of the pictures covertly\" and make a phonological judgment: \"judge whether the name ends in a target sound\"; distractor manipulation: \"phonologically related, semantically related, or unrelated.\"","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology — Metadata says: \"30 healthy young adults\" (explicit). Few-shot pattern suggests labeling such cohorts as \"Healthy\". ALIGN.\nModality — Metadata says both \"shown images of objects\" (visual) and \"hear a distractor word\" (auditory). Few-shot convention (cross-modal oddball) suggests combined auditory+visual => \"Multisensory\". ALIGN.\nType — Metadata says the task is picture naming/lexical retrieval with picture-word interference and phonological judgment (language production/psycholinguistics). Few-shot patterns for \"Attention\" involve cognitive control tasks (e.g., DPX) rather than language; thus weak alignment. Given allowed Type labels do not include Language, assigning \"Other\" fits better than forcing an attention label. PARTIAL CONFLICT/AMBIGUITY resolved in favor of \"Other\" due to construct mismatch with available labels.","decision_summary":"Top-2 candidates per category:\nPathology: (1) Healthy — supported by \"30 healthy young adults\". (2) Unknown — only if no population info (not the case). Final: Healthy. Confidence justified by 1 explicit quote.\nModality: (1) Multisensory — supported by \"shown images of objects\" + \"hear a distractor word\"; few-shot cross-modal oddball uses Multisensory for visual+auditory. (2) Visual — if focusing only on pictures, but auditory distractors are integral. Final: Multisensory. Confidence justified by 2 explicit quotes + strong few-shot analog.\nType: (1) Other — task centers on lexical retrieval/picture-word interference and phonological/semantic manipulation (language-focused construct not represented in allowed Type list). (2) Attention — plausible because distractor interference requires attentional control, but this is secondary to language processing in the description. Final: Other. Confidence moderate due to ambiguity and lack of a perfect label."}},"nemar_citation_count":0,"computed_title":"Picture-Word Interference Dataset","nchans_counts":[],"sfreq_counts":[{"val":1200.0,"count":90}],"stats_computed_at":"2026-04-22T23:16:00.309174+00:00","total_duration_s":null,"author_year":"Wei2024","size_human":"58.9 GB","canonical_name":null}}
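A minimal consumption sketch in Python follows, assuming the response above has been saved locally as ds005279.json (a hypothetical file name; no eegdash client API is assumed, only the record's own fields). It reconstructs the S3 object URIs from the "storage" block and reproduces "size_human" from "size_bytes". Note the derived value matches only under binary units: 63222442220 / 1024**3 is about 58.9, while 63222442220 / 10**9 is about 63.2, so the record labels a GiB value as "GB".

import json

def human_size(n_bytes: int) -> str:
    # Reproduce the record's convention: binary gibibytes labelled "GB"
    # (63222442220 / 1024**3 ~= 58.9, matching "size_human" above).
    return f"{n_bytes / 1024**3:.1f} GB"

# "ds005279.json" is a hypothetical local copy of the response above.
with open("ds005279.json") as f:
    record = json.load(f)["data"]

# Full S3 URIs: join "base" with "raw_key" and each entry of "dep_keys".
storage = record["storage"]
raw_uri = f"{storage['base']}/{storage['raw_key']}"
dep_uris = [f"{storage['base']}/{key}" for key in storage["dep_keys"]]

print(raw_uri)    # s3://openneuro.org/ds005279/dataset_description.json
print(dep_uris)   # ['s3://openneuro.org/ds005279/CHANGES', 's3://openneuro.org/ds005279/README']
print(human_size(record["size_bytes"]))  # 58.9 GB
assert human_size(record["size_bytes"]) == record["size_human"]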