{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3402","dataset_id":"ds005565","associated_paper_doi":null,"authors":["Brittany Lee","Sofia E. Ortega","Priscilla M. Martinez","Katherine J. Midgley","Phillip J. Holcomb","Karen Emmorey"],"bids_version":"1.8.0","contact_info":["Sofia E. Ortega"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds005565.v1.0.3","datatypes":["eeg"],"demographics":{"subjects_count":24,"ages":[24,32,27,32,38,33,44,53,37,20,33,26,27,30,34,26,34,35,39,36,22,41,38,40],"age_min":20,"age_max":53,"age_mean":33.375,"species":null,"sex_distribution":{"m":13,"f":11},"handedness_distribution":{"r":22,"l":1}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005565","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"23cd5fa4b56370f56b0535bc61007244498a15f0fe62582a682b8d76fb6d193c","license":"CC0","n_contributing_labs":null,"name":"Neural associations between fingerspelling, print, and signs: An ERP priming study with deaf readers","readme":"Data collection took place at the NeuroCognition Laboratory (NCL) in San Diego, California under the supervision of Dr. Phillip Holcomb. This project followed the San Diego State University’s IRB guidelines.\nParticipants sat in a comfortable chair in a darkened sound attenuated room throughout the experiment. They were given a gamepad for button pressing. They were instructed to watch the LCD video monitor that was at a viewing distance of 150cm.\nParticipants were presented with 300 prime-target pairs. All targets were four-letter English words. Of the 300 critical trials, 100 had English word primes, 100 had ASL sign primes, and 100 had fingerspelled word primes. Half of the primes in each condition were related to the targets. Related English word primes were identity primes to the English word, related fingerspelled word primes were also identity primes, and related ASL primes were ASL translations of the English word targets. The other half of the primes were unrelated to the targets.\nParticipants were instructed to focus on the purple fixation cross that appeared on the screen for 800ms. This fixation cross then turned white for 500ms. Then, one of three prime conditions was presented: an English word, an ASL sign, or a fingerspelled word. English prime words were presented for 300ms. Signed (M = 565ms) and fingerspelled (M = 1173ms) video primes had variable durations. All target stimuli were 4-letter English words presented for 500ms. Related primes were either identity or translations.\nPress any of the 4 buttons on the right of the gamepad whenever you see an animal. It doesn’t matter if the animal is presented as a sign, a word, or fingerspelled. Press for ANY animal. You can blink whenever you see purple. A purple + means you have time for a quick blink. A purple (--) means you can blink as much as you want.","recording_modality":["eeg"],"senior_author":"Karen Emmorey","sessions":[],"size_bytes":2816585202,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["SemanticCategorization"],"timestamps":{"digested_at":"2026-04-22T12:28:38.197174+00:00","dataset_created_at":"2024-10-10T00:40:09.570Z","dataset_modified_at":"2024-10-17T20:57:24.000Z"},"total_files":24,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005565","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv","task-SemanticCategorization_events.json"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"7d10350fd6641259","model":"openai/gpt-5.2","tagged_at":"2026-01-20T18:30:04.890267+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Memory"],"confidence":{"pathology":0.7,"modality":0.85,"type":0.7},"reasoning":{"few_shot_analysis":"Most similar few-shot convention is the digit span dataset (Healthy; Auditory; Memory), which maps a language/cognitive task involving verbal materials to the Type label \"Memory\" rather than \"Perception\". In contrast, the schizophrenia example uses a visual dot-motion discrimination paradigm and is labeled \"Perception\", reflecting a sensory discrimination aim. This dataset uses word/sign primes and word targets with semantic relatedness manipulation (priming), which is closer to semantic/lexical memory processing than low-level perceptual discrimination, so I follow the digit-span convention for labeling higher-level verbal/semantic processing as \"Memory\".\nFor Modality, several few-shots show that the stimulus channel (not button press) drives modality labels; here stimuli are visually presented words and videos, aligning with \"Visual\".","metadata_analysis":"Key population/task/stimulus facts from the README:\n1) Visual language priming paradigm: \"Participants were presented with 300 prime-target pairs.\" and \"All targets were four-letter English words.\" and \"one of three prime conditions was presented: an English word, an ASL sign, or a fingerspelled word.\"\n2) Visual presentation specifics: \"They were instructed to watch the LCD video monitor\" and \"English prime words were presented for 300ms. Signed ... and fingerspelled ... video primes had variable durations.\" and \"All target stimuli were 4-letter English words presented for 500ms.\"\n3) Task requirement (secondary/go-no-go): \"Press any of the 4 buttons ... whenever you see an animal. It doesn’t matter if the animal is presented as a sign, a word, or fingerspelled.\"","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: no diagnosis or clinical recruitment is mentioned; only general procedures like \"Participants sat in a comfortable chair...\".\n- Few-shot pattern suggests: when no disorder is described, label as \"Healthy\" (e.g., multiple few-shots labeled Healthy for standard cognitive tasks).\n- Alignment: ALIGN (no clinical facts to override).\n\nModality:\n- Metadata says: stimuli are shown on a \"video monitor\" and include \"English word\" primes and \"ASL sign\" / \"fingerspelled\" video primes, plus visual fixation crosses.\n- Few-shot pattern suggests: modality follows stimulus channel; visual screen-based stimuli -> \"Visual\".\n- Alignment: ALIGN.\n\nType:\n- Metadata says: prime-target relatedness manipulation with words/signs: \"Half of the primes in each condition were related to the targets\" and \"Related ... primes were either identity or translations\" (semantic priming / lexical-semantic access). Also includes an animal-detection button press instruction.\n- Few-shot pattern suggests: higher-level verbal working-memory tasks map to \"Memory\" (digit span example), while low-level discrimination maps to \"Perception\" (dot motion discrimination example).\n- Alignment: PARTIAL; task involves visual perception of stimuli, but the experimental manipulation (semantic relatedness/translation priming) more strongly matches memory/semantic access than pure perception.","decision_summary":"Top-2 candidates per category (with head-to-head comparison):\n\nPathology:\n1) Healthy — Evidence: no clinical population stated; generic lab EEG experiment description (e.g., \"Data collection took place at the NeuroCognition Laboratory...\"). Matches few-shot convention that non-clinical cognitive tasks are labeled Healthy.\n2) Unknown — Evidence: participant characteristics (e.g., deaf/hearing, ASL proficiency) are not specified.\nDecision: Healthy wins because the dataset provides no explicit clinical recruitment and appears to be a standard cognitive EEG experiment.\nConfidence notes: limited by lack of explicit \"healthy\" wording.\n\nModality:\n1) Visual — Evidence: \"watch the LCD video monitor\"; primes are \"an English word, an ASL sign, or a fingerspelled word\" with video durations; targets are visually presented English words.\n2) Multisensory — Weak evidence: none for auditory/tactile stimuli; only visual is described.\nDecision: Visual wins clearly.\n\nType:\n1) Memory — Evidence: semantic priming/lexical-semantic manipulation: \"prime-target pairs\"; \"related to the targets\"; \"ASL translations of the English word targets\" (semantic/lexical access consistent with semantic memory).\n2) Perception — Evidence: stimuli are visually perceived (words/videos) and there is a detection response for animals; however this is not sensory discrimination as the primary construct.\nDecision: Memory wins because the core manipulation is semantic relatedness/translation priming rather than perceptual discrimination.\n\nNo metadata facts were overridden by few-shot patterns; few-shots were used only to follow labeling conventions."}},"nemar_citation_count":1,"computed_title":"Neural associations between fingerspelling, print, and signs: An ERP priming study with deaf readers","nchans_counts":[{"val":32,"count":24}],"sfreq_counts":[{"val":500.0,"count":24}],"stats_computed_at":"2026-04-22T23:16:00.310624+00:00","total_duration_s":41168.5,"author_year":"Lee2024_StudyWITH","canonical_name":null}}