{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3474","dataset_id":"ds006914","associated_paper_doi":null,"authors":["Ryuzaburo Kochi","Aya Kanno","Hiroshi Uda","Keisuke Hatano","Masaki Sonoda","Hidenori Endo","Michael Cools","Robert Rothermel","Aimee F. Luat","Eishi Asano"],"bids_version":"1.7.0","contact_info":["Ryuzaburo Kochi"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds006914.v1.0.3","datatypes":["ieeg"],"demographics":{"subjects_count":110,"ages":[16,17,8,11,17,17,14,10,10,6,14,11,13,23,10,5,16,16,37,14,5,11,21,17,15,44,37,14,28,20,14,13,41,12,8,10,10,12,9,28,27,17,15,6,12,5,9,30,21,13,12,11,17,16,17,8,13,12,13,11,15,14,11,12,17,11,17,10,11,16,15,6,15,10,10,16,12,8,14,19,16,8,5,16,19,15,14,5,16,13,16,9,13,11,13,20,49,12,15,8,16,17,13,7,6,17,19,15,18,8],"age_min":5,"age_max":49,"age_mean":14.790909090909091,"species":null,"sex_distribution":{"f":52,"m":58},"handedness_distribution":{"r":101,"l":8,"a":1}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds006914","osf_url":null,"github_url":null,"paper_url":null},"funding":["N/A"],"ingestion_fingerprint":"039b8068e6485a29bdecdc640f1c91c7ee291b1dcd8441b94a49d6d914c857ce","license":"CC0","n_contributing_labs":null,"name":"Visual Naming EC","readme":"This dataset, used in the analysis reported by Kochi et al., (2025), contains intracranial EEG recordings from 110 individuals who performed an visual‑naming task. Electrode coordinates are provided in MNI‑305 space.\nEach EDF file is tagged for the visual naming task with the following event codes:\n401 – stimulus onset\n501 – response onset\nReference:\nRyuzaburo Kochi, Aya Kanno, Hiroshi Uda, Keisuke Hatano,  Masaki Sonoda, Hidenori Endo, Michael Cools, Robert Rothermel, Aimee F. Luat, Eishi Asano. Whole-Brain Millisecond-Scale Effective Connectivity Atlases of Speech","recording_modality":["ieeg"],"senior_author":"Eishi Asano","sessions":["1","2","3","4","5"],"size_bytes":18843484223,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["picture"],"timestamps":{"digested_at":"2026-04-22T12:29:49.059721+00:00","dataset_created_at":"2025-11-09T16:48:42.880Z","dataset_modified_at":"2025-11-13T19:50:28.000Z"},"total_files":353,"storage":{"backend":"s3","base":"s3://openneuro.org/ds006914","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"0d5678c7db0124cc","model":"openai/gpt-5.2","tagged_at":"2026-01-20T19:14:00.247932+00:00"},"tags":{"pathology":["Epilepsy"],"modality":["Visual"],"type":["Other"],"confidence":{"pathology":0.6,"modality":0.8,"type":0.8},"reasoning":{"few_shot_analysis":"Few-shot example 'Meta-rdk: Preprocessed EEG data' shows that when metadata explicitly describes a visual stimulus task (\"visual discrimination task\" with moving dots), the Modality is labeled 'Visual' and Type can be 'Perception' when the goal is perceptual discrimination. Another convention is illustrated by 'EEG Motor Movement/Imagery Dataset': despite visual targets on a screen, the Type is 'Motor' because the research purpose is motor execution/imagery rather than perception. 
For the current dataset, the task is explicitly visual naming and the cited paper focuses on \"Speech\"/language connectivity, which does not map cleanly to Perception/Motor/Memory/Attention in the allowed Type list, so by convention it fits best under Type 'Other' while keeping Modality 'Visual'. Few-shot examples do not provide a direct intracranial-EEG clinical-population mapping; thus pathology relies primarily on metadata facts/inference.","metadata_analysis":"Key quoted metadata facts: (1) Population/recording context: \"contains intracranial EEG recordings from 110 individuals\". (2) Task and stimulus channel: participants \"performed an visual‑naming task\" and events include \"401 – stimulus onset\" and \"501 – response onset\". (3) Study focus hint: reference title includes \"Effective Connectivity Atlases of Speech\".","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology — Metadata says: \"intracranial EEG recordings from 110 individuals\" (no diagnosis stated). Few-shot pattern suggests: intracranial EEG datasets commonly come from pre-surgical epilepsy evaluations, but this is not explicitly shown in the provided few-shots. ALIGN/CONFLICT: No direct alignment possible; inference only. Modality — Metadata says: \"visual‑naming task\" and \"stimulus onset\" events. Few-shot pattern suggests: visual stimulus tasks map to Modality 'Visual' (e.g., visual discrimination task example). ALIGN: Yes. Type — Metadata says: \"visual‑naming task\" and paper reference emphasizes \"Speech\". Few-shot pattern suggests: when the study purpose is not well-covered by existing cognitive construct labels (e.g., language production), use Type 'Other'; and do not force 'Perception' just because stimuli are visual (cf. motor imagery example where visual cues did not imply Perception). 
ALIGN: Yes.","decision_summary":"Top-2 candidates per category:\n\nPathology:\n1) Epilepsy — Evidence: only indirect/contextual from \"intracranial EEG\" typically being collected in epilepsy monitoring units.\n2) Unknown — Evidence: no explicit diagnosis or recruitment condition is stated.\nHead-to-head: Epilepsy is plausible but not stated; choosing Epilepsy requires inference beyond the text, so confidence is limited.\n\nModality:\n1) Visual — Evidence quotes: \"performed an visual‑naming task\"; event code \"401 – stimulus onset\" in a visual naming paradigm.\n2) Other — would apply if stimuli were not sensory-specific, but metadata explicitly says visual.\nHead-to-head: Visual clearly stronger.\n\nType:\n1) Other — Evidence quotes: \"visual‑naming task\" (language naming/production) and reference to \"Effective Connectivity Atlases of Speech\" indicating a language/speech connectivity aim, which is not a dedicated allowed Type label.\n2) Perception — would fit if the main aim were sensory discrimination/detection, but naming/speech connectivity goes beyond perception.\nHead-to-head: Other stronger because task goal is naming/speech-related rather than perceptual discrimination.\n\nConfidence justification: Modality and Type have 2 explicit supporting quotes each; Pathology is inferred from context only (no diagnosis quote), so it remains lower."}},"computed_title":"Visual Naming EC","nchans_counts":[{"val":128,"count":245},{"val":138,"count":19},{"val":136,"count":19},{"val":140,"count":8},{"val":112,"count":6},{"val":110,"count":6},{"val":150,"count":5},{"val":156,"count":5},{"val":164,"count":4},{"val":134,"count":4},{"val":148,"count":4},{"val":130,"count":4},{"val":118,"count":3},{"val":96,"count":3},{"val":84,"count":3},{"val":144,"count":3},{"val":152,"count":3},{"val":160,"count":3},{"val":154,"count":3},{"val":64,"count":2},{"val":58,"count":1}],"sfreq_counts":[{"val":1000.0,"count":353}],"stats_computed_at":"2026-04-22T23:16:00.312152+00:00","total_duration_s":null,"author_year":"Kochi2025_Visual_Naming_EC","canonical_name":null}}
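
Below is a minimal sketch, using only the Python standard library, of how a consumer might parse this response and re-derive the stored demographic and channel-count summaries from the raw fields. The local filename ds006914_record.json is a hypothetical stand-in for a saved copy of this response, not part of any eegdash API.

import json
from statistics import mean

# Hypothetical local copy of the JSON response shown above.
with open("ds006914_record.json") as f:
    record = json.load(f)

data = record["data"]
demo = data["demographics"]

# Re-derive the stored age summary from the raw per-subject age list.
ages = demo["ages"]
assert len(ages) == demo["subjects_count"] == 110
assert min(ages) == demo["age_min"] and max(ages) == demo["age_max"]
print(f"age mean: {mean(ages):.3f} (stored: {demo['age_mean']:.3f})")

# Channel counts vary per recording; find the modal montage size and check
# that the per-value counts add up to the reported file total.
nchans = data["nchans_counts"]
assert sum(e["count"] for e in nchans) == data["total_files"]
top = max(nchans, key=lambda e: e["count"])
print(f"most common channel count: {top['val']} ({top['count']} of {data['total_files']} files)")

Run against this record, the sketch should report an age mean of 14.791 years and 128 channels as the most common montage size (245 of the 353 files).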