{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3404","dataset_id":"ds005574","associated_paper_doi":null,"authors":["Zaid Zada","Samuel A. Nastase","Bobbi Aubrey","Itamar Jalon","Ariel Goldstein","Sebastian Michelmann","Haocheng Wang","Liat Hasenfratz","Werner Doyle","Daniel Friedman","Patricia Dugan","Lucia Melloni","Sasha Devore","Orrin Devinsky","Adeen Flinker","Uri Hasson"],"bids_version":"1.10.0","contact_info":["Zaid Zada"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds005574.v1.0.2","datatypes":["ieeg"],"demographics":{"subjects_count":9,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005574","osf_url":null,"github_url":null,"paper_url":null},"funding":["National Institutes of Health grant DP1HD091948","National Institutes of Health grant R01NS109367"],"ingestion_fingerprint":"c795fba73adb54921af76f40ac09f4db729ba3a316427aeaf65743898dd94c0b","license":"CC0","n_contributing_labs":null,"name":"The \"Podcast\" ECoG dataset","readme":"The \"Podcast\" ECoG dataset for modeling neural activity during natural story listening.\nWe introduce the “Podcast” electrocorticography (ECoG) dataset for modeling neural activity supporting natural narrative comprehension. This dataset combines the exceptional spatiotemporal resolution of human intracranial electrophysiology with a naturalistic experimental paradigm for language comprehension. In addition to the raw data, we provide a minimally preprocessed version in the high-gamma spectral band to showcase a simple pipeline and to make it easier to use. Furthermore, we include the auditory stimuli, an aligned word-level transcript, and linguistic features ranging from low-level acoustic properties to large language model (LLM) embeddings. 
We also include tutorials that replicate previous findings and serve as a pedagogical resource and a springboard for new research. The dataset comprises 9 participants with 1,330 electrodes, including grid, depth, and strip electrodes. The participants listened to a 30-minute story with over 5,000 words. By using a natural story with high-fidelity, invasive neural recordings, this dataset offers a unique opportunity to investigate language comprehension.","recording_modality":["ieeg"],"senior_author":"Uri Hasson","sessions":[],"size_bytes":3483284760,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["podcast"],"timestamps":{"digested_at":"2026-04-22T12:28:38.339844+00:00","dataset_created_at":"2024-10-15T20:04:55.411Z","dataset_modified_at":"2025-02-17T13:52:37.000Z"},"total_files":9,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005574","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"722f2a942b28d6dd","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Unknown"],"modality":["Auditory"],"type":["Other"],"confidence":{"pathology":0.55,"modality":0.9,"type":0.75},"reasoning":{"few_shot_analysis":"Most similar few-shot by stimulus modality/paradigm is the example “Subcortical responses to music and speech...” (Healthy / Auditory / Perception): it involves passive/encouraged listening to extended naturalistic audio stimuli, guiding an Auditory modality label. Another relevant convention is the “digit span task” example (Healthy / Auditory / Memory), which shows that when the auditory stimulus is used to probe a higher-level cognitive construct (working memory load), Type can move away from Perception. 
For this dataset, the described aim is “natural narrative comprehension” (language), which does not map cleanly to Perception or Memory in the allowed Type list, suggesting Type=Other while keeping Modality=Auditory. For Pathology, none of the few-shots cover ECoG specifically, but the epilepsy HFO example shows that when epilepsy is explicitly stated it should be labeled Epilepsy; here it is not explicitly stated, so we cannot directly copy that pathology label.","metadata_analysis":"Key modality/task facts: (1) The readme states the dataset is “for modeling neural activity during natural story listening” and that “The participants listened to a 30-minute story with over 5,000 words.” (2) It also emphasizes an auditory/language paradigm: “naturalistic experimental paradigm for language comprehension” and “we include the auditory stimuli, an aligned word-level transcript, and linguistic features...”.\n\nKey population/pathology facts: The metadata describes invasive recordings (“electrocorticography (ECoG)”, “human intracranial electrophysiology”, “grid, depth, and strip electrodes”) and gives only “The dataset comprises 9 participants with 1,330 electrodes...”, but it does NOT explicitly state a recruitment diagnosis (e.g., epilepsy). Therefore pathology cannot be taken as a stated fact from metadata and must remain uncertain.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: only “electrocorticography (ECoG)”, “human intracranial electrophysiology”, and “grid, depth, and strip electrodes” with “9 participants”; no explicit diagnosis is named.\n- Few-shot pattern suggests: intracranial grid/depth/strip recordings are often from epilepsy monitoring cohorts, but this is an inference rather than an explicit stated recruitment condition.\n- Alignment: PARTIAL/WEAK. 
There is no conflict, but there is no explicit pathology fact to align with; thus we avoid a high-confidence Epilepsy label.\n\nModality:\n- Metadata says: “natural story listening”, “participants listened to a 30-minute story”, and “include the auditory stimuli”.\n- Few-shot pattern suggests: naturalistic listening datasets map to Auditory modality (as in the music/speech listening example).\n- Alignment: ALIGNS strongly.\n\nType:\n- Metadata says: purpose is “modeling neural activity supporting natural narrative comprehension” and “language comprehension”, with word-level transcript and linguistic/LLM features.\n- Few-shot pattern suggests: (a) simple listening studies can be Perception; (b) if the construct is a higher-level cognition (e.g., working memory), Type can be Memory. However, “language comprehension” is not directly represented as its own allowed Type label.\n- Alignment: PARTIAL. Metadata points to language comprehension; among allowed Types, ‘Other’ best captures language-focused comprehension work without overcommitting to Perception or Memory.","decision_summary":"Top-2 candidates and selection:\n\n1) Pathology:\n- Candidate A: Epilepsy\n  - Evidence: invasive “ECoG” with “grid, depth, and strip electrodes” is commonly collected in epilepsy monitoring units (contextual inference only; not explicitly stated).\n- Candidate B: Unknown\n  - Evidence: no explicit diagnosis/recruitment criterion is provided; only “9 participants” and electrode details are stated.\n- Head-to-head: Unknown wins because the dataset does not explicitly state a clinical diagnosis; Epilepsy would rely on convention rather than a quoted recruitment fact.\n- Confidence basis: absence of explicit pathology statements keeps confidence low.\n\n2) Modality:\n- Candidate A: Auditory\n  - Evidence quotes: “natural story listening”; “participants listened to a 30-minute story”; “include the auditory stimuli”.\n- Candidate B: Other (if treating ‘podcast’/language as not purely 
sensory)\n  - Evidence: language features/LLM embeddings, but still delivered via listening.\n- Head-to-head: Auditory wins clearly because the stimulus/input is listening to a story.\n- Confidence basis: 3+ explicit modality phrases.\n\n3) Type:\n- Candidate A: Other\n  - Evidence quotes: “natural narrative comprehension”; “language comprehension”; inclusion of “aligned word-level transcript” and “LLM embeddings” indicates language-centric cognitive modeling beyond basic sensation.\n- Candidate B: Perception\n  - Evidence: it is still a listening paradigm; few-shot music/speech listening mapped to Perception.\n- Head-to-head: Other wins because the stated research purpose is language/narrative comprehension rather than primarily auditory detection/discrimination.\n- Confidence basis: multiple explicit ‘comprehension/language’ statements, but mapping to allowed labels is indirect, so not maximal confidence."}},"computed_title":"The \"Podcast\" ECoG dataset","nchans_counts":[{"val":174,"count":1},{"val":91,"count":1},{"val":178,"count":1},{"val":114,"count":1},{"val":124,"count":1},{"val":167,"count":1},{"val":205,"count":1},{"val":264,"count":1},{"val":138,"count":1}],"sfreq_counts":[{"val":512.0,"count":8},{"val":2048.0,"count":1}],"stats_computed_at":"2026-04-21T23:17:03.732124+00:00","total_duration_s":16199.98388671875,"canonical_name":null,"name_confidence":0.55,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Zada2024"}}