{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a33a9","dataset_id":"ds004993","associated_paper_doi":null,"authors":["Liberty S. Hamilton","Maansi Desai","Alyssa Field"],"bids_version":"1.7.0","contact_info":["Liberty Hamilton"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds004993.v1.1.2","datatypes":["ieeg"],"demographics":{"subjects_count":3,"ages":[15,14,19],"age_min":14,"age_max":19,"age_mean":16.0,"species":null,"sex_distribution":{"m":3},"handedness_distribution":{"r":2,"l":1}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds004993","osf_url":null,"github_url":null,"paper_url":null},"funding":["National Institutes of Health - National Institute on Deafness and Other Communication Disorders (R01 DC018579, to LSH)"],"ingestion_fingerprint":"15333073c665fc85b8d63551ad5aca37025023d3073d0441f29831b0c0be6b62","license":"CC0","n_contributing_labs":null,"name":"WIRED ICM Sample Dataset - Workshop on Intracranial Recordings in Humans, Epilepsy, DBS","readme":"WIRED ICM TUTORIAL DATA\n------------------------\n*Contributors:* Liberty S. Hamilton, PhD, Maansi Desai, PhD, Alyssa Field, MEd\n*Email:* liberty.hamilton@austin.utexas.edu\nThis is a sample BIDS dataset for the WIRED ICM course in Paris, France in March 2024.\nThis contains intracranial recordings collected by the Hamilton Lab at the University of Texas at Austin. These recordings include examples of evoked data during natural listening tasks along with some examples of seizure-related activity and vagus nerve stimulator (VNS) artifact for illustrative purposes. All procedures were approved by the University of Texas at Austin Institutional Review Board.\n*Funding:* Support was provided by the National Institutes of Health National Institute on Deafness and Other Communication Disorders (R01 DC018579, to LSH).\nTasks:\n-------\n1. 
`movietrailers` - this task involves patients listening to movie clips from various Pixar, Disney, Dreamworks, and other movies. We have published previously using these stimuli in EEG (Desai et al. 2021).\n2. `timit4` and `timit5` - these tasks involve patients listening to subsets of the TIMIT acoustic-phonetic corpus (Garofolo et al. 1993). The events provided in the dataset mark the onset and offset of each sentence. In `timit4`, each sentence is unique, while in `timit5`, 10 sentences are repeated 10 times. This is the same stimulus set used in Mesgarani et al. 2014, Hamilton et al. 2018, Hamilton et al. 2021, and Desai et al. 2021.\nNotes:\n-------\n* The movie trailer data for subject W1 was acquired at the start of a generalized tonic-clonic seizure, and the research session was terminated. Large, synchronized spikes can be observed on multiple channels on the right parietal grid throughout the iEEG data.\n* The TIMIT data for subject W2 is an example of fairly clean sentence evoked data.\n* The TIMIT data for subject W3 is a good example of on-and-off VNS artifact. The VNS has a strong artifact at ~20 Hz. Some patients with epilepsy may have these implanted devices to help control their seizures, so you should know how to spot artifact-related activity. Despite these artifacts, the evoked responses to sentences are quite strong.\n* The acquisition number (B3, B8, etc.) has to do with the order in which this task was run relative to other tasks in an iEEG session, and can be ignored here.\nReferences\n----------\n* Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. 
https://doi.org/10.21105/joss.01896\n* Desai, M., Holder, J., Villarreal, C., Clark, N., Hoang, B., & Hamilton, L. S. (2021). Generalizable EEG encoding models with naturalistic audiovisual stimuli. Journal of Neuroscience, 41(43), 8946-8962.\n* Garofolo, J. S., Lamel, L. F., Fisher, W. M., Fiscus, J. G., & Pallett, D. S. (1993). DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1. NASA STI/Recon technical report n, 93, 27403.\n* Hamilton, L. S., Edwards, E., & Chang, E. F. (2018). A spatial map of onset and sustained responses to speech in the human superior temporal gyrus. Current Biology, 28(12), 1860-1871.\n* Hamilton, L. S., Oganian, Y., Hall, J., & Chang, E. F. (2021). Parallel and distributed encoding of speech across human auditory cortex. Cell, 184(18), 4626-4639.\n* Holdgraf, C., Appelhoff, S., Bickel, S., Bouchard, K., D'Ambrosio, S., David, O., … Hermes, D. (2019). iEEG-BIDS, extending the Brain Imaging Data Structure specification to human intracranial electrophysiology. Scientific Data, 6, 102. https://doi.org/10.1038/s41597-019-0105-7\n* Mesgarani, N., Cheung, C., Johnson, K., & Chang, E. F. (2014). Phonetic feature encoding in human superior temporal gyrus. 
Science, 343(6174), 1006-1010.","recording_modality":["ieeg"],"senior_author":"Alyssa Field","sessions":["iemu"],"size_bytes":319927247,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["movietrailers","timit4","timit5"],"timestamps":{"digested_at":"2026-04-22T12:27:10.715221+00:00","dataset_created_at":"2024-02-25T23:36:29.775Z","dataset_modified_at":"2024-03-01T17:54:40.000Z"},"total_files":3,"storage":{"backend":"s3","base":"s3://openneuro.org/ds004993","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"e6770795336db154","model":"openai/gpt-5.2","tagged_at":"2026-01-20T17:38:36.935111+00:00"},"tags":{"pathology":["Epilepsy"],"modality":["Auditory"],"type":["Perception"],"confidence":{"pathology":0.75,"modality":0.9,"type":0.7},"reasoning":{"few_shot_analysis":"For Modality+Type, the closest few-shot is the healthy auditory stimulus dataset “Subcortical responses to music and speech...” which is labeled Modality=Auditory and Type=Perception; it shows that listening to speech/music stimuli maps to Auditory/Perception even when participants are mostly passive. For Pathology, the pediatric epilepsy dataset shows the convention that when epilepsy is an explicit clinical context (seizures, epilepsy patients), Pathology should be Epilepsy (even if the recording contains sleep/rest segments or other illustrative content).","metadata_analysis":"Key population/clinical context facts: (1) “This contains intracranial recordings… include examples of… seizure-related activity and vagus nerve stimulator (VNS) artifact” and (2) “The movie trailer data for subject W1 was acquired at the start of a generalized tonic clonic seizure…”. 
Also: “Some patients with epilepsy may have these implanted devices [VNS] to help control their seizures”.\n\nKey task/stimulus facts: (1) “these tasks involve patients listening to movie clips” and (2) “these tasks involve patients listening to subsets of the TIMIT acoustic phonetic corpus… The events… mark the onset and offset of each sentence.” This supports an auditory speech/natural listening paradigm with evoked responses.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says seizure activity and references epilepsy context (e.g., “generalized tonic clonic seizure”, “patients with epilepsy… VNS”). Few-shot pattern suggests labeling clinical recruitment by the named disorder (Epilepsy example). ALIGN.\n\nModality: Metadata says “listening to movie clips” and “listening to… TIMIT… sentence” (auditory speech). Few-shot pattern (music/speech ABR) maps listening stimuli to Auditory. ALIGN.\n\nType: Metadata emphasizes “evoked data during natural listening tasks” and sentence/movie listening; this aligns with a Perception-focused labeling convention in the auditory perception few-shot. No conflicting metadata suggesting memory/decision/motor/resting. ALIGN.","decision_summary":"Top-2 candidates considered per category:\n\nPathology: (1) Epilepsy — supported by “seizure-related activity”, “generalized tonic clonic seizure”, and “patients with epilepsy… VNS”. (2) Other — possible if participants were mixed iEEG clinical cases without a single diagnosis explicitly declared; however the seizure/VNS epilepsy framing is strong. Final: Epilepsy. Evidence alignment: aligned with epilepsy few-shot convention.\n\nModality: (1) Auditory — supported by “patients listening to movie clips” and “listening to… TIMIT… sentence”. (2) Multisensory — movie trailers could in principle be audiovisual, but the dataset repeatedly frames tasks as listening and includes TIMIT speech. Final: Auditory. 
Evidence alignment: aligned with auditory perception few-shot.\n\nType: (1) Perception — naturalistic speech/movie listening with evoked responses (“evoked data during natural listening tasks”, sentence onsets/offsets). (2) Other — could be language/encoding-model specific, but within allowed labels Perception best matches auditory stimulus processing. Final: Perception. Evidence alignment: aligned.\n\nConfidence justification: Pathology has multiple epilepsy/seizure/VNS quotes but recruitment criteria are not formally stated → moderate-high. Modality has repeated explicit listening/speech stimulus descriptions → high. Type is supported by task framing but not explicitly named as ‘perception’ → moderate."}},"nemar_citation_count":0,"computed_title":"WIRED ICM Sample Dataset - Workshop on Intracranial Recordings in Humans, Epilepsy, DBS","nchans_counts":[{"val":160,"count":1},{"val":106,"count":1},{"val":148,"count":1}],"sfreq_counts":[{"val":512.0,"count":2},{"val":2048.0,"count":1}],"stats_computed_at":"2026-04-21T23:17:03.730974+00:00","total_duration_s":827.70263671875,"canonical_name":null,"name_confidence":0.78,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Hamilton2024"}}