{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a344b","dataset_id":"ds006434","associated_paper_doi":null,"authors":["Thomas J Stoll","Nathan D Vandjelovic","Melissa J Polonenko","Nadja R S Li","Adrian K C Lee","Ross K Maddox"],"bids_version":"1.7.0","contact_info":["Ross Maddox","Thomas Stoll"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds006434.v1.2.0","datatypes":["eeg"],"demographics":{"subjects_count":66,"ages":[21,20,24,19,27,27,29,22,20,31,26,21,21,38,23,21,20,29,23,27,29,18,18,19,22,25,20,19,22,23,39,18,18,23,19,20,22,21,30,18,21,21],"age_min":18,"age_max":39,"age_mean":23.19047619047619,"species":null,"sex_distribution":{"o":25,"f":29,"m":12},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds006434","osf_url":null,"github_url":null,"paper_url":null},"funding":["NIH R00 DC014288","NIH R21 DC019489","NIH R01 DC013260 ","NSF 2142612","NSF 2448814"],"ingestion_fingerprint":"861df1c7178cb98d3f59ec4daf79960952c9f46fb5939ee3c6c3d0313627c404","license":"CC0","n_contributing_labs":null,"name":"The auditory brainstem response to natural speech is not affected by selective attention","readme":"Overview\n--------\nThis is the dataset for our study investigating the\neffects of selective attention to speech stimuli in the subcortex and cortex,\nentitled \"The auditory brainstem response to natural speech is not affected\nby selective attention\" by Stoll et al. (2025). Please cite our paper if\nyou use our dataset.\nIt contains EEG data for three experiments, detailed in the paper and\nbriefly summarized below. 
Code and stimuli to derive the responses are\nprovided in the Dataset folder and on our lab's github:\nhttps://github.com/maddoxlab/stoll_et_al_selective_attention.\nExperiment 1 - diotic stimuli (exp1Diotic)\nThis \"task\" includes EEG data for 28 subjects who listened to 120 trials\neach (64 s each; total 128 minutes) of two audiobooks - A Wrinkle in Time\n(female narrator) and The Alchemyst (male narrator). Stimuli were set to\n65 dB SPL then summed together to be presented diotically.\nSubjects sat at a computer desk in a soundproof room.\nThey were instructed to attend to only one narrator on each trial, with cues\ngiven before they started the trial and through a fixation dot which remained\nfor the duration of the trial. For details, see the\n`Details about the experiment` section and refer to our paper.\nEEG was recorded simultaneously from a 32 channel active montage (to examine\ncortical responses) and a 2 channel passive bipolar montage (FCz to earlobes,\nto examine subcortical responses). On a subset of the subjects\n(1, 3, 4, 7, 8, 9, 10, 11, 12, 13, 16, 18) an additional electrode was placed\non the eardrum. Data are split into cortical (active) electrodes and\nsubcortical (passive) electrodes. Since data were collected simultaneously,\ndata from all electrodes were sampled at 25 kHz. To reduce file size and\ncomputation time, the cortical electrodes were downsampled to 1 kHz and\nthe subcortical electrodes were downsampled to 10 kHz.\nExperiment 2 - dichotic stimuli (exp2Dichotic)\nThis \"task\" contains EEG data for 25 subjects who listened to 60 trials\neach (64 s each; total 64 minutes) of two audiobooks - A Wrinkle in Time\n(female narrator) and The Alchemyst (male narrator). Stimuli were set to\n65 dB SPL and presented dichotically. Subjects sat at a computer desk in a\nsoundproof room. 
They were instructed to attend to only one narrator on\neach trial (indicated by the story name, talker sex, and direction) with cues\ngiven before they started the trial and through a fixation dot with an arrow\nwhich remained for the duration of the trial. For details, see the\n`Details about the experiment` section and refer to our paper. The records\nof individual participant age and sex no longer exist, but overall statistics\nare reported in the paper.\nEEG was recorded simultaneously from a 32 channel active montage (to examine\ncortical responses) and passive electrodes using a bipolar montage, with the\nnoninverting electrode placed on FCz and the inverting electrode on the earlobe,\nwith ground on the forehead. The side the inverting electrode was placed on was\ncounterbalanced across subjects.\nExperiment 3 - passive listening to stimuli from Forte et al. (exp3Passive)\nThis \"task\" contains EEG data for 14 subjects who listened to 32 trials\neach (~117 s each; total ~62 minutes) of four audiobooks - Tales of Troy:\nUlysses the Sacker of Cities and The Green Forest Fairy Book narrated by\nJames K. White for the male speech and The Children of Odin and The Adventures\nof Odysseus and the Tale of Troy narrated by Elizabeth Klett for the female\nspeech. These audiobooks were selected to match the study by Forte et al. (2017),\nwho provided us with the audio files. Stimuli were set to 73 dB SPL then\nsummed together to be presented diotically (i.e., at 76 dB SPL). The stories\nwere paired in the same manner as in Forte et al. (2017). Subjects sat at a\ncomputer desk in a soundproof room. They were instructed to ignore the audio\nas best they could and distract themselves by watching silent captioned videos\nof their choosing or by reading. 
For details, see the `Details about the experiment`\nsection and refer to our paper.\nEEG was recorded with passive electrodes using a bipolar montage, with the\nnoninverting electrode placed on FCz and the inverting electrode on the earlobe,\nwith ground on the forehead.\nFormat\n------\nThe dataset is formatted according to the EEG Brain Imaging Data Structure.\nSee the `dataset_description.json` file for the specific version used.\nGenerally, you can find detailed event data in the .tsv files and descriptions\nin the accompanying .json files. Raw EEG files are provided in the Brain\nProducts format.\nDetails about the experiment\n----------------------------\nFor a detailed description of the task, see Stoll et al. (2025) as well\nas the supplied .json files.\nTrigger onset times have already been corrected for the tubing delay of the\ninsert earphones. Trial numbers and more event metadata are in each\n'*_eeg_events.tsv' file, which is sufficient to determine which chapter each\ntrial corresponded to and which narrator\nthe subjects were instructed to attend to. As chapters were\norganized to allow subjects to follow the stories, all subjects had the same\ntrial order in experiments 1 and 2. 
Story order was randomized in experiment\n3, with that information stored in the '*_eeg_events.tsv' file.","recording_modality":["eeg"],"senior_author":"Ross K Maddox","sessions":[],"size_bytes":110646361086,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["exp1DioticCortex","exp1DioticSubcortex","exp2DichoticCortex","exp2DichoticSubcortex","exp3Passive"],"timestamps":{"digested_at":"2026-04-22T12:29:26.884694+00:00","dataset_created_at":"2025-07-01T16:32:15.143Z","dataset_modified_at":"2025-09-11T17:43:36.000Z"},"total_files":118,"storage":{"backend":"s3","base":"s3://openneuro.org/ds006434","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"2724ade0ceb66011","model":"openai/gpt-5.2","tagged_at":"2026-01-20T18:57:53.410834+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Attention"],"confidence":{"pathology":0.7,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Closest few-shot match is the dataset “Subcortical responses to music and speech are alike while cortical responses diverge” (Healthy / Auditory / Perception). It shares the same core measurement goal (auditory brainstem responses and cortical responses to naturalistic speech/music) and similar passive/low-demand listening structure. 
However, the target dataset explicitly manipulates/selects attention (“selective attention to speech stimuli”), so by the few-shot labeling conventions this shifts Type from Perception to Attention while keeping Modality = Auditory and Pathology = Healthy (no clinical recruitment mentioned).","metadata_analysis":"Key quoted metadata indicating auditory selective attention and healthy/non-clinical recruitment context:\n1) Study goal/construct: “investigating the effects of selective attention to speech stimuli in the subcortex and cortex” and title snippet: “The auditory brainstem response to natural speech is not affected by selective attention”.\n2) Auditory stimulus: “listened to … two audiobooks … Stimuli were set to 65 dB SPL” and “summed together to be presented diotically.”\n3) Attention manipulation: “They were instructed to attend to only one narrator on each trial, with cues given before they started the trial”.\n4) Additional passive condition (still auditory): “passive listening… instructed to ignore the audio… watching silent captioned videos… or by reading.”\n5) Participants described as “subjects” with no diagnosis terms: “EEG data for 28 subjects…”, “25 subjects…”, “14 subjects…”, with no mention of any patient groups or disorders.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: only “subjects” are described (e.g., “EEG data for 28 subjects…”, “25 subjects…”, “14 subjects…”) with no clinical recruitment/diagnosis mentioned.\n- Few-shot suggests: ABR/speech listening datasets without clinical population are labeled Healthy (e.g., “Subcortical responses to music and speech…”).\n- Alignment: ALIGN (no conflict).\n\nModality:\n- Metadata says: auditory inputs throughout: “listened to… audiobooks”, “Stimuli were set to 65 dB SPL”, “insert earphones”.\n- Few-shot suggests: similar ABR/speech datasets are Auditory.\n- Alignment: ALIGN.\n\nType:\n- Metadata says: explicit attention 
manipulation: “effects of selective attention to speech stimuli” and “instructed to attend to only one narrator”.\n- Few-shot suggests: when the purpose is selective attention (rather than mere sensory encoding), label Type as Attention rather than Perception (even if the stimuli are speech).\n- Alignment: ALIGN (attention is the primary construct).","decision_summary":"Top-2 candidates with head-to-head comparisons:\n\nPathology:\n1) Healthy (selected): Supported by lack of any clinical recruitment language and generic participant description (“EEG data for 28 subjects…”, “25 subjects…”, “14 subjects…”), matching few-shot convention that non-clinical ABR datasets are Healthy.\n2) Unknown (runner-up): Could be considered because the README does not explicitly say “healthy”, and it notes missing demographics for Exp 2 (“records… age and sex no longer exist”).\nDecision: Healthy wins because there is positive evidence of typical lab subjects and zero evidence of any disorder-specific recruitment.\nConfidence basis: 2+ quotes indicating non-clinical subject framing and no disorder terms.\n\nModality:\n1) Auditory (selected): Strong direct evidence: “listened to… audiobooks”, “Stimuli were set to 65 dB SPL”, “insert earphones”, “ignore the audio”.\n2) Multisensory (runner-up): Exp 3 includes watching silent captioned videos/reading, but these are distractors; the experimental stimulus of interest remains speech audio.\nDecision: Auditory wins because the manipulated/recorded stimulus is speech audio across experiments.\nConfidence basis: 3+ explicit auditory-stimulus quotes.\n\nType:\n1) Attention (selected): Directly stated construct: “effects of selective attention to speech stimuli” and instruction: “attend to only one narrator on each trial”.\n2) Perception (runner-up): The study also concerns encoding of “natural speech” and ABR/cortical responses, which could be framed as sensory processing.\nDecision: Attention wins because selective attention is explicitly the 
manipulated variable and central claim (“not affected by selective attention”).\nConfidence basis: 2+ explicit attention-related quotes and strong few-shot analogy showing ABR datasets are Perception unless attention is the main manipulation."}},"computed_title":"The auditory brainstem response to natural speech is not affected by selective attention","nchans_counts":[{"val":32,"count":52},{"val":2,"count":28},{"val":1,"count":24},{"val":3,"count":14}],"sfreq_counts":[{"val":10000.0,"count":52},{"val":1000.0,"count":28},{"val":500.0,"count":24},{"val":25000.0,"count":14}],"stats_computed_at":"2026-04-22T23:16:00.311565+00:00","total_duration_s":947287.9593400001,"author_year":"Stoll2025","canonical_name":null}}