{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a33e2","dataset_id":"ds005408","associated_paper_doi":null,"authors":["Melissa J. Polonenko","Ross K. Maddox"],"bids_version":"1.7.0","contact_info":["Melissa Polonenko"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds005408.v1.0.1","datatypes":["eeg"],"demographics":{"subjects_count":25,"ages":[37,37,19,26,19,20,35,20,20,21,21,22,19,20,25,20,20,28,23,22,22,20,28,19,21],"age_min":19,"age_max":37,"age_mean":23.36,"species":null,"sex_distribution":{"f":16,"m":9},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005408","osf_url":null,"github_url":null,"paper_url":null},"funding":["NIDCD R01DC017962"],"ingestion_fingerprint":"2c6319492d5dccf5ed9b5c15406914fb875bfb1955f9aabc22fd494098ca384e","license":"CC0","n_contributing_labs":null,"name":"The effect of speech masking on the human subcortical response to continuous speech","readme":"README\n------\nDetails related to access to the data\n-------------------------------------\nPlease contact the following authors for further information:\n    Melissa Polonenko (email: mpolonen@umn.edu) [corresponding author]\n    Ross Maddox (email: rkmaddox@med.umich.edu)\nOverview\n--------\nThis is the \"peaky_snr\" dataset for the paper by\nPolonenko MJ & Maddox RK, with citation listed below.\neNeuro: Polonenko, M. J., & Maddox, R. K. (2025). The effect of speech masking on the human subcortical response to continuous speech. 
eNeuro 24 March 2025, 12 (4) ENEURO.0561-24.2025; https://doi.org/10.1523/ENEURO.0561-24.2025\nBioRxiv: https://www.biorxiv.org/content/10.1101/2024.12.10.627771v1\nAuditory brainstem responses (ABRs) were derived to continuous peaky speech\nfrom one to five simultaneously presented talkers and from clicks.\nData were collected from June to July 2021.\nGoal: To better understand masking’s effects on the subcortical neural encoding\nof naturally uttered speech in human listeners.\nTo do this, we leveraged our recently developed method for determining the\nauditory brainstem response (ABR) to speech (Polonenko and Maddox, 2021).\nWhereas our previous work was aimed at encoding of single talkers, here we\ndetermined the ABR to speech in quiet as well as in the presence of varying\nnumbers of other talkers.\nThe details of the experiment can be found in Polonenko & Maddox (2024).\nStimuli:\n    1) randomized click trains at an average rate of 40 Hz,\n    60 x 10 s trials for a total of 10 minutes;\n    2) peaky speech for up to 5 male narrators. 30 minutes of each SNR\n    (clean, 0 dB, -3 dB, -6 dB), corresponding to 1, 2, 3, and 5 talkers\n    presented simultaneously, each set to 65 dB.\n    NOTE: files for each story were completely randomized. Random combinations\n    were created so that each story was equally represented in the data.\nThe code for stimulus preprocessing and EEG analysis is available on GitHub:\n    https://github.com/polonenkolab/peaky_snr\nFormat\n------\nThe dataset is formatted according to the Brain Imaging Data Structure (BIDS). It\nincludes EEG recordings from participants 01 to 25 in raw BrainVision format\n(3 files: .eeg, .vhdr, .vmrk) and stimulus files in .hdf5 format. 
The stimulus\nfiles contain the audio ('audio') and regressors for the deconvolution\n('pinds' are the pulse indices; 'anm' is an auditory nerve model regressor,\nwhich was used during analyses but was not included as part of the article).\nGenerally, you can find detailed event data in the .tsv files and descriptions\nin the accompanying .json files. Raw EEG files are provided in the Brain\nProducts format.\nParticipants\n------------\n25 participants, mean ± SD age of 23.4 ± 5.5 years (19-37 years)\nInclusion criteria:\n    1) Age between 18 and 40 years\n    2) Normal hearing: audiometric thresholds 20 dB HL or better from 500 to 8000 Hz\n    3) Speak English as their primary language\nPlease see participants.tsv for more information.\nApparatus\n---------\nParticipants sat in a darkened sound-isolating booth and rested or watched\nsilent videos with closed captioning. Stimuli were presented at an average level\nof 65 dB SPL (per story; total for 5 talkers = 71 dB) and a sampling rate of\n48 kHz through ER-2 insert earphones plugged into an RME Babyface Pro digital\nsound card. Custom Python scripts using expyfun were used to control the\nexperiment and stimulus presentation.\nDetails about the experiment\n----------------------------\nFor a detailed description of the task, see Polonenko & Maddox (2024) and the\nsupplied `task-peaky_snr_eeg.json` file. The 4 SNR speech conditions and the\nstory tokens were randomized. This means that participants would not be able\nto follow the stories. For clicks, the trials were not randomized\n(the click trains were already random).\nTrigger onset times in the .tsv files have already been corrected for the tubing\ndelay of the insert earphones (but not in the events of the raw files).\nTriggers with values of \"1\" were recorded at the onset of the 10 s audio, and\nshortly afterward, triggers with values of \"4\" or \"8\" were stamped to indicate\ninformation about the trial. 
This was done by converting the decimal trial number to bits,\ndenoted b, then calculating 2 ** (b + 2). We've specified these trial triggers\nand more event metadata in each of the '*_eeg_events.tsv' files, which\nis sufficient to determine which trial corresponded to which type of stimulus\n(clicks or speech), which SNR, and which files of which stories were presented.\nE.g., alice_000_peaky_diotic_regress.hdf5 for the first file of the story\ncalled 'alice' (Alice in Wonderland).","recording_modality":["eeg"],"senior_author":"Ross K. Maddox","sessions":[],"size_bytes":40639242081,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["peakysnr"],"timestamps":{"digested_at":"2026-04-22T12:27:44.876654+00:00","dataset_created_at":"2024-08-10T00:49:48.180Z","dataset_modified_at":"2026-04-07T15:26:48.000Z"},"total_files":29,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005408","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv","task-peakysnr_eeg.json","task-peakysnr_events.json"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"e8f3661defd14f8f","model":"openai/gpt-5.2","tagged_at":"2026-01-20T17:52:49.725344+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Perception"],"confidence":{"pathology":0.85,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Closest match is the few-shot example “Subcortical responses to music and speech are alike while cortical responses diverge” (Healthy / Auditory / Perception). That example is also an auditory-brainstem-response (ABR) dataset derived from continuous sound stimuli with participants primarily listening/resting, and it is labeled as Perception. 
This guides mapping ABR-to-speech/clicks paradigms to Modality=Auditory and Type=Perception (rather than Motor/Decision-making/etc.).","metadata_analysis":"Population: The README describes typical healthy-hearing inclusion criteria rather than a patient group: “25 participants, mean ± SD age of 23.4 ± 5.5 years (19-37 years)” and “Inclusion criteria: 1) Age between 18-40 years 2) Normal hearing: audiometric thresholds 20 dB HL or better…”.\nStimulus modality: clearly auditory: “Auditory brainstem responses (ABRs) were derived to continuous peaky speech… and from clicks.” and “Stimuli… peaky speech for up to 5 male narrators… presented… through ER-2 insert earphones”.\nStudy aim/type: focused on auditory encoding/masking effects: “Goal: To better understand masking’s effects on the subcortical neural encoding of naturally uttered speech in human listeners.”","paper_abstract_analysis":"No useful paper information (only a brief citation to a BioRxiv preprint title in the README; no abstract text provided here).","evidence_alignment_check":"Pathology: Metadata says healthy/normal-hearing participants (e.g., “Inclusion criteria… Normal hearing…”). Few-shot pattern suggests Healthy for ABR auditory studies in non-clinical cohorts (e.g., music-vs-speech ABR example). ALIGN.\nModality: Metadata says auditory stimuli delivered via earphones (“peaky speech… clicks… through ER-2 insert earphones”). Few-shot ABR example is labeled Auditory. ALIGN.\nType: Metadata emphasizes sensory neural encoding of sound and masking (“subcortical neural encoding of… speech”). Few-shot ABR example uses Type=Perception for similar auditory-encoding goals. ALIGN.","decision_summary":"Pathology top-2: (1) Healthy — supported by “25 participants…” + “Inclusion criteria… Normal hearing…”, no disorder recruitment stated. (2) Unknown — would apply if recruitment were unclear, but metadata is explicit. 
Final: Healthy.\nModality top-2: (1) Auditory — supported by “ABRs… to… speech… and… clicks” + “presented… through… insert earphones”. (2) Multisensory — weak (silent videos mentioned but stimuli of interest are auditory; videos are ancillary). Final: Auditory.\nType top-2: (1) Perception — supported by “subcortical neural encoding of… speech” and ABR-to-speech/clicks sensory processing focus. (2) Attention — weak because participants “rested or watched silent videos” and story randomization reduced comprehension, but the scientific target is sensory encoding rather than attentional manipulation. Final: Perception.\nConfidence notes: Pathology and Modality have direct, repeated textual support; Type is strongly implied by stated goal and close few-shot analog but has slightly more overlap with possible Attention interpretations."}},"computed_title":"The effect of speech masking on the human subcortical response to continuous speech","nchans_counts":[{"val":2,"count":29}],"sfreq_counts":[{"val":10000.0,"count":29}],"stats_computed_at":"2026-04-21T23:17:03.731630+00:00","total_duration_s":204738.397,"author_year":"Polonenko2024_effect_speech","canonical_name":null}}