{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a33e1","dataset_id":"ds005407","associated_paper_doi":null,"authors":["Melissa J. Polonenko","Ross K. Maddox"],"bids_version":"1.7.0","contact_info":["Melissa Polonenko"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds005407.v1.0.1","datatypes":["eeg"],"demographics":{"subjects_count":25,"ages":[37,37,19,26,19,20,35,20,20,21,21,22,19,20,25,20,20,28,23,22,22,20,28,19,21],"age_min":19,"age_max":37,"age_mean":23.36,"species":null,"sex_distribution":{"f":16,"m":9},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005407","osf_url":null,"github_url":null,"paper_url":null},"funding":["NIDCD R01DC017962"],"ingestion_fingerprint":"4e63ea7541d412c1406484289ff2ce65609b84be78f843da7f0aff13c38037de","license":"CC0","n_contributing_labs":null,"name":"The effect of speech masking on the subcortical response to speech","readme":"README\n------\nDetails related to access to the data\n-------------------------------------\nPlease contact the following authors for further information:\n    Melissa Polonenko (email: mpolonen@umn.edu)\n    Ross Maddox (email: rkmaddox@med.umich.edu)\nOverview\n--------\nThis is the \"peaky_snr\" dataset for the paper\nPolonenko MJ & Maddox RK (2024), with citations listed below.\neNeuro: Polonenko, M. J., & Maddox, R. K. (2025). eNeuro 24 March 2025, 12 (4) ENEURO.0561-24.2025; https://doi.org/10.1523/ENEURO.0561-24.2025\nbioRxiv: The effect of speech masking on the subcortical response to speech. 
https://www.biorxiv.org/content/10.1101/2024.12.10.627771v1\nAuditory brainstem responses (ABRs) were derived to continuous peaky speech\nfrom one to five simultaneously presented talkers, and to clicks.\nData were collected from June to July 2021.\nGoal: To better understand masking’s effects on the subcortical neural encoding\nof naturally uttered speech in human listeners.\nTo do this we leveraged our recently developed method for determining the\nauditory brainstem response (ABR) to speech (Polonenko and Maddox, 2021).\nWhereas our previous work was aimed at encoding of single talkers, here we\ndetermined the ABR to speech in quiet as well as in the presence of varying\nnumbers of other talkers.\nThe details of the experiment can be found in Polonenko & Maddox (2024).\nStimuli:\n    1) randomized click trains at an average rate of 40 Hz,\n    60 x 10 s trials for a total of 10 minutes;\n    2) peaky speech from up to 5 male narrators: 30 minutes at each SNR\n    (clean, 0 dB, -3 dB, -6 dB), corresponding to 1, 2, 3, and 5 talkers\n    presented simultaneously, each set to 65 dB.\n    NOTE: files for each story were completely randomized. Random combinations\n    were created so that each story was equally represented in the data.\nThe code for stimulus preprocessing and EEG analysis is available on GitHub:\n    https://github.com/polonenkolab/peaky_snr\nFormat\n------\nThe dataset is formatted according to the EEG extension of the Brain Imaging\nData Structure (BIDS). It includes EEG recordings from participants 01 to 25\nin raw BrainVision format (3 files: .eeg, .vhdr, .vmrk) and stimulus files in\n.hdf5 format. The stimulus files contain the audio ('audio') and regressors\nfor the deconvolution ('pinds' are the pulse indices; 'anm' is an auditory\nnerve model regressor, which was used during analyses but was not included as\npart of the article).\nGenerally, you can find detailed event data in the .tsv files and descriptions\nin the accompanying .json files. 
Raw EEG files are provided in the Brain\nProducts format.\nParticipants\n------------\n25 participants, mean ± SD age of 23.4 ± 5.5 years (19-37 years)\nInclusion criteria:\n    1) Age between 18-40 years\n    2) Normal hearing: audiometric thresholds 20 dB HL or better from 500 to 8000 Hz\n    3) Speak English as their primary language\nPlease see participants.tsv for more information.\nApparatus\n---------\nParticipants sat in a darkened sound-isolating booth and rested or watched\nsilent videos with closed captioning. Stimuli were presented at an average level\nof 65 dB SPL (per story; total for 5 talkers = 71 dB) and a sampling rate of\n48 kHz through ER-2 insert earphones plugged into an RME Babyface Pro digital\nsound card. Custom Python scripts using expyfun were used to control the\nexperiment and stimulus presentation.\nDetails about the experiment\n----------------------------\nFor a detailed description of the task, see Polonenko & Maddox (2024) and the\nsupplied `task-peaky_snr_eeg.json` file. The 4 SNR speech conditions and the\nstory tokens were randomized, so participants would not be able to follow the\nstories. For clicks, the trials were not randomized (the click trains were\nalready random).\nTrigger onset times in the .tsv files have already been corrected for the\ntubing delay of the insert earphones (but not in the events of the raw files).\nTriggers with values of \"1\" were recorded at the onset of the 10 s audio, and\nshortly afterwards triggers with values of \"4\" or \"8\" were stamped to indicate\ninformation about the trial. This was done by converting the decimal trial\nnumber to binary and stamping each bit, denoted b, as a trigger with value\n2 ** (b + 2), i.e., 4 for a 0 bit and 8 for a 1 bit. 
We've specified these trial triggers\nand more event metadata in each of the '*_eeg_events.tsv' files, which\nare sufficient to know which trial corresponded to which type of stimulus\n(clicks or speech), which SNR, and which files of which stories were presented,\ne.g., alice_000_peaky_diotic_regress.hdf5 for the first file of the story\ncalled 'alice' (Alice in Wonderland).","recording_modality":["eeg"],"senior_author":"Ross K. Maddox","sessions":[],"size_bytes":40639241863,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["peakysnr"],"timestamps":{"digested_at":"2026-04-06T13:08:07.430713+00:00","dataset_created_at":"2024-08-10T00:31:25.134Z","dataset_modified_at":"2026-03-27T14:28:55.000Z"},"total_files":29,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005407","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv","task-peakysnr_eeg.json","task-peakysnr_events.json"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"e8f3661defd14f8f","model":"openai/gpt-5.2","tagged_at":"2026-01-20T17:52:30.641371+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Perception"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Closest few-shot match is the dataset titled \"Subcortical responses to music and speech are alike while cortical responses diverge\" (Healthy / Auditory / Perception). It shares the same core paradigm features: continuous auditory stimuli (speech, plus clicks), ABR/brainstem-response focus, and a largely passive listening setup. 
That example supports mapping an ABR-to-continuous-speech dataset to Modality=Auditory and Type=Perception (sensory/neural encoding of sound), with Pathology=Healthy when inclusion criteria emphasize normal hearing and no clinical recruitment.","metadata_analysis":"Pathology-related evidence indicates a non-clinical sample: (1) \"25 participants, mean ± SD age of 23.4 ± 5.5 years (19-37 years)\" and (2) \"Inclusion criteria: 1) Age between 18-40 years 2) Normal hearing: audiometric thresholds 20 dB HL or better...\" and (3) \"Speak English as their primary language\"—no disorder group is described.\n\nAuditory modality is explicit throughout: (1) \"Auditory brainstem responses (ABRs) were derived...\" (2) \"Stimuli: 1) randomized click trains... 2) peaky speech...\" and (3) \"Stimuli were presented... through ER-2 insert earphones\".\n\nType (research purpose) centers on auditory/speech encoding under masking: (1) \"Goal: To better understand masking’s effects on the subcortical neural encoding of naturally uttered speech\" (2) \"The effect of speech masking on the subcortical response to speech\" and (3) \"determined the ABR to speech in quiet as well as in the presence of varying numbers of other talkers.\" This is best characterized as sensory/perceptual processing rather than decision-making, learning, or motor control.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says participants are selected for \"Normal hearing\" with age/language criteria and no patient group; few-shot convention assigns Healthy for such normative recruitment. ALIGN.\n\nModality: Metadata says stimuli are \"click trains\" and \"peaky speech\" delivered via \"insert earphones\"; few-shot convention labels similar ABR-to-speech datasets as Auditory. ALIGN.\n\nType: Metadata says the goal is \"subcortical neural encoding\" and \"subcortical response to speech\" under masking (speech-in-noise / multiple talkers). 
Few-shot convention for similar ABR/speech encoding work maps to Perception (sensory encoding) rather than attention/decision-making. ALIGN.","decision_summary":"Pathology top-2: (1) Healthy — supported by \"Inclusion criteria... Normal hearing\" and absence of any clinical recruitment; (2) Unknown — only if participant health status were unspecified. Winner: Healthy (clear explicit inclusion criteria).\n\nModality top-2: (1) Auditory — \"click trains\", \"peaky speech\", \"insert earphones\"; (2) Multisensory — minor visual component (watching silent captioned videos) but it is not the stimulus under study. Winner: Auditory.\n\nType top-2: (1) Perception — explicit focus on \"subcortical neural encoding\" and ABR to speech under masking; (2) Attention — participants may \"rested or watched silent videos\" but attention is not the main construct studied. Winner: Perception.\n\nConfidence justification: Pathology has 2+ explicit quotes about non-clinical inclusion (normal hearing, age range) and no contradictory statements; Modality has 3+ explicit auditory-stimulus quotes; Type has 3+ explicit goal/aim quotes plus a strong few-shot analog to ABR speech encoding labeled Perception."}},"computed_title":"The effect of speech masking on the subcortical response to speech","nchans_counts":[{"val":2,"count":29}],"sfreq_counts":[{"val":10000.0,"count":29}],"stats_computed_at":"2026-04-04T21:29:34.901538+00:00","total_duration_s":204738.397,"author_year":"Polonenko2024_effect"}}