{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a333d","dataset_id":"ds004356","associated_paper_doi":null,"authors":["Tong Shan","Madeline S. Cappelloni","Ross K. Maddox"],"bids_version":"1.7.0","contact_info":["Tong Shan"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds004356.v2.2.1","datatypes":["eeg"],"demographics":{"subjects_count":22,"ages":[37,20,20,19,19,19,20,22,20,19,35,19,22,23,23,25,30,28,20,19,20,20],"age_min":19,"age_max":37,"age_mean":22.681818181818183,"species":null,"sex_distribution":{"f":11,"m":11},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds004356","osf_url":null,"github_url":null,"paper_url":null},"funding":["the Schmitt Program in Neuroscience"],"ingestion_fingerprint":"c43f1c567ac4690be11ea1031e96036cbe9139215ccc1936c8d0c0a391bab64d","license":"CC0","n_contributing_labs":null,"name":"Subcortical responses to music and speech are alike while cortical responses diverge","readme":"# README\n## Details related to access to the data\nPlease contact the following authors for further information:\n- Tong Shan (email: tshan@ur.rochester.edu)\n- Ross K. Maddox (email: rmaddox@ur.rochester.edu)\n## Overview\nThe goal of this study is to derive the Auditory Brainstem Response (ABR) from continuous music and speech stimuli using a deconvolution method. Data were collected from June to August 2021.\nThe details of the experiment can be found in Shan et al. (2024). There were two phases in this experiment. In the first phase, ten trials of one-minute clicks were presented to the subjects. In the second phase, 12 types of 12 s stimulus clips (six genres of music and six types of speech) were presented. There were 40 trials for each type, presented in shuffled order.\nBetween trials, there was a 0.5 s pause.\nThe code for stimulus preprocessing and EEG analysis is available on GitHub:\nhttps://github.com/maddoxlab/Music_vs_Speech_abr\n## Format\nThis dataset is formatted according to the EEG Brain Imaging Data Structure (BIDS). It includes EEG recordings from subjects 001 to 024 (excluding subjects 014 and 021) in raw BrainVision format (the `.eeg`, `.vhdr`, and `.vmrk` triplet) and stimulus files in `.wav` format.\nFor some subjects (sub-03 & sub-19), there are two \"runs\" of data: the first run (`run-01`) contains only the click phase (phase 1), and the second run contains the data for the ABR analysis.\nTriggers with a value of \"1\" were recorded at the onset of each stimulus, and shortly afterward triggers with values of \"4\" or \"8\" were stamped to encode the stimulus type and the trial number out of 40. This was done by converting the decimal trial number to binary and stamping, for each bit b, a trigger of 2 ** (b + 2), so a \"4\" encodes a 0 bit and an \"8\" encodes a 1 bit. Triggers of \"999\" denote the start of a new segment of EEG. These trial numbers and additional event metadata are specified in each `*_eeg_events.tsv` file, which is sufficient to determine which trial corresponded to which stimulus type and which file.\n## Subjects\n24 subjects participated in this study.\n**Subject inclusion criteria**\n1. Age between 18 and 40.\n2. Normal hearing: audiometric thresholds of 20 dB HL or better from 500 to 8000 Hz.\n3. English spoken as the primary language.\n4. Self-reported normal or correctable-to-normal vision.\n**Subject exclusion criteria**\n1. Subject 014 self-withdrew partway through the experiment.\n2. Subject 021 was excluded because of technical problems during data collection that led to unusable data.\nAfter excluding these two subjects, 22 subjects (11 male and 11 female), aged 22.7 ± 5.1 (mean ± SD) years, were included in the analysis.\nPlease see `participants.tsv` for more demographic details.\n## Apparatus\nSubjects were seated in a sound-isolating booth on a chair in front of a 24-inch BenQ monitor at a viewing distance of approximately 60 cm. Stimuli were presented at an average level of 65 dB SPL and a sampling rate of 48000 Hz through ER-2 insert earphones plugged into an RME Babyface Pro digital sound card. Stimulus presentation was controlled by a Python script using a custom package, `expyfun`.","recording_modality":["eeg"],"senior_author":"Ross K. Maddox","sessions":[],"size_bytes":228796285688,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["MusicvsSpeech"],"timestamps":{"digested_at":"2026-04-22T12:26:29.422515+00:00","dataset_created_at":"2022-12-06T20:13:46.690Z","dataset_modified_at":"2024-01-18T18:40:33.000Z"},"total_files":24,"storage":{"backend":"s3","base":"s3://openneuro.org/ds004356","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv","task-MusicvsSpeech_eeg.json","task-MusicvsSpeech_events.json"]},"nemar_citation_count":2,"computed_title":"Subcortical responses to music and speech are alike while cortical responses diverge","nchans_counts":[{"val":34,"count":24}],"sfreq_counts":[{"val":10000.0,"count":24}],"stats_computed_at":"2026-04-22T23:16:00.307459+00:00","tags":{"modality":"Auditory","pathology":"Healthy","type":"Perception"},"total_duration_s":null,"author_year":"Shan2022","canonical_name":null}}
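The README's Format section describes how trial numbers were stamped into the trigger channel: each bit b of the trial number becomes a trigger of 2 ** (b + 2), so "4" encodes a 0 bit and "8" encodes a 1 bit. That scheme can be inverted to recover trial numbers from an event stream. A minimal sketch; the MSB-first bit order is an assumption (the README does not state it), and the function name is illustrative, not part of the authors' `expyfun` pipeline:

```python
# Hypothetical decoder for the trigger scheme described in the README.
# Each bit b of the trial number is stamped as 2 ** (b + 2):
# trigger 4 = 2 ** (0 + 2) -> a 0 bit; trigger 8 = 2 ** (1 + 2) -> a 1 bit.
# MSB-first ordering is assumed here, not stated in the README.

def decode_trial_number(triggers):
    """Recover a decimal trial number from a sequence of 4/8 trigger values."""
    bits = ""
    for t in triggers:
        if t == 4:
            bits += "0"
        elif t == 8:
            bits += "1"
        else:
            raise ValueError(f"unexpected trigger value: {t}")
    return int(bits, 2)

# Example: triggers [8, 4, 4, 8, 8, 4] -> binary "100110" -> trial 38
print(decode_trial_number([8, 4, 4, 8, 8, 4]))  # 38
```

In practice the 4/8 pulses between a "1" onset trigger and the next stimulus would first be segmented out of the `*_eeg_events.tsv` stream before decoding.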
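The record's `demographics` block can be cross-checked against the README's "22.7 ± 5.1 (mean ± SD)" summary directly from the listed `ages` array. A quick standard-library check (ages copied verbatim from the record); note the reported SD appears to be the population rather than the sample standard deviation:

```python
# Cross-check the demographics fields against the README's summary stats.
from statistics import mean, pstdev

# Copied verbatim from data.demographics.ages in the record.
ages = [37, 20, 20, 19, 19, 19, 20, 22, 20, 19, 35,
        19, 22, 23, 23, 25, 30, 28, 20, 19, 20, 20]

print(len(ages))               # 22, matching subjects_count
print(round(mean(ages), 1))    # 22.7, matching age_mean and the README
print(round(pstdev(ages), 1))  # 5.1 -> consistent with a population SD
```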