{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3463","dataset_id":"ds006735","associated_paper_doi":null,"authors":["Tong Shan","Edmund C. Lalor","Ross K. Maddox"],"bids_version":"1.2.1","contact_info":["Tong Shan"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds006735.v2.0.0","datatypes":["eeg"],"demographics":{"subjects_count":27,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds006735","osf_url":null,"github_url":null,"paper_url":null},"funding":["NSF CAREER grant 2142612","the Schmitt Program in Neuroscience"],"ingestion_fingerprint":"7df435b3aea02d9f0e5dce69f951300477e030ac3a2aa7a09bcab748bb4bbf56","license":"CC0","n_contributing_labs":null,"name":"Chimeric music reveals an interaction of pitch and time in electrophysiological signatures of music encoding","readme":"# Details related to access to the data\nPlease contact the following authors for further information:\nTong Shan (email: tongshan@stanford.edu)\nRoss K. Maddox (email: rkmaddox@med.umich.edu)\n# Overview\nThis study examines pitch-time interactions in music processing by introducing “chimeric music,” which pairs two distinct melodies, and exchanges their pitch contours and note onset-times to create two new melodies, thereby distorting musical pattern while maintaining the marginal statistics of the original pieces’ pitch and temporal sequences.\nData collected from Sep to Nov, 2023.\nThe details of the experiment can be found at Shan et al. (2024). There were two phases in this experiment. For the first phase, ten trials of one-minute clicks were presented to the subjects. For the second phase, the 2 types of monophonic music (original and chimeric) clips were presented. There were 33 trials for each type with shuffled order. Between trials, there was a 0.5 s pause.\nThe code for analysis for this study can be found in GitHub repo (https://github.com/maddoxlab/Chimeric_music).\n# Format\nThis dataset is formatted according to the EEG Brain Imaging Data Structure. It includes EEG recording from subject 001 to subject 027 in raw brainvision format (including .eeg, .vhdr, and .vmrk triplet).\n# Subjects\n27 subjects participated in this study.\n## Subject inclusion criteria\nAge between 18-40.\nNormal hearing: audiometric thresholds of 20 dB HL or better from 500 to 8000 Hz.\nSpeak English as their primary language.\nSelf-reported normal or correctable to normal vision.\nTwenty-seven participants participated in this experiment with an age of 22.9 ± 3.9 (mean ± STD) years.\n# Apparatus\nSubjects were seated in a sound-isolating booth on a chair in front of a 24-inch BenQ monitor with a viewing distance of approximately 60 cm. Stimuli were presented at an average level of 60 dB SPL and a sampling rate of 48000 Hz through ER-2 insert earphones plugged into an RME Babyface Pro digital sound card. The stimulus presentation for the experiment was controlled by a python script using a custom package, expyfun.\nFollowing the experimental session, participants completed a self-reported musicianship questionnaire (adapted from Whiteford et al, 2025). The questionnaire is included in this repository.","recording_modality":["eeg"],"senior_author":"Ross K. 
Maddox","sessions":[],"size_bytes":188843790172,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["ChimericMusic"],"timestamps":{"digested_at":"2026-04-22T12:29:35.070950+00:00","dataset_created_at":"2025-09-30T00:41:21.669Z","dataset_modified_at":"2026-01-15T01:24:28.000Z"},"total_files":27,"storage":{"backend":"s3","base":"s3://openneuro.org/ds006735","raw_key":"dataset_description.json","dep_keys":["CHANGES","README"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"5da8bd2a7c6b09cb","model":"openai/gpt-5.2","tagged_at":"2026-01-20T19:07:05.013064+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Perception"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.7},"reasoning":{"few_shot_analysis":"Closest few-shot match is the dataset titled “Subcortical responses to music and speech are alike while cortical responses diverge”, which involves auditory click stimuli and music/speech listening and is labeled Modality=Auditory and Type=Perception. The current dataset similarly presents (1) one-minute clicks and (2) music clips (original vs chimeric) via earphones, aiming to characterize auditory/music processing rather than motor control, learning, or clinical outcomes. This few-shot example guides mapping passive/evoked auditory stimulus processing to Type=Perception and Modality=Auditory.","metadata_analysis":"Key population facts (no clinical recruitment; normal sensory screening):\n- “Age between 18-40.”\n- “Normal hearing: audiometric thresholds of 20 dB HL or better from 500 to 8000 Hz.”\n\nKey task/stimulus facts (auditory clicks + music through earphones):\n- “For the first phase, ten trials of one-minute clicks were presented to the subjects.”\n- “For the second phase, the 2 types of monophonic music (original and chimeric) clips were presented.”\n- “Stimuli were presented… through ER-2 insert earphones…”\n\nKey study aim (music perception):\n- “This study examines pitch-time interactions in music processing… introducing ‘chimeric music’… to create two new melodies…”","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n1) Metadata says: non-clinical inclusion criteria with sensory screening, e.g., “Age between 18-40.” and “Normal hearing… 20 dB HL or better…”.\n2) Few-shot suggests: such criteria typically correspond to Healthy cohorts.\n3) Alignment: ALIGN.\n\nModality:\n1) Metadata says: auditory stimuli, e.g., “one-minute clicks were presented” and “music… clips were presented” and “through ER-2 insert earphones”.\n2) Few-shot suggests: clicks/music listening -> Auditory.\n3) Alignment: ALIGN.\n\nType:\n1) Metadata says: perceptual processing aim, e.g., “examines pitch-time interactions in music processing”.\n2) Few-shot suggests: auditory stimulus processing datasets (clicks/music) map to Perception.\n3) Alignment: ALIGN.","decision_summary":"Pathology (Top-2):\n- Healthy (selected): Supported by “Age between 18-40.” and “Normal hearing… 20 dB HL or better…”, with no mention of any disorder-based recruitment.\n- Unknown (runner-up): Would apply if health status were not inferable beyond age, but the explicit normal-hearing screening supports Healthy.\nAlignment status: few-shot convention aligns with metadata.\nConfidence notes: 2 explicit population quotes supporting non-clinical/healthy screening.\n\nModality (Top-2):\n- Auditory (selected): “one-minute clicks…”, “music… clips…”, and “through ER-2 insert earphones”.\n- Multisensory (runner-up): A monitor is mentioned 
(“in front of a… monitor”), but no visual stimulus content is described as the experimental manipulation.\nAlignment status: few-shot convention aligns with metadata.\nConfidence notes: 3 explicit auditory-stimulus/apparatus quotes.\n\nType (Top-2):\n- Perception (selected): Study aim is auditory/music perception: “examines pitch-time interactions in music processing” using manipulated melodies (“chimeric music”).\n- Attention (runner-up): Participants might attend to sounds, but attention is not stated as the primary construct; the stated goal is music processing/perceptual interactions.\nAlignment status: few-shot convention aligns with metadata.\nConfidence notes: 1 strong explicit aim quote plus close few-shot analog (auditory clicks/music -> perception)."}},"computed_title":"Chimeric music reveals an interaction of pitch and time in electrophysiological signatures of music encoding","nchans_counts":[{"val":36,"count":24},{"val":63,"count":2},{"val":34,"count":1}],"sfreq_counts":[{"val":10000.0,"count":27}],"stats_computed_at":"2026-04-22T23:16:00.311903+00:00","total_duration_s":null,"author_year":"Shan2025","canonical_name":null}}
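The response above is a single JSON record, so pulling a human-readable summary out of it needs nothing beyond the standard library. The sketch below assumes the response has been saved locally as ds006735_record.json (a hypothetical filename); every key it touches appears verbatim in the record above.

```python
import json

# Minimal sketch: summarize the eegdash record shown above.
# "ds006735_record.json" is a hypothetical local copy of the API response.
with open("ds006735_record.json", encoding="utf-8") as f:
    record = json.load(f)

data = record["data"]

print(f"Dataset:  {data['dataset_id']} - {data['name']}")
print(f"Subjects: {data['demographics']['subjects_count']}")
print(f"Files:    {data['total_files']}")
print(f"Size:     {data['size_bytes'] / 1e9:.1f} GB")
print(f"License:  {data['license']} (BIDS {data['bids_version']})")
print(f"Source:   {data['external_links']['source_url']}")
print(f"S3 base:  {data['storage']['base']}")

# Per-recording channel-count and sampling-rate distributions.
for entry in data["nchans_counts"]:
    print(f"  {entry['count']} recording(s) with {entry['val']} channels")
for entry in data["sfreq_counts"]:
    print(f"  {entry['count']} recording(s) sampled at {entry['val']} Hz")
```

The key accesses mirror the record's schema exactly, so a schema change would surface as a KeyError rather than a silently wrong summary, which is usually what you want for a quick inspection script.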
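To look at the EEG itself rather than the metadata, the storage block and the README together say where the files live (the public OpenNeuro S3 bucket) and what format they are in (BrainVision triplets). Below is a minimal sketch using MNE-Python; the local path and the exact BIDS filename are assumptions inferred from the README and the tasks field, not values stored in the record.

```python
import mne

# Assumes the dataset has first been mirrored locally from storage.base, e.g.:
#   aws s3 sync --no-sign-request s3://openneuro.org/ds006735 ds006735/
# The filename below is an assumption based on the BIDS layout described in the
# README (subjects 001-027, task "ChimericMusic", BrainVision .vhdr/.eeg/.vmrk).
vhdr_path = "ds006735/sub-001/eeg/sub-001_task-ChimericMusic_eeg.vhdr"

raw = mne.io.read_raw_brainvision(vhdr_path, preload=False)
print(raw.info["sfreq"])  # expected 10000.0 Hz per the record's sfreq_counts
print(len(raw.ch_names))  # most recordings list 36 channels per nchans_counts
```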