{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a332c","dataset_id":"ds004256","associated_paper_doi":null,"authors":["Ole Bialas","Marc Schoenwiesner","Burkhard Maess"],"bids_version":"1.6.0","contact_info":["Ole Bialas"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds004256.v1.0.5","datatypes":["eeg"],"demographics":{"subjects_count":53,"ages":[23,31,23,24,24,21,24,24,25,24,22,24,23,24,25,27,23,22,22,25,27,27,27,23,25,28,23,20,26,25],"age_min":20,"age_max":31,"age_mean":24.366666666666667,"species":null,"sex_distribution":{"o":30,"m":23},"handedness_distribution":{"r":27,"l":3}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds004256","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"e2e279e3476148ca19ce6514b1f293cf899c43062f7f12b8118330018509e826","license":"CC0","n_contributing_labs":null,"name":"Encoding of Sound Source Elevation in Human Cortex","readme":"# Overview\nThe dataset consists of data from two experiments in which subjects were presented bursts of noise from loudspeakers at different elevations. Subjects who participated in either experiment were initially tested in their ability to localize elevated sound sources. Both experiments were conducted in a hemi-anechoic chamber.\n# Localization Tests\nBursts of pink noise were presented from loudspeakers at different elevations and 10° azimuth (to the listener's right). In the localization test preceding experiment I, these loudspeakers were positioned at elevations of +50°, +25°, 0° and -25°, while the localization test preceding experiment II also included a loudspeaker at -50° elevation. Localization test data is missing for sub-001, sub-002 and sub-003.\n# Deviant Detection (Experiment 1)\nSubjects 001-023 participated in this experiment. 
Subjects heard a long trail of noise from one loudspeaker (adapter), followed by a short burst of noise from another loudspeaker (probe). The elevations of the adapter and probe are encoded in the event values:\n2: adapter at 37.5°, probe at 12.5°\n3: adapter at 37.5°, probe at -12.5°\n4: adapter at 37.5°, probe at -37.5°\n5: adapter at -37.5°, probe at 37.5°\n6: adapter at -37.5°, probe at 12.5°\n7: adapter at -37.5°, probe at -12.5°\n8: no adapter, any non-target location (deviant)\nThe behavioral data contains the trial numbers where a deviant was presented and whether the subject responded within one second by pressing a button.\n# One-Back (Experiment II)\nSubjects 100-129 participated in this experiment. Subjects heard a long trail of white noise through open headphones, followed by a short burst of noise from one of the loudspeakers. The loudspeaker's elevation is encoded in the event values:\n1: 37.5°, 2: 12.5°, 3: -23.5°, 4: -37.5°\nRoughly five percent of trials were targets where subjects heard a beep after the trial, prompting them to localize the previously heard sound. 
The number of those target trials, as well as the target's elevation and the subject's response, can be found in the behavioral data.","recording_modality":["eeg"],"senior_author":"Burkhard Maess","sessions":[],"size_bytes":19516271222,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["deviantdetection","oneback"],"timestamps":{"digested_at":"2026-04-22T12:26:22.621564+00:00","dataset_created_at":"2022-08-29T01:25:10.410Z","dataset_modified_at":"2022-09-24T16:01:53.000Z"},"total_files":53,"storage":{"backend":"s3","base":"s3://openneuro.org/ds004256","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"c20e0a9b923f57ec","model":"openai/gpt-5.2","tagged_at":"2026-01-20T10:35:48.025587+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Perception"],"confidence":{"pathology":0.6,"modality":0.9,"type":0.75},"reasoning":{"few_shot_analysis":"Most similar few-shot examples by stimulus modality are the auditory-perception datasets (e.g., “Subcortical responses to music and speech…” labeled Modality=Auditory, Type=Perception) and the auditory oddball-style datasets (e.g., “EEG: Three-Stim Auditory Oddball…”). These examples show the convention that when the stimulus is sound/noise/music/speech, Modality should be Auditory. They also show that oddball/deviant paradigms can fall under perceptual processing unless the dataset’s primary aim is cognitive control/clinical biomarkers. 
Here, the dataset is centered on auditory spatial localization and detecting deviants, which aligns more with perceptual processing conventions than with decision-making or clinical/intervention labeling.","metadata_analysis":"Key quoted metadata from the README:\n1) Auditory stimuli and spatial manipulation: “subjects were presented bursts of noise from loudspeakers at different elevations.”\n2) Explicit localization/perceptual focus: “subjects were initially tested in their ability to localize elevated sound sources.”\n3) Deviant detection paradigm: “Deviant Detection (Experiment 1)… The behavioral data contains the trial numbers where a deviant was presented and whether the subject responded within one second by pressing a button.”\n4) Additional task structure with probe sounds: “Subjects heard a long trail of noise from one loudspeaker (adapter), followed by a short burst of noise from another loudspeaker (probe).”\nNo lines describe recruiting a clinical group or any diagnosis/condition.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: only “subjects” and task descriptions; no disorder/diagnosis is mentioned anywhere (e.g., “subjects were presented bursts of noise…”).\n- Few-shot pattern suggests: when no clinical recruitment is described, label as Healthy.\n- Alignment: ALIGN (metadata absence of pathology is consistent with Healthy cohort convention).\n\nModality:\n- Metadata says: “bursts of noise from loudspeakers”, “white noise through open headphones”, “heard a beep”.\n- Few-shot pattern suggests: sound/noise stimuli -> Auditory.\n- Alignment: ALIGN.\n\nType:\n- Metadata says: strong perceptual/spatial-hearing emphasis (“ability to localize elevated sound sources”, loudspeaker elevation manipulation) plus deviant detection/one-back elements.\n- Few-shot pattern suggests: sensory discrimination/detection and stimulus property processing -> Perception; deviant/oddball-like detection can 
sometimes be Attention depending on stated aim.\n- Alignment: PARTIAL (task includes attention-like deviant detection), but overall emphasis on sound localization supports Perception as the primary construct.","decision_summary":"Top-2 candidates (with head-to-head comparison) and final choices:\n\nPathology:\n1) Healthy — Evidence: README only references generic “subjects” and provides no diagnosis/clinical recruitment; e.g., “subjects were initially tested in their ability to localize elevated sound sources.”\n2) Unknown — Evidence: no explicit statement like “healthy volunteers.”\nDecision: Healthy wins because the dataset describes standard psychophysics-style auditory experiments with no clinical framing; per catalog convention, absence of any clinical recruitment implies Healthy.\nConfidence basis: no explicit ‘healthy’ quote, only absence-of-pathology -> moderate confidence.\n\nModality:\n1) Auditory — Evidence: “bursts of noise from loudspeakers”, “white noise through open headphones”, “heard a beep”.\n2) Multisensory — Weak evidence: button press is a response, not a stimulus modality.\nDecision: Auditory wins clearly because all described stimuli are sounds/noise.\nConfidence basis: multiple explicit auditory-stimulus quotes.\n\nType:\n1) Perception — Evidence: primary measures involve auditory spatial localization and stimulus elevation (“ability to localize elevated sound sources”; loudspeakers “at different elevations”).\n2) Attention — Evidence: “Deviant Detection” and a “One-Back” style target prompting localization could be framed as attention/vigilance.\nDecision: Perception wins because the dominant experimental manipulation and outcome is auditory localization (a perceptual/spatial-hearing construct), with deviant detection serving as a detection component within auditory perception.\nConfidence basis: explicit localization emphasis plus deviant-detection text, but no explicit statement of studying ‘attention’, so moderate-high 
confidence."}},"nemar_citation_count":0,"computed_title":"Encoding of Sound Source Elevation in Human Cortex","nchans_counts":[{"val":64,"count":53}],"sfreq_counts":[{"val":500.0,"count":53}],"stats_computed_at":"2026-04-21T23:17:03.729539+00:00","total_duration_s":152413.33,"author_year":"Bialas2022","canonical_name":null}}