{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3452","dataset_id":"ds006468","associated_paper_doi":null,"authors":["Till Habersetzer","Bernd T. Meyer"],"bids_version":"1.7.0","contact_info":["Till Habersetzer"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds006468.v1.1.2","datatypes":["meg"],"demographics":{"subjects_count":24,"ages":[29,27,23,33,24,31,22,23,23,27,20,28,27,26,27,23,25,22,23,27,21,22,22,28],"age_min":20,"age_max":33,"age_mean":25.125,"species":null,"sex_distribution":{"m":8,"f":16},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds006468","osf_url":null,"github_url":null,"paper_url":null},"funding":["This work was funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy– EXC 2177/1- Project ID 390895286 and by Forschungspoolmittel Potentialbereich mHealth from the School VI Medicine and Health Sciences at Carl von Ossietzky University Oldenburg (PB mHealth 2020-13)."],"ingestion_fingerprint":"8faa165a82e739856d9c269fa44f1c4be621e485c94c57230242476f12f29871","license":"CC0","n_contributing_labs":null,"name":"MEG-SCANS - A comprehensive magnetoencephalography speech dataset with Stories, Chirps And Noisy Sentences.","readme":"References\n----------\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nNiso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). 
MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. https://doi.org/10.1038/sdata.2018.110\nDescription\n-----------\nThe MEG-SCANS (Stories, Chirps, And Noisy Sentences) dataset provides raw and MaxFiltered magnetoencephalography (MEG) recordings from 24 German-speaking participants, collected over three months. Each participant engaged in an auditory experiment, listening to approximately one hour of stimuli, including two audiobooks (approx. 20 minutes each), 120 sentences from the Oldenburger Matrix Sentence Test (OLSA) presented at varying speech intelligibility levels (20% to 95%) for Speech Reception Threshold (SRT) assessment, and short up-chirps used for MEG signal quality assessment. For each participant, the dataset comprises raw MEG data, corresponding MaxFiltered data, two empty-room MEG recordings (pre- and post-session), a structural MRI scan of the head, behavioral audiogram and SRT results from hearing screenings, and the corresponding audio stimulus material (audiobooks, envelopes, and chirp stimuli). Auxiliary channels recorded include the left audio channel (MISC001), right audio channel (MISC002), and the instructor's microphone (MISC007), all sampled at 1000 Hz. Organized according to the Brain Imaging Data Structure (BIDS), this dataset offers a robust benchmark for large-scale encoding/decoding analyses of temporally-resolved brain responses to speech. Note that sub-01 served as a pilot, so its data follows a slightly different experimental design, specifically lacking chirp stimuli and featuring different audiobooks; this variation is accounted for in the provided analysis pipelines. Comprehensive Matlab and Python code is included alongside the entire analysis pipeline [https://doi.org/10.5281/zenodo.17397581] to replicate key data validations, ensuring transparency and reproducibility. 
The dataset is described in an accompanying data descriptor paper [https://doi.org/10.1038/s41597-025-06397-4].","recording_modality":["meg"],"senior_author":"Bernd T. Meyer","sessions":[],"size_bytes":108674320301,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["audiobook1","audiobook2","noise","olsa"],"timestamps":{"digested_at":"2026-04-22T12:29:30.030325+00:00","dataset_created_at":"2025-07-14T17:00:44.163Z","dataset_modified_at":"2025-12-10T17:26:05.000Z"},"total_files":189,"storage":{"backend":"s3","base":"s3://openneuro.org/ds006468","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"975795e3557d3eaf","model":"openai/gpt-5.2","tagged_at":"2026-01-20T19:00:37.954986+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Perception"],"confidence":{"pathology":0.7,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Closest few-shot match by paradigm and stimulus modality is the example dataset titled \"Subcortical responses to music and speech are alike while cortical responses diverge\" (Healthy, Auditory, Perception). That example is an auditory listening paradigm using continuous naturalistic stimuli (music/speech) to study brain responses to sound. The current dataset is also an auditory listening experiment with continuous speech (audiobooks, sentences) and acoustically-defined chirps for signal quality, which by the few-shot convention maps to Modality=Auditory and Type=Perception. By contrast, the TBI auditory oddball few-shot involves an oddball/cognitive-control style task and a clinical cohort, which does not match the present naturalistic speech-listening design.","metadata_analysis":"Key metadata indicates an auditory speech-listening MEG experiment in a non-clinical participant sample. 
Quotes: (1) \"raw and MaxFiltered magnetoencephalography (MEG) recordings from 24 German-speaking participants\". (2) \"Each participant engaged in an auditory experiment, listening to approximately one hour of stimuli\". (3) \"including two audiobooks... 120 sentences... presented at varying speech intelligibility levels... and short up-chirps\". Additional supporting detail for auditory input: \"Auxiliary channels recorded include the left audio channel (MISC001), right audio channel (MISC002), and the instructor's microphone (MISC007)\" and mention of \"hearing screenings\" (\"behavioral audiogram and SRT results\"). No recruitment based on a disorder is described.","paper_abstract_analysis":"No paper abstract text was provided in the input (only a link to a data descriptor). No useful paper information.","evidence_alignment_check":"Pathology: Metadata says \"24 German-speaking participants\" with no diagnosis or patient group described, and includes hearing screening/audiogram measures (suggesting typical/hearing-screened participants). Few-shot pattern for non-clinical participant samples without disorder focus maps to Healthy. ALIGN.\nModality: Metadata explicitly says \"auditory experiment\" and describes listening to \"audiobooks\", \"sentences\" and \"chirps\" with recorded \"audio channel\" auxiliaries. Few-shot convention for sound stimuli maps to Auditory. ALIGN.\nType: Metadata emphasizes brain responses to speech and \"encoding/decoding analyses of temporally-resolved brain responses to speech\" plus intelligibility manipulation for SRT assessment. Few-shot convention for sensory stimulus processing/listening studies maps to Perception (as in the music vs speech ABR example). ALIGN.","decision_summary":"Top-2 candidates — Pathology: (1) Healthy: supported by lack of any disorder recruitment (\"24 German-speaking participants\") and general experimental/hearing-screening context (\"behavioral audiogram and SRT results from hearing screenings\"). 
(2) Unknown: possible because metadata does not explicitly say \"healthy\" or \"controls\". Winner: Healthy (no clinical cohort stated). Confidence=0.7.\nTop-2 candidates — Modality: (1) Auditory: supported by \"auditory experiment\", \"listening\", \"audiobooks\", \"sentences\", \"up-chirps\", and recorded \"audio channel\" auxiliaries. (2) Multisensory: weak, only because there is mention of an \"instructor's microphone\" and MRI, but these are not participant stimulus modalities. Winner: Auditory. Confidence=0.9.\nTop-2 candidates — Type: (1) Perception: supported by listening to speech stimuli and intelligibility manipulation (\"varying speech intelligibility levels\") and focus on \"brain responses to speech\" and encoding/decoding. (2) Attention: possible because long-duration listening could involve attentional engagement, but no explicit attention manipulation is described. Winner: Perception. Confidence=0.8."}},"computed_title":"MEG-SCANS - A comprehensive magnetoencephalography speech dataset with Stories, Chirps And Noisy Sentences.","nchans_counts":[{"val":341,"count":153},{"val":347,"count":7},{"val":372,"count":5}],"sfreq_counts":[{"val":1000.0,"count":165}],"stats_computed_at":"2026-04-22T23:16:00.311659+00:00","total_duration_s":78258.835,"canonical_name":null,"name_confidence":0.98,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Habersetzer2025"}}