{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3443","dataset_id":"ds006334","associated_paper_doi":null,"authors":["Biau E","Wang D","Park H","Jensen O","Hanslmayr S"],"bids_version":"1.0.2","contact_info":["Emmanuel Biau"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds006334.v1.0.0","datatypes":["meg"],"demographics":{"subjects_count":30,"ages":[25,23,22,23,25,24,24,26,30,22,21,24,23,24,25,23,23,22,26,21,25,26,31,24,19,20,23,25,22,22],"age_min":19,"age_max":31,"age_mean":23.766666666666666,"species":null,"sex_distribution":{"m":17,"f":13},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds006334","osf_url":null,"github_url":null,"paper_url":null},"funding":["Sir Henry Wellcome Fellowship (210924/Z/18/Z)","European Research Council (Consolidator Grant 647954)","Economic and Social Research Council (ES/R010072/1)"],"ingestion_fingerprint":"c78ee3bd80769897d75cf581978eec9d69b0dc20a92f104e7f93ebb52ac3c195","license":"CC0","n_contributing_labs":null,"name":"Neocortical and Hippocampal Theta Oscillations Track Audiovisual Integration and Replay of Speech Memories","readme":"General information:\nThis repository contains the raw MEG data, T1-weighted anatomical scans, the corresponding behavioural logfiles, as well as the scripts to perform analyses and results reported in the manuscript:\nBiau, E., Wang, D., Park, H., Jensen, O., & Hanslmayr, S. (2025). Neocortical and hippocampal theta oscillations track audiovisual integration and replay of speech memories. Journal of Neuroscience, 45(21).\nTask overview:\nThe experimental paradigm consisted of repeated blocks, with each block being composed of three successive tasks: encoding, distractor, and retrieval task.\n1) Encoding: participants were presented with a series of audiovisual speech movies and performed an audiovisual synchrony detection. Each trial started with a brief fixation cross (jittered duration, 1,000–1,500 ms) followed by the presentation of a random synchronous or asynchronous audiovisual speech movie (5 s). After the movie end, participants had to determine whether video and sound were presented in synchrony or asynchrony in the movie, by pressing the index finger (synchronous) or the middle finger (asynchronous) button of the response device as fast and accurate as possible. The next trial started after the participant’s response. After the encoding, the participants did a short distractor task. Each trial started with a brief fixation cross (jittered duration, 1,000–1,500 ms) followed by the presentation of a random number (from 1 to 99) displayed at the center of the screen.\n2) Distractor: Participants were instructed to determine as fast and accurate as possible whether this number was odd or even by pressing the index (odd) or the middle finger (even) button of the response device. Each distractor task contained 20 trials. The purpose of the distractor task was only to clear memory up. After the distractor task, the participants performed the retrieval task to assess their memory. 
Each trial started with a brief fixation cross (jittered duration, 1,000–1,500 ms) followed by the presentation of a static frame depicting the face of a speaker from a movie presented during the preceding encoding task.\n3) Retrieval: During this visual cueing (5 s), participants were instructed to recall as accurately as possible all the auditory information previously associated with the speaker’s speech during the movie presentation. At the end of the visual cueing, participants were given the opportunity to listen to two auditory speech stimuli: one stimulus corresponded to the speaker’s auditory speech from the same movie (i.e., matching). The other auditory stimulus was taken from a different, randomly selected movie with a speaker of the same gender (i.e., unmatching). Participants chose to listen to each stimulus sequentially by pressing the index finger (Speech 1) or the middle finger (Speech 2) button of the response device. The order of presentation was free, but for every trial, participants were allowed to listen to each auditory stimulus only once to avoid speech restudy. At the end of the second auditory stimulus, participants were instructed to determine, as quickly and accurately as possible, which auditory speech stimulus corresponded to the speaker’s face frame, by pressing the index finger (Speech 1) or the middle finger (Speech 2) button of the response device. The next retrieval trial started after the participant’s response.\nAfter the last retrieval trial, participants took a short break before starting a new block (encoding–distractor–retrieval).\nEvents and corresponding trigger values in .fif raw MEG data:\nEach participant underwent only one session. Runs 1 to 5 are simply chunks of the continuous MEG recording from that single session, split automatically by the software.\nAudiovisual movie onset [1]; Visual cue onset [2]; Speech 1 onset [4]; Speech 2 onset [8]; Probe response key press [16]; Movie Localiser onset [32] and Sound Localiser onset [64].\nSome participants have associated individual T1w anatomical scans; others do not.","recording_modality":["meg"],"senior_author":"Hanslmayr S","sessions":[],"size_bytes":178454548010,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["AVspeech"],"timestamps":{"digested_at":"2026-04-22T12:29:17.579861+00:00","dataset_created_at":"2025-06-10T16:14:43.829Z","dataset_modified_at":"2025-06-11T09:26:21.000Z"},"total_files":128,"storage":{"backend":"s3","base":"s3://openneuro.org/ds006334","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.tsv"]},"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"70bd2cf21899c8d2","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Multisensory"],"type":["Memory"],"confidence":{"pathology":0.65,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot conventions:\n1) Cross-modal Oddball Task (Parkinson’s; Modality=Multisensory): demonstrates that when trials include concurrent auditory+visual cues, the modality label should be “Multisensory”. This guides labeling here because the task uses “audiovisual speech movies”.\n2) Digit span task (Healthy; Modality=Auditory; Type=Memory): demonstrates that when the paradigm is explicitly organized around encoding/maintenance/retrieval and recall/recognition, the Type should be “Memory” rather than “Perception” or “Attention”. 
This guides labeling here because the dataset has explicit “encoding… retrieval task” and instructed recall/recognition of previously encoded speech.\n3) Auditory music vs speech ABR (Healthy; Type=Perception): shows that passive/low-demand stimulus processing maps to “Perception”; used as the main runner-up comparator since this dataset also includes a synchrony detection component, but overall includes explicit memory retrieval.","metadata_analysis":"Key metadata facts (quoted):\n- Population/demographics only (no diagnosis indicated): “Subjects: 30; Sex: {'m': 17, 'f': 13}; Age range: 19-31”.\n- Multisensory audiovisual stimulation during encoding: “participants were presented with a series of audiovisual speech movies and performed an audiovisual synchrony detection.”\n- Explicit memory design: “each block being composed of three successive tasks: encoding, distractor, and retrieval task.”\n- Explicit instructed recall at retrieval: “participants were instructed to recall as accurately as possible every auditory information previously associated with the speaker’s speech”.\n- Recognition/choice between two speech items: “Participants chose to listen two auditory speech stimuli… determine… which auditory speech stimulus corresponded to the speaker’s face frame”.\n- Events corroborate AV + memory retrieval structure: “Audiovisual movie onset [1]; Visual cue onset [2]; Speech 1 onset [4]; Speech 2 onset [8]”.","paper_abstract_analysis":"No useful paper information (abstract not provided in metadata).","evidence_alignment_check":"Pathology:\n- Metadata says: only demographics (“Subjects: 30… Age range: 19-31”) and no recruitment diagnosis/clinical group is mentioned.\n- Few-shot pattern suggests: in non-clinical cognitive experiments with typical adult participants and no disorder terms, label Pathology as “Healthy”.\n- Alignment: ALIGN (no clinical population stated; few-shot convention supports Healthy).\n\nModality:\n- Metadata says: “audiovisual speech movies” and “audiovisual synchrony detection” (explicitly both visual+auditory).\n- Few-shot pattern suggests: concurrent cross-modal stimulation maps to “Multisensory” (as in the cross-modal oddball example).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: “encoding… distractor… retrieval task” and “recall… auditory information previously associated” plus forced-choice recognition of the correct speech.\n- Few-shot pattern suggests: tasks centered on encoding/retrieval/recall/recognition map to “Memory” (digit span example), even if they also include perceptual judgments.\n- Alignment: ALIGN (memory is explicit and central; perception is present but secondary).","decision_summary":"Top-2 comparative selection:\n\n1) Pathology\n- Candidate A: Healthy\n  - Evidence: absence of any diagnosis/clinical recruitment language; only demographics are given: “Subjects: 30… Age range: 19-31”.\n- Candidate B: Unknown\n  - Evidence: metadata does not explicitly say “healthy”, “control”, or screening criteria.\n- Head-to-head: Healthy wins because the dataset is described as a standard cognitive MEG experiment with adult demographics and no clinical-group indicators; “Unknown” remains plausible only due to lack of an explicit ‘healthy’ statement.\n- Final: Healthy. 
Confidence evidence basis: 1 demographic quote + contextual inference from lack of pathology terms.\n\n2) Modality\n- Candidate A: Multisensory\n  - Evidence: “audiovisual speech movies”; “audiovisual synchrony detection”; triggers include both “Audiovisual movie onset” and later auditory “Speech 1/2 onset”.\n- Candidate B: Auditory\n  - Evidence: retrieval includes listening to “two auditory speech stimuli”.\n- Head-to-head: Multisensory wins because the primary stimulus during encoding is explicitly audiovisual (audio+video), and the study focus includes “audiovisual integration”.\n- Final: Multisensory. Confidence evidence basis: 3 explicit modality-supporting quotes/features.\n\n3) Type\n- Candidate A: Memory\n  - Evidence: “encoding… distractor… retrieval task”; “recall… auditory information previously associated”; recognition decision: “determine… which auditory speech stimulus corresponded to the speaker’s face frame”.\n- Candidate B: Perception\n  - Evidence: encoding task includes “audiovisual synchrony detection” (a perceptual judgment).\n- Head-to-head: Memory wins because the paradigm is explicitly organized around memory encoding and retrieval with instructed recall and a recognition choice; perception is a component but not the overarching construct.\n- Final: Memory. Confidence evidence basis: 3+ explicit memory-design quotes."}},"computed_title":"Neocortical and Hippocampal Theta Oscillations Track Audiovisual Integration and Replay of Speech Memories","nchans_counts":[{"val":331,"count":74},{"val":332,"count":54}],"sfreq_counts":[{"val":1000.0,"count":128}],"stats_computed_at":"2026-04-22T23:16:00.311482+00:00","total_duration_s":132290.0,"canonical_name":null,"name_confidence":0.78,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Biau2025"}}
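As an illustration of how the event information embedded in this record might be used, here is a minimal MNE-Python sketch that reads one run of the raw .fif MEG data and maps the trigger values listed in the readme field to named events. The local file path, the subject/run naming, and the stimulus-channel name ("STI101") are assumptions for illustration only, not values taken from the record; the data would first need to be downloaded from s3://openneuro.org/ds006334 (the storage.base above).

```python
import mne

# Placeholder path: subject/run naming is assumed, not taken from this record.
raw = mne.io.read_raw_fif(
    "ds006334/sub-01/meg/sub-01_task-AVspeech_run-01_meg.fif",
    preload=False,
)

# Trigger values as documented in the readme field of this record.
event_id = {
    "audiovisual_movie_onset": 1,
    "visual_cue_onset": 2,
    "speech1_onset": 4,
    "speech2_onset": 8,
    "probe_response_keypress": 16,
    "movie_localiser_onset": 32,
    "sound_localiser_onset": 64,
}

# The stimulus-channel name is a guess; Elekta/MEGIN systems commonly use "STI101".
events = mne.find_events(raw, stim_channel="STI101", shortest_event=1)

# Keep only events whose codes appear in the documented mapping.
known_codes = set(event_id.values())
events = events[[code in known_codes for code in events[:, 2]]]

print(raw.info["sfreq"])              # expected 1000.0 Hz, per sfreq_counts in this record
print(len(events), "recognised events")
```

The event_id dictionary could then be passed, together with the filtered events array, to mne.Epochs to segment the recording around movie onsets, visual cues, or the two speech probes.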