{"success":true,"database":"eegdash","data":{"_id":"6953f4239276ef1ee07a32de","dataset_id":"ds003633","associated_paper_doi":null,"authors":["Xingyu Liu","Yuxuan Dai","Hailun Xie","Zonglei Zhen"],"bids_version":"1.4.0","contact_info":["Xingyu Liu"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds003633.v1.0.3","datatypes":["meg"],"demographics":{"subjects_count":12,"ages":[21,19,22,22,23,21,24,22,20,23,25],"age_min":19,"age_max":25,"age_mean":22.0,"species":null,"sex_distribution":{"f":6,"m":5},"handedness_distribution":{"r":11}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds003633","osf_url":null,"github_url":null,"paper_url":null},"funding":["National Key R&D Program of China (Grant No. 2019YFA0709503)","National Natural Science Foundation of China (Grant No. 31771251)"],"ingestion_fingerprint":"077d928e98ba3e9c4c70c1704cfded878ff8eff857c090062ea22212ddfa5ba3","license":"CC0","n_contributing_labs":null,"name":" ForrestGump-MEG","readme":"**ForrestGump-MEG: A audio-visual movie watching MEG dataset**\nFor details please refer to our paper on https://www.biorxiv.org/content/10.1101/2021.06.04.446837v1.\nThis dataset contains MEG data recorded from 11 subjects while watching the 2h long Chinese-dubbed audio-visual movie 'Forrest Gump'. The data were acquired with a 275-channel CTF MEG. Auxiliary data (T1w) as well as derivation data such as preprocessed data and MEG-MRI co-registration are also included.\n**Pre-process procedure description**\nThe T1w images stored as NIFTI files were minimally-preprocessed using the anatomical preprocessing pipeline from fMRIPrep with default settings.\nMEG data were pre-processed using MNE following a three-step procedure: 1. bad channels were detected and removed. 2. a high-pass filter of 1 Hz was applied to remove possible slow drifts from the continuous MEG data. 3. artifacts removal was performed with ICA.\n**Stimulus material**\nThe audio-visual stimulus materials were from the Chinese-dubbed 'Forrest Gump' DVD released in 2013 (ISBN: 978-7-7991-3934-0), which cannot be publicly released due to copyright restrictions. 
The stimulus materials are available upon reasonable request, on condition of a research-only data use agreement (correspondence: Xingyu Liu, liuxingyu987@foxmail.com).\n**Dataset content overview**\nThe data were organized following MEG-BIDS using the MNE-BIDS toolbox.\n*The pre-processed MEG data*\nThe preprocessed MEG recordings, including the preprocessed MEG data, the event files, the ICA decomposition and label files, and the MEG-MRI coordinate transformation file, are hosted here:\n\t|---./derivatives/preproc_meg-mne_mri-fmriprep/sub-xx/ses-movie/meg/\n\t\t|---sub-xx_ses-movie_coordsystem.json\n\t\t|---sub-xx_ses-movie_task-movie_run-xx_channels.tsv\n\t\t|---sub-xx_ses-movie_task-movie_run-xx_decomposition.tsv\n\t\t|---sub-xx_ses-movie_task-movie_run-xx_events.tsv\n\t\t|---sub-xx_ses-movie_task-movie_run-xx_ica.fif.gz\n\t\t|---sub-xx_ses-movie_task-movie_run-xx_meg.fif\n\t\t|---sub-xx_ses-movie_task-movie_run-xx_meg.json\n\t\t|---...\n\t\t|---sub-xx_ses-movie_task-movie_trans.fif\n*The pre-processed MRI data*\nThe preprocessed MRI volumes, reconstructed surfaces, and other associated files, including transformation files, are hosted here:\n\t|---./derivatives/preproc_meg-mne_mri-fmriprep/sub-xx/ses-movie/anat/\n\t\t|---sub-xx_ses-movie_desc-preproc_T1w.nii.gz\n\t\t|---sub-xx_ses-movie_hemi-L_inflated.surf.gii\n\t\t|---sub-xx_ses-movie_hemi-L_midthickness.surf.gii\n\t\t|---sub-xx_ses-movie_hemi-L_pial.surf.gii\n\t\t|---sub-xx_ses-movie_hemi-L_smoothwm.surf.gii\n\t\t|---sub-xx_ses-movie_hemi-R_inflated.surf.gii\n\t\t|---sub-xx_ses-movie_hemi-R_midthickness.surf.gii\n\t\t|---sub-xx_ses-movie_hemi-R_pial.surf.gii\n\t\t|---sub-xx_ses-movie_hemi-R_smoothwm.surf.gii\n\t\t|---sub-xx_ses-movie_space-MNI152NLin2009cAsym_desc-preproc_T1w.nii.gz\n\t\t|---sub-xx_ses-movie_space-MNI152NLin6Asym_desc-preproc_T1w.nii.gz\n\t\t|---...\nThe FreeSurfer surface data, the high-resolution head surface, and the MRI fiducials are provided here:\n\t|---./derivatives/preproc_meg-mne_mri-fmriprep/sourcedata/\n\t\t|---freesurfer\n\t\t    |---sub-xx\n\t\t    |---...\n*The raw data*\n\t|---./sub-xx/ses-movie/\n\t\t|---meg/\n\t\t|\t|---sub-xx_ses-movie_coordsystem.json\n\t\t|\t|---sub-xx_ses-movie_task-movie_run-xx_channels.tsv\n\t\t|\t|---sub-xx_ses-movie_task-movie_run-xx_events.tsv\n\t\t|\t|---sub-xx_ses-movie_task-movie_run-xx_meg.ds\n\t\t|\t|---sub-xx_ses-movie_task-movie_run-xx_meg.json\n\t\t|\t|---...\n\t\t|---anat/\n\t\t\t|---sub-xx_ses-movie_T1w.json\n\t\t\t|---sub-xx_ses-movie_T1w.nii.gz","recording_modality":["meg"],"senior_author":"Zonglei Zhen","sessions":["20190429","20190604","20190606","20190610","20190611","20190612","20190613","movie"],"size_bytes":78926559646,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["movie","noise"],"timestamps":{"digested_at":"2026-04-22T12:25:36.519004+00:00","dataset_created_at":"2021-04-24T03:09:05.237Z","dataset_modified_at":"2022-12-29T03:52:29.000Z"},"total_files":96,"storage":{"backend":"s3","base":"s3://openneuro.org/ds003633","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"0328090baee1ebe9","model":"openai/gpt-5.2","tagged_at":"2026-01-20T10:17:45.887521+00:00"},"tags":{"pathology":["Healthy"],"modality":["Multisensory"],"type":["Perception"],"confidence":{"pathology":0.6,"modality":0.9,"type":0.7},"reasoning":{"few_shot_analysis":"The most similar few-shot convention is the “Cross-modal Oddball Task” example, which maps
simultaneous auditory+visual input to the Modality label “Multisensory”. While that example involves a clinical cohort and an oddball paradigm, it establishes the convention that cross-modal stimulation should be labeled Multisensory. For Type, the schizophrenia visual discrimination example maps explicit sensory discrimination to “Perception”; here the task is passive/naturalistic sensory processing (movie watching) rather than learning/memory/motor, so Perception is the closest conventionally used Type label among the allowed set.","metadata_analysis":"Key stimulus/task facts from the README: (1) naturalistic movie watching: \"MEG data recorded from 11 subjects while watching the 2-hour-long Chinese-dubbed audio-visual movie 'Forrest Gump'.\" (2) explicitly audio-visual: \"ForrestGump-MEG: An audio-visual movie watching MEG dataset\" and \"The audio-visual stimulus materials were from the Chinese-dubbed 'Forrest Gump' DVD\". Population/pathology is not stated beyond \"11 subjects\" (no diagnosis terms present).","paper_abstract_analysis":"No useful paper information (only a bioRxiv link is provided; the abstract text is not included here).","evidence_alignment_check":"Pathology: Metadata says only \"11 subjects\", with no clinical recruitment or diagnosis described; few-shot patterns would label non-clinical cohorts as Healthy when no disorder is mentioned. ALIGN (no conflict).\nModality: Metadata explicitly says \"audio-visual movie\" and \"audio-visual stimulus materials\"; the few-shot convention (cross-modal oddball) suggests Multisensory for combined auditory+visual input. ALIGN.\nType: Metadata says participants are \"watching\" a long movie (passive naturalistic stimulus processing). Few-shot conventions map sensory-stimulus-focused paradigms to Perception; there is no indication that the primary aim is memory, learning, decision-making, motor, sleep, or resting-state. ALIGN (no conflict).","decision_summary":"Top-2 candidates per category:\n\nPathology:\n1) Healthy — Evidence: no disorder/diagnosis terms; \"MEG data recorded from 11 subjects\" implies a typical non-clinical sample.\n2) Unknown — Counter-evidence: the metadata never explicitly says “healthy controls”.\nHead-to-head: Healthy is more consistent with EEGDash convention when no clinical recruitment is described. (Alignment: aligned)\n\nModality:\n1) Multisensory — Evidence: \"audio-visual movie watching\"; \"audio-visual stimulus materials\"; \"Chinese-dubbed audio-visual movie\".\n2) Visual — Runner-up because the movie includes strong visual input, but audio is clearly present and central.\nHead-to-head: Multisensory wins due to the explicitly audiovisual stimulus. (Alignment: aligned)\n\nType:\n1) Perception — Evidence: passive viewing/listening to an audiovisual movie: \"watching the 2-hour-long ... audio-visual movie\"; a stimulus-focused dataset description; no explicit higher-level task construct (learning/memory/decision).\n2) Other — Runner-up because ‘naturalistic movie’ studies can target multiple constructs, but none are specified.\nHead-to-head: Perception is the best match given the stimulus-driven paradigm and the available labels.
(Alignment: aligned)\n\nConfidence justification:\n- Modality is high due to three explicit audio-visual quotes.\n- Pathology is lower because there is no explicit “healthy” statement, only the absence of a diagnosis.\n- Type is moderate: movie watching clearly implies sensory/perceptual processing, but the primary research construct is not explicitly stated."}},"nemar_citation_count":1,"computed_title":"ForrestGump-MEG","nchans_counts":[{"val":409,"count":89},{"val":378,"count":7}],"sfreq_counts":[{"val":600.0,"count":89},{"val":1200.0,"count":7}],"stats_computed_at":"2026-04-22T23:16:00.222295+00:00","total_duration_s":79121.73916666667,"canonical_name":null,"name_confidence":0.42,"name_meta":{"suggested_at":"2026-04-14T10:18:35.342Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Liu2021"}}
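
The record's "storage" block points at the public OpenNeuro S3 bucket (s3://openneuro.org/ds003633). A minimal sketch of anonymous access with boto3, assuming the bucket permits unsigned reads (OpenNeuro's usual access pattern); the bucket/prefix split and the raw_key come from the "storage" fields above:

```python
# Sketch: anonymous browsing/fetching from the public OpenNeuro S3 bucket.
# Assumes boto3 is installed and the bucket allows unsigned reads.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

BUCKET = "openneuro.org"   # from storage.base: s3://openneuro.org/ds003633
PREFIX = "ds003633/"

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List the first page of objects under the dataset prefix as a sanity check.
page = next(iter(s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX)))
for obj in page.get("Contents", [])[:10]:
    print(obj["Key"], obj["Size"])

# Fetch the record's raw_key (dataset_description.json).
s3.download_file(BUCKET, PREFIX + "dataset_description.json",
                 "dataset_description.json")
```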
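
Since the README states the data follow MEG-BIDS and were organized with the MNE-BIDS toolbox, a local copy can be read back through the same layout. A sketch assuming the dataset has been synced to ./ds003633 and that mne and mne-bids are installed; subject "01" and run "01" are hypothetical stand-ins for the sub-xx/run-xx placeholders in the file tree above:

```python
# Sketch: read one raw run back via the BIDS entities shown in the README tree.
from mne_bids import BIDSPath, read_raw_bids

bids_path = BIDSPath(
    root="ds003633",
    subject="01",      # hypothetical; the README only shows sub-xx
    session="movie",
    task="movie",
    run="01",          # hypothetical; the README only shows run-xx
    suffix="meg",
    datatype="meg",
)
raw = read_raw_bids(bids_path)               # raw CTF recording (*_meg.ds)
print(raw.info["sfreq"], len(raw.ch_names))  # record reports 600/1200 Hz, 378/409 channels
```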
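
The three-step MEG preprocessing the README describes (bad-channel removal, 1 Hz high-pass, ICA) maps onto standard MNE-Python calls. The following is a sketch, not the authors' actual pipeline: the bad-channel name and excluded ICA components are hypothetical placeholders, whereas the real selections ship in the *_decomposition.tsv and *_ica.fif.gz derivatives:

```python
# Sketch of the README's three-step MEG preprocessing in MNE-Python.
# `raw` is a loaded Raw object (see the previous sketch).
import mne

# (1) Remove bad channels (the README does not specify the detection method;
#     a known-bad channel is assumed here).
raw.info["bads"] = ["MLT31-4408"]        # hypothetical CTF channel name
raw.drop_channels(raw.info["bads"])

# (2) High-pass filter at 1 Hz to remove slow drifts.
raw.load_data()
raw.filter(l_freq=1.0, h_freq=None)

# (3) ICA-based artifact removal.
ica = mne.preprocessing.ICA(n_components=20, random_state=97)  # illustrative
ica.fit(raw)
ica.exclude = [0, 1]                     # hypothetical artifact components
ica.apply(raw)
```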