{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a33d8","dataset_id":"ds005346","associated_paper_doi":null,"authors":["Jixing Li","Yike Wang","Chengcheng Wang","Zhengwu Ma"],"bids_version":"1.8.0","contact_info":["Zhengwu Ma","Jixing Li","Chengcheng WANG","Zhengwu Ma","Jixing Li"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds005346.v1.0.5","datatypes":["meg"],"demographics":{"subjects_count":30,"ages":[24,20,22,25,25,24,24,25,21,25,25,23,19,22,22,19,21,24,25,20,23,21,24,24,21,25,24,24,24,30,24,24,27,22,21,22,21,21,24,26,24,21,20,19,21,24,23,25,20,21,22,22,22,24,21,22,22,24,26,25],"age_min":19,"age_max":30,"age_mean":22.916666666666668,"species":null,"sex_distribution":{"f":33,"m":27},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005346","osf_url":null,"github_url":null,"paper_url":null},"funding":[""],"ingestion_fingerprint":"a460b547798b71a3b438a2557e90d54e592653c874858a316c4c2bbbd6672c24","license":"CC0","n_contributing_labs":null,"name":"Naturalistic fMRI and MEG recordings during viewing of a reality TV show","readme":"### Participants\nThirty participants (17 females, mean age=23.17±2.31 years) were recruited for the fMRI experiment at Shanghai International Studies University, Shanghai, China. An additional thirty participants (16 females, mean age=22.67±1.99 years) were recruited from the West China Hospital of Sichuan University, Chengdu, China for MEG experiment. All participants were right-handed, had normal or corrected-to-normal vision, and reported no history of neurological disorders. Before the experiment, all participants provided written informed consent and were compensated for their participation.\nData from 6 participants in the MEG experiment exhibited distinct PSD patterns that diverged from the other 24 participants (10 females, mean age=22.75±1.94 years; see figure below), we excluded their data from the ISC and regression analysis for MEG data. However, all datasets remain available in the OpenNeuro repository for other researchers’ use.\n![Power Spectrum Analysis](https://raw.githubusercontent.com/compneurolinglab/baba/main/psd.png)\n### Experiment Procedure\nThe experimental procedures for both fMRI and MEG experiments were identical. Participants watched the video while inside the scanner. The video was presented via a mirror attached to the head coil in the fMRI and MEG. Audio was delivered through MRI-compatible headphones (Sinorad, Shenzhen, China) during the fMRI experiment and MEG-compatible insert earphones (ComfortBuds 24, Sinorad, Shenzhen, China) during the MEG experiment. Following the video, participants were visually presented with 5 multiple-choice questions on the screen to assess their comprehension and ensure engagement with the stimuli. Participants responded using a button press, with a maximum response time of 10 seconds per question. If no response was recorded within this time, the experiment proceeded to the next question automatically. After the quiz, participants were instructed to close their eyes for 15 minutes without an explicit task. This period allowed for the recording of neural activity, capturing spontaneous mental replay of the video stimulus. The entire experimental procedure lasted approximately 45 minutes per participant.\nThe fMRI experiment was approved by the Ethics Committee of Shanghai Key Laboratory of Brain-Machine Intelligence for Information Behavior (No. 
(No. 2024BC028), and the MEG experiment was approved by the West China Hospital of Sichuan University Biomedical Research Ethics Committee (No. 2024[657]).\n### Stimuli\nThe video stimulus was extracted from the first episode of the Chinese reality TV show “Where Are We Going, Dad? (Season 1)” (openly available at https://www.youtube.com/watch?v=ZgRdRHmYuN8), which originally aired in 2013. The show features unscripted interactions between fathers and their children as they travel to a rural village and engage in daily activities. The selected excerpt has a total duration of 25 minutes and 19 seconds. The original video had a resolution of 640×368 pixels with a frame rate of 15 frames per second. It was presented in full-color (RGB) format, without embedded subtitles or captions.\n### Acquisition\nThe fMRI data were collected in a 3.0 T Siemens Prisma MRI scanner at Shanghai International Studies University, Shanghai. Anatomical scans were obtained using a Magnetization Prepared RApid Gradient-Echo (MP-RAGE) ANDI iPAT2 pulse sequence with T1-weighted contrast (192 single-shot interleaved sagittal slices with A/P phase encoding direction; voxel size=1×1×1 mm; FOV=256 mm; TR=2300 ms; TE=2.98 ms; TI=900 ms; flip angle=9°; acquisition time=6 min; GRAPPA in-plane acceleration factor=2). Functional scans were acquired using T2-weighted echo planar imaging (63 interleaved axial slices with A/P phase encoding direction; voxel size=2.5×2.5×2.5 mm; FOV=220 mm; TR=2000 ms; TE=30 ms; acceleration factor=3; flip angle=60°).\nMEG data were recorded at West China Hospital of Sichuan University in Chengdu, China, using a 64-channel optically pumped magnetometer (OPM) MEG system (Quanmag, Beijing, China). The system consists of 64 single-axis OPM sensors (radial direction, fixed helmet) with a 1000 Hz sampling rate, <20 fT/√Hz sensitivity, and >100 Hz bandwidth. Each sensor (16 × 19 × 66 mm³) contains a 4 × 4 × 4 mm³ rubidium vapor cell and an integrated laser. The sensitive volume is located ~6 mm from the sensor’s outer surface. Sensors were mounted on a rigid, adult-sized helmet providing full-brain coverage. The system was housed in a six-layer magnetically shielded cylinder (1.5 mm permalloy, 10 mm aluminum), with residual magnetic field ≤1 nT and typical system noise of 20–30 fT/√Hz. Participants lay on a scanning bed inserted into the cylinder, wearing air-conduction headphones during the auditory task. Sensor positions were fixed by the helmet geometry, without additional digitization. OPM-MEG is a new type of MEG instrumentation that offers several advantages over conventional MEG systems. These include higher signal sensitivity, improved spatial resolution, and more uniform scalp coverage. Additionally, OPM-MEG allows for greater participant comfort and compliance, supports free movement during scanning, and features lower system complexity, making it a promising tool for more flexible and accessible neuroimaging. The MEG data were sampled at 1,000 Hz and low-pass filtered online at 500 Hz. To facilitate source localization, T1-weighted MRI scans were acquired from the participants using a 3.0 T Siemens TrioTim MRI scanner at West China Hospital of Sichuan University (176 single-shot interleaved sagittal slices with A/P phase encoding direction; voxel size=1×1×1 mm; FOV=256 mm; TR=1900 ms; TE=2.3 ms; TI=900 ms; flip angle=9°; acquisition time=7 min). All participants provided written informed consent outlining the experimental procedures and the data sharing plan prior to participation. They were compensated for their time and contribution.
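Since the recordings are organized in BIDS, a single subject's MEG session can be read with MNE-BIDS. A minimal sketch, assuming a local copy of the dataset (the subject label is a placeholder to be checked against participants.tsv):

```python
# Minimal sketch: load one subject's OPM-MEG recording with MNE-BIDS.
from mne_bids import BIDSPath, read_raw_bids

bids_path = BIDSPath(
    subject="01",              # placeholder subject label
    task="baba",               # tasks in this dataset: baba, question, replay
    datatype="meg",
    root="/path/to/ds005346",  # local copy of the OpenNeuro dataset
)
raw = read_raw_bids(bids_path)
print(raw.info["sfreq"])       # expected: 1000.0 Hz per the metadata
```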
\n### Preprocessing\nAll Digital Imaging and Communications in Medicine (DICOM) files of the raw fMRI data were first converted into the Brain Imaging Data Structure (BIDS) format using dcm2bids (v3.1.1) and subsequently transformed into Neuroimaging Informatics Technology Initiative (NIfTI) format via dcm2niix (v1.0.20220505). Facial features were removed from anatomical images using PyDeface (v2.0.2). Preprocessing was carried out with fMRIPrep (v20.2.0), following standard neuroimaging pipelines. For anatomical images, T1-weighted scans underwent bias field correction, skull stripping, and tissue segmentation into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). These images were then spatially normalized to the Montreal Neurological Institute (MNI) space using the MNI152NLin2009cAsym:res-2 template, ensuring consistent alignment across participants. Functional MRI preprocessing included skull stripping, motion correction, slice-timing correction, and co-registration to the T1-weighted anatomical reference. For each BOLD run, head-motion parameters with respect to the BOLD reference (transformation matrices and six corresponding rotation and translation parameters) were estimated before any spatiotemporal filtering using ‘mcflirt’ (FSL 5.0.9), and slice-timing correction was applied using 3dTshift (AFNI 20160207). Co-registration to the anatomical image was done with flirt using boundary-based registration (6 degrees of freedom). No susceptibility distortion correction was applied. Confound regressors included motion parameters (and their derivatives/quadratics), framewise displacement (FD), DVARS, global signals, and t/aCompCor components computed from white matter and CSF after high-pass filtering (128 s cutoff). Volumes exceeding FD>0.5 mm or standardized DVARS>1.5 were flagged as motion outliers. All transforms were applied in a single interpolation step using antsApplyTransforms with Lanczos interpolation. We further performed spatial smoothing on the preprocessed fMRI data (post-fMRIPrep) using an isotropic Gaussian kernel with an 8 mm FWHM. However, the versions uploaded to OpenNeuro remain unsmoothed so that researchers can choose whether to apply smoothing.\nMEG data preprocessing was conducted using MNE-Python (v1.8.0). We first applied a bandpass filter (1–38 Hz) to remove low-frequency drifts and high-frequency noise. We then identified bad channels through visual inspection and cross-validated them using PyPREP (v0.4.3); these bad channels were interpolated to maintain data integrity. To mitigate physiological artifacts, we performed independent component analysis (ICA) and removed components corresponding to heartbeat and eye movements. The data were then segmented into three task-related epochs corresponding to the video watching, question answering, and post-task replay conditions. Because our paradigm uses naturalistic video viewing rather than discrete event trials, there is no true pre-stimulus baseline period for noise covariance estimation. Instead, we computed the noise covariance from the mean over each full epoch. T1-weighted MRI data were converted to NIfTI format and processed with FreeSurfer (v7.3.2) to reconstruct cortical surfaces and generate boundary element model (BEM) surfaces using a single-layer conductivity of 0.3 S/m. MEG-MRI coregistration was performed with fiducial points and refined via MNE-Python’s graphical interface.
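A minimal MNE-Python sketch of these preprocessing steps (illustrative only: it continues from a `raw` recording loaded as in the sketch under Acquisition, and the bad-channel label and excluded ICA components are placeholders, since the actual selections were made by visual inspection):

```python
# Illustrative sketch of the MEG preprocessing described above (MNE-Python).
# `raw` is a recording loaded as in the earlier MNE-BIDS sketch.
import mne

raw.load_data()
raw.filter(l_freq=1.0, h_freq=38.0)     # bandpass 1-38 Hz

# Bad channels: found visually and cross-checked with PyPREP;
# the label below is a placeholder.
raw.info["bads"] = ["CH012"]
raw.interpolate_bads(reset_bads=True)   # keep the channel count intact

# ICA to remove cardiac and ocular components (indices are placeholders).
ica = mne.preprocessing.ICA(n_components=20, random_state=97)
ica.fit(raw)
ica.exclude = [0, 3]
ica.apply(raw)
```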
A source space (resolution=5 mm) was generated using a fourth-order icosahedral mesh, and a BEM solution was computed to model head conductivity. A forward model was then created based on anatomical MRI and digitized head shape. Noise covariance matrices were estimated from raw MEG recordings, and inverse operators were constructed using minimum norm estimation (SNR=3). Source reconstruction employed dynamic statistical parametric mapping (dSPM) for noise-normalized estimates. Task-related epochs (video watching, question answering, post-task replay) were used to compute source estimates, which were morphed onto the FreeSurfer average brain template for group-level comparisons.","recording_modality":["meg"],"senior_author":"Zhengwu Ma","sessions":[],"size_bytes":41821126308,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["baba","question","replay"],"timestamps":{"digested_at":"2026-04-22T12:27:41.690352+00:00","dataset_created_at":"2024-07-16T14:15:32.746Z","dataset_modified_at":"2025-12-18T13:13:46.000Z"},"total_files":90,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005346","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"d711225b1d52fca4","model":"openai/gpt-5.2","tagged_at":"2026-01-20T17:48:59.199869+00:00"},"tags":{"pathology":["Healthy"],"modality":["Multisensory"],"type":["Memory"],"confidence":{"pathology":0.85,"modality":0.8,"type":0.7},"reasoning":{"few_shot_analysis":"For Modality conventions, the few-shot example \"Cross-modal Oddball Task\" is labeled \"Multisensory\" because it includes concurrent visual and auditory cues; this guides mapping any paradigm with both video/visual input and audio delivery to \"Multisensory\" rather than choosing only the dominant (e.g., Visual). For Type conventions, the few-shot examples labeled \"Resting-state\" use explicit passive rest/eyes open/closed with no task (e.g., \"A Resting-state EEG Dataset for Sleep Deprivation\"), while memory-focused datasets explicitly involve memorization/recall demands (e.g., \"digit span task\" labeled \"Memory\"). This dataset contains both naturalistic stimulus viewing and a long eyes-closed period explicitly framed as replay, making \"Memory\" vs \"Resting-state\" the key type decision.","metadata_analysis":"Healthy/non-clinical population is explicit: \"reported no history of neurological disorders\" and participants are standard volunteers (right-handed, normal/corrected vision). Stimulus/input channels are explicit: \"Participants watched the video\" and \"Audio was delivered through MRI-compatible headphones ... and MEG-compatible insert earphones\". The post-stimulus period is explicitly replay-oriented: \"participants were instructed to close their eyes for 15 minutes without an explicit task\" and \"capturing spontaneous mental replay of the video stimulus.\" The procedure is described as three epochs: \"video watching, question answering, and post-task replay conditions.\"","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says participants are non-clinical (\"reported no history of neurological disorders\"); few-shot pattern suggests \"Healthy\" for normative cohorts. ALIGN.\nModality: Metadata says both visual video viewing (\"watched the video\") and auditory presentation (\"Audio was delivered through ... 
headphones/earphones\"); few-shot pattern (cross-modal task) suggests labeling combined audio+visual stimuli as \"Multisensory\". ALIGN.\nType: Metadata says there is a long passive eyes-closed period (\"close their eyes for 15 minutes without an explicit task\") which aligns with few-shot \"Resting-state\" conventions; but metadata also explicitly frames this as replay/memory (\"capturing spontaneous mental replay\") and defines a \"post-task replay\" epoch, which aligns more with a memory/replay construct. PARTIAL CONFLICT between two plausible few-shot conventions; metadata provides the tie-breaker by explicitly naming replay (memory-related), so Memory wins.","decision_summary":"Pathology top-2: (1) Healthy — supported by \"reported no history of neurological disorders\" and standard volunteer demographics; (2) Unknown — would apply if recruitment health status were unstated. Final: Healthy (clear explicit statement). Confidence 0.85 based on direct recruitment/health-status quote.\nModality top-2: (1) Multisensory — supported by \"watched the video\" plus \"Audio was delivered...\"; (2) Visual — if treating video as dominant and ignoring audio. Final: Multisensory (explicit dual-channel stimulus; aligns with cross-modal few-shot convention). Confidence 0.8 (two strong explicit quotes).\nType top-2: (1) Memory — supported by \"capturing spontaneous mental replay of the video stimulus\" and \"post-task replay conditions\"; (2) Resting-state — supported by \"close their eyes for 15 minutes without an explicit task.\" Final: Memory because replay is explicitly the intended cognitive construct of the resting period, not merely generic rest. Confidence 0.7 (explicit replay phrasing, but mixed design includes true resting-like period)."}},"computed_title":"Naturalistic fMRI and MEG recordings during viewing of a reality TV show","nchans_counts":[{"val":66,"count":72},{"val":65,"count":18}],"sfreq_counts":[{"val":1000.0,"count":90}],"stats_computed_at":"2026-04-22T23:16:00.309381+00:00","total_duration_s":73293.827,"author_year":"Li2024_Naturalistic_fMRI_viewing","canonical_name":null}}
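A short sketch for consuming this record, assuming the response above has been saved as valid JSON (the filename is hypothetical); it recomputes the demographic summary fields from the raw ages array:

```python
# Sketch: recompute the demographic summary from the record's "ages" array.
import json
import statistics

with open("ds005346_record.json") as f:  # hypothetical filename
    data = json.load(f)["data"]

ages = data["demographics"]["ages"]
print(len(ages))                 # 60 (fMRI + MEG cohorts of 30 each)
print(min(ages), max(ages))      # 19 30 -> matches age_min / age_max
print(statistics.mean(ages))     # 22.9166... -> matches age_mean
```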