{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3419","dataset_id":"ds005810","associated_paper_doi":null,"authors":["Guohao Zhang","Ming Zhou","Shuyi Zhen","Shaohua Tang","Zheng Li","Zonglei Zhen"],"bids_version":"1.10.1","contact_info":["Guohao Zhang"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds005810.v2.0.0","datatypes":["meg"],"demographics":{"subjects_count":31,"ages":[22,21,25,21,22,24,19,21,26,21,21,21,18,20,22,20,20,20,24,24,23,19,18,20,22,21,22,21,18,21],"age_min":18,"age_max":26,"age_mean":21.233333333333334,"species":null,"sex_distribution":{"f":17,"m":13},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005810","osf_url":null,"github_url":null,"paper_url":null},"funding":["Beijing Natural Science Foundation (#L247010)","National Natural Science Foundation of China (#62433015, #31771251)","STI 2030 Major Projects of the Ministry of Science and Technology of China (#2021ZD0200407)"],"ingestion_fingerprint":"149edb09415ba0265d2654504205d15270e3b94b83740cf1fc45c6fa1872f50f","license":"CC0","n_contributing_labs":null,"name":"NOD-MEG","readme":"# Summary\nThe human brain can rapidly recognize meaningful objects from natural scenes encountered in everyday life. Neuroimaging with large-scale naturalistic stimuli is increasingly employed to elucidate the neural mechanisms of object recognition in these rich, everyday natural scenes. 
However, most existing large-scale neuroimaging datasets with naturalistic stimuli primarily rely on functional magnetic resonance imaging (fMRI), which provides high spatial resolution to characterize spatial representation patterns but is limited in capturing the temporal dynamics inherent in visual cognitive processing.\nTo address this limitation, we extended our previously collected Natural Object Dataset-fMRI (NOD-fMRI) by collecting both magnetoencephalography (MEG) and electroencephalography (EEG) data from the same subjects while viewing the same set of naturalistic stimuli. As a result, the NOD uniquely integrates three different modalities—fMRI, MEG, and EEG—thus offering promising avenues to examine brain activity induced by naturalistic stimuli with both high spatial and high temporal resolutions. Additionally, the NOD encompasses a diverse array of naturalistic stimuli and a broader subject pool, enabling researchers to explore differences in neural activation patterns across both stimuli and subjects.\nWe anticipate that the NOD dataset will serve as a valuable resource for advancing our understanding of the cognitive and neural mechanisms underlying object recognition.\nThe EEG data's accession number is `ds005811`.\n---\n# Data Records\n## Directory Structure\nThe raw data from each subject are stored in the `sub-subID` directory, while preprocessed data and epoch data are stored in the following directories:\n- **Preprocessed Data:** `derivatives/preprocessed/raw`\n- **Epoch Data:** `derivatives/preprocessed/epochs`\n### Stimulus Images\nThe stimulus images used for MEG and EEG are identical and are stored in the `stimuli/ImageNet` directory. 
Images within this folder are named in the `synsetID_imageID.JPEG` format, where:\n- `synsetID` is the ILSVRC category identifier.\n- `imageID` is the unique number for the image within that category.\nThe image metadata, including category information, is available in the table files under the `stimuli/metadata` directory.\n### Raw Data\nRaw MEG data are stored in BIDS format. Each subject's directory contains multiple session folders, designated as `ses-sesID`. Comprehensive trial information for each subject is documented in the file `derivatives/detailed_events/sub-subID_events.csv`, where each row corresponds to a trial and each column contains metadata for that trial, including the session and run number, category information of the stimuli, and subject response.\n### Preprocessed Data\nThe full time-series preprocessed data are archived in the `derivatives/raw` directory, named as `sub-subID_ses-sesID_task-ImageNet_run-runID_meg_clean.fif`. The epoch data derived from the preprocessed data are stored in the `derivatives/epochs` directory, where all data for each subject are concatenated into a single file, labeled `sub-subID_epo.fif`.\nThe trial information for each subject's epochs can be accessed via the epochs' metadata, which is aligned with the content of the subject's `sub-subID_events.csv` file.\n---\n# References\nNiso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. *Scientific Data, 5*, 180110. 
[https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110)","recording_modality":["meg"],"senior_author":"Zonglei Zhen","sessions":["20210413","20210428","20210512","20210513","20210519","20210520","20210526","20210531","20210602","20210603","20210607","20210610","20210616","20210617","20210619","20210621","20210626","20211108","20211111","20211114","ImageNet01","ImageNet02","ImageNet03","ImageNet04"],"size_bytes":191777285350,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["ImageNet","noise"],"timestamps":{"digested_at":"2026-04-22T12:28:46.011956+00:00","dataset_created_at":"2025-01-10T11:07:26.509Z","dataset_modified_at":"2026-03-12T05:26:51.000Z"},"total_files":305,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005810","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"786cbde964b5fd5a","model":"openai/gpt-5.2","tagged_at":"2026-01-20T18:38:59.749910+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.6,"modality":0.85,"type":0.75},"reasoning":{"few_shot_analysis":"Most similar few-shot convention is the schizophrenia example (Meta-rdk), which labels a visually presented discrimination/object-related task as Modality=Visual and Type=Perception. Although the population differs (clinical vs not specified here), the mapping from visual stimulus viewing/discrimination to Perception guides selecting Type=Perception for an object-recognition/scene-image viewing paradigm. The auditory ABR example further supports the convention that stimulus class (sounds vs images) determines Modality, and that sensory encoding/recognition questions map to Perception rather than Motor or Clinical/Intervention.","metadata_analysis":"Key stimulus/task facts indicate a visual naturalistic image-viewing/object-recognition paradigm: (1) \"collecting both ... 
EEG data from the same subjects while viewing the same set of naturalistic stimuli\" (2) \"We anticipate that the NOD dataset will serve as a valuable resource for advancing our understanding of the cognitive and neural mechanisms underlying object recognition.\" (3) \"The stimulus images used for MEG and EEG are identical and are stored in the `stimuli/ImageNet` directory.\" There is no explicit clinical recruitment/diagnosis statement in the provided metadata; it describes a general-purpose cognitive neuroscience dataset integrating modalities (fMRI/MEG/EEG) rather than a disorder-focused cohort.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says nothing explicit about diagnosis/clinical recruitment (e.g., only \"same subjects\" / no disorder terms). Few-shot pattern suggests that when no disorder is described and the dataset is basic cognitive neuroscience, it is typically labeled Healthy. ALIGN (no conflict), but pathology remains an inference due to missing explicit participant health statement.\n\nModality: Metadata explicitly says subjects were \"viewing\" \"stimulus images\" from \"ImageNet\". Few-shot convention labels image-viewing/discrimination as Visual. ALIGN.\n\nType: Metadata emphasizes \"object recognition\" and \"naturalistic stimuli\" viewing to study \"visual cognitive processing\". Few-shot convention maps sensory recognition/discrimination paradigms to Perception rather than Decision-making/Motor/Resting-state. ALIGN.","decision_summary":"Pathology top-2: (1) Healthy—supported by absence of any disorder focus and general neurotypical object-recognition framing (no clinical terms anywhere in the README); (2) Unknown—because there is no explicit quote stating participants are healthy/controls. Winner: Healthy (convention for normative cognitive datasets), but with moderate confidence due to lack of explicit recruitment text.\n\nModality top-2: (1) Visual—\"viewing ... 
naturalistic stimuli\" and \"stimulus images ... ImageNet\"; (2) Multisensory—only weakly plausible because the dataset includes multiple recording modalities (fMRI/MEG/EEG), but that is not stimulus modality. Winner: Visual with high confidence.\n\nType top-2: (1) Perception—explicit focus on \"object recognition\" from \"natural scenes\"; (2) Attention—could be relevant in naturalistic viewing, but not stated as the primary construct. Winner: Perception with fairly high confidence."}},"computed_title":"NOD-MEG","nchans_counts":[{"val":409,"count":285},{"val":378,"count":20}],"sfreq_counts":[{"val":1200.0,"count":305}],"stats_computed_at":"2026-04-22T23:16:00.310924+00:00","total_duration_s":91882.4125,"canonical_name":null,"name_confidence":0.66,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Zhang2025_MEG"}}