{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a341a","dataset_id":"ds005811","associated_paper_doi":null,"authors":["Guohao Zhang","Ming Zhou","Shuyi Zhen","Shaohua Tang","Zheng Li","Zonglei Zhen"],"bids_version":"1.10.1","contact_info":["Guohao Zhang"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds005811.v1.0.9","datatypes":["eeg"],"demographics":{"subjects_count":19,"ages":[22,21,25,21,22,24,19,21,26,21,21,21,18,20,20,21,22,18,21],"age_min":18,"age_max":26,"age_mean":21.263157894736842,"species":null,"sex_distribution":{"f":8,"m":11},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005811","osf_url":null,"github_url":null,"paper_url":null},"funding":["Beijing Natural Science Foundation (#L247010)","National Natural Science Foundation of China (#62433015, #31771251)","STI 2030 Major Projects of the Ministry of Science and Technology of China (#2021ZD0200407)"],"ingestion_fingerprint":"f8fd3dadc5961bf055376c1ad2534a28c87fea298b4d289857df532b0b0326a7","license":"CC0","n_contributing_labs":null,"name":"NOD-EEG","readme":"# Summary\nThe human brain can rapidly recognize meaningful objects in the natural scenes encountered in everyday life. Neuroimaging with large-scale naturalistic stimuli is increasingly employed to elucidate the neural mechanisms of object recognition in these rich, everyday natural scenes. 
However, most existing large-scale neuroimaging datasets with naturalistic stimuli rely primarily on functional magnetic resonance imaging (fMRI), which provides high spatial resolution to characterize spatial representation patterns but is limited in capturing the temporal dynamics inherent in visual cognitive processing.\nTo address this limitation, we extended our previously collected Natural Object Dataset-fMRI (NOD-fMRI) by collecting both magnetoencephalography (MEG) and electroencephalography (EEG) data from the same subjects while viewing the same set of naturalistic stimuli. As a result, the NOD uniquely integrates three different modalities—fMRI, MEG, and EEG—thus offering promising avenues to examine brain activity induced by naturalistic stimuli with both high spatial and high temporal resolution. Additionally, the NOD encompasses a diverse array of naturalistic stimuli and a broader subject pool, enabling researchers to explore differences in neural activation patterns across both stimuli and subjects.\nWe anticipate that the NOD dataset will serve as a valuable resource for advancing our understanding of the cognitive and neural mechanisms underlying object recognition.\nThe corresponding MEG data are available under accession number `ds005810`.\n---\n# Data Records\n## Directory Structure\nThe raw data from each subject are stored in the `sub-subID` directory, while preprocessed data and epoch data are stored in the following directories:\n- **Preprocessed Data:** `derivatives/preprocessed/raw`\n- **Epoch Data:** `derivatives/preprocessed/epochs`\n### Stimulus Images\nThe stimulus images used for MEG and EEG are identical and are stored in the `stimuli/ImageNet` directory. 
Images within this folder are named following the `synsetID_imageID.JPEG` convention, where:\n- `synsetID` is the ILSVRC category identifier.\n- `imageID` is the unique number for the image within that category.\nThe image metadata, including category information, is available in the table files under the `stimuli/metadata` directory.\n### Raw Data\nRaw EEG data are stored in BIDS format. Each subject's directory contains multiple session folders, designated as `ses-sesID`. Comprehensive trial information for each subject is documented in the file `derivatives/detailed_events/sub-subID_events.csv`, where each row corresponds to a trial, and each column contains metadata for that trial, including the session and run number, category information of the stimuli, and the subject's response.\n### Preprocessed Data\nThe continuous time series of the preprocessed data are archived in the `derivatives/raw` directory, named as: `sub-subID_ses-sesID_task-ImageNet_run-runID_eeg_clean.fif`. The epoch data derived from the preprocessed data are stored within the `derivatives/epochs` directory. In this directory, all data for each subject are concatenated into a single file, labeled as `sub-subID_epo.fif`.\nThe trial information for each subject's epochs can be accessed via the epochs' metadata, which is aligned with the content of the subject's `sub-subID_events.csv` file.\n---\n# References\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. *Journal of Open Source Software, 4*(44), 1896. 
[https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896)","recording_modality":["eeg"],"senior_author":"Zonglei Zhen","sessions":["ImageNet01","ImageNet02","ImageNet03","ImageNet04"],"size_bytes":17432462486,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["ImageNet"],"timestamps":{"digested_at":"2026-04-22T12:28:53.690674+00:00","dataset_created_at":"2025-01-10T16:03:00.727Z","dataset_modified_at":"2026-03-13T20:53:48.000Z"},"total_files":448,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005811","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"74a7ebdaa69c5743","model":"openai/gpt-5.2","tagged_at":"2026-01-20T18:39:26.101244+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.6,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Closest few-shot by paradigm/modality is the schizophrenia visual discrimination dataset (Meta-rdk), labeled as Modality=Visual and Type=Perception, because it involves visual stimuli and perceptual judgments. Although the current dataset uses naturalistic ImageNet images (not motion dots), the few-shot convention indicates that visual object/stimulus recognition paradigms map to Visual + Perception rather than Motor/Attention/Decision-making. 
The Parkinson’s cross-modal oddball example indicates that Modality should follow the stimulus channel (visual/auditory), not the response modality.","metadata_analysis":"Key quoted metadata indicating visual natural-image viewing and object recognition focus:\n1) \"collecting both magnetoencephalography (MEG) and electroencephalography (EEG) data from the same subjects while viewing the same set of naturalistic stimuli\"\n2) \"We anticipate that the NOD dataset will serve as a valuable resource for advancing our understanding of the cognitive and neural mechanisms underlying object recognition.\"\n3) \"The stimulus images used for MEG and EEG are identical and are stored in the `stimuli/ImageNet` directory.\"\n4) \"Images within this folder are named in the `synsetID_imageID.JPEG`\" (ImageNet-style labeled images)\nNo explicit mention of a clinical recruitment criterion/diagnosis is present in the provided README snippet.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: only \"same subjects\" / \"from the same subjects\" with no diagnosis or patient group specified.\n- Few-shot pattern suggests: for typical cognitive neuroscience viewing tasks without disorder mention, label as Healthy.\n- Alignment: PARTIAL (metadata is silent; few-shot provides convention). No conflict.\n\nModality:\n- Metadata says: \"while viewing ... 
naturalistic stimuli\" and explicitly references \"stimulus images\" in \"stimuli/ImageNet\".\n- Few-shot pattern suggests: visual stimuli -> Modality=Visual.\n- Alignment: ALIGN.\n\nType:\n- Metadata says: focus on \"object recognition\" and \"naturalistic stimuli\" viewing.\n- Few-shot pattern suggests: perceptual/recognition of sensory stimuli -> Type=Perception.\n- Alignment: ALIGN.","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n- Candidate 1: Healthy — supported by absence of any disorder terms and general cognitive/object-recognition framing (\"same subjects\"; no patient group mentioned).\n- Candidate 2: Unknown — plausible because the provided snippet lacks explicit participant health description.\nWinner: Healthy (dataset purpose is normative visual cognition; no clinical recruitment mentioned). Confidence constrained because metadata does not explicitly state \"healthy\".\n\nModality:\n- Candidate 1: Visual — supported by \"viewing ... naturalistic stimuli\"; \"stimulus images ... stimuli/ImageNet\"; images are JPEGs with synset/category metadata.\n- Candidate 2: Multisensory — unlikely; no auditory/tactile channels described.\nWinner: Visual. High confidence due to multiple explicit image/viewing statements.\n\nType:\n- Candidate 1: Perception — supported by explicit aim: \"object recognition\" using natural images; aligns with visual perception conventions in few-shot.\n- Candidate 2: Attention — less supported; no explicit manipulation of attention/cueing reported.\nWinner: Perception. 
Confidence high but not maximal since task demands (e.g., categorization vs passive viewing) are not fully specified in the snippet."}},"computed_title":"NOD-EEG","nchans_counts":[{"val":64,"count":440},{"val":66,"count":8}],"sfreq_counts":[{"val":500.0,"count":288},{"val":1000.0,"count":160}],"stats_computed_at":"2026-04-22T23:16:00.310941+00:00","total_duration_s":85327.92,"canonical_name":null,"name_confidence":0.78,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Zhang2025_EEG"}}