{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c46","dataset_id":"ds007353","associated_paper_doi":null,"authors":["Guohao Zhang","Sai Ma","Ming Zhou","Shaohua Tang","Shuyi Zhen","Zheng Li","Zonglei Zhen"],"bids_version":"1.10.1","contact_info":["Guohao Zhang"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds007353.v1.0.0","datatypes":["eeg","meg"],"demographics":{"subjects_count":32,"ages":[21,24,21,22,21,21,21,19,18,24,24,20,22,22,25,20,24,24,22,19,23,25,21,24,20,24,29,22,23,20,31],"age_min":18,"age_max":31,"age_mean":22.451612903225808,"species":null,"sex_distribution":{"m":14,"f":17},"handedness_distribution":{"r":31}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds007353","osf_url":null,"github_url":null,"paper_url":null},"funding":["Beijing Natural Science Foundation (#L247010)","National Natural Science Foundation of China (#62433015, #31771251)","STI 2030-436 Major Projects of the Ministry of Science and Technology of China437 (#2021ZD0200407)"],"ingestion_fingerprint":"35fdd6b1922e0db21d7b3baa7d0b7b19a025586f3275abf5daa668060b70e1df","license":"CC0","n_contributing_labs":null,"name":"HAD-MEEG","readme":"Human action recognition is a core component of social cognition, engaging spatially distributed and temporally evolving neural responses that encode visual information and infer intention. To map the brain’s spatial organization supporting this process, we previously released the Human Action Dataset (HAD), a functional magnetic resonance imaging (fMRI) resource. However, fMRI’s limited temporal resolution constrains its ability to capture rapid neural dynamics. Here, we present the HAD-MEEG dataset, which extends HAD-fMRI, leveraging the millisecond-level temporal resolution of magnetoencephalography (MEG) and electroencephalography (EEG). HAD-MEEG were recorded in the same participants and with the same stimuli as HAD-fMRI, in which 30 participants viewed 21,600 video clips spanning 180 categories of human action. 
By integrating the temporal precision of M/EEG with the spatial precision of fMRI, HAD enables comprehensive spatiotemporal investigation of the neural mechanisms underlying human action recognition.","recording_modality":["eeg","meg"],"senior_author":"Zonglei Zhen","sessions":["20211104","20211108","20211109","20211110","20211111","20211112","20211114","20211117","20211118","eeg","meg"],"size_bytes":193902192538,"source":"openneuro","storage":{"backend":"s3","base":"s3://openneuro.org/ds007353","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","datacite.yml","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["action","noise"],"timestamps":{"digested_at":"2026-04-22T12:30:11.115287+00:00","dataset_created_at":"2026-02-02T07:11:52.115Z","dataset_modified_at":"2026-02-03T07:31:39.000Z"},"total_files":473,"computed_title":"HAD-MEEG","nchans_counts":[{"val":409,"count":240},{"val":64,"count":224},{"val":378,"count":9}],"sfreq_counts":[{"val":1200.0,"count":249},{"val":1000.0,"count":224}],"stats_computed_at":"2026-04-22T23:16:00.312731+00:00","total_duration_s":161375.6625,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"19b7f4929adc4289","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.7,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot convention is the visual discrimination/perception example (Meta-rdk) labeled as Modality=Visual and Type=Perception, where participants view visual stimuli and the scientific aim is characterizing perceptual processing. HAD-MEEG similarly uses visually presented stimuli (video clips) to study recognition/encoding of visual action information, which by convention maps to Visual + Perception rather than Motor (no movement execution/imagery) or Resting-state (task present).","metadata_analysis":"Key population/task/stimulus facts from metadata:\n- Visual stimulus and paradigm: \"30 participants viewed 21,600 video clips spanning 180 categories of human action.\" \n- Visual-processing focus: \"engaging ... neural responses that encode visual information and infer intention.\" \n- Dataset purpose: \"HAD-MEEG dataset ... leveraging ... 
MEG and EEG\" and \"investigation of the neural mechanisms underlying human action recognition.\" \n- Participants: \"30 participants\" (readme) and participants_overview shows typical adult demographics: \"Age range: 18-31\" with no mention of a clinical diagnosis/recruitment.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: participants are described only as \"30 participants\" with \"Age range: 18-31\" and no clinical recruitment/diagnosis stated.\n- Few-shot pattern suggests: when no disorder is described and participants are typical adults, label as Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"viewed 21,600 video clips\" and the study concerns \"visual information\".\n- Few-shot pattern suggests: video/dots/images presented visually map to Modality=Visual.\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"Human action recognition\" and neural responses that \"encode visual information\" while viewing action videos.\n- Few-shot pattern suggests: sensory recognition/discrimination of presented stimuli maps to Type=Perception.\n- Alignment: ALIGN (even though action recognition relates to social cognition, the task is fundamentally visual perception/recognition of actions).","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n1) Healthy (selected): No clinical group mentioned; only general-participant descriptors: \"30 participants\" and \"Age range: 18-31\".\n2) Unknown: could be chosen if recruitment health status were unspecified, but the dataset framing implies a typical non-clinical cohort.\nAlignment: aligns with few-shot Healthy conventions.\n\nModality:\n1) Visual (selected): \"viewed ... video clips\"; \"encode visual information\"; \"video clips spanning ... human action\".\n2) Multisensory: videos could in principle include audio, but audio is not mentioned; emphasis is explicitly visual.\nAlignment: aligns with few-shot visual-stimulus conventions.\n\nType:\n1) Perception (selected): primary aim is action recognition from viewed stimuli and visual encoding (perceptual recognition).\n2) Other: could be argued as social cognition (intention inference), but allowed labels lack a specific social-cognition type and the dominant construct is perceptual recognition.\nAlignment: aligns with few-shot perception-task conventions.\n\nConfidence notes (evidence count): Modality has 3 explicit supporting phrases (video viewing + visual information focus). Type has 2 explicit supporting phrases (action recognition + encode visual information). Pathology lacks an explicit 'healthy' statement but has absence of diagnosis plus adult demographics."}},"canonical_name":null,"name_confidence":0.78,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Zhang2026"}}