{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3329","dataset_id":"ds004212","associated_paper_doi":null,"authors":["Martin N. Hebart","Oliver Contier","Lina Teichmann","Adam H. Rockter","Charles Zheng","Alexis Kidder","Anna Corriveau","Maryam Vaziri-Pashkam","Chris I. Baker"],"bids_version":"1.21","contact_info":["Lina Teichmann","Oliver Contier"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds004212.v3.0.0","datatypes":["meg"],"demographics":{"subjects_count":5,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":{"m":2,"f":2,"o":1},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds004212","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"e711f3154ac8aad1938e8c11add2a808302294d7998975fbbee83b6a41015d24","license":"CC0","n_contributing_labs":null,"name":"THINGS-MEG","readme":"# THINGS-MEG\nUnderstanding object representations visual and semantic processing of objects requires a broad, comprehensive sampling of the objects in our visual world\nwith dense measurements of brain activity and behavior. This densely sampled fMRI dataset is part of THINGS-data, a multimodal collection\nof large-scale datasets comprising functional MRI, magnetoencephalographic recordings, and 4.70 million behavioral judgments in response to thousands of photographic images\nfor up to 1,854 object concepts. THINGS-data is unique in its breadth of richly-annotated objects, allowing for testing countless novel hypotheses at scale while assessing\nthe reproducibility of previous findings. The multimodal data allows for studying both the temporal and spatial dynamics of object representations and their relationship\nto behavior and additionally provides the means for combining these datasets for novel insights into object processing. THINGS-data constitutes the core release of\nthe [THINGS initiative](https://things-initiative.org) for bridging the gap between disciplines and the advancement of cognitive neuroscience.\n# Dataset overview\nWe collected extensively sampled object representations using magnetoencephalography (MEG). To this end, we drew on the THINGS database [(Hebart et al., 2019)](https://doi.org/10.1371/journal.pone.0223792),\na richly-annotated database of 1,854 object concepts representative of the American English language which contains 26,107 manually-curated naturalistic object images.\nDuring the fMRI experiment, participants were shown a representative subset of THINGS images, spread across 12 separate sessions (N=4, 22,448 unique images of 1,854 objects).\nImages were shown in fast succession (1.5±0.2s), and participants were instructed to maintain central fixation. To ensure engagement, participants performed an oddball detection task\nresponding to occasional artificially-generated images. A subset of images (n=200) were shown repeatedly in each session.\nBeyond the core functional imaging data in response to THINGS images, we acquired T1-weighted MRI scans to allow for cortical source localization.\nEye movements were monitored in the MEG to ensure participants maintained central fixation.","recording_modality":["meg"],"senior_author":"Chris I. 
Baker","sessions":["01","02","03","04","05","06","07","08","09","10","11","12","20190501","20190508","20190517","20190520","20190522","20190524","20190530","20190605","20190618","20190624","20190705","20190709","20190712","20190715","20190726","20190730"],"size_bytes":255218599475,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["main"],"timestamps":{"digested_at":"2026-04-22T12:26:18.708257+00:00","dataset_created_at":"2022-07-14T15:47:41.319Z","dataset_modified_at":"2025-05-29T20:15:29.000Z"},"total_files":500,"storage":{"backend":"s3","base":"s3://openneuro.org/ds004212","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv","task-main_events.json"]},"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"7c10403d29b35f26","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.65,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Closest few-shot convention match is the visual discrimination dataset (Meta-rdk) labeled as Modality=Visual and Type=Perception, where participants view visual stimuli and make a simple detection/discrimination response. THINGS-MEG similarly presents visual stimuli (photographic object images) and includes an oddball detection response primarily to ensure engagement, aligning with the convention that visually driven object/stimulus processing studies map to Visual + Perception rather than Motor. The Parkinson’s oddball example shows oddball paradigms can be labeled Clinical/Intervention when the main focus is a clinical cohort; THINGS-MEG has no such clinical recruitment, supporting Pathology=Healthy.","metadata_analysis":"Key quoted metadata facts:\n1) Visual stimulus content: \"thousands of photographic images\" and \"naturalistic object images.\" \n2) Visual presentation procedure: \"participants were shown a representative subset of THINGS images\" and \"Images were shown in fast succession (1.5±0.2s).\"\n3) Task nature: \"performed an oddball detection task responding to occasional artificially-generated images\" and \"instructed to maintain central fixation.\"\n4) Study aim: \"Understanding object representations visual and semantic processing of objects\" and \"studying both the temporal and spatial dynamics of object representations.\" \n5) Population info is minimal and non-clinical: \"Subjects: 5\" with no mention of patient groups/diagnoses.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: only \"Subjects: 5\" (no diagnosis/patient recruitment stated).\n- Few-shot pattern suggests: datasets without explicit clinical recruitment are labeled \"Healthy\" (e.g., multiple few-shots labeled Healthy when no disorder is mentioned).\n- Alignment: ALIGN (no conflict).\n\nModality:\n- Metadata says: \"photographic images\" / \"naturalistic object images\" and participants were \"shown\" images.\n- Few-shot pattern suggests: image viewing paradigms map to \"Visual\" (e.g., visual discrimination task labeled Visual).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: goal is \"object representations\" and \"visual and semantic processing of objects\"; task includes \"oddball detection\" mainly \"To ensure engagement\".\n- Few-shot pattern suggests: when the primary manipulation is sensory stimulus processing/discrimination, label as \"Perception\"; oddball can sometimes map to Attention, 
but only when attentional control/oddball effects are the primary construct.\n- Alignment: Mostly ALIGN; minor ambiguity (Perception vs Attention) resolved by the stated primary aim being object representation processing, with oddball explicitly framed as engagement.","decision_summary":"Top-2 candidates per category and final choice:\n\nPathology:\n- Candidate 1: Healthy — Evidence: no clinical terms/diagnoses; only \"Subjects: 5\" and standard cognitive neuroscience object-representation framing.\n- Candidate 2: Unknown — Evidence: metadata does not explicitly say \"healthy\".\nHead-to-head: Healthy wins because absence of any clinical recruitment language strongly implies a normative cohort per catalog convention.\nConfidence basis: inference from lack of clinical descriptors (no explicit quote of \"healthy\").\n\nModality:\n- Candidate 1: Visual — Evidence: \"photographic images\"; \"naturalistic object images\"; \"participants were shown ... images\".\n- Candidate 2: Multisensory — Weak evidence only from the mention of \"behavioral judgments\" and multimodal initiative context, but the task stimuli described here are visual.\nHead-to-head: Visual clearly wins.\nConfidence basis: 3+ explicit visual-stimulus quotes.\n\nType:\n- Candidate 1: Perception — Evidence: aim is \"object representations\" and \"visual and semantic processing\" driven by viewing object images; fast visual presentation.\n- Candidate 2: Attention — Evidence: \"oddball detection task\" and fixation instruction could indicate attentional demands.\nHead-to-head: Perception wins because oddball detection is explicitly to \"ensure engagement\" rather than the primary construct.\nConfidence basis: 2+ explicit aim/task-description quotes supporting object/visual processing; some residual ambiguity with Attention."}},"nemar_citation_count":3,"computed_title":"THINGS-MEG","nchans_counts":[{"val":310,"count":470}],"sfreq_counts":[{"val":1200.0,"count":470}],"stats_computed_at":"2026-04-22T23:16:00.307220+00:00","total_duration_s":162863.61,"canonical_name":null,"name_confidence":0.83,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Hebart2022"}}
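A minimal sketch of how a record like the one above might be summarized after retrieval. The dict literal is a hand-copied subset of the fields shown in the response; the field names come directly from the record, but the code itself is illustrative only and is not an official eegdash client API.

# Sketch: summarize a few fields from the eegdash-style record above.
# `record` is a hand-copied subset of the JSON response; in practice it would
# come from json.loads() on the full response body.
record = {
    "dataset_id": "ds004212",
    "name": "THINGS-MEG",
    "size_bytes": 255_218_599_475,
    "total_duration_s": 162_863.61,
    "nchans_counts": [{"val": 310, "count": 470}],
    "sfreq_counts": [{"val": 1200.0, "count": 470}],
    "demographics": {"subjects_count": 5},
}

size_gb = record["size_bytes"] / 1e9            # decimal gigabytes
hours = record["total_duration_s"] / 3600       # total recording time in hours
n_recordings = sum(c["count"] for c in record["nchans_counts"])

print(f'{record["name"]} ({record["dataset_id"]})')
print(f"  subjects:     {record['demographics']['subjects_count']}")
print(f"  recordings:   {n_recordings} files at "
      f"{record['sfreq_counts'][0]['val']:.0f} Hz, "
      f"{record['nchans_counts'][0]['val']} channels")
print(f"  duration:     {hours:.1f} h")
print(f"  size on disk: {size_gb:.1f} GB")

Running this prints roughly 45.2 h of MEG across 470 recordings and about 255.2 GB on disk, which matches the stats fields ("total_duration_s", "nchans_counts", "sfreq_counts", "size_bytes") in the record.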