{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a33da","dataset_id":"ds005363","associated_paper_doi":null,"authors":["Marleen Haupt","Douglas D. Garrett","Radoslaw M. Cichy"],"bids_version":"1.7.0","contact_info":["Marleen Haupt"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds005363.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":43,"ages":[29,23,24,33,28,33,24,22,26,20,31,22,34,24,20,24,22,20,20,33,23,70,68,62,74,74,66,69,70,60,63,71,62,70,62,71,68,70,68,73,60,64,68],"age_min":20,"age_max":74,"age_mean":46.93023255813954,"species":null,"sex_distribution":{"f":21,"m":22},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005363","osf_url":null,"github_url":null,"paper_url":null},"funding":["CI241/1-1","CI241/3-1","CI241/7-1","ERC-StG-2018-803370","INST 272/297-1"],"ingestion_fingerprint":"4b60d905441b21d6417f97ee5062990f1f071feb18a9801a8ae2539ccb133c3e","license":"CC0","n_contributing_labs":null,"name":"Object recognition in healthy aging (ORHA) - EEG","readme":"This dataset contains the raw EEG data accompanying the paper \"Healthy aging delays and dedifferentiates high-level visual representations\". Please cite the above paper if you use this data.\nThe dataset includes:\nBrainvision files (.eeg, .vhdr, .vmrk) for all participants.\nThe events files contain the onsets, durations, trial types and values for all trials in the corresponding run. Stimuli are images presented on a grey background with a central fixation:\nimages of faces     = S1-16\nimages of animals   = S17-32\nimages of places    = S33-48\nimages of objects   = S49-64\ncatch trials        = S65-69\nOther triggers:\nbutton_press\t\t = S99\nrun_onset          \t = S100+run_number (8 runs in total)\nrun_end              = S199\nFor a full description of the paradigm and the employed procedures please see the paper.\nReferences for MNE BIDS conversion\n----------\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8","recording_modality":["eeg"],"senior_author":"Radoslaw M. Cichy","sessions":["01"],"size_bytes":19011100939,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["objrecog"],"timestamps":{"digested_at":"2026-04-22T12:27:42.392353+00:00","dataset_created_at":"2024-07-23T17:57:27.581Z","dataset_modified_at":"2024-07-27T11:07:23.000Z"},"total_files":43,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005363","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"42ffe9775b37a076","model":"openai/gpt-5.2","tagged_at":"2026-01-20T17:49:48.074893+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.7,"modality":0.8,"type":0.7},"reasoning":{"few_shot_analysis":"Most similar few-shot convention is the visual discrimination/perception example (Meta-rdk: Preprocessed EEG data), which labels a visual stimulus discrimination paradigm as Modality=Visual and Type=Perception. Although our dataset is not clinical, it similarly presents visual stimuli (images) to probe visual representations, which by convention maps to Visual + Perception rather than Attention or Motor. Also consistent with other visual-stimulus datasets in the few-shots that label image/screen-based paradigms as Visual modality.","metadata_analysis":"Key stimulus/task facts from the dataset README include: (1) population framing as non-clinical aging: \"Healthy aging delays and dedifferentiates high-level visual representations\"; (2) explicit visual stimulus description: \"Stimuli are images presented on a grey background with a central fixation\"; and (3) category-defined image sets: \"images of faces = S1-16\" / \"images of animals\" / \"images of places\" / \"images of objects\" plus \"button_press = S99\" and \"catch trials\" indicating a simple perceptual categorization/detection-style paradigm.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says \"Healthy aging...\" (non-clinical aging framing) and provides no diagnosis-based recruitment; few-shot pattern suggests labeling explicit diagnoses when present, otherwise Healthy. ALIGN.\nModality: Metadata says \"Stimuli are images...\" and lists image categories (faces/animals/places/objects); few-shot convention maps image/screen stimuli to Visual. ALIGN.\nType: Metadata emphasizes \"high-level visual representations\" with image categories and fixation; few-shot convention maps sensory stimulus processing/discrimination to Perception (rather than Attention unless explicitly attention-manipulated). ALIGN.","decision_summary":"Pathology top-2: (1) Healthy — supported by \"Healthy aging...\" with no disorder named (ALIGN); (2) Dementia — plausible only due to aging topic, but no dementia/clinical diagnosis is stated (weak, metadata does not support). Final: Healthy. Confidence=0.7 (single explicit non-clinical framing, no participant table).\nModality top-2: (1) Visual — \"Stimuli are images...\" and image categories (faces/animals/places/objects) (ALIGN); (2) Multisensory/Other — no evidence of non-visual stimuli (weak). Final: Visual. Confidence=0.8 (2+ explicit visual-stimulus quotes).\nType top-2: (1) Perception — \"high-level visual representations\" + categorical images with fixation/catch trials implies perceptual processing; (2) Attention — fixation/catch trials could involve vigilance, but attention is not stated as the primary construct. Final: Perception. Confidence=0.7 (one strong conceptual quote plus task-context inference)."}},"nemar_citation_count":1,"computed_title":"Object recognition in healthy aging (ORHA) - EEG","nchans_counts":[{"val":64,"count":43}],"sfreq_counts":[{"val":1000.0,"count":43}],"stats_computed_at":"2026-04-22T23:16:00.309420+00:00","source_url":"https://openneuro.org/datasets/ds005363","total_duration_s":155107.137,"canonical_name":null,"name_confidence":0.82,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Haupt2024_Object"}}