{"success":true,"database":"eegdash","data":{"_id":"696fdefaac44fa1028dc631d","dataset_id":"ds007162","associated_paper_doi":null,"authors":["[Unspecified1]","[Unspecified2]"],"bids_version":"1.7.0","contact_info":["Pablo Oyarzo"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds007162.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":34,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds007162","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"158cc54f9fcdf72ee218b6bb976e6f5fdf1af629e46a6d8bd1646ced87fef3b4","license":"CC0","n_contributing_labs":null,"name":"Adaptive recruitment of cortex-wide recurrence for visual object recognition (EEG)","readme":"# Dataset Description\nThis dataset contains the EEG data accompanying the study\n**\"Adaptive recruitment of cortex-wide recurrence for visual object recognition\"** (Link to preprint: https://www.biorxiv.org/content/10.1101/2025.10.17.682937v2).\nPlease cite the above paper if you use this data.\n---\n## Dataset Overview\n- 34 participants, each with 1 session\n---\n## Experimental Design\nThe EEG experiment used a stimulus set of 242 images (121 “challenge” and 121 “control” images) derived from comparisons between human behavioural performance and AlexNet.\n- **Main task:** Each trial consisted of a single image presented for 200 ms followed by a 100 ms blank. Trials were grouped into sequences of 14 images. At the end of each sequence, participants reported whether a paper clip appeared anywhere in that sequence.\n---\n## Derivatives\nThe derivatives/ folder contains outputs from the decoding analyses, including time-resolved decoding accuracy matrices for object identity.","recording_modality":["eeg"],"senior_author":"[Unspecified2]","sessions":["01"],"size_bytes":65339062772,"source":"openneuro","storage":{"backend":"s3","base":"s3://openneuro.org/ds007162","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["rcor"],"timestamps":{"digested_at":"2026-04-22T12:30:05.445561+00:00","dataset_created_at":"2026-01-04T08:52:41.695Z","dataset_modified_at":"2026-01-05T12:41:00.000Z"},"total_files":69,"computed_title":"Adaptive recruitment of cortex-wide recurrence for visual object recognition (EEG)","nchans_counts":[{"val":63,"count":69}],"sfreq_counts":[{"val":1000.0,"count":69}],"stats_computed_at":"2026-04-22T23:16:00.312379+00:00","total_duration_s":258538.97,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"7f7e3bbc76f7565f","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.6,"modality":0.85,"type":0.75},"reasoning":{"few_shot_analysis":"Most similar few-shot by task/stimulus is the schizophrenia-spectrum visual discrimination dataset (Meta-rdk), labeled as Modality=Visual and Type=Perception because participants make perceptual judgments about briefly presented visual stimuli. This dataset likewise uses brief visual image presentations and emphasizes “visual object recognition,” which by convention maps to Type=Perception rather than task mechanics like button press. 
The Parkinson cross-modal oddball example illustrates that target detection can sometimes be framed as cognitive control/clinical purpose, but here there is no clinical cohort and the stated scientific aim is object recognition/decoding, making Perception the closer convention.","metadata_analysis":"Key metadata facts:\n1) Visual stimulus/object recognition focus: “Adaptive recruitment of cortex-wide recurrence for visual object recognition” and “stimulus set of 242 images (121 ‘challenge’ and 121 ‘control’ images)”.\n2) Trial structure and participant query: “Each trial consisted of a single image presented for 200 ms followed by a 100 ms blank” and “At the end of each sequence, participants reported whether a paper clip appeared anywhere in that sequence.”\n3) Analysis goal reinforces perception/object identity: “outputs from the decoding analyses, including time-resolved decoding accuracy matrices for object identity.”\n4) Population info present but no diagnosis: “34 participants, each with 1 session.”","paper_abstract_analysis":"No useful paper information (only a link to a preprint is provided; no abstract text included in the metadata payload).","evidence_alignment_check":"Pathology:\n- Metadata says: “34 participants” with no mention of any diagnosis/patient recruitment.\n- Few-shot pattern suggests: when no disorder is stated, label as Healthy.\n- Alignment: ALIGN (no clinical population stated; default to Healthy).\n\nModality:\n- Metadata says: “stimulus set of 242 images” and “single image presented for 200 ms”.\n- Few-shot pattern suggests: image-based tasks map to Visual modality.\n- Alignment: ALIGN.\n\nType:\n- Metadata says: “visual object recognition” and “decoding accuracy matrices for object identity,” which indicates a perceptual/object-recognition construct.\n- Few-shot pattern suggests: visual discrimination/object processing tasks are labeled Perception (e.g., Meta-rdk visual discrimination → Perception).\n- Potential alternative from metadata: the “paper clip appeared anywhere in that sequence” question could suggest Attention (target detection) or Memory (across a 14-image sequence), but these are secondary to the explicitly stated object-recognition/identity-decoding goal.\n- Alignment: mostly ALIGN with Perception convention; minor ambiguity due to the end-of-sequence target query.","decision_summary":"Top-2 candidates per category with head-to-head comparison:\n\nPathology:\n1) Healthy — Evidence: no disorder/patient terms; only “34 participants, each with 1 session.” Few-shot convention uses Healthy when no clinical recruitment is specified.\n2) Unknown — Would apply if population were insufficiently described; however, absence of any clinical-recruitment language makes Healthy more appropriate than Unknown.\nFinal: Healthy. (Alignment: aligns with few-shot convention; no conflicting explicit pathology facts.)\n\nModality:\n1) Visual — Evidence: “stimulus set of 242 images” and “single image presented for 200 ms”.\n2) Multisensory/Other — Not supported; no auditory/tactile stimuli described.\nFinal: Visual. 
(Alignment: clear.)\n\nType:\n1) Perception — Evidence: “visual object recognition”; “decoding… for object identity”; brief image presentations consistent with perceptual recognition studies, matching the Meta-rdk visual discrimination → Perception convention.\n2) Attention — Evidence: “reported whether a paper clip appeared anywhere in that sequence” (target detection across a sequence) could imply attentional monitoring, but the dataset framing and derivatives emphasize object identity decoding/recognition.\nFinal: Perception. (Alignment: strong with stated study aim; attention component considered secondary.)"}},"canonical_name":null,"name_confidence":0.55,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"DS7162_VisualRecognition"}}
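
For readers who want to use this record programmatically, here is a minimal sketch, assuming the response above has been saved locally as ds007162_record.json (a hypothetical filename). Every field name is taken directly from the payload, only the Python standard library is used, and the eegdash API itself is not called.

```python
import json

# Load the eegdash response shown above; "ds007162_record.json" is a
# hypothetical local filename, not part of the payload.
with open("ds007162_record.json", "r", encoding="utf-8") as f:
    response = json.load(f)

assert response["success"], "query did not succeed"
record = response["data"]

# Identity and provenance fields from the record.
print("Dataset:", record["dataset_id"], "-", record["name"])
print("Source: ", record["external_links"]["source_url"])
print("License:", record["license"])

# Recording statistics: every one of the 69 files reports 63 channels
# sampled at 1000 Hz; total_duration_s is the summed recording length.
nchans = record["nchans_counts"][0]["val"]
sfreq = record["sfreq_counts"][0]["val"]
hours = record["total_duration_s"] / 3600.0
print(f"{record['demographics']['subjects_count']} subjects, "
      f"{record['total_files']} files, {nchans} channels @ {sfreq:g} Hz, "
      f"{hours:.1f} h of EEG in total")

# The storage block points at the public OpenNeuro S3 bucket; the raw
# dataset descriptor lives at <base>/<raw_key>.
storage = record["storage"]
raw_uri = f"{storage['base']}/{storage['raw_key']}"
print("Raw descriptor:", raw_uri)  # s3://openneuro.org/ds007162/dataset_description.json
```

Indexing the first entry of nchans_counts and sfreq_counts is safe for this record because each list holds a single value covering all 69 files; a dataset with mixed montages or sampling rates would need a loop over the full lists.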