{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a342c","dataset_id":"ds005960","associated_paper_doi":null,"authors":["Pena, P.","Palenciano, A.F.","González-García, C.","Ruz, M."],"bids_version":"v1.2.1","contact_info":["Paula Pena"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds005960.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":41,"ages":[21,20,20,22,21,27,21,22,21,19,23,21,23,20,27,22,23,24,19,23,22,24,23,24,21,22,21,22,23,19,23,18,21,22,19,20,26,22,19,27,19],"age_min":18,"age_max":27,"age_mean":21.853658536585368,"species":null,"sex_distribution":{"f":26,"m":15},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005960","osf_url":null,"github_url":null,"paper_url":null},"funding":["This research was supported by grant PID2022-138940NB-100 awarded to MR and grant PID2023-151911NA-I00 awarded to AFP, funded by MCIN/EI/10.13039/501100011033/ and FEDER, UE. PP was supported by scholarship PREP2023-002013 funded by MCIN/AEI/10.13039/501100011033 and FSE+. AFP was supported by Grant PAIDI21_00207 of the Andalusian Autonomic Government. CCGP was supported by Project PID2020-116342GA-I00 funded by MCIN/AEI/10.13039/501100011033, and Grant RYC2021-033536-I funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGeneration EU/PRTR. The Mind, Brain and Behavior Research Center receives funding from grants CEX2023-001312-M by MCIN/AEI /10.13039/501100011033 and UCE-PP2023-11 by the University of Granada."],"ingestion_fingerprint":"0a69b566206b0ffbe5d45e296c1d18ed5a9a354e03714bbdbddc48c34217cf04","license":"CC0","n_contributing_labs":null,"name":"General Info: inst-comp-eeg","readme":"The experiment consisted of two tasks: the main instruction-following task and an additional localizer task. 
The data of each participant was recorded in one session.\nFor the main instruction-following task, participants saw four sequential screens -screen display of 200 ms and 800 ms as interscreen interval- that contained the full instruction; after a pretarget interval, they were presented with the target images -two images framed by a colored shape, on display for 200 ms-.\nThey had to respond whether or not the instruction was fulfilled by the targets.\nThe first two screens of the instruction indicated whether the participant had to pay attention to both images -integration- or to just one -selection-, and which specific images were set to appear -animate or inanimate images per trial-.\nThe third instruction referred to the relevant feature they had to pay attention to, either the color or the shape surrounding the image.\nThe last instruction indicated the key to press if the instruction was fulfilled by the target images -either “A” or “L”-.\nEach trial consisted of a novel combination of the instruction components.\nAdditional catch trials were added to ensure that participants were maintaining all information.\nIf any of the target images was different from the ones previously instructed, the participant had to indicate it by pressing both “A” and “L” simultaneously.\nThe localizer task was a 1-back task. 
Participants saw one target image per trial, and they had to indicate with a keypress -“A” and “L”- if the image was from the same subcategory as the image from the previous trial.\nEach block of the main instruction-following task consisted of 32 trials, with a total of 16 blocks.\nAll the conditions were fully counterbalanced to ensure no statistical dependencies within the blocks.\nEach of the 8 localizer blocks consisted of 40 trials.\nTo counterbalance the presentation of the blocks for the whole experiment session, the blocks of the main task were further divided according to the features -blocks of features 1 and blocks of features 2-, and then the sequence of main task and localizer blocks was counterbalanced.","recording_modality":["eeg"],"senior_author":"Ruz, M.","sessions":[],"size_bytes":61945670760,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["INSTCOMP"],"timestamps":{"digested_at":"2026-04-22T12:29:00.928050+00:00","dataset_created_at":"2025-03-05T13:34:26.909Z","dataset_modified_at":"2025-03-13T14:43:51.000Z"},"total_files":41,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005960","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"27a06e170d9487b5","model":"openai/gpt-5.2","tagged_at":"2026-01-20T18:46:01.307876+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.6,"modality":0.8,"type":0.7},"reasoning":{"few_shot_analysis":"Most similar few-shot by paradigm is the visual cognitive-control/goal-maintenance style task in the TBI DPX example (labeled Type=Attention, Modality=Visual): it involves cue/instruction information that must be maintained to guide a later response. 
A secondary similarity is the digit-span example (Type=Memory) because this dataset includes a 1-back localizer, but here the 1-back is explicitly an “additional localizer task” rather than the main scientific focus. These examples guide the convention that (a) image-based screen stimuli map to Visual modality, and (b) instruction-following/feature selection/integration maps more naturally to Attention (cognitive control/goal-directed attention) than to pure Memory when working-memory demands serve attentional control.","metadata_analysis":"Key task facts from the provided README:\n- Visual stimuli/instructions: participants “saw four sequential screens ... that contained the full instruction” and then “were presented with the target images -two images framed by a colored shape-”.\n- Attention manipulation/cognitive control: “The first two screens of the instruction indicated whether the participant had to pay attention to both images -integration- or to just one -selection-” and “The third instruction referred to the relevant feature they had to pay attention to, either the color or the shape”.\n- Secondary localizer with short-term comparison: “The localizer task was a 1-back task... indicate ... 
if the image was from the same subcategory as the image from the previous trial.”\nNo participant clinical recruitment/diagnosis information is included in the provided metadata.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n1) Metadata says: no diagnosis/clinical recruitment is mentioned (no quoted pathology terms present).\n2) Few-shot pattern suggests: many cognitive instruction/feature tasks are typically Healthy cohorts.\n3) Alignment: partially align but metadata is silent.\n4) Resolution: choose Healthy by contextual inference (normative cognitive task; no clinical language).\n\nModality:\n1) Metadata says: clearly visual stimuli: “target images” and “colored shape”, plus sequential instruction screens.\n2) Few-shot pattern suggests: image/screen-based paradigms are labeled Visual.\n3) Alignment: align.\n\nType:\n1) Metadata says: explicit attentional set/feature selection: “pay attention to both images -integration- or to just one -selection-” and “pay attention to... 
color or the shape”; also includes a “1-back task”.\n2) Few-shot pattern suggests: DPX-style goal maintenance/selection aligns with Attention; n-back/digit-span aligns with Memory when it is the primary paradigm.\n3) Alignment: mostly align with Attention as primary, with Memory as runner-up due to 1-back.\n4) Resolution: select Attention because instruction-following/feature selection is the central described task; 1-back is labeled as an additional localizer.","decision_summary":"Top-2 candidates per category and final choices:\n\nPathology:\n- Healthy: Supported by absence of any clinical recruitment language and a standard cognitive instruction-following paradigm (contextual inference).\n- Unknown: Also plausible because the metadata provides no participant health/demographics.\nDecision: Healthy (metadata is silent; infer normative cohort).\nConfidence basis: no explicit quotes about health status → inference-only.\n\nModality:\n- Visual: Strongly supported by “target images -two images framed by a colored shape-” and “four sequential screens ... contained the full instruction”.\n- Multisensory/Other: weak (no evidence of auditory/tactile stimuli).\nDecision: Visual.\nConfidence basis: multiple explicit visual-stimulus quotes.\n\nType:\n- Attention: Supported by explicit manipulation of what to attend to (“pay attention to both images... or to just one”, “pay attention to... color or the shape”) and instruction-guided responding.\n- Memory: Runner-up due to “localizer task was a 1-back task”.\nDecision: Attention.\nConfidence basis: explicit attention/selection/integration wording + task structure; memory component appears secondary."}},"computed_title":"General Info: inst-comp-eeg","nchans_counts":[{"val":63,"count":41}],"sfreq_counts":[{"val":1000.0,"count":41}],"stats_computed_at":"2026-04-22T23:16:00.311174+00:00","total_duration_s":240858.2,"author_year":"Pena2025","canonical_name":null}}