{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a343c","dataset_id":"ds006171","associated_paper_doi":null,"authors":["María Melcón","Enrique Stern","Lydia Arana","Almudena Capilla"],"bids_version":"1.8.0","contact_info":["María Melcón"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds006171.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":36,"ages":[60,67,44,48,39,33,32,42,31,32,41,36,30,40,67,31,46,31,27,42,70,35,28,31,27,65,63,66,27,30,30,29,29,34,31,34,30,37,38,34,41,30,32,41,32,30,38,30,27,27,32,69,32,28,28],"age_min":27,"age_max":70,"age_mean":38.25454545454546,"species":null,"sex_distribution":{"f":24,"m":31},"handedness_distribution":{"r":49,"l":6}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds006171","osf_url":null,"github_url":null,"paper_url":null},"funding":["Ministerio de Ciencia, Innovación y Universidades / Agencia Estatal de Investigación, Spain / FEDER, UE (MCIU/AEI/FEDER, UE), Grant/ Award Number: PGC2018-100682- B-I00 and PID2019-111335GA-I00; Ministerio de Ciencia e Innovación / Agencia Estatal de Investigación, Spain / FEDER, UE (MCIN/AEI/ 10.13039/501100011033 /FEDER, UE), Grant/Award Number: PID2021- 125841NB-I00; Comunidad de Madrid. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript","Ministerio de Ciencia e Innovación / Agencia Estatal de Investigación, Spain / FEDER, UE (MCIN/AEI/10.13039/501100011033 /FEDER,UE), Grant/Award Number: PID2021-125841NB-I00; Comunidad de Madrid"],"ingestion_fingerprint":"089e121b638098df36656ecd8f6591838404c15e618c82b745ec1d3a83aecc48","license":"CC0","n_contributing_labs":null,"name":"EEG data during three near-threshold visual detection tasks: a no-cue task, a noninformative cue task (50% validity), and an informative cue task (100% validity)","readme":null,"recording_modality":["eeg"],"senior_author":"Almudena Capilla","sessions":[],"size_bytes":72833078851,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["informative","nocue","noninformative"],"timestamps":{"digested_at":"2026-04-22T12:29:12.095247+00:00","dataset_created_at":"2025-04-24T09:05:51.130Z","dataset_modified_at":"2025-04-24T13:46:57.000Z"},"total_files":104,"storage":{"backend":"s3","base":"s3://openneuro.org/ds006171","raw_key":"dataset_description.json","dep_keys":["CHANGES","participants.tsv"]},"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"05a955cb213d6851","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.6,"modality":0.8,"type":0.75},"reasoning":{"few_shot_analysis":"Most similar few-shot conventions:\n- The TBI DPX example is labeled Type=Attention in a visually cued/controlled task context (\"Dot Probe Continuous Performance Task\" with cue/probe events). 
This guides mapping cue-driven performance tasks to the Attention construct rather than generic perception.\n- The schizophrenia visual discrimination example is labeled Type=Perception for a \"visual discrimination task\" (moving dots left/right), showing that when the key manipulation is stimulus discrimination without explicit cue-validity/attentional orienting emphasis, Perception is preferred.\nGiven this dataset explicitly contrasts \"no-cue\", \"noninformative cue (50% validity)\", and \"informative cue (100% validity)\", few-shot conventions suggest an Attention label (cueing/validity manipulation) over Perception.","metadata_analysis":"Key metadata facts (quoted):\n- Task/paradigm: \"three near-threshold visual detection tasks: a no-cue task, a noninformative cue task (50% validity), and an informative cue task (100% validity)\".\n- Modality: \"visual detection\" and \"near-threshold visual detection\" (same title quote).\n- Tasks listed: \"informative\", \"nocue\", \"noninformative\".\n- Participants: \"Subjects: 36\" with no mention of any diagnosis or clinical recruitment in the provided metadata.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: no disorder/diagnosis is mentioned; only \"Subjects: 36\".\n- Few-shot pattern suggests: when no clinical population is specified, label as Healthy.\n- Alignment: ALIGN (no conflicting clinical fact).\n\nModality:\n- Metadata says: \"near-threshold visual detection tasks\".\n- Few-shot pattern suggests: visual discrimination/detection tasks map to Modality=Visual.\n- Alignment: ALIGN.\n\nType:\n- Metadata says: includes \"no-cue\", \"noninformative cue (50% validity)\", and \"informative cue (100% validity)\"—a cueing/validity manipulation typical of attentional orienting.\n- Few-shot pattern suggests: cue/control-demand manipulations are labeled Attention (e.g., DPX example), whereas plain discrimination is Perception (moving-dots example).\n- Alignment: ALIGN; cue-validity structure supports Attention more than Perception.","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n1) Healthy — Evidence: no clinical descriptors; only \"Subjects: 36\" in participants overview; dataset title/task description is purely experimental.\n2) Unknown — Competing because the provided snippet never explicitly says \"healthy\".\nDecision: Healthy (default for non-clinical experimental EEG datasets when no diagnosis is stated). 
Confidence limited by lack of an explicit \"healthy\" quote.\n\nModality:\n1) Visual — Evidence: \"near-threshold visual detection tasks\"; tasks are cueing variants of a visual detection paradigm.\n2) Multisensory — Only weakly possible if cues were non-visual, but nothing indicates auditory/tactile cues.\nDecision: Visual.\n\nType:\n1) Attention — Evidence: explicit manipulation of cue presence and cue validity (\"no-cue\", \"noninformative cue task (50% validity)\", \"informative cue task (100% validity)\") indicates attentional orienting/expectancy.\n2) Perception — Also plausible because the core behavior is \"visual detection\" near threshold.\nDecision: Attention, because cue validity manipulations primarily index attentional allocation rather than pure sensory discrimination.\n\nConfidence justifications:\n- Pathology: single negative-evidence fact (no diagnosis mentioned) → moderate.\n- Modality: multiple explicit mentions of \"visual\" in title + task framing → high.\n- Type: strong cue-validity wording but still tied to detection → moderately high."}},"computed_title":"EEG data during three near-threshold visual detection tasks: a no-cue task, a noninformative cue task (50% validity), and an informative cue task (100% validity)","nchans_counts":[{"val":144,"count":104}],"sfreq_counts":[{"val":1024.0,"count":104}],"stats_computed_at":"2026-04-22T23:16:00.311394+00:00","total_duration_s":null,"canonical_name":null,"name_confidence":0.66,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Melcon2025"}}
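
A minimal sketch of consuming one record like the one above follows. It assumes only the field names visible in the JSON itself; the rule for resolving storage keys against "base" (plain "/" joining) and the local filename "ds006171.json" are illustrative assumptions, not documented eegdash behavior.

import json
from statistics import mean

def summarize(record: dict) -> None:
    """Print a small summary of one eegdash dataset record."""
    data = record["data"]
    demo = data["demographics"]
    ages = demo["ages"]

    # Recompute the demographic summary stored alongside the raw ages.
    print("ages:", min(ages), "-", max(ages), "mean", round(mean(ages), 2))

    # In this record subjects_count (36) and len(ages) (55) disagree,
    # so a consumer should not assume one age entry per subject.
    if demo["subjects_count"] != len(ages):
        print(f"warning: subjects_count={demo['subjects_count']}, "
              f"but {len(ages)} age entries")

    # Resolve the storage keys to full S3 URIs (assumed "/" joining rule).
    storage = data["storage"]
    for key in [storage["raw_key"], *storage["dep_keys"]]:
        print("s3 object:", f"{storage['base']}/{key}")

if __name__ == "__main__":
    with open("ds006171.json") as f:  # hypothetical local copy of this response
        summarize(json.load(f))

Run against this particular record, the warning branch fires (subjects_count is 36 while the ages, sex, and handedness distributions all sum to 55), which is worth checking before treating the demographics block as per-subject data.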