{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a338c","dataset_id":"ds004830","associated_paper_doi":null,"authors":["Matthew Ning","Sudan Duwadi","Meryem A. Yucel","Alexander Von Luhmann","David A. Boas","Kamal Sen"],"bids_version":"1.7.1","contact_info":["Sudan Duwadi"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds004830.v1.0.1","datatypes":["fnirs"],"demographics":{"subjects_count":12,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds004830","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"089a998f259ecb47836db7c1d1110b38d963c6054a8bd23313d8c46b534e7bbb","license":"CC0","n_contributing_labs":null,"name":"Spatial Attention Decoding using fNIRS During Complex Scene Analysis","readme":"This dataset comes with published paper which can be found in https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2024.1329086/full\nPlease cite the paper if you use this dataset for your publication.","recording_modality":["fnirs"],"senior_author":"Kamal Sen","sessions":[],"size_bytes":1321472833,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["01","02","03","04","06"],"timestamps":{"digested_at":"2026-04-22T12:27:01.762857+00:00","dataset_created_at":"2023-10-27T17:27:46.316Z","dataset_modified_at":"2026-02-27T13:55:01.000Z"},"total_files":19,"storage":{"backend":"s3","base":"s3://openneuro.org/ds004830","raw_key":"dataset_description.json","dep_keys":["CHANGES","README.txt","participants.tsv"]},"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"ac3f0472b6045d6f","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.6,"modality":0.7,"type":0.7},"reasoning":{"few_shot_analysis":"Few-shot example \"EEG: DPX Cog Ctl Task in Acute Mild TBI\" shows the convention that tasks centered on attentional control/attentional orienting are labeled Type=\"Attention\" and, when stimuli are screen-based cues/probes, Modality=\"Visual\". For pathology, multiple few-shot examples label Pathology=\"Healthy\" when no diagnosis/clinical recruitment is stated (e.g., \"EEG: Three armed bandit gambling task\", \"EEG Motor Movement/Imagery Dataset\"). This guides assigning Healthy/Visual/Attention here, unless metadata explicitly indicates a clinical population (none does).","metadata_analysis":"Key available metadata is sparse and mainly in the title/task name:\n- Title: \"Spatial Attention Decoding using fNIRS During Complex Scene Analysis\" (explicitly names spatial attention and scene analysis).\n- Tasks: [\"overt\"] (suggests an overt attention condition, commonly meaning overt spatial attention/eye-gaze shifts).\n- Participants: \"Subjects: 12\" (no mention of patients, diagnosis, or clinical recruitment).\n- README: \"This dataset comes with published paper ...\" (no additional task/population details included in the provided snippet).","paper_abstract_analysis":"No useful paper information. 
(Only a link is provided; no abstract text is included in the metadata supplied here.)","evidence_alignment_check":"Pathology:\n- Metadata says: only \"Subjects: 12\" with no diagnosis stated.\n- Few-shot pattern suggests: when no disorder is mentioned, label as Healthy.\n- ALIGN (no conflict).\n\nModality:\n- Metadata says: \"Complex Scene Analysis\" and \"Spatial Attention\" in the title.\n- Few-shot pattern suggests: scene/screen-based attention paradigms are typically Visual modality.\n- ALIGN (no conflict).\n\nType:\n- Metadata says: \"Spatial Attention Decoding\" (explicit construct is attention).\n- Few-shot pattern suggests: attention-focused paradigms map to Type=\"Attention\".\n- ALIGN (no conflict).","decision_summary":"Top-2 candidates per category (with head-to-head selection):\n\nPathology:\n1) Healthy — Supported by absence of any clinical terms plus \"Subjects: 12\" and typical convention in few-shots to default to Healthy when no recruitment pathology is stated.\n2) Unknown — Plausible due to missing explicit statement of healthy controls.\nWinner: Healthy (no metadata evidence for any disorder).\n\nModality:\n1) Visual — Supported by \"Complex Scene Analysis\" (scenes are typically visual) and \"Spatial Attention\" framing.\n2) Unknown — Possible because stimulus modality is not explicitly described beyond the title.\nWinner: Visual.\n\nType:\n1) Attention — Directly supported by \"Spatial Attention Decoding\".\n2) Perception — Could apply if the focus were primarily scene perception rather than attentional selection.\nWinner: Attention (title explicitly foregrounds attention/decoding). \n\nConfidence justification: limited explicit metadata beyond the title and task name; strongest evidence is for Type (explicit term \"Spatial Attention\"), moderate for Modality (inferred from \"scene\"), and weakest for Pathology (no explicit \"healthy\" statement).","decision_summary_confidence_quotes_features":"Pathology evidence/features: \"Subjects: 12\" (no diagnosis mentioned); no other population descriptors.\nModality evidence/features: \"Complex Scene Analysis\" (implies visual scenes); task name \"overt\" consistent with overt spatial attention.\nType evidence/features: \"Spatial Attention Decoding\" explicitly names the construct."}},"computed_title":"Spatial Attention Decoding using fNIRS During Complex Scene Analysis","nchans_counts":[{"val":72,"count":27},{"val":84,"count":6}],"sfreq_counts":[{"val":50.0,"count":32},{"val":50.00000000000001,"count":1}],"stats_computed_at":"2026-04-22T23:16:00.308453+00:00","total_duration_s":null,"canonical_name":null,"name_confidence":0.74,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Ning2023"}}
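
A minimal parsing sketch in Python follows, for consumers of a record like the one above. It is written under stated assumptions: the response has been saved locally as ds004830.json (an illustrative filename, not part of any API), and only the standard library is used rather than any eegdash client. Field names are taken directly from the record. The sketch summarizes the per-recording channel and sampling-rate tallies (rounding collapses the 50.00000000000001 float artifact into 50.0) and reconstructs the S3 object URIs listed under "storage".

import json
from collections import Counter

# Load the API response shown above. The local filename is an assumption
# made for this sketch; any means of obtaining the JSON works the same way.
with open("ds004830.json") as f:
    record = json.load(f)["data"]

# Dominant channel count and sampling rate across recordings.
# Rounding sfreq values collapses float artifacts such as
# 50.00000000000001 into 50.0 before tallying.
nchans = Counter({c["val"]: c["count"] for c in record["nchans_counts"]})
sfreqs = Counter()
for c in record["sfreq_counts"]:
    sfreqs[round(c["val"], 6)] += c["count"]

print("dataset:", record["dataset_id"], "-", record["name"])
print("most common nchans:", nchans.most_common(1)[0])
print("most common sfreq (Hz):", sfreqs.most_common(1)[0])

# Reconstruct full S3 URIs for the description file and its sidecars,
# joining storage.base with raw_key and each entry in dep_keys.
storage = record["storage"]
for key in [storage["raw_key"], *storage["dep_keys"]]:
    print(f'{storage["base"]}/{key}')

Run against this record, the tallies come out as nchans 72 (27 recordings) and sfreq 50.0 Hz (33 recordings after rounding), and the URI loop prints s3://openneuro.org/ds004830/dataset_description.json followed by the CHANGES, README.txt, and participants.tsv objects.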