{
  "success": true,
  "database": "eegdash",
  "data": {
    "_id": "6953f4249276ef1ee07a3378",
    "dataset_id": "ds004661",
    "associated_paper_doi": null,
    "authors": ["Tony Johnson", "Stephen Gordon", "Jon Touryan", "Kevin King"],
    "bids_version": "1.8.0",
    "contact_info": ["Kevin King"],
    "contributing_labs": null,
    "data_processed": false,
    "dataset_doi": "doi:10.18112/openneuro.ds004661.v1.1.0",
    "datatypes": ["eeg"],
    "demographics": {
      "subjects_count": 17,
      "ages": [],
      "age_min": null,
      "age_max": null,
      "age_mean": null,
      "species": null,
      "sex_distribution": null,
      "handedness_distribution": null
    },
    "experimental_modalities": null,
    "external_links": {
      "source_url": "https://openneuro.org/datasets/ds004661",
      "osf_url": null,
      "github_url": null,
      "paper_url": null
    },
    "funding": [],
    "ingestion_fingerprint": "4a2537721912493ad55b4df84a9422eca59758e1635f15289ed92c5a88665b9d",
    "license": "CC0",
    "n_contributing_labs": null,
    "name": "ANDI",
    "readme": "Participants (N=17, all males) with an average age of 32.8 years performed a guided visual search task in parallel with a second binaurally presented auditory task (Ries et al., 2016). EEG data from each participant were recorded using a 64-channel BioSemi ActiveTwo system digitized at 512 Hz. Four external electrodes were used to record bipolar horizontal and vertical EOG signals, and a single external electrode was placed on each of the left and right mastoids to provide the reference signals. Fourteen participants were included in the original study, with three additional participants later added, resulting in 17 participants.\nThe visual search task for this experiment required participants to follow a red annulus around the screen and press a button if the annulus stopped at a prespecified target. The auditory task for this experiment was an N-back matching task in which participants listened to a string of numbers presented at approximately 2-second intervals and were required to indicate whether the current number matched a previously presented number. For N=0, this would be the number immediately prior. For N=1, this would be the number one level before that, and so on. In the example string “1”, “1”, “2”, “1”, “3”, “2”, the second “1” should generate a match in the N=0 condition, the third “1” should generate a match in the N=1 condition, and the second “2” should generate a match in the N=2 condition. The task was composed of a baseline condition in which participants were presented with both visual and auditory stimuli but were instructed to ignore the auditory component. Next were three dual-task conditions with N-back levels of N=0, N=1, and N=2.",
    "recording_modality": ["eeg"],
    "senior_author": "Kevin King",
    "sessions": [],
    "size_bytes": 1505576944,
    "source": "openneuro",
    "study_design": null,
    "study_domain": null,
    "tasks": ["nback"],
    "timestamps": {
      "digested_at": "2026-04-22T12:26:47.423512+00:00",
      "dataset_created_at": "2023-08-05T16:52:47.866Z",
      "dataset_modified_at": "2024-09-06T14:53:36.000Z"
    },
    "total_files": 17,
    "storage": {
      "backend": "s3",
      "base": "s3://openneuro.org/ds004661",
      "raw_key": "dataset_description.json",
      "dep_keys": ["CHANGES", "README", "participants.tsv", "task-nback_events.json"]
    },
    "nemar_citation_count": 0,
    "computed_title": "ANDI",
    "nchans_counts": [{"val": 64, "count": 17}],
    "sfreq_counts": [{"val": 128.0, "count": 17}],
    "stats_computed_at": "2026-04-22T23:16:00.308163+00:00",
    "tags": {
      "pathology": ["Healthy"],
      "modality": ["Multisensory"],
      "type": ["Attention"],
      "confidence": {"pathology": 0.7, "modality": 0.9, "type": 0.7},
      "reasoning": {
        "few_shot_analysis": "Closest few-shot match on stimulus structure is the Cross-modal Oddball Task example (Parkinson's; Modality=Multisensory): it combines a visual cue with an auditory cue presented in the same trial, and it is labeled as Multisensory based on simultaneous visual+auditory inputs. ANDI similarly uses concurrent visual search plus binaural auditory number streams, so it follows the same modality convention (visual+auditory => Multisensory). For Type, the DPX Cog Ctl Task example (TBI; Type=Attention) reflects a task designed to tax cognitive control/attention, while the digit span example (Healthy; Type=Memory) reflects an explicit working-memory span paradigm. ANDI contains both a visual search (attention) component and an N-back (working memory) component; per the few-shot conventions, that makes Attention vs Memory the main competition for Type.",
        "metadata_analysis": "Key population/task facts from the README:\n- Population appears non-clinical: \"Participants (N=17, all males) with an average age of 32.8 years performed...\"\n- Explicitly dual-task, visual+auditory: \"performed a guided visual search task in parallel with a second binaurally presented auditory task\"\n- Visual attention/search component: \"follow a red annulus around the screen and press a button if the annulus stopped at a prespecified target\"\n- Auditory working-memory component: \"The auditory task for this experiment was an N-back matching task in which participants listened to a string of numbers...\"\n- Workload/divided-attention manipulation: \"three dual-task conditions with N-back levels of N=0, N=1, and N=2\" and \"baseline condition... instructed to ignore the auditory component\"",
        "paper_abstract_analysis": "No useful paper information.",
        "evidence_alignment_check": "Pathology:\n- Metadata says: only demographics and tasks; no diagnosis mentioned (e.g., \"Participants (N=17, all males)...\").\n- Few-shot pattern suggests: when no clinical recruitment is described, label as Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: concurrent visual and auditory stimuli (\"visual search task in parallel with... auditory task\"; \"binaurally presented auditory task\").\n- Few-shot pattern suggests: combined visual+auditory inputs => Multisensory (as in Cross-modal Oddball labeled Multisensory).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: includes a visual search detection/monitoring component (\"guided visual search\"; \"press a button if the annulus stopped at a prespecified target\") and an auditory N-back component (\"N-back matching task\"). Also explicitly dual-task vs ignore-auditory baseline (\"instructed to ignore the auditory component\"), indicating divided attention/workload manipulation.\n- Few-shot pattern suggests: n-back/digit-span-like tasks map to Memory (digit span example), while tasks emphasizing control/monitoring/divided attention map to Attention (DPX example).\n- Alignment: PARTIAL (mixed signals: both Memory and Attention are plausible). No explicit statement of the primary research construct is given, so choose the stronger overall framing: dual-task/divided-attention demands.",
        "decision_summary": "Top-2 candidates per category:\n\nPathology:\n1) Healthy (selected) — Evidence: no clinical recruitment/diagnosis; only \"Participants (N=17, all males)\".\n2) Unknown — Would apply if recruitment status were unclear, but metadata implies a standard adult participant sample with no disorder terms.\nAlignment status: aligns with few-shot convention (non-clinical => Healthy).\n\nModality:\n1) Multisensory (selected) — Evidence: \"visual search task in parallel with... auditory task\" and \"binaurally presented auditory task\"; both visual and auditory stimuli presented.\n2) Visual — would fit if auditory were absent/only responses, but auditory stream is an explicit stimulus/task.\nAlignment status: aligns with Cross-modal Oddball few-shot convention (visual+auditory => Multisensory).\n\nType:\n1) Attention (selected) — Evidence emphasizes visual search and divided-attention manipulation: \"guided visual search task\"; baseline where subjects \"ignore the auditory component\"; \"three dual-task conditions\" indicating attentional load sharing.\n2) Memory — Evidence: \"N-back matching task\" with N=0/1/2 levels.\nAlignment status: mixed; selection favors the dataset-level framing as dual-task/visual-search with auditory load manipulation (Attention) rather than a pure working-memory study.\n\nConfidence justification (quotes/features):\n- Pathology 0.7: supported mainly by absence of diagnosis plus demographic-only description (\"Participants (N=17, all males)\").\n- Modality 0.9: multiple explicit modality quotes (\"visual search\", \"binaurally presented auditory task\", \"presented with both visual and auditory stimuli\") plus strong few-shot analog to Multisensory.\n- Type 0.7: explicit presence of both attention (visual search; ignore vs dual-task) and memory (N-back) features; decision based on stronger overall dual-task/divided-attention framing rather than a single unambiguous statement of study aim."
      }
    },
    "total_duration_s": 36493.53125,
    "tagger_meta": {
      "config_hash": "3557b68bca409f28",
      "metadata_hash": "30707ea90c12ae22",
      "model": "openai/gpt-5.2",
      "tagged_at": "2026-04-07T09:32:40.872789+00:00"
    },
    "canonical_name": null,
    "name_confidence": 0.62,
    "name_meta": {
      "suggested_at": "2026-04-14T10:18:35.343Z",
      "model": "openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"
    },
    "name_source": "canonical",
    "author_year": "Johnson2023_ANDI"
  }
}
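The N-back matching rule described in the record's README can be sketched as a simple offset comparison. Note the README's own convention: N=0 compares against the immediately prior number, N=1 against the one before that, so the comparison offset is N + 1. The function name `nback_matches` and the variable `stream` are illustrative, not part of the dataset:

```python
def nback_matches(seq, n):
    """Return indices where the README's matching rule fires.

    Per the README, N=0 compares to the immediately prior number and
    N=1 to the one before that, i.e. a lookback offset of n + 1.
    """
    offset = n + 1
    return [i for i in range(offset, len(seq)) if seq[i] == seq[i - offset]]

# The README's example string "1", "1", "2", "1", "3", "2":
stream = [1, 1, 2, 1, 3, 2]
print(nback_matches(stream, 0))  # [1] -> the second "1"
print(nback_matches(stream, 1))  # [3] -> the third "1"
print(nback_matches(stream, 2))  # includes index 5 -> the second "2"
```

This reproduces the three matches the README calls out; at N=2 the rule also fires for the fourth number (the "1" three positions after the first "1"), a match the example text does not mention.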
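A record like this is straightforward to consume programmatically. The sketch below, using a trimmed stand-in for the full response, shows how the `{val, count}` summary pairs and the `storage` keys might be read; joining `base` with `dep_keys` to form S3 URIs is an assumption about the storage layout, not documented behavior of the service:

```python
import json

# Trimmed stand-in for the full response above (fields abridged).
response_text = '''{"success": true, "data": {
  "storage": {"base": "s3://openneuro.org/ds004661",
              "dep_keys": ["CHANGES", "README", "participants.tsv", "task-nback_events.json"]},
  "nchans_counts": [{"val": 64, "count": 17}],
  "sfreq_counts": [{"val": 128.0, "count": 17}]}}'''

record = json.loads(response_text)["data"]

# Each {val, count} pair summarises one property value across recordings:
# here, all 17 files have 64 channels at a stored rate of 128 Hz.
nchans = {e["val"]: e["count"] for e in record["nchans_counts"]}  # {64: 17}
sfreqs = {e["val"]: e["count"] for e in record["sfreq_counts"]}   # {128.0: 17}

# Assumed layout: sidecar files addressed relative to the S3 base.
base = record["storage"]["base"]
sidecars = [f"{base}/{key}" for key in record["storage"]["dep_keys"]]
print(sidecars[0])  # s3://openneuro.org/ds004661/CHANGES
```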