{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3316","dataset_id":"ds004105","associated_paper_doi":null,"authors":["Javier Garcia (data)","Justin Brooks (data)","Scott Kerick (data)","Tony Johnson (data and curation)","Tim Mullen (data)","Jean Vettel (data)","Jonathan Touryan (curation)","Kay Robbins (curation)"],"bids_version":"1.7.0","contact_info":["Kay Robbins","Jonathan Touryan"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds004105.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":17,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds004105","osf_url":null,"github_url":null,"paper_url":null},"funding":["This research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-10-0-0002."],"ingestion_fingerprint":"a91b8b75c00093a31b0daee545e75b1d01a5e8bcb8e5f04c51f69d8e2843f90c","license":"CC0","n_contributing_labs":null,"name":"BCIT Auditory Cueing","readme":"### Introduction\n**Overview:** Subjects in the Auditory Cueing study performed a long-duration simulated driving task with\nperturbations and audio stimuli in a visually sparse environment.\nThe purpose of this effort was to supplement and extend the related driving research to collect\nprolonged time-on-task measurements of subjects performing a driving task in a simulated environment\nin order to assess fatigue-based performance through novel biomarkers.\nSimilar to the Baseline Driving study, the Auditory Cueing study was intended to identify periods\nof driver fatigue via predictive algorithms formulated from the analysis of driver EEG data,\nin comparison to the objective performance measures, and in contrast with the (non-fatigued)\nCalibration driving session for the subject. 
Auditory Cueing extended the Baseline Driving\nparadigm by adding predictive and non-predictive (random) pre-perturbation onset audio cues and\nincreasing the frequency and magnitude of perturbation events vs. baseline driving.\nFurther information is available on request from [cancta.net](https://cancta.net).\n### Methods\n**Subjects:** Volunteers from the local community recruited through advertisements.\n**Apparatus:** Driving simulator with steering wheel and brake/foot pedals (Real Time Technologies; Dearborn, MI);\nVideo Refresh Rate (VRR) = 900 Hz; Vehicle data log file Sampling Rate (SR) = 100 Hz;\nEEG (BioSemi 64 (+8) channel systems with 4 eye and 2 mastoid channels recorded; SR = 2048 Hz);\nEye Tracking (Sensomotoric Instruments (SMI); REDEYE250).\n**Initial setup:** Upon arrival at the lab, subjects were given an introduction to the\nprimary study for which they were recruited, provided informed consent, and provided demographic information.\nThis was followed by a practice session to acclimate the subject to the driving simulator.\nThe driving practice task lasted 10-15 min, until asymptotic performance in steering and speed control\nwas demonstrated and no motion sickness was reported.\nSubjects were then outfitted and prepped for eye tracking and EEG acquisition.\n**Task organization within the study:** Subjects always began recording sessions by performing\na Calibration Driving task, a 15-minute drive in which the subject controlled only the steering\n(speed was controlled by the simulator). Following this, subjects performed Auditory Cueing\ncondition A and Auditory Cueing condition B, with the order counterbalanced across subjects. 
This dataset contains only the Auditory Cueing portion of the study.\n**Auditory cueing task details:** Auditory Cueing A was 45 minutes of continuous driving,\nwith subjects responsible for steering and maintaining speed, while a tone was played periodically at random.\nAuditory Cueing B was similar, but the tones were correlated with the onset of a perturbation event.\nBoth driving tasks were conducted on the same long, straight simulated road.\nIn each case, the subject was instructed to stay within the boundaries of the right-most lane\nand to drive at the posted speed limits.\nThe vehicle was periodically subjected to lateral perturbing forces, applied to\neither side of the vehicle, that pushed it out of the center of the lane;\nthe subject was instructed to execute corrective steering actions to return the vehicle to the center of the lane.\n**Independent variables:** Auditory Cue (randomly presented before perturbation vs. predictive)\n**Dependent variables:** Reaction times to perturbations, continuous performance based on the\nvehicle log (steering wheel angle, lane position, heading error, etc.),\nreaction times to target vehicles (police), Task-Induced Fatigue Scale (TIFS),\nKarolinska Sleepiness Scale (KSS), Visual Analog Scale of Fatigue (VAS-F).\nNote: Questionnaire data are available upon request from [cancta.net](https://cancta.net).\n**Additional data acquired:** Participant Enrollment Questionnaire, Subject Questionnaire\nfor Current Session, Simulator Sickness Questionnaire.\n**Experimental Location:** Teledyne Corporation, Durham, NC.\n**Note:** This dataset has a corresponding dataset, BCIT Calibration Driving (ds004118), which contains the\n15-minute driving task performed prior to this one.","recording_modality":["eeg"],"senior_author":"Kay Robbins 
(curation)","sessions":["01"],"size_bytes":21876513335,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["DriveRandomSound"],"timestamps":{"digested_at":"2026-04-22T12:26:07.654371+00:00","dataset_created_at":"2022-04-21T18:46:23.669Z","dataset_modified_at":"2022-05-04T23:03:34.000Z"},"total_files":34,"storage":{"backend":"s3","base":"s3://openneuro.org/ds004105","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv","task-DriveRandomSound_eeg.json","task-DriveRandomSound_events.json"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"97f4eaee975f857b","model":"openai/gpt-5.2","tagged_at":"2026-01-20T10:28:21.806415+00:00"},"tags":{"pathology":["Healthy"],"modality":["Multisensory"],"type":["Attention"],"confidence":{"pathology":0.65,"modality":0.8,"type":0.75},"reasoning":{"few_shot_analysis":"For Modality conventions, the few-shot example “Cross-modal Oddball Task” is labeled Multisensory because it explicitly combines “a visual pre-cue and an auditory pre-cue, which occurred at the same time”. This guides that when both auditory and visual stimulus streams are integral to the task context, Multisensory is appropriate. For Type conventions, the TBI DPX cognitive control dataset is labeled Attention in a context of performance monitoring during a demanding task, which is similar in intent (sustained task performance and vigilance/fatigue monitoring) even though the paradigms differ.","metadata_analysis":"Key task/stimulus facts from the README include: (1) “performed a long-duration simulated driving task with perturbations and audio stimuli in a visually sparse environment” (explicitly indicates both auditory and visual stimulus channels). (2) “Auditory Cueing extended the Baseline Driving paradigm by adding predictive and non-predictive (random) pre-perturbation onset audio cues” (auditory cues are a primary manipulation). 
(3) Purpose/construct: “collect prolonged time-on-task measurements… in order to assess fatigue-based performance through novel biomarkers” and “identify periods of driver fatigue via predictive algorithms… from the analysis of driver EEG data” (fatigue/vigilance construct). (4) Population description: “Volunteers from the local community recruited through advertisements.”","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says “Volunteers from the local community recruited through advertisements.” (no diagnosis; implies normative participants). Few-shot patterns: many normative volunteer task datasets map to Healthy. ALIGN (no conflicting clinical recruitment stated).\n\nModality: Metadata says “audio stimuli” and “pre-perturbation onset audio cues” but also “simulated driving… in a visually sparse environment” (visual scene is part of the task). Few-shot pattern suggests Multisensory when both auditory and visual stimuli are integral (as in Cross-modal Oddball). ALIGN (both channels present; no conflict).\n\nType: Metadata says purpose is “assess fatigue-based performance” and “identify periods of driver fatigue” during “long-duration… driving task” with performance/RT measures. Few-shot pattern suggests Attention for sustained performance/cognitive control monitoring tasks (e.g., DPX Cog Ctl labeled Attention). ALIGN (fatigue/vigilance fits sustained attention rather than purely Motor mechanics).","decision_summary":"Pathology top-2: (A) Healthy — supported by “Volunteers from the local community recruited through advertisements.” and no clinical inclusion criteria; (B) Unknown — because the README does not explicitly say “healthy” or “controls”. Winner: Healthy. 
Confidence reflects lack of an explicit ‘healthy’ statement.\n\nModality top-2: (A) Multisensory — supported by “driving task… in a visually sparse environment” (visual) plus “audio stimuli” / “audio cues” (auditory); consistent with the cross-modal few-shot convention. (B) Auditory — because the manipulation is specifically auditory cueing (“pre-perturbation onset audio cues”). Winner: Multisensory because the task necessarily includes continuous visual driving input alongside auditory cues. \n\nType top-2: (A) Attention — supported by “long-duration” time-on-task with goal to “assess fatigue-based performance” and “identify periods of driver fatigue”, which aligns with vigilance/sustained attention constructs; (B) Motor — because driving involves continuous steering/braking. Winner: Attention because the stated research purpose is fatigue/vigilance biomarker identification, not motor control per se."}},"nemar_citation_count":0,"computed_title":"BCIT Auditory Cueing","nchans_counts":[{"val":74,"count":34}],"sfreq_counts":[{"val":1024.0,"count":34}],"stats_computed_at":"2026-04-22T23:16:00.306978+00:00","total_duration_s":null,"canonical_name":null,"name_confidence":0.38,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Garcia2022"}}