{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a331d","dataset_id":"ds004121","associated_paper_doi":null,"authors":["Jonathan Touryan (data and curation)","Greg Apker (data)","Brent Lance (data)","Scott Kerick (data)","Anthony Ries (data)","Justin Brooks (data)","Kaleb McDowell (data)","Tony Johnson (curation)","Kay Robbins (curation)"],"bids_version":"1.7.0","contact_info":["Kay Robbins","Jonathan Touryan"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds004121.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":21,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds004121","osf_url":null,"github_url":null,"paper_url":null},"funding":["This research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-10-0-0002."],"ingestion_fingerprint":"c55beae1fa3cf1eabd48322892e259b1df2b6077728c2f270d2a0de3bb2e950c","license":"CC0","n_contributing_labs":null,"name":"BCIT Mind Wandering","readme":"## BCIT Mind Wandering\n### Introduction\n**Overview:** Subjects in the Mind Wandering study performed a long-duration simulated driving task\nwith perturbations and audio stimuli in a visually complex environment.\nThe purpose of this effort was to supplement and extend the related driving research to collect\nprolonged time-on-task measurements of subjects performing a driving task in a simulated environment\nin order to assess fatigue-based performance through novel biomarkers. 
Similar to the Baseline Driving study,\nthe Mind Wandering study was intended to identify periods of driver fatigue via predictive algorithms formulated\nfrom the analysis of driver EEG data, in comparison to the objective performance measures,\nand in contrast with the (non-fatigued) Calibration driving session for the subject.\nMind Wandering extended the paradigm by adding different types of background audio\n(task relevant, non-task relevant, internal focus) and a vigilance task (identify police vehicles),\nin addition to increasing perturbation magnitude and frequency vs. baseline driving.\nFurther information is available on request from [cancta.net](https://cancta.net).\n### Methods\n**Subjects:** Volunteers from the local community recruited through advertisements.\n**Apparatus:** Driving simulator with steering wheel and brake / foot pedals (Real Time Technologies; Dearborn, MI);\nVideo Refresh Rate (VRR) = 900 Hz; Vehicle data log file Sampling Rate (SR) = 100 Hz;\nEEG (BioSemi 64 (+8) channel systems with 4 eye and 2 mastoid channels recorded; SR=2048 Hz);\nEye Tracking (Sensomotoric Instruments (SMI); REDEYE250).\n**Initial setup:** Upon arrival at the lab, subjects were given an introduction to the primary\nstudy for which they were recruited, provided informed consent, and provided demographic information.\nThis was followed by a practice session to acclimate the subject to the driving simulator.\nThe driving practice task lasted 10-15 min, until asymptotic performance in steering and\nspeed control was demonstrated and lack of motion sickness was reported.\nSubjects were then outfitted and prepped for eye tracking and EEG acquisition.\n**Task organization within the study:** Subjects always began recording sessions by performing a\nCalibration Driving task, a 15-minute drive in which the subject controlled only\nthe steering (speed was controlled by the simulator).\n**Mind wandering task details:** Subjects would perform Mind 
Wandering conditions\nA, B, and C, with the order counterbalanced across subjects.\nMind Wandering A was 30 minutes of continuous driving, with subjects responsible for\nsteering and maintaining speed, while task relevant audio (traffic safety) played in the background.\nSubjects were instructed to look for police vehicles and respond by pressing a button on the steering wheel.\nMind Wandering B and C were similar, with non-task relevant audio (e.g. a sports broadcast) in B\nand internal focus audio (a mindfulness breathing exercise) in C.\nBoth driving tasks were conducted on the same simulated long, straight road,\nwhich contained a mix of regular traffic and police vehicles.\nIn each case, the subject was instructed to stay within the boundaries of the right-most lane\nand to drive at the posted speed limits.\nThe vehicle was periodically subject to lateral perturbing forces,\nwhich could be applied to either side of the vehicle, pushing it out of the center\nof the lane; the subject was instructed to execute corrective steering actions to return\nthe vehicle to the center of the lane.\n**Independent variables:** Background Audio (task relevant vs. non-task relevant vs. 
internal focus).\n**Dependent variables:** Reaction times to perturbations, continuous performance based on vehicle log\n(steering wheel angle, lane position, heading error, etc.), reaction times to target vehicles (police),\nTask-Induced Fatigue Scale (TIFS), Karolinska Sleepiness Scale (KSS), Visual Analog Scale of Fatigue (VAS-F).\nNote: questionnaire data are available upon request from [cancta.net](https://cancta.net).\n**Additional data acquired:** Participant Enrollment Questionnaire, Subject Questionnaire for Current Session,\nSimulator Sickness Questionnaire.\n**Experimental Location:** Teledyne Corporation, Durham, NC.\n**Note:** This dataset has a corresponding dataset, BCIT Calibration Driving (ds004118), which contains the\n15-minute driving task performed prior to this one.","recording_modality":["eeg"],"senior_author":"Kay Robbins (curation)","sessions":["01"],"size_bytes":25671365743,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["DriveWithTaskAudio"],"timestamps":{"digested_at":"2026-04-22T12:26:08.895330+00:00","dataset_created_at":"2022-05-03T11:46:09.577Z","dataset_modified_at":"2022-05-03T23:43:05.000Z"},"total_files":60,"storage":{"backend":"s3","base":"s3://openneuro.org/ds004121","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv","task-DriveWithTaskAudio_eeg.json","task-DriveWithTaskAudio_events.json"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"98b5686132de129a","model":"openai/gpt-5.2","tagged_at":"2026-01-20T10:30:27.145140+00:00"},"tags":{"pathology":["Healthy"],"modality":["Multisensory"],"type":["Attention"],"confidence":{"pathology":0.7,"modality":0.8,"type":0.7},"reasoning":{"few_shot_analysis":"Most similar convention cues come from the Cross-modal Oddball Task few-shot example (Parkinson's; Modality=Multisensory) because it explicitly mixes visual and auditory cues in the same paradigm; this guides labeling this driving task with background 
audio and a visually complex simulation as Multisensory. For Type, the DPX cognitive control/attention TBI example demonstrates the convention that vigilance/cognitive control demands map to Attention rather than Motor, even when responses are button presses. No few-shot example directly covers driving fatigue, so the Type label is chosen by closest construct match (vigilance + time-on-task fatigue/mind wandering → Attention).","metadata_analysis":"Key task/stimulus facts from the README:\n1) Visual driving simulation: \"performed a long-duration simulated driving task ... in a visually complex environment\" and \"Both driving tasks were conducted on the same simulated long, straight road, that contained a mix of regular traffic and police vehicles.\"\n2) Auditory stimulation: \"Mind Wandering extended the paradigm by adding different types of background audio\" and \"Mind Wandering A was 30 minutes of continuous driving ... while task relevant audio (traffic safety) played in the background\" (also \"non-task relevant audio\" and \"internal focus audio\").\n3) Attention/vigilance + fatigue aim: \"identify periods of driver fatigue via predictive algorithms\" and \"a vigilance task (identify police vehicles)\" plus \"Subjects were instructed to look for police vehicles and respond by pressing a button\".\n4) Population recruitment: \"Subjects: Volunteers from the local community recruited through advertisements.\" (no clinical diagnosis described).","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Volunteers from the local community\" with no disorder/diagnosis recruitment described.\n- Few-shot pattern suggests: community volunteer cohorts without diagnoses are labeled Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"visually complex environment\" (visual driving scene) and \"background audio\" (auditory stimuli).\n- Few-shot pattern suggests: simultaneous visual+auditory paradigms 
map to Multisensory (per cross-modal oddball example).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: aim is \"assess fatigue-based performance\" / \"identify periods of driver fatigue\" and includes \"a vigilance task (identify police vehicles)\".\n- Few-shot pattern suggests: vigilance/cognitive control tasks map to Attention (e.g., DPX Cog Ctl labeled Attention).\n- Alignment: ALIGN (fatigue/mind wandering operationalized via vigilance/time-on-task attention).","decision_summary":"Top-2 candidates per category with head-to-head comparison:\n\nPathology:\n1) Healthy — Evidence: \"Volunteers from the local community\"; no mention of patients/diagnoses; study framed as performance/fatigue biomarkers rather than clinical recruitment.\n2) Unknown — Would apply if population health status were not inferable; however the community-volunteer framing strongly implies a normative cohort.\nSelected: Healthy. Confidence=0.7 because there is no explicit statement like \"healthy participants,\" but recruitment is clearly non-clinical.\n\nModality:\n1) Multisensory — Evidence: \"visually complex environment\" (visual driving) + \"background audio\" with multiple audio conditions; combined sensory inputs are central to the paradigm.\n2) Visual — Driving scene is continuous and dominant, but auditory manipulation is an explicit independent variable.\nSelected: Multisensory. Confidence=0.8 due to multiple explicit quotes indicating both visual and auditory stimuli.\n\nType:\n1) Attention — Evidence: explicit \"vigilance task (identify police vehicles)\"; time-on-task fatigue/mind wandering assessed during continuous task performance; reaction time/performance measures tied to vigilance and sustained attention.\n2) Other — Could be argued as human factors/fatigue biomarker study not cleanly fitting classic cognitive domains; however the operational task construct is sustained attention/vigilance.\nSelected: Attention. 
Confidence=0.7 based on clear vigilance wording but no single canonical cognitive-task label (e.g., oddball/n-back) provided."}},"nemar_citation_count":0,"computed_title":"BCIT Mind Wandering","nchans_counts":[{"val":74,"count":60}],"sfreq_counts":[{"val":1024.0,"count":60}],"stats_computed_at":"2026-04-22T23:16:00.307074+00:00","total_duration_s":null,"canonical_name":null,"name_confidence":0.98,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Touryan2022_BCIT_Mind"}}