{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3376","dataset_id":"ds004657","associated_paper_doi":null,"authors":["Jason Metcalfe","Amar Marathe","Tony Johnson","Stephen Gordon","Jon Touryan","Kevin King"],"bids_version":"1.8.0","contact_info":["Kevin King"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds004657.v1.0.3","datatypes":["eeg"],"demographics":{"subjects_count":24,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds004657","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"f641ee7d54547bbb9b0415bc5c423c97166372a3e5c89a8a2e31f5d0788d0c39","license":"CC0","n_contributing_labs":null,"name":"Driving with Autonomous Aids","readme":"TX20 dataset\nVehicle survivability is critically important in today’s military. Survivability is critically impacted by the performance of human operators – especially as it degrades with various factors. Significant DoD investments have focused on developing and integrating autonomous technologies to mitigate the effects of human error. However, simply implementing autonomy without having a clear plan for integrating with human operators can lead to relatively poor performance and thus low user acceptance. Human trust in automation (TiA) is a well-documented determinant of acceptance and use, but more important than achieving a certain level of trust is to find an appropriate match between the capabilities of the technology and the operator's trust. Finding means to calibrate TiA to elicit the desired use of the autonomy is an important goal, but requires reliable quantitative indicators that can be continuously monitored. 
Considerable research on interpersonal trust has revealed measurable patterns of physiological change that correlate significantly with changing levels of subjective trust and trust-based decision making. This research was aimed at facilitating the eventual real-time management of TiA by developing initial psychophysiology-based metrics for monitoring and predicting continuous changes in trust and/or trust-related behaviors.\nComplete a semi-automated driving task involving lane maintenance, following distance from a lead vehicle, and collision avoidance (with oncoming traffic and frequently appearing pedestrians). Under certain conditions, an automated driving assistant was available and could be engaged and disengaged at the discretion of the driver. The automated assistant was capable of managing limited aspects of the driving task (maintenance of following distance alone or maintaining following distance and lane position), but was not capable of collision avoidance. Separate driver responses (button presses) were required to successfully avoid collisions with pedestrians.\nThis research was conducted to develop and validate methods for monitoring and predicting varying degrees of trust in automation (TiA) using both physiological and behavioral metrics characterizing real-time human-automation interactions. The overarching goal of this research was to develop and validate methods for measuring and drawing inferences about TiA, either directly or indirectly through correlated constructs. In particular, we examined operator trust in vehicle automation as it is reflected in changes observed in subjective reports as well as behavioral and physiological state variables during the execution of a shared human-autonomy driving task. The stated aims underlying this goal included:\nAim #1: To develop and experimentally validate metrics (dependent variables) that index changes in TiA. 
Rather than focusing on single-modality metrics, we will record and explore the patterns of correlation and covariance among a variety of psychophysiological and behavioral variables and focus particularly on metrics that predict decisions around sharing vehicle control with the autonomy in each condition. State measures will be derived from EEG, EOG (electrooculography), ECG, EDA, and gaze position tracking as well as the subject's vehicle control behaviors.\nAim #2: To develop an understanding of factors (independent variables and covariates) that influence the subject’s TiA. Whereas Aim #1 targets the identification of metrics, or groups of metrics, that reliably predict trust-based decision-making, here we seek to gain insight into which factors influence the likelihood and directionality of those same trust-based decisions. Such factors will include real-time tracking of variables such as task load, collision risk, and recent performance history or trending changes in success rate.\nSessions/Conditions\nSCPB: PractB\nSCMM: Manual driving\nSCFB: Full Bad autonomy\nSCFG: Full Good autonomy\nSCSB: Speed Bad autonomy\nSCSG: Speed Good autonomy.","recording_modality":["eeg"],"senior_author":"Kevin King","sessions":["SCFB","SCFG","SCMM","SCPB","SCSB","SCSG"],"size_bytes":46237302253,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["Drive"],"timestamps":{"digested_at":"2026-04-22T12:26:46.806039+00:00","dataset_created_at":"2023-08-05T02:11:45.552Z","dataset_modified_at":"2023-11-14T15:46:52.000Z"},"total_files":119,"storage":{"backend":"s3","base":"s3://openneuro.org/ds004657","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.tsv","task-Drive_events.json"]},"nemar_citation_count":1,"computed_title":"Driving with Autonomous 
Aids","nchans_counts":[{"val":74,"count":119}],"sfreq_counts":[{"val":1024.0,"count":111},{"val":8192.0,"count":8}],"stats_computed_at":"2026-04-22T23:16:00.308139+00:00","tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Decision-making"],"confidence":{"pathology":0.7,"modality":0.7,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot examples by construct are the decision-focused tasks: (1) “EEG: Three armed bandit gambling task” (Healthy / Visual / Affect) and (2) “EEG: Reinforcement Learning in Parkinson's” (Parkinson's / Visual / Decision-making). These examples show the catalog convention that when the primary research aim is choices under uncertainty/reward/trust (i.e., “decision making” is central), the Type tends to be labeled “Decision-making” rather than “Attention” or “Other”. For modality, several few-shot datasets with screen-based tasks are labeled “Visual”, supporting a Visual label for a driving simulation task dominated by visual scene input. 
For pathology, few-shot examples consistently use a clinical label only when the metadata explicitly recruits a diagnosed group (e.g., Parkinson’s, TBI); otherwise they use Healthy.","metadata_analysis":"Key metadata facts:\n1) Purpose/construct: The README repeatedly frames the study around trust and trust-based decisions: “Human trust in automation (TiA) is a well-documented determinant of acceptance and use” and the work targets “monitoring and predicting continuous changes in trust and/or trust-related behaviors.”\n2) Decision component: “an automated driving assistant was available and could be engaged and disengaged at the discretion of the driver” and the work focuses on “metrics that predict decisions around sharing vehicle control with the autonomy in each condition.”\n3) Task/paradigm: “Complete a semi-automated driving task involving lane maintenance, following distance… and collision avoidance… frequently appearing pedestrians.”\n4) Population: only “Subjects: 24” is given; no diagnosis/clinical recruitment terms appear.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: only “Subjects: 24” with no clinical descriptors.\n- Few-shot pattern suggests: absent explicit diagnosis/recruitment condition, label as Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: “driving task… lane maintenance… following distance… collision avoidance… appearing pedestrians” (a visually guided simulation), with no explicit auditory/tactile stimulus emphasis.\n- Few-shot pattern suggests: screen-based tasks are typically labeled Visual unless clearly multimodal (e.g., explicit auditory + visual oddball).\n- Alignment: ALIGN (inference-based).\n\nType:\n- Metadata says: focus on “trust in automation (TiA)” and explicitly “trust-based decision making” plus “predict decisions around sharing vehicle control with the autonomy”.\n- Few-shot pattern suggests: when decisions/policy selection are the primary aim 
(bandit/RL examples), Type maps to Decision-making.\n- Alignment: ALIGN.","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n- Candidate 1: Healthy — Evidence: no diagnosis/recruitment condition stated; only “Subjects: 24”. Matches few-shot convention that clinical labels require explicit diagnosis.\n- Candidate 2: Unknown — Would apply if population info were insufficient/ambiguous, but here absence of pathology focus supports Healthy as normative cohort.\n- Final: Healthy (alignment: yes).\n\nModality:\n- Candidate 1: Visual — Evidence: simulated driving with “lane maintenance… pedestrians” implies dominant visual scene processing.\n- Candidate 2: Multisensory — Could be argued for real driving (vestibular/proprioception), but metadata does not describe controlled multisensory stimuli (contrast with few-shot cross-modal oddball explicitly stating auditory+visual cues).\n- Final: Visual (alignment: yes; inference from task description).\n\nType:\n- Candidate 1: Decision-making — Evidence: “engaged and disengaged at the discretion of the driver” and goal to “predict decisions around sharing vehicle control”; also mentions “trust-based decision making.”\n- Candidate 2: Attention — Driving involves attention demands, but the stated primary scientific aim is trust calibration and choice to rely on automation.\n- Final: Decision-making (alignment: yes).\n\nConfidence notes:\n- Pathology confidence limited because there is no explicit statement like “healthy participants,” only absence of pathology terms.\n- Modality confidence limited because stimulus modality is implied by driving/pedestrian visuals rather than explicitly stated.\n- Type confidence higher due to multiple explicit decision/trust decision 
quotes."}},"total_duration_s":97939.0,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"168e011415deec9a","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"canonical_name":null,"name_confidence":0.55,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Metcalfe2023_Driving"}}