{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3317","dataset_id":"ds004106","associated_paper_doi":null,"authors":["Jonathan Touryan (data and curation)","Brent Lance (data)","Scott Kerick (data)","Anthony Ries (data)","Kaleb McDowell (data)","Tony Johnson (curation)","Kay Robbins (curation)"],"bids_version":"1.7.0","contact_info":["Kay Robbins","Jonathan Touryan"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds004106.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":27,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds004106","osf_url":null,"github_url":null,"paper_url":null},"funding":["This research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-10-0-0002."],"ingestion_fingerprint":"a818aac2fd5ba177372ebc8cb725c90980f28633f1dab6cbfbdc71988d119bb4","license":"CC0","n_contributing_labs":null,"name":"BCIT Advanced Guard Duty","readme":"### Introduction\n**Overview:** The Advanced Guard Duty study was designed to measure sustained vigilance in realistic settings by having subjects verify information on replica ID badges. The task was performed in conjunction with two other tasks: a calibration driving task and a baseline driving task. The data collected for the two driving tasks are not included in this dataset. Another study (Basic Guard Duty), not included in this collection, had a similar set-up but a different experimental design and a different subject pool. In the Basic Guard Duty study the rate of ID presentation varied among tasks. In the Advanced Guard Duty study both the rate of ID presentation and the criteria for verification varied among blocks.
Further information is available on request from [cancta.net](https://cancta.net).\n### Methods\n**Subjects:** volunteers from the local community recruited through advertisements.\n**Apparatus:** Driving simulator with steering wheel and brake / foot pedals (Real Time Technologies; Dearborn, MI);\nVideo Refresh Rate (VRR) = 900 Hz; Vehicle data log file Sampling Rate (SR) = 100 Hz;\nEEG (BioSemi 256 (+8) channel systems with 4 eye and 2 mastoid channels recorded; SR=1024 Hz);\nEye Tracking (Sensomotoric Instruments (SMI); REDEYE250).\n**Initial setup:** Upon arrival at the lab, subjects were given an introduction to the primary study\nfor which they were recruited, provided informed consent, and supplied demographic information.\nThis was followed by a practice session to acclimate the subject to the driving simulator.\nThe driving practice task lasted 10-15 min, until asymptotic performance in steering and speed\ncontrol was demonstrated and lack of motion sickness was reported. Subjects were then outfitted\nand prepped for eye tracking and EEG acquisition.\n**Task organization:** Subjects always began recording sessions by performing a Calibration Driving task,\nwhich was a 15-minute drive in which the subject controlled only the steering (speed was controlled by the simulator).\nFollowing this, subjects would perform the Baseline Driving task and the Guard Duty task,\nwith the order of these two tasks counterbalanced across subjects.\nThis dataset contains only the Guard Duty task.\nThe Baseline Driving run was 60 minutes of driving, performed in 6 blocks of 10 minutes each,\nwith subjects responsible for speed and steering control.
The Calibration and Baseline driving\ntasks were conducted on the same simulated long, straight road in a visually sparse environment.\nThe subject was instructed to stay within the boundaries of the right-most lane, and to drive\nat the posted speed limits.\nThe vehicle was periodically subject to lateral perturbing forces, which could be applied to either\nside of the vehicle, pushing the vehicle out of the center of the lane; the subject was instructed\nto execute corrective steering actions to return the vehicle to the center of the lane.\n**Guard duty task details:** The guard duty task entailed a serial presentation of replica identification (ID) cards\n(750 x 450 pixels) paired with a reference image (300 x 400 pixels).\nThe replica ID cards had nine components or fields in addition to a common background.\nThese components were: photo, name, date of birth (DOB), date of issue, date of expiration, area access,\nID number, bar code, and watermark. The reference images consisted of color photographs of faces.\nBoth the ID photo and reference image were chosen from the Multi-PIE database\n(Gross, Matthews, Cohn, Kanade, & Baker, 2010). This database consists of color photographs\n(forward facing head shots) of individuals taken at different points in time.\nTherefore, while the ID photo and reference image were of the same individual,\nthe images were not identical (e.g., different hair style, different clothes, different lighting).\nThe task was divided into ten blocks of five minutes each.\nAt the beginning of each block, participants were instructed that they were guarding a restricted area\nthat required a particular letter designation on the ID card for access (e.g., area C access required).\nParticipants were asked to determine if the individual in the image, paired with the corresponding ID card,\nshould have access to their restricted area.
Some of the ID cards were valid and some were not\n(e.g., expiration date passed, incorrect access area, or photos did not match).\nParticipants were instructed to press either an *allow* or *deny* button for each image-ID pairing.\nThe two-alternative forced-choice response was self-paced with a maximum time limit of 20 s.\nIf the participants chose to deny access, they were subsequently asked to provide a reason.\nReasons for denied access were selected from a numerical list of five options:\n1: incorrect access, 2: expired ID, 3: suspicious DOB, 4: face mismatch, 5: no watermark.\nIf the participant did not respond within the allotted time, the computer forced a deny decision.\nThe restricted area (area A-E) assigned at the beginning of each block was randomly chosen without\nreplacement such that all participants completed two blocks guarding each of the five areas.\nTo maintain consistency across participants, expiration dates were automatically generated at\nthe beginning of the experiment to have a symmetrical distribution around the current date.\nThis distribution was such that the majority of IDs had expiration dates temporally close\nto the current date (i.e., in the near future or recent past).\nIn each block, the image-ID pairings were presented at one of six different stochastic queuing rates,\nranging from 1 to 25 per minute (1, 2.5, 10, 15, 20, and 25 per minute).\nThe queuing rate varied within each block according to a predefined profile.\nThe rate profile had randomly permuted epochs of each queuing rate.\nEach epoch lasted 30 s with approximately twice as many low rate epochs (1 and 2.5 image-IDs per minute) as high.\nThe rate profiles were shifted for each participant (Latin square design) so that each rate profile\nwas assigned to every block for at least two participants. The current rate was indicated through\na processing queue, on the extreme right-hand side of the display, notifying each participant how\nmany IDs were waiting to be checked.
For slow rates, most participants were able to process all IDs\nin their queue and had periods where they were waiting for the next ID (i.e., blank screen).\nFor fast rates, most participants were not able to process IDs as quickly as they were added to the queue,\nincreasing the size of the processing queue. IDs in the queue persisted until they were processed by the\nparticipant or the block ended.\nAt the beginning of the experiment, participants were instructed to correctly process each image-ID while\nkeeping the queue as short as possible. The stochastic queuing rate was used to increase task realism,\nincorporating periods of high and low task demand; the dynamic rate itself was not explicitly considered\nan independent factor in the present study.\nAll blocks contained the same ratio of valid and invalid image-ID pairings (82% valid, 18% invalid).\nThe majority of invalid IDs were due to incorrect access (6%) and expiration (6%), whereas the rest were\ninvalid for the other reasons: suspicious DOB (2%), face mismatch (2%), no watermark (2%).\nThis second group of invalid IDs served as catch trials to verify that participants were examining all fields of the ID.\n**Independent variables:** ID presentation rate and verification criteria (varied by block).\n**Dependent variables:** ID disposition accuracy and processing times, Task-Induced Fatigue Scale (TIFS),\nKarolinska Sleepiness Scale (KSS), Visual Analog Scale of Fatigue (VAS-F).\nNote: questionnaire data are available upon request from [cancta.net](https://cancta.net).\n**Additional data acquired:** Participant Enrollment Questionnaire, Subject Questionnaire for Current Session,\nSimulator Sickness Questionnaire.\n**Experimental Location:** Science Applications International Corporation, Louisville, CO.","recording_modality":["eeg"],"senior_author":"Kay Robbins 
(curation)","sessions":["01"],"size_bytes":72590770632,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["GuardDuty"],"timestamps":{"digested_at":"2026-04-22T12:26:07.869809+00:00","dataset_created_at":"2022-04-21T22:44:56.688Z","dataset_modified_at":"2022-04-29T19:16:16.000Z"},"total_files":29,"storage":{"backend":"s3","base":"s3://openneuro.org/ds004106","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv","task-GuardDuty_eeg.json","task-GuardDuty_events.json"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"f843f3b7d99c58e7","model":"openai/gpt-5.2","tagged_at":"2026-01-20T10:28:45.252135+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.7,"modality":0.85,"type":0.85},"reasoning":{"few_shot_analysis":"Closest convention match is the few-shot example \"EEG: DPX Cog Ctl Task in Acute Mild TBI\" which is labeled Type=Attention with a visually presented cognitive control/monitoring task and event structure involving cues/probes and responses. Although our dataset is not DPX, it similarly targets sustained vigilance and monitoring over time with button responses to visual stimuli. This supports mapping a vigilance/guard-duty monitoring paradigm to Type=Attention and Modality=Visual. For Pathology, several few-shots show that when participants are patients (e.g., TBI, Parkinson's, Dementia), Pathology reflects the recruited clinical group; here the metadata indicates community volunteers without a diagnosis, aligning with the convention to label as Healthy.","metadata_analysis":"Key task and population facts from the README: (1) Purpose/type: \"The Advanced Guard Duty study was designed to measure sustained vigilance in realistic settings\" and the task is a continuous monitoring/verification task over blocks. (2) Visual stimulus modality: \"serial presentation of replica identification (ID) cards ... 
paired with a reference image\" and \"The reference images consisted of color photographs of faces.\" (3) Population: \"Subjects: volunteers from the local community recruited through advertisements.\" (4) Response/decision component (secondary evidence): \"Participants were instructed to press either an allow or deny button for each image-ID pairing\" and if denied \"asked to provide a reason\".","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says participants are \"volunteers from the local community\" with no stated diagnosis; few-shot convention suggests labeling non-clinical community samples as Healthy. ALIGN.\nModality: Metadata says stimuli are \"replica identification (ID) cards\" and \"reference images\" with \"color photographs of faces\"; few-shot convention maps image-based tasks to Visual. ALIGN.\nType: Metadata explicitly states the study was designed to \"measure sustained vigilance\" and describes a prolonged monitoring/verification task across blocks; few-shot convention maps vigilance/cognitive control monitoring paradigms to Attention rather than Perception/Motor. ALIGN.","decision_summary":"Pathology top-2: (1) Healthy — supported by \"volunteers from the local community recruited through advertisements\" and no clinical recruitment described; (2) Unknown — possible because health screening is not explicitly stated. Winner: Healthy (community volunteer sample strongly implies non-clinical recruitment). Evidence alignment: aligns with few-shot conventions.\nModality top-2: (1) Visual — supported by \"serial presentation of replica identification (ID) cards ... paired with a reference image\" and \"reference images consisted of color photographs of faces\"; (2) Multisensory — weak possibility because driving simulator context is mentioned, but the included dataset is only the guard duty task and stimuli described are visual. Winner: Visual. 
Evidence alignment: aligns with few-shot conventions.\nType top-2: (1) Attention — supported by \"designed to measure sustained vigilance\" plus long, block-based monitoring with time pressure; (2) Decision-making — because participants choose allow/deny and reasons, but the stated research aim is vigilance rather than value/policy decisions. Winner: Attention. Evidence alignment: aligns with few-shot conventions.\nConfidence justification: Pathology has only indirect evidence (no explicit 'healthy' screening) so lower; Modality and Type have direct explicit phrases in metadata."}},"nemar_citation_count":0,"computed_title":"BCIT Advanced Guard Duty","nchans_counts":[{"val":262,"count":29}],"sfreq_counts":[{"val":1024.0,"count":29}],"stats_computed_at":"2026-04-22T23:16:00.306995+00:00","total_duration_s":null,"canonical_name":null,"name_confidence":0.22,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Touryan2022"}}