{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a331b","dataset_id":"ds004119","associated_paper_doi":null,"authors":["Jonathan Touryan (data and curation)","Brent Lance (data)","Scott Kerick (data)","Anthony Ries (data)","Kaleb McDowell (data)","Tony Johnson (curation)","Kay Robbins (curation)"],"bids_version":"1.7.0","contact_info":["Kay Robbins","Jonathan Touryan"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds004119.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":21,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds004119","osf_url":null,"github_url":null,"paper_url":null},"funding":["This research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-10-0-0002."],"ingestion_fingerprint":"c43b0649c429dcaeefa9ce1e3363f52c65150fa46de877d77a1b08f430d1efa1","license":"CC0","n_contributing_labs":null,"name":"BCIT Basic Guard Duty","readme":"## BCIT Basic Guard Duty\n### Introduction\n**Overview:** The Basic Guard Duty study was designed to measure sustained vigilance in realistic settings\nby having subjects verify information on replica ID badges.\nThe task was performed in conjunction with two other tasks a calibration driving task and a baseline driving task.\nThe data collected for the two driving tasks is not included in this dataset.\nAnother study (Advanced Guard Duty), which included a similar set-up but a different experimental design\nand a different subject pool, is not included in this dataset. In the Basic Guard Duty study the rate of\nID presentation varied among tasks. In the Advanced Guard Duty study both the rate of ID presentation\nand the criteria for verification varied among blocks. Further information is available on request\nfrom [cancta.net](https://cancta.net).\n### Methods\n**Subjects:** Volunteers from the local community recruited through advertisements.\n**Apparatus:**  Driving simulator with steering wheel and brake / foot pedals (Real Time Technologies; Dearborn, MI);\nVideo Refresh Rate (VRR) = 900 Hz; Vehicle data log file Sampling Rate (SR) = 100 Hz);\nEEG (BioSemi 256 (+8) channel systems with 4 eye and 2 mastoid channels recorded; SR=1024 Hz);\nEye Tracking (Sensomotoric Instruments (SMI); REDEYE250).\n**Initial setup:** Upon arrival to the lab, subjects were given an introduction to the primary study\nfor which they were recruited and provided informed consent and provided demographics information.\nThis was followed by a practice session, to acclimate the subject to the driving simulator.\nThe driving practice task lasted 10-15 min, until asymptotic performance in steering and speed control\nwas demonstrated and lack of motion sickness was reported. 
Subjects were then outfitted and prepped\nfor eye tracking and EEG acquisition.\n**Task organization:** Subjects always began recording sessions by performing a Calibration Driving task,\na 15-minute drive in which the subject controlled only the steering (speed was controlled by the simulator).\nFollowing this, subjects performed the Baseline Driving task and the Guard Duty task,\nwith the order of the two counterbalanced across subjects.\nThe Baseline Driving and Calibration Driving tasks are not included in this dataset.\n**Guard duty task details:** The guard duty task entailed a serial presentation of replica\nidentification (ID) cards (750 x 450 pixels) paired with a reference image (300 x 400 pixels).\nThe replica ID cards had nine components or fields in addition to a common background.\nThese components were: photo, name, date of birth (DOB), date of issue, date of expiration,\narea access, ID number, bar code, and watermark. The reference images consisted of color photographs of faces.\nBoth the ID photo and reference image were chosen from the Multi-PIE database\n(Gross, Matthews, Cohn, Kanade, & Baker, 2010). This database consists of color photographs (forward-facing head shots) of individuals taken at different points in time. Therefore, while the ID photo and reference image were of the same individual, the images were not identical (e.g., different hair style, different clothes, different lighting). The task was divided into ten blocks of five minutes each.\nAt the beginning of each block, participants were instructed that they were guarding a restricted area\nthat required a particular letter designation on the ID card for access (e.g., area C access required).\nParticipants were asked to determine if the individual in the image, paired with the corresponding ID card,\nshould have access to their restricted area. 
Some of the ID cards were valid and some were not\n(e.g., expiration date passed, incorrect access area, or photos did not match).\nParticipants were instructed to press either an *allow* or *deny* button for each image-ID pairing.\nThe two-alternative forced-choice response was self-paced with a maximum time limit of 20 s.\nIf the participant chose to deny access, they were subsequently asked to provide a reason.\nReasons for denied access were selected from a numerical list of five options:\n1: incorrect access, 2: expired ID, 3: suspicious DOB, 4: face mismatch, 5: no watermark.\nIf the participant did not respond within the allotted time, the computer forced a deny decision.\nThe restricted area (A-E) assigned at the beginning of each block was randomly chosen without\nreplacement such that all participants completed two blocks guarding each of the five areas.\nTo maintain consistency across participants, expiration dates were automatically generated\nat the beginning of the experiment to have a symmetrical distribution around the current date.\nThis distribution was such that the majority of IDs had expiration dates temporally close to\nthe current date (i.e., in the near future or recent past).\nIn each block, the image-ID pairings were presented at one of six different stochastic queuing rates,\nranging from 1 to 25 per minute (1, 2.5, 10, 15, 20, and 25 per minute).\nThe queuing rate varied within each block according to a predefined profile,\nconsisting of randomly permuted epochs of each queuing rate.\nEach epoch lasted 30 s, with approximately twice as many low-rate epochs (1 and 2.5 image-IDs per minute) as high-rate epochs.\nThe rate profiles were shifted for each participant (Latin square design) so that each rate profile\nwas assigned to every block for at least two participants. The current rate was indicated through\na processing queue on the extreme right-hand side of the display, showing each participant\nhow many IDs were waiting to be checked. For slow rates, most participants were able to process\nall IDs in their queue and had periods where they were waiting for the next ID (i.e., blank screen).\nFor fast rates, most participants were not able to process IDs as quickly as they were added to the queue,\nincreasing the size of the processing queue. IDs in the queue persisted until they were processed by\nthe participant or the block ended. 
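\nFor illustration, the queuing-rate scheme above could be sketched in Python as follows. This is a minimal sketch: the tiling of 30 s epochs into each 5-minute block, the exact 2:1 low-to-high epoch weighting, and the cyclic profile shift are assumptions, as the description above does not give exact epoch counts or the shift scheme.\n```python\nimport random\nfrom itertools import cycle, islice\n\n# Queuing rates (image-IDs per minute) listed above.\nRATES = [1, 2.5, 10, 15, 20, 25]\nLOW_RATES = {1, 2.5}            # low-rate epochs occur roughly twice as often\nEPOCH_S, BLOCK_S = 30, 5 * 60   # 30 s epochs, 5-minute blocks\n\ndef make_rate_profile(rng: random.Random) -> list[float]:\n    # Weight low rates 2:1 over high rates, tile epochs to fill the block,\n    # then randomly permute them (assumed counts; not specified above).\n    base = [r for r in RATES for _ in range(2 if r in LOW_RATES else 1)]\n    profile = list(islice(cycle(base), BLOCK_S // EPOCH_S))\n    rng.shuffle(profile)\n    return profile\n\ndef shift_profiles(profiles: list[list[float]], participant: int) -> list[list[float]]:\n    # Cyclic (Latin-square-style) reassignment of the ten block profiles\n    # per participant, so each profile is paired with every block position.\n    k = participant % len(profiles)\n    return profiles[k:] + profiles[:k]\n\n# Example: ten block profiles, then the shifted assignment for participant 3.\nprofiles = [make_rate_profile(random.Random(b)) for b in range(10)]\nshifted = shift_profiles(profiles, participant=3)\n```\n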
At the beginning of the experiment, participants were instructed\nto correctly process each image-ID while keeping the queue as short as possible.\nWhereas the stochastic queuing rate was used to increase task realism, incorporating periods of high\nand low task demand, the dynamic rate itself was not explicitly considered an independent factor in the present study.\nAll blocks contained the same ratio of valid and invalid image-ID pairings (82% valid, 18% invalid).\nThe majority of invalid IDs were due to incorrect access (6%) and expiration (6%), whereas the rest\nwere invalid for the other reasons: suspicious DOB (2%), face mismatch (2%), or no watermark (2%).\nThis second group of invalid IDs served as catch trials to verify that participants were examining\nall fields of the ID.\n**Independent variables:** ID presentation rate (varied by block)\n**Dependent variables:** ID disposition accuracy and processing times, Task-Induced Fatigue Scale (TIFS),\nKarolinska Sleepiness Scale (KSS), Visual Analog Scale of Fatigue (VAS-F).\nNote: The questionnaire data are available upon request from [cancta.net](https://cancta.net).\n**Additional data acquired:** Participant Enrollment Questionnaire, Subject Questionnaire\nfor Current Session, Simulator Sickness Questionnaire.\n**Experimental Location:** Science Applications International Corporation, Louisville, CO\n**Note 1:** This dataset has corresponding runs in the BCIT Calibration Driving dataset (ds004118), during\nwhich the 15-minute driving task was performed prior to this one.\n**Note 2:** This dataset has corresponding runs in the BCIT Baseline Driving dataset (ds004120), which\nwere conducted on the same subjects during the same session, counterbalanced with these.","recording_modality":["eeg"],"senior_author":"Kay Robbins (curation)","sessions":["01"],"size_bytes":59203206625,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["GuardDuty"],"timestamps":{"digested_at":"2026-04-22T12:26:08.524150+00:00","dataset_created_at":"2022-05-02T20:24:56.474Z","dataset_modified_at":"2022-05-04T22:45:18.000Z"},"total_files":22,"storage":{"backend":"s3","base":"s3://openneuro.org/ds004119","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv","task-GuardDuty_eeg.json","task-GuardDuty_events.json"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"4948bea59564f5e0","model":"openai/gpt-5.2","tagged_at":"2026-01-20T10:29:33.942768+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.8,"modality":0.8,"type":0.75},"reasoning":{"few_shot_analysis":"The most similar few-shot convention is the visual cognitive-control/vigilance-style dataset labeled as Type=Attention (e.g., the DPX cognitive control task example). That example shows that when the study aim is sustained attention/cognitive control (even with button responses), the catalog Type is mapped to \"Attention\" rather than \"Motor\". Also, the schizophrenia visual discrimination example demonstrates the convention that Modality follows the stimulus channel (visual dots), not the response device. 
These conventions guide choosing Modality=Visual and Type=Attention here.","metadata_analysis":"Key task/purpose and population facts from the README:\n1) Purpose/construct: \"The Basic Guard Duty study was designed to measure sustained vigilance in realistic settings\".\n2) Participants: \"Subjects: Volunteers from the local community recruited through advertisements.\" (no diagnosis/clinical recruitment described).\n3) Stimuli are visual images: \"serial presentation of replica identification (ID) cards ... paired with a reference image\" and \"The reference images consisted of color photographs of faces.\" \n4) Responses are button presses but are not the modality driver: \"press either an allow or deny button for each image-ID pairing.\"","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Volunteers from the local community\" with no clinical condition mentioned.\n- Few-shot pattern suggests: when no disorder-based recruitment is described, label Pathology=Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"serial presentation of replica identification (ID) cards ... paired with a reference image\" and \"color photographs of faces\".\n- Few-shot pattern suggests: modality follows stimulus channel (e.g., visual discrimination task -> Visual; oddball with visual+auditory -> Multisensory).\n- Alignment: ALIGN (stimuli are dominantly visual).\n\nType:\n- Metadata says: \"designed to measure sustained vigilance\" and involves continuous checking/verification over blocks.\n- Few-shot pattern suggests: sustained vigilance/cognitive control paradigms map to Type=Attention (even with forced-choice responses).\n- Alignment: ALIGN (primary construct is vigilance/attention, not motor execution).","decision_summary":"Pathology top-2:\n1) Healthy — supported by \"Volunteers from the local community recruited through advertisements\" and no clinical inclusion criteria.\n2) Unknown — possible if population details were missing, but here they are described as community volunteers.\nWinner: Healthy. Confidence=0.8 (clear explicit population description + aligns with few-shot convention for non-clinical cohorts).\n\nModality top-2:\n1) Visual — supported by \"replica identification (ID) cards ... paired with a reference image\" and \"color photographs of faces\".\n2) Multisensory — weak alternative (no explicit auditory/tactile stimulus described).\nWinner: Visual. Confidence=0.8 (multiple explicit stimulus descriptions; strong fit to few-shot modality convention).\n\nType top-2:\n1) Attention — supported by \"measure sustained vigilance\" and prolonged block-based verification with varying workload/queue.\n2) Decision-making — plausible because subjects choose allow/deny and provide reasons, but the stated research aim is vigilance rather than value-based/strategic decision policy.\nWinner: Attention. Confidence=0.75 (explicit construct term 'sustained vigilance' strongly supports Attention; runner-up plausible but less consistent with stated aim)."}},"nemar_citation_count":0,"computed_title":"BCIT Basic Guard Duty","nchans_counts":[{"val":262,"count":22}],"sfreq_counts":[{"val":1024.0,"count":22}],"stats_computed_at":"2026-04-22T23:16:00.307048+00:00","total_duration_s":null,"canonical_name":null,"name_confidence":0.43,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Touryan2022_BCIT_Basic"}}