{
  "success": true,
  "database": "eegdash",
  "data": {
    "_id": "6953f4249276ef1ee07a3421",
    "dataset_id": "ds005872",
    "associated_paper_doi": null,
    "authors": ["Martyna Beata Płomecka", "Ard Kastrati", "Nicolas Langer"],
    "bids_version": "1.7.0",
    "contact_info": ["Scott Huberty"],
    "contributing_labs": null,
    "data_processed": false,
    "dataset_doi": "doi:10.18112/openneuro.ds005872.v1.0.0",
    "datatypes": ["eeg"],
    "demographics": {
      "subjects_count": 1,
      "ages": [],
      "age_min": null,
      "age_max": null,
      "age_mean": null,
      "species": null,
      "sex_distribution": null,
      "handedness_distribution": null
    },
    "experimental_modalities": null,
    "external_links": {
      "source_url": "https://openneuro.org/datasets/ds005872",
      "osf_url": null,
      "github_url": null,
      "paper_url": null
    },
    "funding": [
      "Velux Stiftung Project No. 1126",
      "Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (SNF) Grant 100014175875"
    ],
    "ingestion_fingerprint": "37883463bf58e980db2ffdaf4bcef988a646256d83513ff747e21949271e6e9b",
    "license": "CC0",
    "n_contributing_labs": null,
    "name": "EEGEyeNet Dataset",
    "readme": "This is a BIDS standardized version of simultaneously collected EEG and eye-tracking data, taken from one subject from the [EEGEYENET](https://osf.io/ktv7m/) dataset.\nAcknowledgements go to Martyna Beata Płomecka, Ard Kastrati, and Nicolas Langer who designed the study, collected the data, and published the dataset to Open Science Framework.\nFor access to the full dataset, please refer to the dataset DOI.",
    "recording_modality": ["eeg"],
    "senior_author": "Nicolas Langer",
    "sessions": ["01"],
    "size_bytes": 41874965,
    "source": "openneuro",
    "study_design": null,
    "study_domain": null,
    "tasks": ["dots"],
    "timestamps": {
      "digested_at": "2026-04-22T12:28:57.750384+00:00",
      "dataset_created_at": "2025-01-23T01:24:16.999Z",
      "dataset_modified_at": "2025-01-23T02:51:19.000Z"
    },
    "total_files": 1,
    "storage": {
      "backend": "s3",
      "base": "s3://openneuro.org/ds005872",
      "raw_key": "dataset_description.json",
      "dep_keys": ["CHANGES", "README", "participants.json", "participants.tsv"]
    },
    "tagger_meta": {
      "config_hash": "4a051be509a0e3d0",
      "metadata_hash": "a963e13bfd8fa2fc",
      "model": "openai/gpt-5.2",
      "tagged_at": "2026-01-20T18:42:04.857541+00:00"
    },
    "tags": {
      "pathology": ["Healthy"],
      "modality": ["Visual"],
      "type": ["Attention"],
      "confidence": { "pathology": 0.6, "modality": 0.7, "type": 0.55 },
      "reasoning": {
        "few_shot_analysis": "No few-shot example explicitly involves simultaneous EEG + eye-tracking. For labeling conventions, the closest matches are the Visual-task examples where screen-based paradigms imply a Visual stimulus modality (e.g., the schizophrenia visual discrimination task and the visual bandit task). For Type, few-shot conventions separate perceptual discrimination (Perception) from cognitive control/selection (Attention/Decision-making). Because eye-tracking is typically used to study gaze allocation and visual orienting, it weakly resembles the Attention-oriented visual paradigms in the examples, but the dataset metadata here does not specify an actual task.",
        "metadata_analysis": "Key available facts are very limited. The README states: (1) \"simultaneously collected EEG and eye-tracking data\" and (2) \"taken from one subject from the EEGEYENET dataset.\" No diagnosis, recruitment, stimulus, or task description is provided in the supplied metadata.",
        "paper_abstract_analysis": "No useful paper information.",
        "evidence_alignment_check": "Pathology: Metadata SAYS nothing about any clinical recruitment/diagnosis (only \"one subject\"). Few-shot pattern SUGGESTS that when no disorder is mentioned, datasets are labeled Healthy. ALIGN (no conflict), but this is inference from absence of pathology information.\nModality: Metadata SAYS \"EEG and eye-tracking\" (implying participants visually fixate/follow stimuli on a display). Few-shot pattern SUGGESTS Visual modality for screen-based paradigms. ALIGN.\nType: Metadata SAYS nothing about the cognitive construct/task. Few-shot pattern SUGGESTS Attention is a common Type when gaze/visual orienting is central, but Perception is also plausible. PARTIAL alignment only; insufficient metadata to disambiguate.",
        "decision_summary": "Pathology top-2: (1) Healthy—supported only by absence of any clinical description in \"one subject from the EEGEYENET dataset\"; (2) Unknown—because recruitment criteria are not stated. Selected Healthy as the stronger convention-based choice.\nModality top-2: (1) Visual—supported by the explicit \"eye-tracking\" mention (commonly tied to visual display stimuli) and alignment with Visual examples; (2) Unknown—because the stimulus channel is not explicitly described. Selected Visual.\nType top-2: (1) Attention—eye-tracking commonly used for gaze allocation/visual orienting; (2) Perception—if the underlying EEGEYENET task is perceptual processing during viewing. Selected Attention, but with low confidence due to missing task description."
      }
    },
    "computed_title": "EEGEyeNet Dataset",
    "nchans_counts": [{ "val": 129, "count": 1 }],
    "sfreq_counts": [{ "val": 500.0, "count": 1 }],
    "stats_computed_at": "2026-04-22T23:16:00.311036+00:00",
    "source_url": "https://openneuro.org/datasets/ds005872",
    "total_duration_s": 323.464,
    "canonical_name": null,
    "name_confidence": 0.98,
    "name_meta": {
      "suggested_at": "2026-04-14T10:18:35.343Z",
      "model": "openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"
    },
    "name_source": "canonical",
    "author_year": "Plomecka2025"
  }
}