{"success":true,"database":"eegdash","data":{"_id":"69a33a3b897a7725c66f3eeb","dataset_id":"ds007338","associated_paper_doi":null,"authors":["Martyna Beata Płomecka","Ard Kastrati","Nicolas Langer"],"bids_version":"1.7.0","contact_info":["Scott Huberty"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds007338.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":1,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds007338","osf_url":null,"github_url":null,"paper_url":null},"funding":["Velux Stiftung Project No. 1126","Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (SNF) Grant 100014175875"],"ingestion_fingerprint":"9f68984c2d08db647522945c5cbd767ea61aeab363ee5655329432d40f902de1","license":"CC0","n_contributing_labs":null,"name":"EEGEyeNet Dataset","readme":"This is a BIDS-standardized version of simultaneously collected EEG and eye-tracking data, taken from one subject from the [EEGEYENET](https://osf.io/ktv7m/) dataset.\nAcknowledgements go to Martyna Beata Płomecka, Ard Kastrati, and Nicolas Langer, who designed the study, collected the data, and published the dataset to the Open Science Framework.\nFor access to the full dataset, please refer to the dataset DOI.\nReferences\n----------\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). 
EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8","recording_modality":["eeg"],"senior_author":"Nicolas Langer","sessions":["01"],"size_bytes":41876427,"source":"openneuro","storage":{"backend":"s3","base":"s3://openneuro.org/ds007338","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["dots"],"timestamps":{"digested_at":"2026-04-22T12:30:10.962399+00:00","dataset_created_at":"2026-01-27T19:08:19.937Z","dataset_modified_at":"2026-01-27T19:21:26.000Z"},"total_files":1,"computed_title":"EEGEyeNet Dataset","nchans_counts":[{"val":129,"count":1}],"sfreq_counts":[{"val":500.0,"count":1}],"stats_computed_at":"2026-04-22T23:16:00.312604+00:00","total_duration_s":323.464,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"bfc1586ed7fa659c","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.6,"modality":0.7,"type":0.55},"reasoning":{"few_shot_analysis":"Most similar few-shot example by paradigm keyword is the schizophrenia dataset using a visual moving-dots discrimination task (few-shot: “We used a visual discrimination task. Stimuli consisted of 100 moving dots…”), which is conventionally labeled as Modality=Visual and Type=Perception. The target dataset’s only task label is “dots”, which aligns most naturally with that convention (visual dot stimulus paradigms are typically perception-oriented). Few-shot healthy resting/motor/learning examples are less task-similar.","metadata_analysis":"Key available metadata is sparse. Relevant quotes: (1) “simultaneously collected EEG and eye-tracking data” (suggests a visually guided paradigm, since eye-tracking is typically used with visual stimuli). 
(2) tasks: [\"dots\"] (the only task name provided; implies dot-based visual stimulation but does not specify the exact cognitive construct). (3) “taken from one subject” and participants_overview: “Subjects: 1” (no indication of a clinical recruitment criterion). No explicit diagnosis/condition is mentioned anywhere in the provided metadata.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata SAYS no disorder/diagnosis (no clinical terms; only “Subjects: 1”). Few-shot pattern SUGGESTS that when no clinical population is described, label as Healthy. ALIGN.\nModality: Metadata SAYS “EEG and eye-tracking” and task name “dots” (no explicit “visual” word, but eye-tracking+dots strongly implies visual stimuli). Few-shot pattern SUGGESTS dot tasks are Visual. ALIGN.\nType: Metadata SAYS only “dots” (no explicit goal like attention/working memory/decision). Few-shot pattern SUGGESTS dot-stimulus tasks are commonly treated as Perception (e.g., moving-dot discrimination). PARTIAL ALIGN (inference required due to missing explicit task description).","decision_summary":"Pathology top-2: (1) Healthy — supported by absence of any clinical recruitment/diagnosis language and generic single-subject dataset (“Subjects: 1”, no disorder mentioned). (2) Unknown — plausible because metadata never explicitly states “healthy”. Winner: Healthy. Confidence reflects inference-by-absence.\nModality top-2: (1) Visual — supported by “eye-tracking” + task name “dots”, and few-shot convention that dot paradigms are visual. (2) Other — possible if “dots” referred to non-visual markers, but unlikely given eye-tracking. Winner: Visual.\nType top-2: (1) Perception — best match to dot-based visual stimulation conventions (few-shot moving-dots discrimination labeled Perception) and lack of explicit higher-order constructs. (2) Attention — also plausible for dot/fixation/visual tracking tasks, but not stated. 
Winner: Perception, with lower confidence due to underspecified task details.\nConfidence justification quotes/features: “simultaneously collected EEG and eye-tracking data”; tasks: [“dots”]; participants_overview: “Subjects: 1”; plus few-shot dot-discrimination convention mapping to Visual/Perception."}},"canonical_name":null,"name_confidence":0.9,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Plomecka2026"}}