{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a33e6","dataset_id":"ds005416","associated_paper_doi":null,"authors":["Yan Wu","Chunguang Tao","Qi Li"],"bids_version":"1.1.1","contact_info":["陶春光"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds005416.v1.0.1","datatypes":["eeg"],"demographics":{"subjects_count":23,"ages":[24,23,24,23,24,23,27,24,22,25,23,24,22,24,24,23,25,26,24,24,26,23,23],"age_min":22,"age_max":27,"age_mean":23.91304347826087,"species":null,"sex_distribution":{"m":13,"f":10},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005416","osf_url":null,"github_url":null,"paper_url":null},"funding":["Jilin Scientific and Technology Development Program (grant no. 20240101358JC)"],"ingestion_fingerprint":"f90756022286e2cb00b31c070ab530f042a625c4cd3652b64162c932be68565e","license":"CC0","n_contributing_labs":null,"name":"Fatigue Characterization of EEG under Mixed Reality Stereo Vision","readme":"In this study, we selected 24 electrodes for EEG recording: Fp1, Fp2, AF3, AF4, F7, Fz, F8, FC5, FC6 (frontal), FT7, FT8 (temporal), C3, Cz, C4, CP3, CP4 (central), P3, Pz, P4, PO3, PO4 (parietal), and O1, Oz, O2 (occipital).\nEach participant was required to complete watching 2 resting scenes and 15 movement scenes.\nA rating scene appeared to rate each exercise scene watched.\nEach movement scene consisted of 20 trials of reciprocal periodic movements at a fixed depth and velocity.\nWe focused on analyzing EEG data from watching resting scenes.\nResearchers can use this EEG data to do resting-state analysis (corresponding to events '11' and '13') as well as task-state analysis (corresponding to event '12').","recording_modality":["eeg"],"senior_author":"Qi Li","sessions":[],"size_bytes":22869324774,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["WatchingTask"],"timestamps":{"digested_at":"2026-04-22T12:27:46.028543+00:00","dataset_created_at":"2024-08-19T09:02:50.476Z","dataset_modified_at":"2024-12-17T01:14:50.000Z"},"total_files":23,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005416","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"dcbcc2ae4ebb1297","model":"openai/gpt-5.2","tagged_at":"2026-01-20T17:54:39.403529+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Resting-state"],"confidence":{"pathology":0.6,"modality":0.7,"type":0.7},"reasoning":{"few_shot_analysis":"Closest convention match is the few-shot example \"A Resting-state EEG Dataset for Sleep Deprivation\" (Healthy + Resting State + Resting-state): it demonstrates that when metadata explicitly frames the data as \"resting-state\" (even if recorded in a controlled lab setting), the Type label should be \"Resting-state\". For stimulus modality, the motor/imagery example (\"EEG Motor Movement/Imagery Dataset\") shows that motor-related labels are used when movement execution/imagery is the core experimental content; here, however, the described paradigm is explicitly \"watching\" scenes, aligning more with Visual stimulation conventions rather than Motor modality.","metadata_analysis":"Key quoted facts from the provided README:\n1) Task/stimulus context: \"Each participant was required to complete watching 2 resting scenes and 15 movement scenes.\"\n2) Analysis focus: \"We focused on analyzing EEG data from watching resting scenes.\"\n3) Explicit labeling of conditions: \"Researchers can use this EEG data to do resting-state analysis (corresponding to events '11' and '13') as well as task-state analysis (corresponding to event '12').\"","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: no diagnosis/clinical recruitment is mentioned (no patient group described).\n- Few-shot pattern suggests: absent clinical terms typically maps to \"Healthy\" (e.g., the resting-state sleep deprivation example uses Healthy when no disorder recruitment is indicated).\n- Alignment: ALIGN (both point away from clinical labels), but metadata lacks an explicit \"healthy\" statement.\n\nModality:\n- Metadata says: \"complete watching ... scenes\" (visual viewing of scenes).\n- Few-shot pattern suggests: watching/visual stimuli map to \"Visual\" modality (vs \"Resting State\" modality which is used for no-stimulus eyes-open/closed baselines).\n- Alignment: ALIGN toward Visual.\n\nType:\n- Metadata says: \"resting-state analysis (corresponding to events '11' and '13')\" and focus on \"watching resting scenes\".\n- Few-shot pattern suggests: when a dataset's primary intended use is resting-state analysis, Type should be \"Resting-state\" (as in the resting-state sleep deprivation example).\n- Alignment: ALIGN, though the presence of \"movement scenes\" makes a secondary interpretation (Motor/Perception) plausible if the task-state portion were primary.","decision_summary":"Pathology (top-2):\n1) Healthy — Supported by lack of any clinical recruitment language and typical convention in few-shot examples where non-clinical cohorts default to Healthy.\n2) Unknown — Also plausible because metadata never explicitly states \"healthy\" or \"controls\".\nFinal: Healthy. Evidence alignment: mostly aligned, but inferred.\n\nModality (top-2):\n1) Visual — Strongly supported by: \"required to complete watching ... scenes\".\n2) Resting State — Possible only if the 'resting scenes' were truly no-stimulus rest, but metadata indicates viewing scenes.\nFinal: Visual. Evidence alignment: aligned.\n\nType (top-2):\n1) Resting-state — Explicitly stated: \"do resting-state analysis (events '11' and '13')\" and they \"focused on\" resting scenes.\n2) Perception — Also plausible because the experimental act is passive viewing of scenes; however, the dataset framing emphasizes resting-state analysis rather than perceptual discrimination.\nFinal: Resting-state. Evidence alignment: aligned, but limited detail about exact resting condition."}},"nemar_citation_count":0,"computed_title":"Fatigue Characterization of EEG under Mixed Reality Stereo Vision","nchans_counts":[{"val":64,"count":23}],"sfreq_counts":[{"val":1000.0,"count":23}],"stats_computed_at":"2026-04-22T23:16:00.309580+00:00","source_url":"https://openneuro.org/datasets/ds005416","total_duration_s":88847.0,"author_year":"Wu2024","canonical_name":null}}