{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3409","dataset_id":"ds005628","associated_paper_doi":null,"authors":["Juan Pablo Rosado-Aíza","Fernando José Domínguez-Morales","Tania Yareni Pech-Canul","Paola Guadalupe Vázquez-Rodríguez","Gustavo Navas-Reascos","Luz María Alonso-Valerdi","David I. Ibarra Zarate"],"bids_version":"1.8.0","contact_info":["Gustavo Sebastián Navas-Reascos"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds005628.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":102,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005628","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"f3abf276b965c386ad7b349f6668113d703c3b731aec6abfacb606023b1b20a0","license":"CC0","n_contributing_labs":null,"name":"Dataset of Visual and Audiovisual Stimuli in Virtual Reality from the Edzna Archaeological Site","readme":"# README\n- Authors\nJuan Pablo Rosado-Aíza, Fernando José Domínguez-Morales, Tania Yareni Pech-Canul, Paola Guadalupe Vázquez-Rodríguez, Gustavo Navas-Reascos, Luz María Alonso-Valerdi, David I. Ibarra Zarate\n- Contact person\nGustavo Navas-Reascos\nhttps://orcid.org/0000-0003-0250-765X\nA01681952@tec.mx\n## Overview\n- Project name\nDataset of Visual and Audiovisual Stimuli in Virtual Reality from the Edzna Archaeological Site\n- Year that the project ran\n2024\n- Brief overview\nThe purpose of this dataset is to analyze user experience in a virtual reality (VR) environment, focusing on a comparative study between visual and audiovisual stimuli based on the archaeological site of Edzna, Mexico. The immersive experience allowed participants to explore the site without needing to physically being there, and the experiment was conducted in a museum setting, offering a unique experience that goes beyond traditional visual-only exhibits. The dataset includes both Electroencephalography (EEG) recordings from eight channels (Fz, C3, Cz, C4, Pz, PO7, Oz, and PO8) and user responses to the User Experience Questionnaire (UEQ), providing necessary data for future studies on how immersive environments affect user perception.\nThe EEG data was collected using a Unicorn Hybrid Black EEG system with a sampling rate of 250 Hz. Participants were exposed to two conditions: a visual-only stimulus and an audiovisual stimulus, both of which represented scenes from the archaeological site in VR. Prior to exposure, a baseline measurement was taken to capture the initial state of the participants. Data collection was conducted in MOSTLA, a digital innovation lab at Tecnologico de Monterrey campus, and the Museum of Contemporary Art in Monterrey.\nEach EEG recording is shared in .set format and follows the BIDS structure. The recordings include eight channels of brainwave recordings for the baseline, visual, and audiovisual conditions. The signals are presented in both formats: raw and preprocessed. 
Additionally, an .xlsx file is provided with basic participant metadata, such as age, gender, and a unique identifier, as well as the UEQ responses.\nEach EEG file contains data segmented into the three phases of the experiment: baseline, visual stimulus, and audiovisual stimulus, allowing researchers to directly compare neural responses across conditions.\nThis dataset offers a comprehensive resource for researchers investigating the effects of immersive VR environments on user engagement and attention.\n- Description of the contents of the dataset\n\tsub-N  - Raw data\n\tsub-Np - Preprocessed data\n\tExample:\n\tsub-1  - Raw data of subject 1\n\tsub-1p - Preprocessed data of subject 1\n### Subjects\nA total of 51 participants took part in the study.\n### Apparatus\nUnicorn Hybrid Black EEG system\nVR Headset\nHeadphones\n### Experimental location\nMOSTLA, a digital innovation lab at Tecnologico de Monterrey, located at Av. Eugenio Garza Sada 2501 Sur, Tecnologico, 64849 Monterrey, N.L., Mexico.\nMARCO, a contemporary art museum located at Zuazua y Jardón, Centro, 64000 Monterrey, N.L., Mexico.\n### Notes\nAll metadata, including the UEQ answers, can be obtained from the file metadata.xlsx.\nThe videos presented to the participants are available at:\nAudiovisual video:\nhttps://youtu.be/FBWbtSFwVuo\nVisual video:\nhttps://youtu.be/aLzzl0ygBnc","recording_modality":["eeg"],"senior_author":"David I. Ibarra Zarate","sessions":[],"size_bytes":664444547,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["Edzna"],"timestamps":{"digested_at":"2026-04-22T12:28:39.895279+00:00","dataset_created_at":"2024-11-14T18:10:30.402Z","dataset_modified_at":"2024-11-14T18:19:55.000Z"},"total_files":306,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005628","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv","task-Edzna_events.json"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"6e54e2f296c23b65","model":"openai/gpt-5.2","tagged_at":"2026-01-20T18:33:00.500509+00:00"},"tags":{"pathology":["Healthy"],"modality":["Multisensory"],"type":["Attention"],"confidence":{"pathology":0.7,"modality":0.85,"type":0.7},"reasoning":{"few_shot_analysis":"Most similar few-shot by paradigm/modality is the \"Cross-modal Oddball Task\" example, which uses simultaneous visual+auditory cues and is labeled Modality=Multisensory. While the paradigm differs (oddball vs VR viewing), it establishes the convention that combined visual+auditory stimulation should be labeled \"Multisensory\". For Type, the few-shot set includes perception-focused stimulus-processing datasets (e.g., visual discrimination; auditory music/speech), guiding that stimulus-driven viewing/listening tasks map to Perception/Attention depending on whether the stated aim is sensory experience vs attentional control. 
For Pathology, multiple examples show that when no clinical recruitment is described, label as Healthy.","metadata_analysis":"Key facts from the dataset README:\n1) Stimulus conditions and sensory channels: \"Participants were exposed to two conditions: a visual-only stimulus and an audiovisual stimulus\" and \"VR Headset\" + \"Headphones\".\n2) Study purpose/cognitive focus: \"analyze user experience in a virtual reality (VR) environment, focusing on a comparative study between visual and audiovisual stimuli\" and \"future studies on how immersive environments affect user perception\" and \"investigating the effects of immersive VR environments on user engagement and attention\".\n3) Population: \"A total of 51 participants took part in the study.\" No diagnosis/clinical recruitment criteria are mentioned anywhere in the provided metadata.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: only \"A total of 51 participants took part in the study.\" with no disorder/diagnosis terms.\n- Few-shot pattern suggests: absence of clinical recruitment details usually maps to \"Healthy\".\n- Alignment: ALIGN (no conflict; metadata is simply non-clinical).\n\nModality:\n- Metadata says: \"visual-only stimulus and an audiovisual stimulus\" and apparatus includes \"Headphones\" (auditory) plus VR (visual).\n- Few-shot pattern suggests: combined auditory+visual input should be labeled \"Multisensory\" (as in the cross-modal oddball example).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: aims include \"analyze user experience\" and explicitly mentions \"user engagement and attention\" plus \"user perception\".\n- Few-shot pattern suggests: stimulus-driven tasks often map to Perception, but when the stated construct is engagement/attention, Type=\"Attention\" is appropriate.\n- Alignment: PARTIAL ALIGN (both Perception and Attention are plausible; metadata mentions both). 
No direct conflict, just ambiguity in primary construct.","decision_summary":"Top-2 candidates per category with head-to-head decision:\n\nPathology:\n1) Healthy — Evidence: no clinical group described; only \"51 participants\" with no diagnosis terms.\n2) Unknown — Evidence: no explicit statement like \"healthy\" or inclusion/exclusion criteria.\nDecision: Healthy wins because the dataset is framed as general VR user-experience testing with no clinical recruitment described.\nConfidence notes: based on absence of pathology mention (inference), not an explicit \"healthy\" quote.\n\nModality:\n1) Multisensory — Evidence: \"visual-only\" vs \"audiovisual\" condition; use of \"Headphones\" indicates auditory + visual stimulation.\n2) Visual — Evidence: one condition is visual-only and VR is primarily visual.\nDecision: Multisensory wins because the experiment explicitly includes an audiovisual (audio+visual) stimulus condition as a main comparison.\nConfidence notes: directly supported by explicit stimulus description.\n\nType:\n1) Attention — Evidence: dataset positioned for studying \"user engagement and attention\" in immersive VR; comparative stimulus conditions likely modulate attentional engagement.\n2) Perception — Evidence: explicit mention of \"how immersive environments affect user perception\" and comparison of sensory stimulus richness.\nDecision: Attention wins slightly because attention/engagement is explicitly highlighted as a key application focus, whereas Perception is more general.\nConfidence notes: moderate because both Attention and Perception are explicitly mentioned."}},"computed_title":"Dataset of Visual and Audiovisual Stimuli in Virtual Reality from the Edzna Archaeological Site","nchans_counts":[{"val":8,"count":306}],"sfreq_counts":[{"val":250.0,"count":306}],"stats_computed_at":"2026-04-22T23:16:00.310716+00:00","total_duration_s":75431.512,"author_year":"RosadoAiza2024","canonical_name":null}}
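
For anyone consuming this record programmatically, the following is a minimal Python sketch of how the `storage` block resolves to concrete S3 objects. It uses only fields present in the record above; the local filename `ds005628_record.json` is illustrative, and whether the `s3://openneuro.org` bucket permits anonymous reads should be checked against OpenNeuro's documentation.

```python
import json

# Load a local copy of the record shown above.
# "ds005628_record.json" is an illustrative filename, not part of the record.
with open("ds005628_record.json", encoding="utf-8") as f:
    record = json.load(f)

data = record["data"]
storage = data["storage"]
assert storage["backend"] == "s3"  # this record stores its files on S3

# Resolve the description file and its BIDS sidecar dependencies to full S3 URIs.
base = storage["base"]  # "s3://openneuro.org/ds005628"
for key in [storage["raw_key"], *storage["dep_keys"]]:
    print(f"{base}/{key}")

# Cross-check the record's own summary statistics.
print(f"{data['demographics']['subjects_count']} BIDS subjects, "
      f"{data['total_files']} files, {data['size_bytes'] / 1e9:.2f} GB")
```

Note that `subjects_count` is 102 while the README reports 51 participants; this is consistent with the README's naming scheme, which splits each participant into a raw (sub-N) and a preprocessed (sub-Np) BIDS subject.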