{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a341b","dataset_id":"ds005815","associated_paper_doi":null,"authors":["Yan-Han Chang","Hsi-An Chen","Min-Jiun Tsai","Chun-Lung Tseng","Ching-Huei Lo","Kuan-Chih Huang","Chun-Shu Wei"],"bids_version":"1.10.0","contact_info":["cecnl lab"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds005815.v2.0.1","datatypes":["eeg"],"demographics":{"subjects_count":20,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005815","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"895a70142cafcd6e0de5a1b838fd5e5b4d2a0830436e462b000ea9f7854d0945","license":"CC0","n_contributing_labs":null,"name":"A Human EEG\nDataset for Multisensory Perception and Mental\nImagery","readme":"The YOTO (You Only Think Once) dataset presents a human electroencephalography (EEG) resource for exploring multisensory perception and mental imagery. The study enrolled 20 participants who performed tasks involving both unimodal and multimodal stimuli. Researchers collected high-resolution EEG signals at a 1000 Hz sampling rate to capture high-temporal-resolution neural activity related to internal mental representations. The protocol incorporated visual, auditory, and combined cues to investigate the integration of multiple sensory modalities, and participants provided self-reported vividness ratings that indicate subjective perceptual strength. Technical validation involved event-related potentials (ERPs) and power spectral density (PSD) analyses, which demonstrated the reliability of the data and confirmed distinct neural responses across stimuli. This dataset aims to foster studies on neural decoding, perception, and cognitive modeling, and it is publicly accessible for researchers who seek to advance multimodal mental imagery research and related applications.","recording_modality":["eeg"],"senior_author":"Chun-Shu Wei","sessions":["1","2"],"size_bytes":8155837338,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["rest1","rest2","task"],"timestamps":{"digested_at":"2026-04-22T12:28:54.755634+00:00","dataset_created_at":"2025-01-12T10:33:26.182Z","dataset_modified_at":"2025-06-17T11:32:34.000Z"},"total_files":103,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005815","raw_key":"dataset_description.json","dep_keys":["CHANGES","README.md"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"c4c55790ed32ea7d","model":"openai/gpt-5.2","tagged_at":"2026-01-20T18:39:47.002017+00:00"},"tags":{"pathology":["Healthy"],"modality":["Multisensory"],"type":["Perception"],"confidence":{"pathology":0.6,"modality":0.85,"type":0.7},"reasoning":{"few_shot_analysis":"Closest convention match on stimulus modality is the few-shot example \"Cross-modal Oddball Task\" (Parkinson’s; Modality=Multisensory): it shows that when both visual and auditory cues are presented within the same paradigm, the catalog uses the label \"Multisensory\" (even if one cue is primary). For Type conventions, the few-shot example \"Meta-rdk: Preprocessed EEG data\" (Visual discrimination) is labeled Type=\"Perception\" when the scientific aim is perceptual processing/discrimination; similarly, the current dataset explicitly targets \"multisensory perception\" and sensory integration rather than e.g., motor execution. 
No few-shot is an exact match for 'mental imagery' emphasis, so Type is chosen by the closest construct (perceptual/imagery strength and multisensory integration).","metadata_analysis":"Key facts from the provided README: (1) Research focus: \"exploring multisensory perception and mental imagery\". (2) Stimulus structure: \"tasks involving both unimodal and multimodal stimuli\" and \"The protocol incorporated visual, auditory, and combined cues\"—this directly supports a Multisensory modality label. (3) Participant population description is non-clinical: \"The study enrolled 20 participants\" with no mention of any disorder/diagnosis group. (4) Outcomes/constructs: \"participants provided self-reported vividness ratings\" and validation via \"event-related potentials (ERPs)\" showing \"distinct neural responses across stimuli\"—consistent with perception/imagery strength rather than clinical intervention.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata SAYS: \"enrolled 20 participants\" (no diagnosis mentioned). Few-shot pattern SUGGESTS labeling as \"Healthy\" when participants are not recruited for a disorder focus (many few-shots with general participants are Healthy). ALIGN (no conflict; metadata lacks clinical facts).\n\nModality: Metadata SAYS: \"visual, auditory, and combined cues\" and \"unimodal and multimodal stimuli\". Few-shot pattern SUGGESTS \"Multisensory\" when both auditory and visual cues are used in the same task (as in the Cross-modal Oddball example). ALIGN.\n\nType: Metadata SAYS: \"multisensory perception and mental imagery\" and \"investigate the integration of multiple sensory modalities\". Few-shot pattern SUGGESTS \"Perception\" for studies centered on sensory processing/representation (e.g., visual discrimination; music/speech auditory encoding). Mostly ALIGN, though 'mental imagery' could also push toward Type=\"Other\" if treated as not strictly external perception; however the dataset explicitly ties imagery to sensory cues and perceptual strength (vividness), keeping Perception the stronger match.","decision_summary":"Top-2 candidates per category:\n\nPathology:\n1) Healthy — Evidence: no clinical recruitment described (\"enrolled 20 participants\"), general-purpose cognitive/perceptual dataset language, no patient/control groups mentioned. Alignment: aligns with few-shot convention for non-clinical cohorts.\n2) Unknown — Evidence: README never explicitly says 'healthy' or 'controls'.\nFinal: Healthy (inferred normative cohort).\n\nModality:\n1) Multisensory — Evidence: \"visual, auditory, and combined cues\"; \"unimodal and multimodal stimuli\"; focus on \"integration of multiple sensory modalities\". 
Aligns with cross-modal few-shot convention.\n2) Other — would apply only if modality were unclear or non-standard, which it is not.\nFinal: Multisensory.\n\nType:\n1) Perception — Evidence: \"multisensory perception\"; \"integration of multiple sensory modalities\"; ERP differences \"across stimuli\"; vividness ratings as perceptual/imagery strength.\n2) Other — plausible because 'mental imagery' is not explicitly a listed Type and could be treated as a broader cognitive construct.\nFinal: Perception, because the primary stated aim is multisensory perception/integration and stimulus-evoked neural responses, with imagery framed as internal perceptual representations."}},"computed_title":"A Human EEG\nDataset for Multisensory Perception and Mental\nImagery","nchans_counts":[{"val":31,"count":103}],"sfreq_counts":[{"val":1000.0,"count":103}],"stats_computed_at":"2026-04-22T23:16:00.310955+00:00","source_url":"https://openneuro.org/datasets/ds005815","total_duration_s":14362.917,"author_year":"Chang2025","canonical_name":null}}
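
A minimal Python sketch of how a consumer might read this record follows; it assumes the JSON above is saved verbatim to a local file named ds005815_record.json (a hypothetical filename), and it uses only field names that appear in the record itself.

# Minimal sketch, assuming the record above is stored as ds005815_record.json.
import json

with open("ds005815_record.json", "r", encoding="utf-8") as f:
    record = json.load(f)

data = record["data"]
print(data["dataset_id"], "-", data["name"])
print("Subjects:", data["demographics"]["subjects_count"])
print("Tasks:", ", ".join(data["tasks"]))

# Channel counts and sampling rates are stored as lists of {val, count} pairs,
# where count is the number of files sharing that value.
for entry in data["nchans_counts"]:
    print(f"{entry['count']} files with {entry['val']} channels")
for entry in data["sfreq_counts"]:
    print(f"{entry['count']} files at {entry['val']} Hz")

# total_duration_s is the summed recording length in seconds.
print(f"Total duration: {data['total_duration_s'] / 3600:.2f} h")

For this record the script would report 103 files with 31 channels at 1000.0 Hz and roughly 3.99 h of recording in total.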