{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a335a","dataset_id":"ds004517","associated_paper_doi":null,"authors":["Milan Rybář","Riccardo Poli","Ian Daly"],"bids_version":"1.7.0","contact_info":["Milan Rybář"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds004517.v1.0.2","datatypes":["eeg"],"demographics":{"subjects_count":7,"ages":[44,33,35,37,25,37,27],"age_min":25,"age_max":44,"age_mean":34.0,"species":null,"sex_distribution":{"m":5,"f":2},"handedness_distribution":{"r":7}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds004517","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"e3293ace4fc86916cdd92695e15b9dd6fecbfba3477807f8076f3d42ecf7d3a1","license":"CC0","n_contributing_labs":null,"name":"EEG recordings for semantic decoding of imagined animals and tools during auditory imagery task","readme":"﻿### Description\nThis dataset contains electroencephalography (EEG) signals recorded from 7 participants while performing an auditory imagery task. Participants were asked to imagine the sounds made by an object for 5 seconds.\n### EEG\nEEG data were acquired with a BioSemi ActiveTwo system with 64 electrodes positioned according to the international 10-20 system, plus one electrode on each earlobe as references ('EXG1' channel is the left ear electrode and 'EXG2' channel is the right ear electrode).\nElectrooculography (EOG) was also recorded to monitor eye movements. Two electrodes were placed above ('EXG7' channel) and below ('EXG8') the right eye to capture the vertical oculogram, while two more electrodes were placed near the canthus of each eye ('EXG5' by the left eye and 'EXG6' by the right eye) to record the horizontal oculogram.\nAdditionally, two electrodes were placed on the left ('EXG3') and right ('EXG4') wrists for additional physiological measurements (e.g., heart rate variability), and respiration was recorded using a belt placed around the waist ('Resp' channel).\nThe sampling rate was 2048 Hz.\n### Stimulus\nFolder 'stimuli' contains all images of the semantic categories of animals and tools presented to participants.\n### Example code\nWe have prepared an example script to demonstrate how to load the EEG data into Python using MNE and MNE-BIDS packages. This script is located in the 'code' directory.\n### References\nThis dataset was analyzed in the following publications:\n[1] Rybář, M., Poli, R. and Daly, I., 2024. Using data from cue presentations results in grossly overestimating semantic BCI performance. Scientific Reports, 14(1), p.28003.\n[2] Rybář, M., 2023. 
Towards EEG/fNIRS-based semantic brain-computer interfacing (Doctoral dissertation, University of Essex).","recording_modality":["eeg"],"senior_author":"Ian Daly","sessions":[],"size_bytes":13610005494,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["eeg"],"timestamps":{"digested_at":"2026-04-22T12:26:41.702640+00:00","dataset_created_at":"2023-03-03T13:31:51.775Z","dataset_modified_at":"2025-04-03T23:41:33.000Z"},"total_files":7,"storage":{"backend":"s3","base":"s3://openneuro.org/ds004517","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"ec7e7de578b0a10b","model":"openai/gpt-5.2","tagged_at":"2026-01-20T10:42:08.539011+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Other"],"confidence":{"pathology":0.7,"modality":0.8,"type":0.7},"reasoning":{"few_shot_analysis":"Closest few-shot by task structure is the \"EEG Motor Movement/Imagery Dataset\" example (Healthy, Visual, Motor): it shows the convention that imagery tasks are often visually cued (targets/images on a screen) and Modality is labeled by the presented stimulus channel (visual cue), not by the response. For Modality-vs-task-content disambiguation, the \"Subcortical responses to music and speech...\" example (Auditory, Perception) indicates that Auditory modality is used when actual sounds are presented; here the dataset describes imagery without auditory playback, suggesting Visual rather than Auditory modality by convention.","metadata_analysis":"Key facts from README: (1) Population: \"EEG signals recorded from 7 participants\" (no diagnosis/clinical recruitment stated). (2) Task: \"performing an auditory imagery task\" and \"asked to imagine the sounds made by an object for 5 seconds.\" (3) Stimulus/input: \"Folder 'stimuli' contains all images ... presented to participants.\" These indicate a healthy/non-clinical sample, visually presented cues, and an imagery/semantic-BCI style paradigm rather than a standard perception/attention/motor execution construct.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says non-clinical participants (\"7 participants\"; no disorder mentioned). Few-shot convention labels such cohorts as Healthy. ALIGN.\nModality: Metadata says stimuli are pictures (\"images ... presented to participants\") while the task is to imagine sounds (\"auditory imagery task\"). Few-shot convention defines Modality by presented stimulus channel (e.g., motor imagery dataset is Visual due to visual targets; auditory dataset is Auditory when sounds are played). ALIGN: choose Visual because the cue is visual and no explicit auditory stimulus is described.\nType: Metadata emphasizes imagery/semantic BCI context (\"auditory imagery task\"; citation about \"semantic BCI performance\"). Few-shot examples do not provide a direct 'imagery/BCI semantic' type label; convention is to use a closest construct label when clear, otherwise Other. ALIGN (no conflict): choose Other over Memory/Perception because the primary aim is semantic/imagery BCI rather than explicit memory encoding/recall or sensory detection.","decision_summary":"Top-2 candidates per category:\n- Pathology: (1) Healthy vs (2) Unknown. Evidence for Healthy: \"recorded from 7 participants\" with no diagnostic recruitment language; aligns with few-shot Healthy labeling for non-clinical cohorts. Final: Healthy. 
Confidence 0.7 (one explicit non-clinical participant description; no explicit 'healthy' keyword).\n- Modality: (1) Visual vs (2) Auditory. Evidence for Visual: \"images ... presented to participants\"; evidence for Auditory: \"auditory imagery task\" / \"imagine the sounds\" (but describes imagery, not auditory stimulation). Few-shot convention: modality is based on presented stimuli; auditory label used when sounds are presented. Final: Visual. Confidence 0.8 (two clear stimulus/task quotes + strong few-shot convention match).\n- Type: (1) Other vs (2) Memory. Evidence for Other: \"auditory imagery task\" and semantic-BCI framing (\"overestimating semantic BCI performance\"); evidence for Memory: imagery relies on stored sound representations but no explicit memory paradigm (no encode/recall). Final: Other. Confidence 0.7 (clear task description but Type mapping is less direct than standard paradigms)."}},"computed_title":"EEG recordings for semantic decoding of imagined animals and tools during auditory imagery task","nchans_counts":[{"val":80,"count":7}],"sfreq_counts":[{"val":2048.0,"count":7}],"stats_computed_at":"2026-04-22T23:16:00.307791+00:00","total_duration_s":27687.99658203125,"author_year":"Rybar2023_semantic","canonical_name":null}}
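
For working with the files this record points to, the "storage" block gives the public OpenNeuro S3 location (backend "s3", base "s3://openneuro.org/ds004517"). A minimal retrieval sketch, not part of the record itself: it assumes the openneuro-py package, and the commented AWS CLI line is the usual unauthenticated route to OpenNeuro's public bucket.

```python
# Hedged sketch (assumption: the openneuro-py client): fetch ds004517 from
# the public OpenNeuro S3 bucket named in data.storage.base. An AWS CLI
# equivalent would be:
#   aws s3 sync --no-sign-request s3://openneuro.org/ds004517 ds004517
import openneuro

# data.size_bytes reports ~13.6 GB of raw files, so expect a long transfer.
openneuro.download(dataset="ds004517", target_dir="ds004517")
```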
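
The README's "Example code" section points to a loading script in the dataset's 'code' directory, which this record does not reproduce. The sketch below is a stand-in, not the authors' script: it assumes BIDS subject labels of the form sub-01, a task label of "eeg" (consistent with the record's "tasks" field), and the auxiliary channel names listed in the README.

```python
# Hedged stand-in for the dataset's own example script (assumptions:
# sub-01 subject naming, BIDS task label "eeg", channel names per README).
from mne_bids import BIDSPath, read_raw_bids

bids_path = BIDSPath(subject="01", task="eeg", datatype="eeg", root="ds004517")
raw = read_raw_bids(bids_path)  # reads the BioSemi recording plus BIDS sidecars
raw.load_data()

# Type the non-EEG electrodes described in the README, in case the
# channels.tsv sidecar has not already done so.
raw.set_channel_types({
    "EXG5": "eog", "EXG6": "eog",    # horizontal EOG (by left/right eye)
    "EXG7": "eog", "EXG8": "eog",    # vertical EOG (above/below right eye)
    "EXG3": "misc", "EXG4": "misc",  # wrist electrodes (e.g., heart rate)
    "Resp": "resp",                  # respiration belt
})

# Re-reference EEG to the earlobe electrodes the README describes as
# references (EXG1 = left ear, EXG2 = right ear).
raw.set_eeg_reference(ref_channels=["EXG1", "EXG2"])
```

Re-referencing to the earlobes mirrors the README's description of EXG1/EXG2 as reference electrodes; it is a common choice for this montage, not something the record itself mandates.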