{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cbd","dataset_id":"nm000242","associated_paper_doi":null,"authors":["Jing'ao Gao","Yao Liu","Zhengshuang Li","Kaixin Huang","Fan Wang","Jiaping Xu","Lei Zhao","Tianwen Li","Yunfa Fu"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":22,"ages":[74,74,74,74,74,74,74,74,74,74,74,74,74,74,74,74,74,74,74,74,74,74],"age_min":74,"age_max":74,"age_mean":74.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000242","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"c1856cf414bcfc352e4885ce0a2d4694d5e56a716ae9fa67ad51a1c8ee169289","license":"CC-BY-NC-ND-4.0","n_contributing_labs":null,"name":"Visual imagery EEG dataset from Gao et al 2026","readme":"# Visual imagery EEG dataset from Gao et al 2026\nVisual imagery EEG dataset from Gao et al 2026.\n## Dataset Overview\n- **Code**: Gao2026\n- **Paradigm**: imagery\n- **DOI**: 10.1038/s41597-025-06512-5\n- **Subjects**: 22\n- **Sessions per subject**: 2\n- **Events**: dog=1, bird=2, fish=3, pentagram=11, square=12, circle=13, scissor=21, watch=22, cup=23, chair=24\n- **Trial interval**: [0, 4] s\n- **Runs per session**: 3\n- **File format**: BDF\n## Acquisition\n- **Sampling rate**: 1000.0 Hz\n- **Number of channels**: 32\n- **Channel types**: eeg=32\n- **Montage**: standard_1005\n- **Hardware**: Neuracle NeuSenW32\n- **Reference**: CPz\n- **Ground**: AFz\n- **Sensor type**: Ag/AgCl\n- **Line frequency**: 50.0 Hz\n- **Online filters**: {'sampling_rate': 1000}\n## Participants\n- **Number of subjects**: 22\n- **Health status**: healthy\n- **Age**: min=20.0, max=23.0\n- **Gender distribution**: male=17, female=5\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 10\n- **Class labels**: dog, bird, fish, pentagram, square, circle, scissor, watch, cup, chair\n- **Trial duration**: 4.0 s\n- **Study design**: Visual imagery of animals, figures, and objects with simultaneous 32-channel EEG recording\n- **Feedback type**: none\n- **Stimulus type**: image cues\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  dog\n    ├─ Sensory-event\n    └─ Label/dog\n  bird\n    ├─ Sensory-event\n    └─ Label/bird\n  fish\n    ├─ Sensory-event\n    └─ Label/fish\n  pentagram\n    ├─ Sensory-event\n    └─ Label/pentagram\n  square\n    ├─ Sensory-event\n    └─ Label/square\n  circle\n    ├─ Sensory-event\n    └─ Label/circle\n  scissor\n    ├─ Sensory-event\n    └─ Label/scissor\n  watch\n    ├─ Sensory-event\n    └─ Label/watch\n  cup\n    ├─ Sensory-event\n    └─ Label/cup\n  chair\n    ├─ Sensory-event\n    └─ Label/chair\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: dog, bird, fish, pentagram, square, circle, scissor, watch, cup, chair\n## Data Structure\n- **Trials**: 16800\n- **Trials context**: 20 subjects x 2 sessions x 400 trials + 2 subjects x 1 session x 400 trials = 16800\n## Signal Processing\n- **Classifiers**: EEGNet, CSP+KNN\n- **Feature extraction**: CSP, deep_learning\n- **Frequency bands**: bandpass=[5.0, 30.0] Hz\n- **Spatial filters**: 
CSP, CAR\n## Cross-Validation\n- **Method**: train-test split\n- **Evaluation type**: within_subject\n## BCI Application\n- **Applications**: human_machine_interaction\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Visual\n- **Type**: Research\n## Documentation\n- **DOI**: 10.1038/s41597-025-06512-5\n- **License**: CC-BY-NC-ND-4.0\n- **Investigators**: Jing'ao Gao, Yao Liu, Zhengshuang Li, Kaixin Huang, Fan Wang, Jiaping Xu, Lei Zhao, Tianwen Li, Yunfa Fu\n- **Institution**: Kunming University of Science and Technology\n- **Country**: CN\n- **Repository**: Figshare\n- **Data URL**: https://doi.org/10.6084/m9.figshare.30227503.v1\n- **Publication year**: 2026\n## References\nGao, J., Liu, Y., Li, Z., Huang, K., Wang, F., Xu, J., Zhao, L., Li, T., & Fu, Y. (2026). An EEG Dataset for Visual Imagery-Based Brain-Computer Interface. Scientific Data. https://doi.org/10.1038/s41597-025-06512-5\nGao, J. et al. (2026). EEG Dataset for Visual Imagery. Figshare. https://doi.org/10.6084/m9.figshare.30227503.v1\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1"],"size_bytes":34047540206,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000242","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:09:35.969431+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-25T21:59:35Z"},"total_files":125,"computed_title":"Visual imagery EEG dataset from Gao et al 2026","nchans_counts":[{"val":32,"count":125}],"sfreq_counts":[{"val":1000.0,"count":125}],"stats_computed_at":"2026-05-01T13:49:34.646008+00:00","total_duration_s":354521.875,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"0fa9099ecd220645","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Other"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.75},"reasoning":{"few_shot_analysis":"Closest few-shot by paradigm is the \"EEG Motor Movement/Imagery Dataset\" (labeled Type=Motor), which shows the convention that imagery/BCI-style paradigms are often categorized by the primary cognitive intent (imagery for control). However, that example is explicitly *motor* movement/imagery (fist/feet), while the current dataset is explicitly *visual imagery* of objects/animals/shapes. Closest few-shot by stimulus modality is the schizophrenia visual discrimination example (Modality=Visual, Type=Perception), which supports labeling the stimulus channel as Visual when image cues are used. 
Because this dataset is imagery-based rather than perceptual discrimination, Type does not cleanly map to Perception or Motor; this pushes toward Type=Other under the catalog’s allowed labels.","metadata_analysis":"Key metadata facts:\n- Population: \"**Health status**: healthy\" and \"**Subjects**: 22\" (readme).\n- Age conflict: readme says \"**Age**: min=20.0, max=23.0\" but participants_overview says \"Subjects: 22; Age range: 74-74\".\n- Stimulus/modality: \"**Stimulus type**: image cues\" and \"**Stimulus modalities**: visual\" / \"**Primary modality**: visual\".\n- Task/purpose: title and reference state \"Visual imagery EEG dataset\" and \"An EEG Dataset for **Visual Imagery-Based Brain-Computer Interface**\"; also \"**Study design**: Visual imagery of animals, figures, and objects\" and \"**BCI Application**\" with \"human_machine_interaction\".\n- Paradigm ambiguity: readme lists \"**Paradigm**: imagery\" and also \"**Detected paradigm**: motor_imagery\" (auto-detected tag), which appears inconsistent with the dataset’s explicit description of visual imagery.","paper_abstract_analysis":"No useful paper information. (A DOI is provided, but no abstract text is included in the supplied metadata.)","evidence_alignment_check":"Pathology:\n1) Metadata says: \"Health status: healthy\".\n2) Few-shot pattern suggests: when participants are healthy controls/volunteers, label Pathology=Healthy.\n3) ALIGN.\n\nModality:\n1) Metadata says: \"Stimulus type: image cues\" and \"Stimulus modalities: visual\" / \"Primary modality: visual\".\n2) Few-shot pattern suggests: tasks with visual stimuli are labeled Modality=Visual.\n3) ALIGN.\n\nType:\n1) Metadata says: \"Study design: Visual imagery of animals, figures, and objects\" and labels it a \"Visual Imagery-Based Brain-Computer Interface\".\n2) Few-shot pattern suggests: imagery paradigms can map to Type=Motor when the imagery is motor (as in the motor movement/imagery example), or Type=Perception when it is a visual discrimination/perceptual task.\n3) PARTIAL CONFLICT/AMBIGUITY: This dataset is imagery-based but not motor imagery; it also is not a straightforward perceptual discrimination task. 
Given allowed labels, Type=Other best fits \"visual imagery / BCI\" as a construct not explicitly represented by the Type taxonomy.","decision_summary":"Top-2 candidates per category (with head-to-head selection):\n\nPathology:\n- Healthy: Supported by \"Health status: healthy\".\n- Unknown: Only if population were unclear due to the age-range conflict.\nSelection: Healthy wins because recruitment/health status is explicitly stated as healthy (diagnosis-free).\nConfidence evidence: quote \"Health status: healthy\" (+ general consistency with title).\n\nModality:\n- Visual: Supported by \"Stimulus type: image cues\", \"Stimulus modalities: visual\", and \"Primary modality: visual\".\n- Other: Only if imagery were considered non-sensory; but cues are explicitly visual.\nSelection: Visual wins with strong direct metadata.\nConfidence evidence: 3 explicit modality phrases.\n\nType:\n- Other: Supported by \"Visual imagery\" and \"Visual Imagery-Based Brain-Computer Interface\" framing, which is not clearly Motor/Perception/Memory/etc.\n- Perception: Runner-up because the task uses image cues and relates to visual representations.\nSelection: Other wins because the primary construct is imagery/BCI control rather than sensory discrimination or decision-making.\nConfidence evidence: explicit \"Visual imagery\" + \"...Brain-Computer Interface\" + \"Study design: Visual imagery...\"; but taxonomy mismatch keeps confidence below very high."}},"canonical_name":null,"name_confidence":0.78,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Gao2026_Visual_imagery_et"}}
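For readers who want to turn this record's acquisition fields into analyzable epochs (BDF files, 32 channels at 1000.0 Hz, event codes dog=1 through chair=24, trial interval [0, 4] s), here is a minimal MNE-Python sketch. The file name is hypothetical, and it assumes the BDF annotations carry the event names listed in the readme; neither assumption is confirmed by this record.

```python
# Minimal sketch: epoch one run of nm000242 with MNE-Python.
# ASSUMPTIONS: the local file name is hypothetical, and the annotations
# are assumed to use the event names from the readme's "Events" line.
import mne

EVENT_ID = {  # event codes copied from the readme
    "dog": 1, "bird": 2, "fish": 3,
    "pentagram": 11, "square": 12, "circle": 13,
    "scissor": 21, "watch": 22, "cup": 23, "chair": 24,
}

raw = mne.io.read_raw_bdf("sub-01_ses-0_task-imagery_run-01_eeg.bdf",
                          preload=True)  # hypothetical path
raw.set_montage("standard_1005", on_missing="ignore")  # montage per readme

events, _ = mne.events_from_annotations(raw, event_id=EVENT_ID)
# Trial interval [0, 4] s per the readme; imagery trials, so no baseline.
epochs = mne.Epochs(raw, events, event_id=EVENT_ID,
                    tmin=0.0, tmax=4.0, baseline=None, preload=True)
print(epochs)  # expect 32 EEG channels sampled at 1000 Hz
```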
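The readme's Signal Processing and Cross-Validation fields (bandpass 5.0-30.0 Hz, CAR and CSP spatial filters, a CSP+KNN classifier, within-subject train-test split) imply a baseline along the lines sketched below. It reuses the `epochs` object from the previous sketch; the number of CSP components, the value of k, and the 80/20 split are illustrative assumptions, not values stated in the record.

```python
# Hedged sketch of the "CSP+KNN" baseline described in the readme.
# ASSUMPTIONS: n_components=8, k=5, and the 80/20 split are not from
# the record; only the 5-30 Hz band, CAR, CSP, and KNN are.
from mne.decoding import CSP
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

epochs.set_eeg_reference("average")   # CAR spatial filter
epochs.filter(5.0, 30.0)              # bandpass from the readme
X = epochs.get_data()                 # shape (n_trials, 32, n_times)
y = epochs.events[:, 2]               # 10 imagery classes

# Within-subject evaluation with a simple train-test split, per the record.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
clf = make_pipeline(CSP(n_components=8), KNeighborsClassifier(n_neighbors=5))
clf.fit(X_tr, y_tr)
print("within-subject accuracy:", clf.score(X_te, y_te))
```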
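The storage block points at s3://nemar/nm000242, with dataset_description.json as the raw key. A minimal sketch for listing those objects with boto3 follows; it assumes the NEMAR bucket permits anonymous reads, which this record does not confirm.

```python
# Hedged sketch: list a few objects under s3://nemar/nm000242.
# ASSUMPTION: the bucket allows unsigned (anonymous) access; if it does
# not, this call fails with AccessDenied and credentials are required.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
resp = s3.list_objects_v2(Bucket="nemar", Prefix="nm000242/", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```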