{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3358","dataset_id":"ds004514","associated_paper_doi":null,"authors":["Milan Rybář","Riccardo Poli","Ian Daly"],"bids_version":"1.7.0","contact_info":["Milan Rybář"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds004514.v1.1.2","datatypes":["eeg","fnirs"],"demographics":{"subjects_count":12,"ages":[26,32,57,47,23,21,29,50,27,33,28,20],"age_min":20,"age_max":57,"age_mean":32.75,"species":null,"sex_distribution":{"f":9,"m":3},"handedness_distribution":{"r":12}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds004514","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"311a69d43d35372a2f547fb62c5095bb19aa27e5f05ae73dc824b596a57f15e4","license":"CC0","n_contributing_labs":null,"name":"Simultaneous EEG and fNIRS recordings for semantic decoding of imagined animals and tools","readme":"﻿### Description\nThis dataset contains simultaneous electroencephalography (EEG) and near-infrared spectroscopy (fNIRS) signals recorded from 12 participants while performing a silent naming task and three sensory-based imagery tasks using visual, auditory, and tactile perception.\nParticipants were asked to visualize an object in their minds, imagine the sounds made by the object, and imagine the feeling of touching the object.\n### EEG\nEEG data were acquired with a BioSemi ActiveTwo system with 64 electrodes positioned according to the international 10-20 system, plus one electrode on each earlobe as references ('EXG1' channel is the left ear electrode and 'EXG2' channel is the right ear electrode).\nAdditionally, 2 electrodes placed on the left hand measured galvanic skin response ('GSR1' channel) and a respiration belt around the waist measured respiration ('Resp' channel).\nThe sampling rate was 2048 Hz.\nThe electrode names were saved in a default BioSemi labeling scheme (A1-A32, B1-B32). 
See the Biosemi documentation for the corresponding international 10-20 naming scheme (https://www.biosemi.com/pics/cap_64_layout_medium.jpg, https://www.biosemi.com/headcap.htm).\nFor convenience, the following ordered channels\n```\n['A1', 'A2', 'A3', 'A4', 'A5', 'A6', 'A7', 'A8', 'A9', 'A10', 'A11', 'A12', 'A13', 'A14', 'A15', 'A16', 'A17', 'A18', 'A19', 'A20', 'A21', 'A22', 'A23', 'A24', 'A25', 'A26', 'A27', 'A28', 'A29', 'A30', 'A31', 'A32', 'B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B8', 'B9', 'B10', 'B11', 'B12', 'B13', 'B14', 'B15', 'B16', 'B17', 'B18', 'B19', 'B20', 'B21', 'B22', 'B23', 'B24', 'B25', 'B26', 'B27', 'B28', 'B29', 'B30', 'B31', 'B32']\n```\ncan thus be renamed to\n```\n['Fp1', 'AF7', 'AF3', 'F1', 'F3', 'F5', 'F7', 'FT7', 'FC5', 'FC3', 'FC1', 'C1', 'C3', 'C5', 'T7', 'TP7', 'CP5', 'CP3', 'CP1', 'P1', 'P3', 'P5', 'P7', 'P9', 'PO7', 'PO3', 'O1', 'Iz', 'Oz', 'POz', 'Pz', 'CPz', 'Fpz', 'Fp2', 'AF8', 'AF4', 'AFz', 'Fz', 'F2', 'F4', 'F6', 'F8', 'FT8', 'FC6', 'FC4', 'FC2', 'FCz', 'Cz', 'C2', 'C4', 'C6', 'T8', 'TP8', 'CP6', 'CP4', 'CP2', 'P2', 'P4', 'P6', 'P8', 'P10', 'PO8', 'PO4', 'O2']\n```\n### fNIRS\nfNIRS data were acquired with a NIRx NIRScoutXP continuous wave imaging system equipped with 4 light detectors, 8 light emitters (sources), and low-profile fNIRS optodes.\nBoth electrodes and optodes were placed in a NIRx NIRScap for integrated fNIRS-EEG layouts.\nTwo different montages were used: frontal and temporal, see references for more information.\n### Stimulus\nFolder 'stimuli' contains all images of the semantic categories of animals and tools presented to participants.\n### Example code\nWe have prepared example scripts to demonstrate how to load the EEG and fNIRS data into Python using MNE and MNE-BIDS packages. These scripts are located in the 'code' directory.\n### References\nThis dataset was analyzed in the following publications:\n[1] Rybář, M., Poli, R. and Daly, I., 2024. Using data from cue presentations results in grossly overestimating semantic BCI performance. Scientific Reports, 14(1), p.28003.\n[2] Rybář, M., Poli, R. and Daly, I., 2021. Decoding of semantic categories of imagined concepts of animals and tools in fNIRS. Journal of Neural Engineering, 18(4), p.046035.\n[3] Rybář, M., 2023. Towards EEG/fNIRS-based semantic brain-computer interfacing (Doctoral dissertation, University of Essex).","recording_modality":["eeg","fnirs"],"senior_author":"Ian Daly","sessions":[],"size_bytes":25930603322,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["eeg","nirs"],"timestamps":{"digested_at":"2026-04-22T12:26:41.315746+00:00","dataset_created_at":"2023-02-27T13:39:47.155Z","dataset_modified_at":"2025-04-03T23:41:15.000Z"},"total_files":24,"storage":{"backend":"s3","base":"s3://openneuro.org/ds004514","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"2e1647d3c64fef2d","model":"openai/gpt-5.2","tagged_at":"2026-01-20T10:41:40.823301+00:00"},"tags":{"pathology":["Healthy"],"modality":["Multisensory"],"type":["Other"],"confidence":{"pathology":0.6,"modality":0.75,"type":0.65},"reasoning":{"few_shot_analysis":"Closest convention match for Modality is the few-shot example “Cross-modal Oddball Task”, which was labeled Modality=“Multisensory” because it explicitly combined visual and auditory cues. This dataset similarly states multiple sensory channels (visual, auditory, tactile), so the same Multisensory convention applies. 
For Type, the few-shot “Meta-rdk: Preprocessed EEG data” shows that a perceptual discrimination paradigm maps to Type=“Perception”; here the tasks are sensory-based imagery (visual/auditory/tactile imagery) rather than a decision or learning paradigm, so “Perception” is a plausible runner-up. However, because the stated goal is semantic/BCI decoding rather than pure perception, “Other” is also plausible (consistent with how non-standard paradigms are handled when no clear cognitive-construct label fits).","metadata_analysis":"Key task/stimulus facts in the README: (1) “recorded from 12 participants while performing a silent naming task and three sensory-based imagery tasks using visual, auditory, and tactile perception.” (2) “Participants were asked to visualize an object in their minds, imagine the sounds made by the object, and imagine the feeling of touching the object.” (3) “Folder 'stimuli' contains all images of the semantic categories of animals and tools presented to participants.” These indicate a semantic/imagery paradigm with visual cue images and imagery across auditory and tactile modalities; no clinical recruitment criteria or diagnoses are mentioned.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says nothing about a disorder/diagnosis (only “12 participants”); few-shot convention suggests labeling as Healthy when no clinical recruitment is stated. ALIGN.\nModality: Metadata explicitly says “visual, auditory, and tactile” imagery tasks; few-shot convention (cross-modal oddball) maps multi-sensory designs to “Multisensory”. ALIGN.\nType: Metadata says “silent naming task” and “sensory-based imagery tasks” with “semantic categories of animals and tools” (semantic/BCI decoding framing); few-shot patterns suggest “Perception” for sensory paradigms, but the semantic-BCI/imagery framing doesn’t cleanly match a single construct label like Memory/Learning/Decision-making. PARTIAL ALIGN; choosing “Other” because the primary stated purpose is semantic decoding/BCI rather than a standard perception/attention/memory construct.","decision_summary":"Top-2 candidates per category:\n\nPathology:\n1) Healthy — evidence: no disorder mentioned; participants described generically (“12 participants”).\n2) Unknown — alternative because “healthy” is not explicitly stated.\nDecision: Healthy (metadata lacks any clinical recruitment; default to Healthy). Confidence 0.6 because there is no explicit “healthy controls” quote.\n\nModality:\n1) Multisensory — evidence: “three sensory-based imagery tasks using visual, auditory, and tactile perception”; also cues are visual images while imagery spans auditory/tactile.\n2) Visual — evidence: “stimuli… contains all images… presented to participants” (visual cueing could be dominant input).\nDecision: Multisensory (explicit multi-channel sensory framing outweighs purely visual cue interpretation). Confidence 0.75 based on two direct modality quotes.\n\nType:\n1) Other — evidence: combined “silent naming” + multi-sensory imagery + semantic category decoding/BCI context (“semantic categories of animals and tools”).\n2) Perception — evidence: “sensory-based imagery tasks using visual, auditory, and tactile perception.”\nDecision: Other (paradigm is semantic/imagery/BCI-oriented and not clearly a standard perception-only experiment). 
Confidence 0.65 due to mixed cues between Perception vs a broader ‘Other’ semantic-imagery purpose."}},"computed_title":"Simultaneous EEG and fNIRS recordings for semantic decoding of imagined animals and tools","nchans_counts":[{"val":80,"count":12},{"val":28,"count":6},{"val":22,"count":6}],"sfreq_counts":[{"val":2048.0,"count":12},{"val":7.8125,"count":6},{"val":8.928571428571429,"count":6}],"stats_computed_at":"2026-04-22T23:16:00.307763+00:00","total_duration_s":105509.898140625,"author_year":"Rybar2023_Simultaneous","canonical_name":null}}
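The README embedded in the record above describes renaming the BioSemi A1-A32/B1-B32 channels to the listed international 10-20 labels, and mentions example MNE / MNE-BIDS loading scripts in the dataset's 'code' directory. The sketch below is not those scripts; it is a minimal illustration of that renaming step under assumed names: a local copy of the dataset at ./ds004514 (e.g. synced from the S3 location in the record's "storage" field), subject "01", and the task label "eeg" taken from the record's "tasks" field.

```python
# Minimal sketch (assumptions: dataset downloaded to ./ds004514, subject "01",
# task "eeg"; this is not the dataset's own example code).
# The raw files can be fetched from the S3 location given in the record, e.g.:
#   aws s3 sync --no-sign-request s3://openneuro.org/ds004514 ./ds004514
from mne_bids import BIDSPath, read_raw_bids

# Default BioSemi labels (A1-A32, B1-B32), in the order given in the README.
biosemi_names = [f"A{i}" for i in range(1, 33)] + [f"B{i}" for i in range(1, 33)]

# Corresponding 10-20 names, copied verbatim from the README's second list.
ten_twenty_names = [
    "Fp1", "AF7", "AF3", "F1", "F3", "F5", "F7", "FT7", "FC5", "FC3", "FC1",
    "C1", "C3", "C5", "T7", "TP7", "CP5", "CP3", "CP1", "P1", "P3", "P5",
    "P7", "P9", "PO7", "PO3", "O1", "Iz", "Oz", "POz", "Pz", "CPz", "Fpz",
    "Fp2", "AF8", "AF4", "AFz", "Fz", "F2", "F4", "F6", "F8", "FT8", "FC6",
    "FC4", "FC2", "FCz", "Cz", "C2", "C4", "C6", "T8", "TP8", "CP6", "CP4",
    "CP2", "P2", "P4", "P6", "P8", "P10", "PO8", "PO4", "O2",
]

# Load one EEG recording via MNE-BIDS (subject/task values are assumptions).
bids_path = BIDSPath(root="./ds004514", subject="01", task="eeg",
                     suffix="eeg", datatype="eeg")
raw = read_raw_bids(bids_path)

# Rename only the 64 scalp channels; EXG1/EXG2 (earlobe references),
# GSR1 and Resp keep their original names.
raw.rename_channels(dict(zip(biosemi_names, ten_twenty_names)))

# Attach standard 10-20 positions; non-scalp channels are skipped.
raw.set_montage("standard_1020", on_missing="ignore")

print(raw.info["ch_names"][:10])
```

The dict built from the two ordered lists is what makes the renaming safe: zip pairs each BioSemi label with the 10-20 name at the same position, so the mapping only depends on the ordering stated in the README.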