{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3431","dataset_id":"ds006033","associated_paper_doi":null,"authors":["Foteini Simistira Liwicki"],"bids_version":"1.7.0","contact_info":["Foteini Simistira Liwicki"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds006033.v1.0.1","datatypes":["eeg"],"demographics":{"subjects_count":3,"ages":[35,35,25],"age_min":25,"age_max":35,"age_mean":31.666666666666668,"species":null,"sex_distribution":{"m":2,"f":1},"handedness_distribution":{"r":3}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds006033","osf_url":null,"github_url":null,"paper_url":null},"funding":["Kompetensutveckling till professor, Lulea University of Technology, Sweden [LTU-154-2023]","Labbfonden Medel för infrastruktur vid Luleå tekniska universitet, 2023 [LTU-4908-2022]","Kempe Foundation, Sweden [JCSMK23-0102]"],"ingestion_fingerprint":"95027ae3889b238118f3ac198e597ec4e88ba8496971679ff9c71ce1df107b18","license":"CC0","n_contributing_labs":null,"name":"Synchronous EEG and fMRI dataset on inner speech","readme":"# Inner Speech EEG-fMRI Dataset\n## Description\nThis dataset contains simultaneous EEG-fMRI recordings for inner speech experiments. Data were collected using a 3T MRI scanner and 64-channel BrainProducts EEG system. The EEG data have undergone preprocessing, including pulse artifact removal, using the BrainVision Analyzer software. No further data transformations have been applied to ensure the dataset remains BIDS-compliant as \"raw\".\n## Subjects\n- Number of subjects: 3\n- Sessions per subject: 2\n- Tasks: Inner speech\n## Experimental Protocol\n- Each trial includes a fixation period (2s), stimulus display (2s), and rest (12s).\n- 8 words were presented in random order, each repeated 40 times.\n- EEG sampled at 5000 Hz, fMRI acquired with TR=2s.\n## Data Organization\n- Functional MRI data: `sub-xx/ses-xx/func/`\n- EEG data: `sub-xx/ses-xx/eeg/`\n- Event markers: `events.tsv`\n- BIDS-compatible metadata included in JSON sidecars.\n## Contact\nFor inquiries, contact: Foteini Simistira Liwicki (Foteini.liwicki@ltu.se)","recording_modality":["eeg"],"senior_author":"Foteini Simistira Liwicki","sessions":["01","02"],"size_bytes":16449271220,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["innerspeech"],"timestamps":{"digested_at":"2026-04-22T12:29:02.437351+00:00","dataset_created_at":"2025-03-19T10:16:24.423Z","dataset_modified_at":"2025-05-08T09:02:04.000Z"},"total_files":5,"storage":{"backend":"s3","base":"s3://openneuro.org/ds006033","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"985d6cea6be08cc1","model":"openai/gpt-5.2","tagged_at":"2026-01-20T18:47:44.329611+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Other"],"confidence":{"pathology":0.6,"modality":0.8,"type":0.6},"reasoning":{"few_shot_analysis":"No few-shot example directly matches an inner-speech (covert speech) paradigm. However, few-shot conventions help in two ways: (1) For Pathology, many examples label normative/non-clinical studies as \"Healthy\" when no disorder-based recruitment is described (e.g., \"EEG: Three armed bandit gambling task\" explicitly says \"Healthy control college students\" and is labeled Healthy). Here, the dataset does not mention any diagnosis, so the convention suggests \"Healthy\" rather than a disease label. (2) For Modality, examples map stimulus channel to modality (e.g., visual discrimination of dots -> Visual; auditory clicks/music/speech -> Auditory). Here the trial includes a \"stimulus display\" of words, matching the Visual mapping convention.","metadata_analysis":"Key quoted facts from the dataset README: (1) Task/purpose: \"simultaneous EEG-fMRI recordings for inner speech experiments\" and \"Tasks: Inner speech\". (2) Stimulus channel: \"Each trial includes a fixation period (2s), stimulus display (2s), and rest (12s).\" plus \"8 words were presented in random order\". (3) No clinical recruitment: the README lists \"Number of subjects: 3\" but provides no mention of any diagnosis, patient group, or disorder-based inclusion criteria.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata SAYS nothing about patients/diagnoses (no quoted disorder terms; only \"Number of subjects: 3\"). Few-shot pattern SUGGESTS labeling non-clinical cohorts as \"Healthy\" when no pathology is indicated. ALIGN (no conflict).\nModality: Metadata SAYS \"stimulus display (2s)\" and \"8 words were presented\" implying visually presented word stimuli. Few-shot pattern SUGGESTS using the stimulus input channel (e.g., dots on screen -> Visual; sounds -> Auditory). ALIGN.\nType: Metadata SAYS \"inner speech experiments\" / \"Tasks: Inner speech\" which is a language/covert-speech cognitive domain not explicitly represented in the allowed Type list. Few-shot pattern does not provide a direct mapping for inner speech; it suggests choosing a closest cognitive construct label when clear (e.g., digit span -> Memory), otherwise \"Other\". ALIGN (no conflict, but weak guidance).","decision_summary":"Top-2 candidates per category:\n- Pathology: (A) Healthy vs (B) Unknown. Evidence for Healthy: absence of any clinical recruitment language alongside a standard cognitive experiment description (\"Tasks: Inner speech\"; no diagnoses mentioned). Evidence for Unknown: no explicit statement like \"healthy volunteers\". Winner: Healthy. Confidence limited because it is inferred from omission.\n- Modality: (A) Visual vs (B) Other. Evidence for Visual: \"stimulus display (2s)\" and \"8 words were presented\" (words displayed during trials). Evidence for Other: could argue inner speech is not an external sensory modality, but the explicit stimulus presentation is visual. Winner: Visual.\n- Type: (A) Other vs (B) Motor. Evidence for Other: explicit focus is \"inner speech\" (language/covert speech) which is not covered by the provided Type labels (no \"Language\"). Evidence for Motor: inner speech is a form of covert production, but there is no movement/imagery-of-limb-movement focus. Winner: Other. Confidence moderate-low due to label-set mismatch."}},"computed_title":"Synchronous EEG and fMRI dataset on inner speech","nchans_counts":[{"val":66,"count":5}],"sfreq_counts":[{"val":5000.0,"count":5}],"stats_computed_at":"2026-04-22T23:16:00.311241+00:00","total_duration_s":7878.1342,"author_year":"Liwicki2025","canonical_name":null}}