{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a345c","dataset_id":"ds006593","associated_paper_doi":null,"authors":["Basak Celik","Tab Memmott","Matthew Lawhead","Srikar Ananthoju","Deniz Erdogmus"],"bids_version":"1.7.0","contact_info":["Basak Celik","Deniz Erdogmus","Tab Memmott"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds006593.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":21,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds006593","osf_url":null,"github_url":null,"paper_url":null},"funding":["This research was funded by the National Institutes of Health (DC009834)."],"ingestion_fingerprint":"c2cd6ad0e5996f68142cbfee52ce3bd04adbe82f4ea0fb55466cb5b2e718dd1c","license":"CC0","n_contributing_labs":null,"name":"cBCI Matrix Multimodal Dataset","readme":"# Multimodal Sensor Fusion for EEG-Based BCI Typing Systems\n## Dataset Overview\nThis dataset contains recordings of EEG and EyeTracking for a BCI spelling task. The data were collected in 2023 at Northeastern University.\n- N=21\n- Calibration task were proctored using BciPy [1]\n- The dataset is organized in accordance with the Brain Imaging Data Structure (BIDS) specification (version 1.7.0).\n## Methodology\nCalibration data were collected from control participants (n=21, mean age 23.6 ± 3.1 years) in a quiet lab room at Northeastern University. EEG data were collected using the DSI-24, dry electrode cap (Wearable Sensing, San Diego CA) at a sampling rate of 300 Hz. The device employs a hardware filter permitting a collection bandwidth of 0.003–150 Hz. Data were recorded from Fp1/2, Fz, F3/4, F7/8, Cz, C3/4, T7/T8, T3/T4, Pz, P3/P4, P7/P8, T5/T6, O1/2 with linked-ear reference (A1 and A2) and ground at A1. 
All data were collected using a Lenovo Legion 5 Pro laptop running Windows 11, with an Intel Core i7-11800H @ 2.30 GHz, 16 GB DDR4 RAM, and an NVIDIA GeForce RTX 3050. Trigger fidelity on the experiment laptop was verified using the Matrix Time Test Task in BciPy and a photodiode. The results of this timing test were used to determine static offsets between hardware components and to prevent experimentation when timing violations exceeded ±10 ms. Eye-tracking data were collected using a portable eye tracker (Tobii Pro Nano) at a sampling rate of 60 Hz. The matrix paradigm and the data acquisition modules were developed in BciPy [1], a standalone application for experimental data collection. This work focuses on a specific BCI paradigm, single-character-presentation (SCP) visual presentation, in which symbols are presented in matrix form and individually highlighted in randomized order. The calibration task presented letters at a rate of 4 Hz, with 100 inquiries of 10 letters each (1 target, 9 non-target). In 10% of the inquiries, only non-target characters were shown. The stimuli included all 26 letters of the English alphabet, as well as the characters “_” for space and “<” for backspace. The order of target stimuli was randomly distributed among the inquiries. Between inquiries, there was a two-second blank screen. Each inquiry consisted of a one-second prompt showing the target letter, followed by a 0.5 s fixation cross, and then the presentation of the 10 letters. The letters were displayed in the center of the screen, in white on a black background. Target prompts and stimuli were presented in white, while fixation crosses were rendered in red.\nThe experimental protocol was approved by the Northeastern University Institutional Review Board (IRB).
All participants provided written informed consent prior to participation.\n## Directory Structure\nThe dataset follows the BIDS convention with the following structure: /sub-[subject]/ses-[session]/[eeg or et]. To load the BIDS-formatted data into the BciPy Simulator, please see the following directory: /sourcedata/bcipy_metadata. This directory contains the raw BciPy parameter files. It also contains the output of the matrix display (matrix.png) for eye-tracking visualization.\n## Contact Information\nFor questions or issues regarding this dataset, please contact the corresponding author [Basak Celik](celik.b@northeastern.edu) via email.\n[1] Memmott T, Koçanaoğulları A, Lawhead M, Klee D, Dudy S, Fried-Oken M, Oken B. BciPy: brain-computer interface software in Python. Brain-Computer Interfaces, 8(4), 137-53, 2021.","recording_modality":["eeg"],"senior_author":"Deniz Erdogmus","sessions":["01"],"size_bytes":463412434,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["MatrixCalibration"],"timestamps":{"digested_at":"2026-04-22T12:29:33.657625+00:00","dataset_created_at":"2025-08-23T19:31:47.377Z","dataset_modified_at":"2025-08-23T21:11:35.000Z"},"total_files":21,"storage":{"backend":"s3","base":"s3://openneuro.org/ds006593","raw_key":"dataset_description.json","dep_keys":["CHANGES","README.md","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"4891df4bbbfbc46a","model":"openai/gpt-5.2","tagged_at":"2026-01-20T19:04:29.608587+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.75},"reasoning":{"few_shot_analysis":"The most similar few-shot convention is the \"Cross-modal Oddball Task\" example: it has a target vs standard/oddball structure and is labeled with an attention/control-oriented Type (though that example’s Type is set to Clinical/Intervention due to PD being the recruitment focus).
Another relevant convention is the \"EEG: Three-Stim Auditory Oddball...TBI\" example, which shows that oddball-style target detection paradigms are treated as task-evoked cognition (not Resting-state/Sleep). These guide mapping the present target vs non-target BCI spelling calibration to a task-evoked attention/target-detection construct with a visual stimulus modality. For Pathology, multiple few-shots demonstrate that explicit recruitment groups dominate (e.g., PD, TBI, schizophrenia); here the dataset explicitly says 'control participants', so the convention implies 'Healthy'.","metadata_analysis":"Key population facts: (1) \"Calibration data were collected from control participants (n=21, mean age 23.6 ± 3.1 years)\" and (2) \"All participants provided written informed consent\" with no mention of a disorder/diagnosis.\nKey task/modality facts: (1) \"BCI spelling task\" and \"single-character-presentation (SCP) based visual presentation\" and (2) \"letters were displayed in the center of the screen, in white on a black background\" with a target/non-target structure: \"100 inquiries consisting of 10 letters each (1 target, 9 non-target)\" plus \"In 10% of the inquiries, only non-target characters were shown.\"","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says participants are \"control participants\" with no clinical recruitment described; few-shot pattern suggests 'Healthy' when only controls/volunteers are recruited (ALIGN).\nModality: Metadata says \"visual presentation\" with letters on a screen; few-shot pattern maps screen-based paradigms (e.g., visual discrimination) to Visual modality (ALIGN).\nType: Metadata describes a target vs non-target BCI spelling calibration (oddball-like selective attention to targets). Few-shot conventions for oddball/target-detection tasks support an Attention-style cognitive construct (more than Resting-state/Motor). 
This aligns with the Type definition emphasizing cognitive construct rather than mechanics (mostly ALIGN, with Perception as a secondary plausible label).","decision_summary":"Pathology top-2: (1) Healthy — supported by \"control participants\" and no disorder terms; (2) Unknown — would apply if recruitment health status were unclear, but here 'control' strongly implies normative cohort. FINAL: Healthy.\nModality top-2: (1) Visual — supported by \"visual presentation\" and on-screen letters; (2) Multisensory — possible due to Eyetracking being recorded, but eyetracking is a measurement modality not a stimulus channel; stimuli are visual. FINAL: Visual.\nType top-2: (1) Attention — supported by target vs non-target structure (\"1 target, 9 non-target\"; \"only non-target\" catch inquiries) consistent with selective attention/oddball target detection; (2) Perception — plausible because stimuli are letters visually presented, but the paradigm emphasis is target selection for BCI spelling rather than sensory discrimination. FINAL: Attention.\nConfidence notes: Pathology confidence driven by explicit 'control participants' plus absence of any diagnosis; Modality confidence driven by multiple explicit 'visual' descriptions; Type confidence moderate-high due to clear target/non-target attention structure but without explicitly naming 'oddball'/'P300' in metadata."}},"computed_title":"cBCI Matrix Multimodal Dataset","nchans_counts":[{"val":19,"count":21}],"sfreq_counts":[{"val":300.0,"count":21}],"stats_computed_at":"2026-04-22T23:16:00.311823+00:00","total_duration_s":20199.536666666667,"author_year":"Celik2025","canonical_name":null}}