{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cd9","dataset_id":"nm000329","associated_paper_doi":null,"authors":["Stephanie Brandl","Benjamin Blankertz","Tobias Dahne"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.3389/fnins.2020.566147","datatypes":["eeg"],"demographics":{"subjects_count":16,"ages":[26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26],"age_min":26,"age_max":26,"age_mean":26.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000329","osf_url":null,"github_url":null,"paper_url":null},"funding":["BMBF/BIFOLD (01IS18025A, 01IS18037A)"],"ingestion_fingerprint":"77cb6cdcece40855871044225a0620264b0d9c10094be85c46d5df13c030659b","license":"CC-BY-NC-ND-4.0","n_contributing_labs":null,"name":"Brandl et al. 2020 — Motor Imagery Under Distraction: An Open Access BCI Dataset","readme":"Brandl2020\n==========\nMotor Imagery under distraction dataset from Brandl and Blankertz 2020.\nDataset Overview\n----------------\n  Code: Brandl2020\n  Paradigm: imagery\n  DOI: 10.3389/fnins.2020.566147\n  Subjects: 16\n  Sessions per subject: 1\n  Events: left_hand=1, right_hand=2\n  Trial interval: [0, 4.5] s\n  Runs per session: 7\n  File format: MAT (HDF5 v7.3)\nAcquisition\n-----------\n  Sampling rate: 1000.0 Hz\n  Number of channels: 63\n  Channel types: eeg=63\n  Channel names: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP10, TP7, TP8, TP9\n  Montage: standard_1005\n  Hardware: 2x BrainAmp (Brain Products)\n  Software: BBCI Toolbox (MATLAB)\n  Reference: nose\n  Sensor type: Ag/AgCl wet\n  Line frequency: 50.0 Hz\n  Cap manufacturer: EasyCap\n 
 Cap model: Fast'n Easy Cap\nParticipants\n------------\n  Number of subjects: 16\n  Health status: healthy\n  Age: mean=26.3\n  Gender distribution: female=6, male=10\n  BCI experience: mostly naive (3/16 had prior BCI experience)\nExperimental Protocol\n---------------------\n  Paradigm: imagery\n  Number of classes: 2\n  Class labels: left_hand, right_hand\n  Trial duration: 4.5 s\n  Tasks: calibration, clean, eyesclosed, news, numbers, flicker, stimulation\n  Study design: Motor imagery under distraction: 1 calibration run (no feedback, no distraction) + 6 feedback runs with different distraction conditions (clean, eyes closed, news, number search, flicker, vibro-tactile stimulation)\n  Feedback type: auditory\n  Stimulus type: auditory\n  Stimulus modalities: auditory\n  Primary modality: auditory\n  Synchronicity: cue-based\n  Mode: online\n  Training/test split: False\n  Instructions: Subjects received auditory cues ('links' for left, 'rechts' for right) and performed motor imagery of left or right hand movement\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  left_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: motor_imagery\n  Imagery tasks: left_hand, right_hand\n  Imagery duration: 4.5 s\nData Structure\n--------------\n  Trials: 504\n  Trials per class: left_hand=252, right_hand=252\n  Blocks per session: 7\n  Trials context: 7 runs per subject: 1 calibration (72 trials) + 6 feedback runs (72 trials each, 6 distraction conditions)\nPreprocessing\n-------------\n  Data state: raw\n  Preprocessing applied: False\nSignal 
Processing\n-----------------\n  Classifiers: CSP+LDA\n  Feature extraction: CSP, bandpower\n  Frequency bands: mu=[8.0, 13.0] Hz; beta=[13.0, 30.0] Hz\n  Spatial filters: CSP\nCross-Validation\n----------------\n  Method: holdout\n  Evaluation type: within_subject\nBCI Application\n---------------\n  Applications: motor_control\n  Environment: laboratory\n  Online feedback: True\nTags\n----\n  Pathology: Healthy\n  Modality: Motor\n  Type: Motor Imagery\nDocumentation\n-------------\n  DOI: 10.3389/fnins.2020.566147\n  License: CC-BY-NC-ND-4.0\n  Investigators: Stephanie Brandl, Benjamin Blankertz, Tobias Dahne\n  Senior author: Benjamin Blankertz\n  Institution: Technische Universitaet Berlin\n  Department: Department of Neurotechnology\n  Country: DE\n  Repository: DepositOnce TU Berlin\n  Data URL: https://depositonce.tu-berlin.de/handle/11303/10934.2\n  Publication year: 2020\n  Funding: BMBF/BIFOLD (01IS18025A, 01IS18037A)\n  Ethics approval: Approved by the ethics committee of the Charite University Medicine Berlin\n  How to acknowledge: Please cite: Brandl, S. and Blankertz, B. (2020). Motor Imagery Under Distraction -- An Open Access BCI Dataset. Frontiers in Neuroscience, 14, 566147. https://doi.org/10.3389/fnins.2020.566147\n  Keywords: brain-computer interface, motor imagery, EEG, distraction, open access, BCI\nAbstract\n--------\nWe present an open-access dataset of a motor imagery brain-computer interface (BCI) experiment conducted under six different distraction conditions. Sixteen healthy participants performed left vs. right hand motor imagery while being distracted by flickering video, number search tasks, news listening, eyes closed, vibro-tactile stimulation, or no distraction. Each participant completed one calibration run without feedback and six feedback runs under the different distraction conditions, resulting in 504 trials per subject.\nMethodology\n-----------\nParticipants completed one session with 7 runs of 72 trials each. 
Run 1 was calibration (no feedback, no distraction). Runs 2-7 included auditory feedback and one of six distraction conditions. Auditory cues indicated left or right hand imagery. Trial duration was 4.5 s with 2.5 s ITI. Online classification used CSP with LDA. EEG recorded at 1000 Hz with 63 channels, nose reference, using two BrainAmp amplifiers.\nReferences\n----------\nBrandl, S. and Blankertz, B. (2020). Motor Imagery Under Distraction -- An Open Access BCI Dataset. Frontiers in Neuroscience, 14, 566147. https://doi.org/10.3389/fnins.2020.566147\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A. and Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\nNotes\n-----\n.. versionadded:: 1.2.0\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":66090570018,"source":"openneuro","storage":{"backend":"s3","base":"s3://openneuro.org/nm000329","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-22T12:52:26.947469+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":112,"computed_title":"Brandl et al. 
2020 — Motor Imagery Under Distraction: An Open Access BCI Dataset","nchans_counts":[{"val":63,"count":112}],"sfreq_counts":[{"val":1000.0,"count":112}],"stats_computed_at":"2026-04-22T23:16:00.314572+00:00","total_duration_s":349601.888,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"a8a5097280af541a","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Motor"],"confidence":{"pathology":0.8,"modality":0.8,"type":0.9},"reasoning":{"few_shot_analysis":"Closest few-shot match is the \"EEG Motor Movement/Imagery Dataset\" (Schalk et al.), which is labeled Type=Motor for motor execution/imagery paradigms. Brandl2020 is also explicitly a motor imagery BCI dataset (left vs right hand imagery), so that convention guides Type=Motor. For Modality, the Schalk example uses Visual because cues/targets are on a screen; in Brandl2020 the metadata instead emphasizes auditory cues/feedback, so the same convention (stimulus channel, not response) maps here to Auditory rather than Visual.","metadata_analysis":"Pathology/participants: explicit healthy cohort: \"Health status: healthy\" and also \"Tags\\n----\\n  Pathology: Healthy\".\nTask/type: explicit motor imagery: \"Motor Imagery under distraction dataset\", \"Sixteen healthy participants performed left vs. right hand motor imagery\", and \"Detected paradigm: motor_imagery\".\nStimulus/modality: auditory cueing/feedback is central: \"Subjects received auditory cues ('links' for left, 'rechts' for right)\"; \"Feedback type: auditory\"; and \"Stimulus modalities: auditory\" / \"Primary modality: auditory\". 
(Distraction conditions include other channels: \"flickering video\" and \"vibro-tactile stimulation\", but the dataset’s stated primary modality is auditory.)","paper_abstract_analysis":"No useful paper information beyond what is already included verbatim in the README/Abstract section (motor imagery BCI under distraction; auditory cues/feedback).","evidence_alignment_check":"Pathology: Metadata SAYS \"Health status: healthy\" and \"Pathology: Healthy\". Few-shot pattern SUGGESTS motor imagery datasets are typically healthy volunteer cohorts unless a disorder is stated. ALIGN.\nModality: Metadata SAYS \"Subjects received auditory cues\", \"Feedback type: auditory\", and \"Primary modality: auditory\". Few-shot pattern SUGGESTS motor imagery often uses visual cues (as in the Schalk example), but this is a convention not a fact; here explicit metadata indicates auditory cueing/feedback. PARTIAL CONFLICT resolved in favor of metadata facts (auditory), while noting additional distraction channels.\nType: Metadata SAYS \"Motor Imagery under distraction\", \"performed left vs. right hand motor imagery\", and \"Detected paradigm: motor_imagery\". Few-shot pattern SUGGESTS motor imagery/BCI paradigms map to Type=Motor. ALIGN.","decision_summary":"Top-2 candidates and final selections:\n- Pathology: (1) Healthy vs (2) Unknown. Evidence for Healthy: \"Health status: healthy\"; \"Pathology: Healthy\"; \"Sixteen healthy participants\". -> Select Healthy. Alignment: aligned.\n- Modality: (1) Auditory vs (2) Multisensory. Evidence for Auditory: \"Subjects received auditory cues\"; \"Feedback type: auditory\"; \"Primary modality: auditory\". Evidence for Multisensory: distraction includes \"flickering video\" (visual) and \"vibro-tactile stimulation\" (tactile). Head-to-head: dataset explicitly declares auditory as stimulus/primary modality, so Auditory wins though multisensory is plausible due to distraction conditions. 
Alignment: minor conflict with common visual-cue motor imagery convention; metadata wins.\n- Type: (1) Motor vs (2) Attention. Evidence for Motor: \"Motor Imagery under distraction\"; \"performed left vs. right hand motor imagery\"; \"Detected paradigm: motor_imagery\"; plus BCI framing (\"brain-computer interface\"). Evidence for Attention: \"under distraction\" could motivate attentional analyses, but task goal is MI-BCI control. -> Select Motor. Alignment: aligned.\nConfidence justifications: Pathology supported by 3 explicit healthy mentions; Modality supported by 3 explicit auditory mentions but some competing multisensory distraction evidence; Type supported by multiple explicit motor imagery/paradigm statements and strong few-shot analog."}},"canonical_name":null,"name_confidence":0.75,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Brandl2020"}}