{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c74","dataset_id":"nm000139","associated_paper_doi":null,"authors":["Michael Tangermann","Klaus-Robert Müller","Ad Aertsen","Niels Birbaumer","Christoph Braun","Clemens Brunner","Robert Leeb","Carsten Mehring","Kai J. Miller","Gernot R. Müller-Putz","Guido Nolte","Gert Pfurtscheller","Hubert Preissl","Gerwin Schalk","Alois Schlögl","Carmen Vidaurre","Stephan Waldert","Benjamin Blankertz"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":"10.82901/nemar.nm000139","datatypes":["eeg"],"demographics":{"subjects_count":9,"ages":[22,24,26,24,24,23,25,23,17],"age_min":17,"age_max":26,"age_mean":23.11111111111111,"species":null,"sex_distribution":{"f":4,"m":5},"handedness_distribution":{"r":9}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000139","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"3bf6d1dececd549e8abddc39814b98c3be92e65579b26b1cbbe372b298142cc8","license":"CC-BY-ND-4.0","n_contributing_labs":null,"name":"BNCI 2014-001 Motor Imagery dataset","readme":"[![DOI](https://img.shields.io/badge/DOI-10.82901%2Fnemar.nm000139-blue)](https://doi.org/10.82901/nemar.nm000139)\n# BNCI 2014-001 Motor Imagery dataset\nBNCI 2014-001 Motor Imagery dataset.\n## Dataset Overview\n- **Code**: BNCI2014-001\n- **Paradigm**: imagery\n- **DOI**: 10.3389/fnins.2012.00055\n- **Subjects**: 9\n- **Sessions per subject**: 2\n- **Events**: left_hand=1, right_hand=2, feet=3, tongue=4\n- **Trial interval**: [2, 6] s\n- **Runs per session**: 6\n- **File format**: GDF\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 250.0 Hz\n- **Number of channels**: 25\n- **Channel types**: eeg=22, eog=3\n- **Channel names**: C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CPz, Cz, EOG1, EOG2, EOG3, FC1, FC2, FC3, FC4, FCz, Fz, P1, P2, POz, Pz\n- **Montage**: custom\n- **Hardware**: BrainAmp MR plus\n- **Software**: BCI2000\n- **Reference**: left mastoid\n- **Ground**: unknown\n- **Sensor type**: Ag/AgCl\n- **Line frequency**: 50.0 Hz\n- **Online filters**: bandpass 0.05-200 Hz\n- **Cap manufacturer**: EASYCAP GmbH\n## Participants\n- **Number of subjects**: 9\n- **Health status**: healthy\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 4\n- **Class labels**: left_hand, right_hand, feet, tongue\n- **Trial duration**: 4.0 s\n- **Study design**: Two-class motor imagery (selected from left hand, right hand, and foot) with asynchronous/continuous control periods\n- **Feedback type**: none\n- **Stimulus type**: arrow_cue\n- **Stimulus modalities**: visual, auditory\n- **Primary modality**: multisensory\n- **Synchronicity**: asynchronous\n- **Mode**: offline\n- **Instructions**: Subjects instructed to perform motor imagery during cued periods\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  left_hand\n    ├─ Sensory-event\n    │  ├─ Experimental-stimulus\n    │  ├─ Visual-presentation\n    │  └─ Leftward, Arrow\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  right_hand\n    ├─ Sensory-event\n    │  ├─ Experimental-stimulus\n    │  ├─ Visual-presentation\n    │  └─ Rightward, Arrow\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n  feet\n    ├─ Sensory-event\n    │  ├─ Experimental-stimulus\n    │  ├─ 
Visual-presentation\n    │  └─ Downward, Arrow\n    └─ Agent-action\n       └─ Imagine, Move, Foot\n  tongue\n    ├─ Sensory-event\n    │  ├─ Experimental-stimulus\n    │  ├─ Visual-presentation\n    │  └─ Upward, Arrow\n    └─ Agent-action\n       └─ Imagine, Move, Tongue\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: left_hand, right_hand, feet, tongue\n- **Cue duration**: 4.0 s\n- **Imagery duration**: 4.0 s\n## Data Structure\n- **Trials**: {'training': 288, 'test': 288}\n- **Blocks per session**: 6\n- **Trials context**: per subject (one training session and one evaluation session, 6 runs of 48 trials each)\n## Preprocessing\n- **Data state**: minimally preprocessed (bandpass and notch filtered)\n- **Preprocessing applied**: True\n- **Steps**: bandpass filtering, notch filtering\n- **Highpass filter**: 0.05 Hz\n- **Lowpass filter**: 200 Hz\n- **Bandpass filter**: {'low_cutoff_hz': 0.05, 'high_cutoff_hz': 200.0}\n- **Filter type**: analog\n- **Re-reference**: none\n- **Downsampled to**: none\n## Signal Processing\n- **Classifiers**: LDA, SVM, Neural Network, Naive Bayes, RBF Neural Network\n- **Feature extraction**: CSP, FBCSP, Bandpower, ERD, ERS\n- **Frequency bands**: mu=[8, 12] Hz; beta=[16, 24] Hz\n## Cross-Validation\n- **Method**: train-test split\n- **Evaluation type**: within_session\n## Performance (Original Study)\n- **MSE**: 0.382\n## BCI Application\n- **Applications**: cursor_control, communication\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Motor\n## Documentation\n- **Description**: Review of the BCI competition IV\n- **DOI**: 10.3389/fnins.2012.00055\n- **License**: CC-BY-ND-4.0\n- **Investigators**: Michael Tangermann, Klaus-Robert Müller, Ad Aertsen, Niels Birbaumer, Christoph Braun, Clemens Brunner, Robert Leeb, Carsten Mehring, Kai J. Miller, Gernot R. Müller-Putz, Guido Nolte, Gert Pfurtscheller, Hubert Preissl, Gerwin Schalk, Alois Schlögl, Carmen Vidaurre, Stephan Waldert, Benjamin Blankertz\n- **Senior author**: Michael Tangermann\n- **Contact**: michael.tangermann@tu-berlin.de\n- **Institution**: Berlin Institute of Technology\n- **Department**: Machine Learning Laboratory\n- **Address**: FR 6-9, Franklinstr. 28/29, 10587 Berlin, Germany\n- **Country**: DE\n- **Repository**: BNCI Horizon\n- **Data URL**: http://www.bbci.de/competition/iv/\n- **Publication year**: 2012\n- **Keywords**: brain-computer interface, BCI, competition\n## References\nTangermann, M., Müller, K.R., Aertsen, A., Birbaumer, N., Braun, C., Brunner, C., Leeb, R., Mehring, C., Miller, K.J., Müller-Putz, G. and Nolte, G., 2012. Review of the BCI competition IV. Frontiers in Neuroscience, 6, p.55.\n## Notes\n**Note:** ``BNCI2014_001`` was previously named ``BNCI2014001``; ``BNCI2014001`` will be removed in version 1.1. *(Added in MOABB version 0.4.0.)*\nThis is one of the most widely used motor imagery datasets in BCI research, commonly referred to as \"BCI Competition IV Dataset 2a\". 
It serves as a standard benchmark for 4-class motor imagery classification algorithms.\nThe dataset is particularly useful for:\n- Multi-class motor imagery classification (4 classes)\n- Transfer learning studies (9 subjects, 2 sessions each)\n- Cross-session variability analysis\n## See Also\nBNCI2014_004 : BCI Competition 2008 2-class motor imagery (Dataset B)\nBNCI2003_004 : BCI Competition III 2-class motor imagery\n## Examples\n>>> from moabb.datasets import BNCI2014_001\n>>> dataset = BNCI2014_001()\n>>> dataset.subject_list\n[1, 2, 3, 4, 5, 6, 7, 8, 9]\n## Citations\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.4.3 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0train","1test"],"size_bytes":705483228,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000139","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:39.288617+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-17T13:51:14Z"},"total_files":108,"computed_title":"BNCI 2014-001 Motor Imagery dataset","nchans_counts":[{"val":22,"count":108}],"sfreq_counts":[{"val":250.0,"count":108}],"stats_computed_at":"2026-05-01T13:49:34.645040+00:00","total_duration_s":41789.087999999996,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"2af9bff6f7061a88","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Multisensory"],"type":["Motor"],"confidence":{"pathology":0.9,"modality":0.8,"type":0.9},"reasoning":{"few_shot_analysis":"Closest few-shot match is the \"EEG Motor Movement/Imagery Dataset\" example (Healthy; cue-driven motor movement/imagery; labeled Type=Motor). This guides labeling the construct as Motor for motor imagery/BCI paradigms, and Pathology as Healthy when participants are volunteers with no diagnosis. For Modality, that example was labeled Visual because targets appear on a screen; however, in the current dataset the metadata explicitly states both visual and auditory cueing with a declared \"Primary modality: multisensory\", so we follow the metadata fact rather than copying the example’s visual-only convention.","metadata_analysis":"Key population facts: (1) \"Health status: healthy\". (2) \"Participants\\n- **Number of subjects**: 9\\n- **Health status**: healthy\". (3) \"## Tags\\n- **Pathology**: Healthy\".\n\nKey modality/task facts: (1) \"**Stimulus type**: arrow_cue\" with HED showing \"Visual-presentation\" arrows for each class. (2) \"**Stimulus modalities**: visual, auditory\". (3) \"**Primary modality**: multisensory\".\n\nKey construct/type facts: (1) \"**Paradigm**: imagery\". (2) \"Subjects instructed to perform motor imagery during cued periods\". 
(3) \"Detected paradigm: motor_imagery\" and \"BCI Application\\n- **Applications**: cursor_control, communication\".","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says \"Health status: healthy\" and tags \"Pathology: Healthy\". Few-shot pattern suggests Healthy for volunteer MI datasets (e.g., EEG Motor Movement/Imagery Dataset). ALIGN.\n\nModality: Metadata says \"Stimulus modalities: visual, auditory\" and explicitly \"Primary modality: multisensory\". Few-shot pattern for MI often maps screen cues to Visual (e.g., EEG Motor Movement/Imagery Dataset labeled Visual). PARTIAL CONFLICT: few-shot suggests Visual, but metadata explicitly states both channels and the dataset’s declared primary modality; metadata wins.\n\nType: Metadata says \"Paradigm: imagery\", \"motor_imagery\", and participants are instructed to perform motor imagery. Few-shot motor imagery/movement datasets are labeled Type=Motor. ALIGN.","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n- Healthy (winner): explicit \"Health status: healthy\"; \"Pathology: Healthy\" tag; human volunteer subjects.\n- Unknown (runner-up): only if health status were not specified.\nDecision: Healthy. Evidence alignment: ALIGN. Confidence justification: 3 explicit metadata statements.\n\nModality:\n- Multisensory (winner): explicit \"Stimulus modalities: visual, auditory\" and \"Primary modality: multisensory\".\n- Visual (runner-up): arrows as cues (\"Stimulus type: arrow_cue\"; HED shows \"Visual-presentation\") and common MI convention in few-shot.\nDecision: Multisensory because both auditory + visual are explicitly listed and marked primary. Evidence alignment: PARTIAL CONFLICT resolved in favor of metadata facts. Confidence justification: 2 explicit metadata statements plus supporting cue description.\n\nType:\n- Motor (winner): \"Paradigm: imagery\"; \"motor_imagery\"; \"Subjects instructed to perform motor imagery\"; BCI control context.\n- Attention (runner-up): could be argued due to cue-following, but not the primary construct.\nDecision: Motor. Evidence alignment: ALIGN. Confidence justification: 3 explicit metadata statements."}},"canonical_name":null,"name_confidence":0.8,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Tangermann2014"}}