{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c78","dataset_id":"nm000143","associated_paper_doi":null,"authors":["Guido Dornhege","Benjamin Blankertz","Gabriel Curio","Klaus-Robert Müller"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":"10.82901/nemar.nm000143","datatypes":["eeg"],"demographics":{"subjects_count":5,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000143","osf_url":null,"github_url":null,"paper_url":null},"funding":["Bundesministerium für Bildung und Forschung (BMBF) under Grants FKZ 01IBB02A and FKZ 01IBB02B"],"ingestion_fingerprint":"b1d1ed3752f87c0a2ddba612bf10e2bc9f121563f1cb55587cc6f6e0fc6a6faf","license":"CC-BY-4.0","n_contributing_labs":null,"name":"BNCI2003_IVa Motor Imagery dataset","readme":"[![DOI](https://img.shields.io/badge/DOI-10.82901%2Fnemar.nm000143-blue)](https://doi.org/10.82901/nemar.nm000143)\n# BNCI2003_IVa Motor Imagery dataset\nBNCI2003_IVa Motor Imagery dataset.\n## Dataset Overview\n- **Code**: BNCI2003-004\n- **Paradigm**: imagery\n- **DOI**: 10.1109/TBME.2004.827088\n- **Subjects**: 5\n- **Sessions per subject**: 1\n- **Events**: right_hand=0, feet=1\n- **Trial interval**: [0, 3.5] s\n- **File format**: mat\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 100.0 Hz\n- **Number of channels**: 118\n- **Channel types**: eeg=118\n- **Channel names**: AF3, AF4, AF7, AF8, AFp1, AFp2, C1, C2, C3, C4, C5, C6, CCP1, CCP2, CCP3, CCP4, CCP5, CCP6, CCP7, CCP8, CFC1, CFC2, CFC3, CFC4, CFC5, CFC6, CFC7, CFC8, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FAF1, FAF2, FAF5, FAF6, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FFC1, FFC2, FFC3, FFC4, FFC5, FFC6, FFC7, FFC8, FT10, FT7, FT8, FT9, Fp1, Fp2, Fpz, Fz, I1, I2, O1, O2, OI1, OI2, OPO1, 
OPO2, Oz, P1, P10, P2, P3, P4, P5, P6, P7, P8, P9, PCP1, PCP2, PCP3, PCP4, PCP5, PCP6, PCP7, PCP8, PO1, PO2, PO3, PO4, PO7, PO8, POz, PPO1, PPO2, PPO5, PPO6, PPO7, PPO8, Pz, T7, T8, TP10, TP7, TP8, TP9\n- **Montage**: standard_1005\n- **Hardware**: BrainAmp\n- **Sensor type**: EEG\n- **Line frequency**: 50.0 Hz\n- **Online filters**: {'bandpass': [0.05, 200]}\n## Participants\n- **Number of subjects**: 5\n- **Health status**: healthy\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 2\n- **Class labels**: right_hand, feet\n- **Trial duration**: 3.5 s\n- **Stimulus type**: visual cue\n- **Mode**: offline\n- **Instructions**: subjects performed motor imagery (left hand, right hand, or right foot) according to visual cue for 3.5 seconds\n- **Stimulus presentation**: duration=3.5 s, interval=1.75-2.25 s random, modality=visual\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n  feet\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine, Move, Foot\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: right_hand, feet\n- **Cue duration**: 3.5 s\n## Data Structure\n- **Trials**: 280\n- **Trials context**: 280 cues per subject, split into labeled training and unlabeled test sets (varying per subject)\n## Preprocessing\n- **Data state**: downsampled to 100 Hz for offline analysis\n- **Preprocessing applied**: True\n- **Steps**: bandpass filtering, downsampling\n- **Bandpass filter**: {'low_cutoff_hz': 0.05, 'high_cutoff_hz': 200.0}\n- **Downsampled to**: 100 Hz\n- **Notes**: Band-pass filtered 0.05-200 Hz during acquisition at 1000 Hz with 16-bit (0.1 uV) accuracy, then downsampled to 100 Hz by picking each 10th sample. 
Original experiment also recorded EMG and EOG but these are not in the shared data files.\n## Signal Processing\n- **Classifiers**: LDA, regularized LDA\n- **Feature extraction**: CSP, SUB (MRP/slow potentials), AR\n- **Frequency bands**: alpha=[8, 13] Hz; beta=[15, 25] Hz; alpha_beta=[7, 30] Hz\n- **Spatial filters**: CSP, spatial Laplacian\n## Cross-Validation\n- **Method**: 10x10-fold cross validation\n- **Folds**: 10\n- **Evaluation type**: within-subject\n## BCI Application\n- **Applications**: motor_control\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Research\n## Documentation\n- **DOI**: 10.1109/TBME.2004.827088\n- **License**: CC-BY-4.0\n- **Investigators**: Guido Dornhege, Benjamin Blankertz, Gabriel Curio, Klaus-Robert Müller\n- **Senior author**: Klaus-Robert Müller\n- **Contact**: benjamin.blankertz@tu-berlin.de\n- **Institution**: Fraunhofer FIRST (IDA); Charité University Medicine Berlin\n- **Department**: Fraunhofer FIRST (IDA); Department of Neurology, Campus Benjamin Franklin\n- **Address**: 12489 Berlin, Germany; 12203 Berlin, Germany\n- **Country**: DE\n- **Repository**: BBCI\n- **Publication year**: 2004\n- **Funding**: Bundesministerium für Bildung und Forschung (BMBF) under Grants FKZ 01IBB02A and FKZ 01IBB02B\n- **Keywords**: brain-computer interface, BCI, common spatial patterns, electroencephalogram, EEG, event-related desynchronization, feature combination, movement related potential, multiclass, single-trial analysis\n## References\nGuido Dornhege, Benjamin Blankertz, Gabriel Curio, and Klaus-Robert Muller. Boosting bit rates in non-invasive EEG single-trial classifications by feature combination and multi-class paradigms. IEEE Trans. Biomed. Eng., 51(6):993-1002, June 2004.\nNotes\n.. versionadded:: 0.4.0\nThis is one of the earliest and most influential motor imagery BCI datasets, used extensively for benchmarking classification algorithms. 
The dataset was part of BCI Competition III and has been cited in hundreds of papers.\nSee Also\nBNCI2014_001 : BCI Competition IV 4-class motor imagery dataset BNCI2014_004 : BCI Competition 2008 2-class motor imagery dataset\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0train"],"size_bytes":516687682,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000143","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:40.174193+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-17T14:16:20Z"},"total_files":5,"computed_title":"BNCI2003_IVa Motor Imagery 
dataset","nchans_counts":[{"val":118,"count":5}],"sfreq_counts":[{"val":100.0,"count":5}],"stats_computed_at":"2026-05-01T13:49:34.645096+00:00","total_duration_s":14314.69,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"d0026a2c15c443e3","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Closest few-shot match is the example titled \"EEG Motor Movement/Imagery Dataset\" (Schalk et al.). It involves visually presented targets/cues that instruct real or imagined movements, and it is labeled (Modality=Visual, Type=Motor, Pathology=Healthy). The current dataset is likewise a motor-imagery BCI paradigm with a \"visual cue\" instructing imagery of effectors (right hand vs feet), so the same convention (Visual for cue modality; Motor for research type) applies.","metadata_analysis":"Key facts from the dataset metadata:\n1) Population: explicitly healthy — \"Health status: healthy\".\n2) Stimulus/input modality: explicitly visual — \"Stimulus type: visual cue\" and \"Stimulus presentation: ... modality=visual\".\n3) Construct/purpose: motor imagery/BCI — \"Paradigm: imagery\", \"Instructions: subjects performed motor imagery ... according to visual cue\", and \"Detected paradigm: motor_imagery\" with \"Imagery tasks: right_hand, feet\".","paper_abstract_analysis":"No useful paper information. (Only a DOI/citation is provided; no abstract text included.)","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\".\n- Few-shot pattern suggests: motor imagery benchmark datasets typically recruit healthy volunteers (as in the Schalk motor imagery example labeled Healthy).\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"Stimulus type: visual cue\" and \"Stimulus presentation: ... 
modality=visual\".\n- Few-shot pattern suggests: motor imagery tasks with on-screen targets/cues are labeled Visual for Modality (see \"EEG Motor Movement/Imagery Dataset\" labeled Visual).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"Paradigm: imagery\" and \"Instructions: subjects performed motor imagery...\" and \"Detected paradigm: motor_imagery\".\n- Few-shot pattern suggests: motor imagery / movement execution paradigms are labeled Type=Motor.\n- Alignment: ALIGN.","decision_summary":"Top-2 candidates with head-to-head selection:\n\nPathology:\n- Candidate 1: Healthy\n  Evidence: \"Health status: healthy\"; \"Subjects: 5\" in a BCI benchmark context; no mention of any diagnosis.\n- Candidate 2: Unknown\n  Evidence: would apply only if health status were not stated.\nDecision: Healthy (explicitly stated). Alignment: aligns with few-shot motor imagery example.\nConfidence notes: strong direct quote \"Health status: healthy\".\n\nModality:\n- Candidate 1: Visual\n  Evidence: \"Stimulus type: visual cue\"; \"Stimulus presentation: ... modality=visual\"; HED annotation includes \"Visual-presentation\".\n- Candidate 2: Motor\n  Evidence: motor imagery is the participant action, but guidelines define Modality as stimulus/input channel.\nDecision: Visual (stimulus is visual cues). Alignment: matches few-shot convention for motor imagery datasets.\nConfidence notes: three explicit visual-modality indicators (visual cue; modality=visual; HED Visual-presentation).\n\nType:\n- Candidate 1: Motor\n  Evidence: \"Paradigm: imagery\"; \"Instructions: subjects performed motor imagery...\"; \"Detected paradigm: motor_imagery\" / tasks \"right_hand, feet\".\n- Candidate 2: Perception\n  Evidence: could be argued because cues are visual, but the research construct is motor imagery/BCI control rather than sensory perception.\nDecision: Motor (primary construct is motor imagery/BCI). 
Alignment: matches few-shot motor imagery dataset labeled Type=Motor.\nConfidence notes: multiple explicit motor-imagery statements plus strong few-shot analog."}},"canonical_name":null,"name_confidence":0.82,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"BNCI2003"}}
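The blob above is a single JSON API response from the eegdash service. A minimal sketch of pulling a few key fields out of it with Python's standard `json` module — the `resp` string below is a hand-truncated stand-in for the full response body shown above, which you would normally receive over HTTP:

```python
import json

# Truncated stand-in for the raw response body above (same field names).
resp = '''{"success": true, "database": "eegdash", "data": {
  "dataset_id": "nm000143",
  "name": "BNCI2003_IVa Motor Imagery dataset",
  "demographics": {"subjects_count": 5},
  "nchans_counts": [{"val": 118, "count": 5}],
  "sfreq_counts": [{"val": 100.0, "count": 5}],
  "total_duration_s": 14314.69}}'''

payload = json.loads(resp)
assert payload["success"]          # bail out early on a failed lookup

data = payload["data"]
# Per-file channel and sampling-rate tallies are lists of {val, count} pairs:
n_chans = data["nchans_counts"][0]["val"]   # 118 channels in all 5 files
sfreq = data["sfreq_counts"][0]["val"]      # 100.0 Hz in all 5 files
hours = data["total_duration_s"] / 3600

print(data["dataset_id"])                        # nm000143
print(data["demographics"]["subjects_count"])    # 5
print(f"{n_chans} ch @ {sfreq} Hz, {hours:.2f} h total")
```

Note that fields like `nchans_counts` are histograms over files, so a heterogeneous dataset could return more than one `{val, count}` entry; taking element `[0]` is only safe here because the record shows a single value for all 5 files.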
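The preprocessing note in the embedded README states the recordings were acquired at 1000 Hz (band-passed 0.05–200 Hz) and downsampled to 100 Hz "by picking each 10th sample". With a channels × samples array layout (an assumption; the snippet uses synthetic data, not the actual recordings), that step is plain NumPy slicing:

```python
import numpy as np

fs_orig, fs_new = 1000, 100      # Hz, as stated in the README
factor = fs_orig // fs_new       # decimation factor: 10

# Synthetic stand-in: 118 channels, one 3.5 s trial at the original rate.
rng = np.random.default_rng(0)
raw = rng.standard_normal((118, int(3.5 * fs_orig)))

# "Picking each 10th sample" is naive decimation: no extra anti-alias
# filter beyond the 0.05-200 Hz acquisition band-pass described above.
downsampled = raw[:, ::factor]

print(raw.shape)          # (118, 3500)
print(downsampled.shape)  # (118, 350)
```

Because 200 Hz content survives the acquisition filter but the new Nyquist frequency is 50 Hz, this sample-picking scheme can alias; a low-pass filter before decimation (e.g. `scipy.signal.decimate`) would be the more cautious choice, but the sketch above mirrors what the dataset notes actually describe.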