{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c75","dataset_id":"nm000140","associated_paper_doi":null,"authors":["Josef Faller","Carmen Vidaurre","Teodoro Solis-Escalante","Christa Neuper","Reinhold Scherer"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":"10.82901/nemar.nm000140","datatypes":["eeg"],"demographics":{"subjects_count":12,"ages":[24,24,24,24,24,24,24,24,24,24,24,24],"age_min":24,"age_max":24,"age_mean":24.0,"species":null,"sex_distribution":null,"handedness_distribution":{"r":12}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000140","osf_url":null,"github_url":null,"paper_url":null},"funding":["FP7 Framework EU Research Project BrainAble (No. 247447)"],"ingestion_fingerprint":"be3ce3c5bdcf8ef47c5f33dfeef955e6f202575c77c4cfb9bf25ec8c408783b0","license":"CC-BY-NC-ND-4.0","n_contributing_labs":null,"name":"BNCI 2015-001 Motor Imagery dataset","readme":"[![DOI](https://img.shields.io/badge/DOI-10.82901%2Fnemar.nm000140-blue)](https://doi.org/10.82901/nemar.nm000140)\n# BNCI 2015-001 Motor Imagery dataset\nBNCI 2015-001 Motor Imagery dataset.\n## Dataset Overview\n- **Code**: BNCI2015-001\n- **Paradigm**: imagery\n- **DOI**: 10.1109/tnsre.2012.2189584\n- **Subjects**: 12\n- **Sessions per subject**: 2\n- **Events**: right_hand=1, feet=2\n- **Trial interval**: [0, 5] s\n- **File format**: gdf\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 512.0 Hz\n- **Number of channels**: 13\n- **Channel types**: eeg=13\n- **Channel names**: FC3, FCz, FC4, C5, C3, C1, Cz, C2, C4, C6, CP3, CPz, CP4\n- **Montage**: 10-20\n- **Hardware**: g.tec\n- **Software**: Matlab\n- **Reference**: Car\n- **Sensor type**: active electrode\n- **Line frequency**: 50.0 Hz\n- **Online filters**: 50 Hz notch\n- **Cap manufacturer**: g.tec\n- **Cap model**: g.GAMMAsys\n- **Auxiliary channels**: gsr\n## Participants\n- **Number of 
subjects**: 12\n- **Health status**: healthy\n- **Age**: mean=24.8\n- **Gender distribution**: male=7, female=5\n- **Handedness**: all right-handed\n- **BCI experience**: naive\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 2\n- **Class labels**: right_hand, feet\n- **Trial duration**: 11.0 s\n- **Study design**: Two-class motor imagery: sustained right hand movement imagery (palmar grip) versus both feet movement imagery (plantar extension)\n- **Feedback type**: visual\n- **Stimulus type**: cursor_feedback\n- **Stimulus modalities**: visual, auditory\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: training\n- **Instructions**: Relax during reference period (3s), perform sustained kinesthetic movement imagery during activity period. Condition 1 (arrow right): imagine palmar grip with right hand. Condition 2 (arrow down): imagine plantar extension of both feet.\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n  feet\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine, Move, Foot\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: right_hand_palmar_grip, both_feet_plantar_extension\n- **Cue duration**: 1.25 s\n- **Imagery duration**: 4.0 s\n## Data Structure\n- **Trials**: 200\n- **Trials per class**: right_hand=100, feet=100\n- **Trials context**: per_session\n## Preprocessing\n- **Data state**: filtered\n- **Preprocessing applied**: True\n- **Steps**: bandpass filter, notch filter\n- **Highpass filter**: 0.5 Hz\n- **Lowpass filter**: 100.0 Hz\n- **Bandpass filter**: {'low_cutoff_hz': 0.5, 'high_cutoff_hz': 100.0}\n- **Notch filter**: [50.0] Hz\n- **Re-reference**: 
car\n## Signal Processing\n- **Classifiers**: LDA\n- **Feature extraction**: logarithmic bandpower, CSP\n- **Frequency bands**: alpha=[10, 13] Hz; beta=[16, 24] Hz\n## Cross-Validation\n- **Method**: leave-one-out\n- **Evaluation type**: cross_session\n## Performance (Original Study)\n- **Accuracy**: 80.0%\n## BCI Application\n- **Applications**: communication, control\n- **Online feedback**: True\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Motor\n## Documentation\n- **DOI**: 10.1109/tnsre.2012.2189584\n- **License**: CC-BY-NC-ND-4.0\n- **Investigators**: Josef Faller, Carmen Vidaurre, Teodoro Solis-Escalante, Christa Neuper, Reinhold Scherer\n- **Senior author**: Reinhold Scherer\n- **Contact**: josef.faller@tugraz.at; christa.neuper@uni-graz.at; carmen.vidaurre@tu-berlin.de\n- **Institution**: Graz University of Technology\n- **Department**: Institute of Knowledge Discovery\n- **Address**: 8010 Graz, Austria\n- **Country**: Austria\n- **Repository**: BNCI Horizon\n- **Publication year**: 2012\n- **Funding**: FP7 Framework EU Research Project BrainAble (No. 247447)\n## References\nFaller, J., Vidaurre, C., Solis-Escalante, T., Neuper, C., & Scherer, R. (2012). Autocalibration and recurrent adaptation: Towards a plug and play online ERD-BCI. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 20(3), 313-319. https://doi.org/10.1109/tnsre.2012.2189584\nNotes\n.. note::\n``BNCI2015_001`` was previously named ``BNCI2015001``. ``BNCI2015001`` will be removed in version 1.1.\n.. versionadded:: 0.4.0\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. 
R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.4.3 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0A","1B","2C"],"size_bytes":1212938754,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000140","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:39.793740+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-17T13:57:30Z"},"total_files":28,"computed_title":"BNCI 2015-001 Motor Imagery dataset","nchans_counts":[{"val":13,"count":28}],"sfreq_counts":[{"val":512.0,"count":28}],"stats_computed_at":"2026-05-01T13:49:34.645053+00:00","total_duration_s":60081.5390625,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"e74fac3247b66e67","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example by paradigm is the \"EEG Motor Movement/Imagery Dataset\" (Schalk et al.), which is also a motor execution/imagery paradigm with screen cues. That example is labeled Pathology=Healthy and Type=Motor, and (importantly for convention) Modality=Visual because the dominant stimulus/input is a visual cue even though the cognitive construct is motor. 
This convention guides choosing Modality=Visual and Type=Motor for the present motor imagery BCI dataset.","metadata_analysis":"Key population facts: \"Health status: healthy\" and \"Tags\\n- **Pathology**: Healthy\" and \"BCI experience: naive\".\n\nKey task/stimulus facts: \"Paradigm: imagery\", \"Study design: Two-class motor imagery: sustained right hand movement imagery ... versus both feet movement imagery\", and \"Instructions: ... Condition 1 (arrow right) ... Condition 2 (arrow down)\".\n\nKey modality facts: \"Feedback type: visual\", \"Stimulus modalities: visual, auditory\", and explicitly \"Primary modality: visual\".","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n1) Metadata says: \"Health status: healthy\" / \"Pathology: Healthy\".\n2) Few-shot pattern suggests: motor imagery benchmark datasets are typically Healthy unless a diagnosis is stated.\n3) ALIGN.\n\nModality:\n1) Metadata says: \"Primary modality: visual\" and cues are \"arrow right\" / \"arrow down\" with \"Feedback type: visual\".\n2) Few-shot pattern suggests: motor imagery tasks with visual cues are labeled Modality=Visual (see Schalk motor imagery example).\n3) ALIGN (despite the dataset also listing \"Stimulus modalities: visual, auditory\", primary is visual).\n\nType:\n1) Metadata says: \"Two-class motor imagery\" and \"Detected paradigm: motor_imagery\".\n2) Few-shot pattern suggests: motor imagery datasets are Type=Motor.\n3) ALIGN.","decision_summary":"Pathology top-2: (1) Healthy vs (2) Unknown. Healthy wins due to explicit metadata: \"Health status: healthy\", \"Tags - Pathology: Healthy\", and the participant description indicating a non-clinical cohort (e.g., \"BCI experience: naive\"). Confidence=0.9.\n\nModality top-2: (1) Visual vs (2) Motor. 
Visual wins because the stimulus/input channel is explicitly visual: \"Primary modality: visual\", \"Feedback type: visual\", and cueing via \"arrow right\"/\"arrow down\"; few-shot motor imagery convention also maps such tasks to Modality=Visual. Confidence=0.9.\n\nType top-2: (1) Motor vs (2) Perception. Motor wins because the stated paradigm is motor imagery: \"Two-class motor imagery\" and \"Detected paradigm: motor_imagery\" with kinesthetic movement imagery instructions; matches the motor imagery few-shot example’s Type=Motor. Confidence=0.9."}},"canonical_name":null,"name_confidence":0.86,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Faller2015"}}
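The record above is a typical eegdash-style API response: a `success` flag, a `database` name, and a `data` object carrying the dataset metadata. A minimal sketch of pulling a few summary fields out of such a response follows; the field names (`dataset_id`, `demographics.subjects_count`, `sfreq_counts`, `total_duration_s`) are taken from the record above, and the inlined miniature response is a stand-in for the full payload, not an official client API.

```python
import json

# Hypothetical sketch: summarize an eegdash-style API response.
# The miniature payload below mirrors a few fields of the record above.
response_text = json.dumps({
    "success": True,
    "database": "eegdash",
    "data": {
        "dataset_id": "nm000140",
        "name": "BNCI 2015-001 Motor Imagery dataset",
        "demographics": {"subjects_count": 12},
        "sfreq_counts": [{"val": 512.0, "count": 28}],
        "total_duration_s": 60081.5390625,
    },
})

record = json.loads(response_text)
if not record.get("success"):
    raise RuntimeError("API call reported failure")

data = record["data"]
summary = {
    "id": data["dataset_id"],
    "name": data["name"],
    "subjects": data["demographics"]["subjects_count"],
    # Most common sampling rate across files (first entry of sfreq_counts).
    "sfreq_hz": data["sfreq_counts"][0]["val"],
    # Total recording duration, converted from seconds to hours.
    "hours": round(data["total_duration_s"] / 3600, 1),
}
print(summary)
```

For the full record this yields roughly 16.7 hours of EEG across 28 files; in practice one would fetch the payload over HTTP rather than inline it, but the extraction logic is the same.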