{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c8b","dataset_id":"nm000171","associated_paper_doi":null,"authors":["David Steyrl","Reinhold Scherer","Oswin Förstner","Gernot R. Müller-Putz"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":14,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000171","osf_url":null,"github_url":null,"paper_url":null},"funding":["FP7 BackHome (No. 288566)","FP7 ABC (No. 287774)"],"ingestion_fingerprint":"86b3530589c41e13f288ec44530f3e3c954e1f1e153a6b425f85918bcf8a89c5","license":"CC-BY-ND-4.0","n_contributing_labs":null,"name":"BNCI 2014-002 Motor Imagery dataset","readme":"# BNCI 2014-002 Motor Imagery dataset\nBNCI 2014-002 Motor Imagery dataset.\n## Dataset Overview\n- **Code**: BNCI2014-002\n- **Paradigm**: imagery\n- **DOI**: 10.1007/s00500-012-0895-4\n- **Subjects**: 14\n- **Sessions per subject**: 1\n- **Events**: right_hand=1, feet=2\n- **Trial interval**: [3, 8] s\n- **Runs per session**: 8\n- **File format**: MAT\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 512.0 Hz\n- **Number of channels**: 15\n- **Channel types**: eeg=15\n- **Channel names**: EEG1, EEG2, EEG3, EEG4, EEG5, EEG6, EEG7, EEG8, EEG9, EEG10, EEG11, EEG12, EEG13, EEG14, EEG15\n- **Montage**: Laplacian\n- **Hardware**: g.USBamp\n- **Software**: BCI2000\n- **Reference**: left mastoid\n- **Ground**: right mastoid\n- **Sensor type**: Ag/AgCl\n- **Line frequency**: 50.0 Hz\n- **Online filters**: 8th order Butterworth band-pass filters\n- **Cap manufacturer**: Guger Technologies OG\n- **Cap model**: g.LADYbird\n- **Electrode type**: active\n- **Electrode material**: Ag/AgCl\n## Participants\n- **Number of subjects**: 14\n- 
**Health status**: healthy\n- **Age**: min=20.0, max=30.0\n- **BCI experience**: mixed\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 2\n- **Class labels**: right_hand, feet\n- **Trial duration**: 5.0 s\n- **Study design**: Two-class motor imagery: right hand and feet. Cue-guided Graz-BCI training paradigm with recording, training, and feedback within a single session.\n- **Feedback type**: continuous\n- **Stimulus type**: bar_graph\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: online\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n  feet\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine, Move, Foot\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: right_hand, feet\n- **Imagery duration**: 5.0 s\n## Data Structure\n- **Trials**: 160\n- **Trials per class**: right_hand=80, feet=80\n- **Blocks per session**: 8\n- **Trials context**: total per subject\n## Preprocessing\n- **Data state**: minimally preprocessed (online filtered)\n- **Preprocessing applied**: True\n- **Steps**: bandpass filtering\n- **Filter type**: Butterworth\n- **Filter order**: 8\n## Signal Processing\n- **Classifiers**: Random Forest, Shrinkage LDA\n- **Feature extraction**: CSP, DFT, Bandpower\n- **Frequency bands**: alpha=[6, 14] Hz; beta=[14, 40] Hz\n- **Spatial filters**: CSP, Laplacian\n## Cross-Validation\n- **Method**: train-test split\n- **Evaluation type**: within_subject\n## Performance (Original Study)\n- **Accuracy**: 79.3%\n- **Peak Accuracy**: 89.67%\n- **Median Accuracy**: 80.42%\n## BCI Application\n- **Applications**: communication, control\n- 
**Environment**: laboratory\n- **Online feedback**: True\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Motor Imagery\n## Documentation\n- **DOI**: 10.1515/bmt-2014-0117\n- **Associated paper DOI**: 10.3217/978-3-85125-378-8-61\n- **License**: CC-BY-ND-4.0\n- **Investigators**: David Steyrl, Reinhold Scherer, Oswin Förstner, Gernot R. Müller-Putz\n- **Contact**: david.steyrl@tugraz.at; reinhold.scherer@tugraz.at; oswin.foerstner@student.tugraz.at; gernot.mueller@tugraz.at\n- **Institution**: Graz University of Technology\n- **Department**: Institute for Knowledge Discovery, Laboratory of Brain-Computer Interfaces\n- **Country**: Austria\n- **Repository**: BNCI Horizon\n- **Publication year**: 2014\n- **Funding**: FP7 BackHome (No. 288566); FP7 ABC (No. 287774)\n- **Keywords**: brain-computer interfaces, machine learning, random forests, regularized linear discriminant analysis, sensorimotor rhythms\n## References\nScherer, R., Faller, J., Balderas, D., Friedrich, E. V., & Müller-Putz, G. (2015). Brain-computer interfacing: more than the sum of its parts. Soft Computing, 19(11), 3173-3186. https://doi.org/10.1007/s00500-012-0895-4\n## Notes\n> **Note**: `BNCI2014_002` was previously named `BNCI2014002`. `BNCI2014002` will be removed in version 1.1.\n> Added in version 0.4.0.\n## See Also\n- BNCI2014_001: 4-class motor imagery (BCI Competition IV Dataset 2a)\n- BNCI2014_004: 2-class motor imagery (Dataset B)\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). 
EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":581185710,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000171","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:47.425435+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-30T13:04:41Z"},"total_files":112,"computed_title":"BNCI 2014-002 Motor Imagery dataset","nchans_counts":[{"val":15,"count":224}],"sfreq_counts":[{"val":512.0,"count":224}],"stats_computed_at":"2026-05-01T13:49:34.645366+00:00","total_duration_s":49431.5625,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"1647085dd56a49d5","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Closest few-shot match is the \"EEG Motor Movement/Imagery Dataset\" example (Healthy, Visual, Motor). It shows the convention that motor imagery paradigms are labeled as Type=Motor (construct), while Modality is driven by the stimulus channel (often Visual cues on a screen). 
This aligns with the current dataset being a cue-guided motor imagery BCI task with visual feedback/cues.","metadata_analysis":"Population/pathology facts: \"Health status: healthy\" and \"Participants\\n- **Health status**: healthy\" and \"## Tags\\n- **Pathology**: Healthy\".\nTask/type facts: \"Study design: Two-class motor imagery: right hand and feet.\" and \"Detected paradigm: motor_imagery\" and \"Paradigm: imagery\".\nStimulus/modality facts: \"Stimulus modalities: visual\" and \"Primary modality: visual\" and \"Stimulus type: bar_graph\" (visual cue/feedback).","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says \"Health status: healthy\" / \"Pathology: Healthy\"; few-shot pattern suggests Healthy for non-clinical motor imagery datasets. ALIGN.\nModality: Metadata explicitly says \"Stimulus modalities: visual\" and \"Primary modality: visual\"; few-shot convention for motor imagery tasks with screen cues uses Visual modality (e.g., EEG Motor Movement/Imagery example). ALIGN.\nType: Metadata explicitly states \"Two-class motor imagery\" and \"Detected paradigm: motor_imagery\"; few-shot convention labels such datasets as Type=Motor (construct: motor imagery/BCI). ALIGN.","decision_summary":"Pathology top-2: (1) Healthy — supported by \"Health status: healthy\", \"Participants ... Health status: healthy\", \"Tags ... Pathology: Healthy\"; (2) Unknown — only if no recruitment info existed. Winner: Healthy (explicit). Confidence=0.9 (3 explicit quotes + few-shot alignment).\nModality top-2: (1) Visual — supported by \"Stimulus modalities: visual\", \"Primary modality: visual\", \"Stimulus type: bar_graph\"; (2) Motor — possible if one mislabeled by response/imagery rather than stimulus channel. Winner: Visual (explicit stimulus channel). 
Confidence=0.9 (3 explicit quotes + strong few-shot analog).\nType top-2: (1) Motor — supported by \"Two-class motor imagery\", \"Paradigm: imagery\", \"Detected paradigm: motor_imagery\"; (2) Perception — unlikely, would require sensory discrimination focus. Winner: Motor (primary construct is motor imagery/BCI). Confidence=0.9 (3 explicit quotes + few-shot alignment)."}},"canonical_name":null,"name_confidence":0.9,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Steyrl2014"}}