{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c79","dataset_id":"nm000144","associated_paper_doi":null,"authors":["Reinhold Scherer","Josef Faller","Elisabeth V. C. Friedrich","Eloy Opisso","Ursula Costa","Andrea Kübler","Gernot R. Müller-Putz"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":"10.82901/nemar.nm000144","datatypes":["eeg"],"demographics":{"subjects_count":9,"ages":[38,38,38,38,38,38,38,38,38],"age_min":38,"age_max":38,"age_mean":38.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000144","osf_url":null,"github_url":null,"paper_url":null},"funding":["FP7 EU Research Projects BrainAble (No. 247447)","ABC (No. 287774)","BackHome (No. 288566)"],"ingestion_fingerprint":"073654234258bd989c560625d0515b50d8f89dcdca6d52e729263c81fed3f723","license":"CC-BY-NC-ND-4.0","n_contributing_labs":null,"name":"BNCI 2015-004 Mental tasks dataset","readme":"[![DOI](https://img.shields.io/badge/DOI-10.82901%2Fnemar.nm000144-blue)](https://doi.org/10.82901/nemar.nm000144)\n# BNCI 2015-004 Mental tasks dataset\nBNCI 2015-004 Mental tasks dataset.\n## Dataset Overview\n- **Code**: BNCI2015-004\n- **Paradigm**: imagery\n- **DOI**: 10.1371/journal.pone.0123727\n- **Subjects**: 9\n- **Sessions per subject**: 2\n- **Events**: math=1, letter=2, rotation=3, count=4, baseline=5\n- **Trial interval**: [0, 4] s\n- **File format**: gdf\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 256.0 Hz\n- **Number of channels**: 30\n- **Channel types**: eeg=30\n- **Channel names**: AFz, F7, F3, Fz, F4, F8, FC3, FCz, FC4, T3, C3, Cz, C4, T4, CP3, CPz, CP4, P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO3, PO4, O1, O2\n- **Montage**: 10-20\n- **Hardware**: g.tec\n- **Reference**: left and right mastoid\n- **Ground**: left and right mastoid\n- **Sensor type**: active electrode\n- **Line frequency**: 50.0 Hz\n- **Online filters**: 0.5-100 Hz bandpass, 50 Hz notch\n- **Cap manufacturer**: g.tec\n- **Electrode type**: g.LADYbird active electrodes\n- **Auxiliary channels**: EOG (2 ch, horizontal, vertical)\n## Participants\n- **Number of subjects**: 9\n- **Health status**: CNS tissue damage\n- **Clinical population**: stroke and spinal cord injury\n- **Age**: mean=38.0, std=10.0, min=20, max=57\n- **Gender distribution**: male=2, female=7\n- **Handedness**: not specified\n- **BCI experience**: naive\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 5\n- **Class labels**: math, letter, rotation, count, baseline\n- **Trial duration**: 11.0 s\n- **Tasks**: word_association, mental_subtraction, spatial_navigation, right_hand_imagery, feet_imagery\n- **Study design**: Five mental tasks: word association (WORD), mental subtraction (SUB), spatial navigation (NAV), motor imagery of right hand (HAND), and motor imagery of both feet (FEET). Cue-guided paradigm with 7 seconds of continuous mental imagery per trial.\n- **Feedback type**: none\n- **Stimulus type**: visual cue\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: screening\n- **Instructions**: Participants were asked to continuously perform the specified mental imagery task for 7 seconds. For MI: kinesthetic imagination of movement (e.g., squeezing a rubber ball for hand, dorsiflexion for feet). 
For WORD: generate words beginning with the presented letter. For SUB: successive elementary subtractions. For NAV: spatial navigation.\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  math\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Think\n          └─ Label/math\n  letter\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Think\n          └─ Label/letter\n  rotation\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Think\n          └─ Label/rotation\n  count\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine, Count\n  baseline\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Rest\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: right_hand, feet, word_association, mental_subtraction, spatial_navigation\n- **Cue duration**: 1.0 s\n- **Imagery duration**: 7.0 s\n## Data Structure\n- **Trials**: 40\n- **Blocks per session**: 8\n- **Trials context**: per_class_per_day\n## Preprocessing\n- **Data state**: filtered\n- **Preprocessing applied**: True\n- **Steps**: bandpass filter, notch filter, artifact rejection\n- **Highpass filter**: 0.5 Hz\n- **Lowpass filter**: 100.0 Hz\n- **Bandpass filter**: {'low_cutoff_hz': 0.5, 'high_cutoff_hz': 100.0}\n- **Notch filter**: [50] Hz\n- **Artifact methods**: manual artifact rejection based on EOG\n- **Re-reference**: left and right mastoid\n## Signal Processing\n- **Classifiers**: LDA\n- **Feature extraction**: bandpower, temporal features\n- **Frequency bands**: mu=[8, 12] Hz; beta=[13, 30] Hz\n## Cross-Validation\n- **Method**: 10-fold cross-validation\n- **Folds**: 10\n- **Evaluation type**: within_session, cross_session\n## Performance (Original Study)\n- **Accuracy**: 77.0%\n- **Best task pair GMAC**: 77.0%\n- **SUB vs FEET GMAC**: 77.0%\n- **WORD vs HAND GMAC**: 70.0%\n- **HAND vs FEET GMAC**: 64.0%\n- **Between-day WORD vs HAND GMAC**: 82.0%\n## BCI Application\n- **Applications**: communication, motor_function_restoration\n- **Environment**: rehabilitation center\n- **Online feedback**: False\n## Tags\n- **Pathology**: Stroke, Spinal Cord Injury, CNS Damage\n- **Modality**: Motor, Cognitive\n- **Type**: Motor, Cognitive\n## Documentation\n- **DOI**: 10.1371/journal.pone.0123727\n- **License**: CC-BY-NC-ND-4.0\n- **Investigators**: Reinhold Scherer, Josef Faller, Elisabeth V. C. Friedrich, Eloy Opisso, Ursula Costa, Andrea Kübler, Gernot R. Müller-Putz\n- **Senior author**: Reinhold Scherer\n- **Contact**: reinhold.scherer@tugraz.at\n- **Institution**: Institut Guttmann\n- **Department**: Institut Universitari de Neurorehabilitació adscrit a la UAB\n- **Address**: 08916 Badalona, Barcelona, Spain\n- **Country**: Spain\n- **Repository**: BNCI Horizon 2020\n- **Data URL**: https://bnci-horizon-2020.eu/database/data-sets\n- **Publication year**: 2015\n- **Funding**: FP7 EU Research Projects BrainAble (No. 247447); ABC (No. 287774); BackHome (No. 288566)\n- **Ethics approval**: Comitè d'Ètica Assistencial de l'Institut Guttmann\n- **Keywords**: brain-computer interface, motor imagery, mental tasks, EEG, CNS tissue damage, stroke, spinal cord injury, binary classification\n## References\nScherer, R., Faller, J., Friedrich, E. V. C., Opisso, E., Costa, U., Kübler, A., & Müller-Putz, G. R. (2015). Individually adapted imagery improves brain-computer interface performance in end-users with disability. PLoS ONE, 10(5), e0123727. https://doi.org/10.1371/journal.pone.0123727\nZhang, X., Yao, L., Zhang, Q., Kanhere, S., Sheng, M., & Liu, Y. 
(2017). A survey on deep learning based brain computer interface: Recent advances and new frontiers. IEEE Transactions on Cognitive and Developmental Systems, 10(2), 145-163.\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n## Notes\n- ``BNCI2015_004`` was previously named ``BNCI2015004``; ``BNCI2015004`` will be removed in version 1.1.\n- Added in version 0.4.0.\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1"],"size_bytes":1157300262,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000144","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:40.228572+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-17T14:45:55Z"},"total_files":18,"computed_title":"BNCI 2015-004 Mental tasks dataset","nchans_counts":[{"val":30,"count":18}],"sfreq_counts":[{"val":256.0,"count":18}],"stats_computed_at":"2026-05-01T13:49:34.645111+00:00","total_duration_s":49501.8671875,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"f128ca97c023fce1","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Other"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.85},"reasoning":{"few_shot_analysis":"Most similar few-shot example by paradigm is the “EEG Motor Movement/Imagery Dataset”, which is a cue-based motor imagery paradigm and is labeled Modality=Visual and Type=Motor. This guides the convention that even though the participant action is (motor) imagery, the stimulus/input channel is still Visual because the cues/targets are shown on a screen. For pathology, none of the few-shot examples are stroke/SCI specifically, so we rely on metadata facts and map these clinical recruitments to Pathology=Other (since stroke/SCI are not separate allowed labels).","metadata_analysis":"Key population facts: (1) “Health status: CNS tissue damage” and (2) “Clinical population: stroke and spinal cord injury”.\nKey modality facts: (1) “Stimulus type: visual cue” and (2) “Stimulus modalities: visual” plus “Primary modality: visual”.\nKey type/task-purpose facts: (1) “Detected paradigm: motor_imagery” and (2) “Study design: Five mental tasks: ... motor imagery of right hand (HAND), and motor imagery of both feet (FEET). Cue-guided paradigm with 7 seconds of continuous mental imagery per trial.” (also consistent with “Paradigm: imagery” and “Applications: communication, motor_function_restoration”).","paper_abstract_analysis":"No useful paper information. 
(No abstract text provided in the input; only a DOI is listed.)","evidence_alignment_check":"Pathology — Metadata says: “Clinical population: stroke and spinal cord injury” and “Health status: CNS tissue damage”. Few-shot pattern suggests: assign the explicitly stated clinical population when present; if not in allowed list, use the closest allowed bucket. ALIGN (metadata indicates clinical recruitment; mapping to allowed label becomes Other).\n\nModality — Metadata says: “Stimulus type: visual cue”, “Stimulus modalities: visual”, “Primary modality: visual”. Few-shot pattern suggests: in motor imagery tasks with on-screen cues, choose Visual as modality (as in the motor imagery few-shot example). ALIGN.\n\nType — Metadata says: “Detected paradigm: motor_imagery” and includes “motor imagery of right hand” and “motor imagery of both feet”. Few-shot pattern suggests: motor imagery/BCI datasets are Type=Motor. ALIGN (even though some classes are cognitive/mental arithmetic/word tasks, the dataset’s detected paradigm and common use is motor imagery classification).","decision_summary":"Pathology top-2: (A) Other vs (B) Healthy. Evidence for Other: “Clinical population: stroke and spinal cord injury”; “Health status: CNS tissue damage”. Evidence for Healthy: none (no statement of healthy recruitment). Winner: Other. Alignment: aligned.\n\nModality top-2: (A) Visual vs (B) Motor. Evidence for Visual: “Stimulus type: visual cue”; “Stimulus modalities: visual”; “Primary modality: visual”; plus few-shot motor imagery example uses Visual modality. Evidence for Motor: task includes motor imagery, but modality is defined as stimulus channel (not response/imagery). Winner: Visual. Alignment: aligned.\n\nType top-2: (A) Motor vs (B) Other. Evidence for Motor: “Detected paradigm: motor_imagery”; “motor imagery of right hand ... and ... both feet”; BCI application “motor_function_restoration”. Evidence for Other: mixed mental tasks also include subtraction/word/navigation, but the dataset is explicitly framed as an imagery/motor-imagery benchmark. Winner: Motor. Alignment: aligned.\n\nConfidence basis: Pathology has 2 explicit population quotes; Modality has 3 explicit visual-stimulus quotes + strong few-shot analog; Type has 2 explicit motor-imagery quotes + strong few-shot analog."}},"canonical_name":null,"name_confidence":0.9,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Scherer2015"}}
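The record above is plain JSON, so pulling out the fields an analysis script typically needs is straightforward. A minimal sketch, assuming the response has been saved verbatim to a file named nm000144.json (the filename is illustrative):
```
import json

# Load the API response and unwrap the dataset record.
with open("nm000144.json") as f:
    record = json.load(f)["data"]

print(record["name"])                            # BNCI 2015-004 Mental tasks dataset
print(record["dataset_doi"])                     # 10.82901/nemar.nm000144
print(record["demographics"]["subjects_count"])  # 9
print(record["sessions"])                        # ['0', '1']

# Per-file tallies: all 18 recordings have 30 channels sampled at 256 Hz.
for entry in record["nchans_counts"]:
    print(entry["val"], "channels in", entry["count"], "files")

# S3 location of the BIDS metadata files on NEMAR.
storage = record["storage"]
print(storage["base"] + "/" + storage["raw_key"])
```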
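The README footer says the description was generated by MOABB, and its Notes section gives the dataset class name, so the recordings themselves are normally loaded through MOABB rather than from this record. A sketch of that route, assuming a standard MOABB install with its documented dataset/paradigm API (not verified against version 1.5.0 specifically; the first call downloads the GDF files from the BNCI Horizon 2020 servers):
```
# Hypothetical usage sketch; class and paradigm names follow the MOABB docs.
from moabb.datasets import BNCI2015_004
from moabb.paradigms import MotorImagery

dataset = BNCI2015_004()

# Five imagery classes, per "Experimental Protocol" above.
paradigm = MotorImagery(n_classes=5)

# Epochs (trials x channels x samples), string labels, and per-trial
# metadata (subject, session, run) for the first subject.
X, labels, meta = paradigm.get_data(dataset=dataset, subjects=[1])
print(X.shape, set(labels))
```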
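The Signal Processing and Cross-Validation sections describe the original analysis only at the level of "bandpower features, LDA, 10-fold cross-validation". Below is a self-contained sketch of that recipe, with synthetic epochs standing in for real data; it illustrates the described approach rather than reproducing the study's code, and the mu/beta band edges are taken from the README:
```
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
sfreq = 256.0                                        # "Acquisition" sampling rate
X = rng.standard_normal((200, 30, int(7 * sfreq)))   # trials x 30 ch x 7 s imagery
y = rng.integers(0, 5, size=200)                     # five imagery classes

def bandpower_features(epochs, bands=((8, 12), (13, 30))):
    """Log band power per channel for the mu and beta bands (Welch PSD)."""
    freqs, psd = welch(epochs, fs=sfreq, nperseg=int(sfreq), axis=-1)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs <= hi)
        feats.append(np.log(psd[..., mask].mean(axis=-1)))
    return np.concatenate(feats, axis=-1)            # trials x (30 ch * 2 bands)

scores = cross_val_score(LinearDiscriminantAnalysis(),
                         bandpower_features(X), y, cv=10)
print(f"10-fold accuracy: {scores.mean():.2f}")      # ~0.2 chance level on noise
```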