{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c7a","dataset_id":"nm000145","associated_paper_doi":null,"authors":["Moritz Grosse-Wentrup","Christian Liefhold","Klaus Gramann","Martin Buss"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":"10.82901/nemar.nm000145","datatypes":["eeg"],"demographics":{"subjects_count":10,"ages":[25,25,25,25,25,25,25,25,25,25],"age_min":25,"age_max":25,"age_mean":25.0,"species":null,"sex_distribution":null,"handedness_distribution":{"r":10}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000145","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"be058a72026586e1586bbd4cb7d5cb324ed9991b8695bbf8c0bac01a1723e978","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Munich Motor Imagery dataset","readme":"[![DOI](https://img.shields.io/badge/DOI-10.82901%2Fnemar.nm000145-blue)](https://doi.org/10.82901/nemar.nm000145)\n# Munich Motor Imagery dataset\nMunich Motor Imagery dataset.\n## Dataset Overview\n- **Code**: GrosseWentrup2009\n- **Paradigm**: imagery\n- **DOI**: 10.1109/TBME.2008.2009768\n- **Subjects**: 10\n- **Sessions per subject**: 1\n- **Events**: right_hand=2, left_hand=1\n- **Trial interval**: [0, 7] s\n- **File format**: set\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 500.0 Hz\n- **Number of channels**: 128\n- **Channel types**: eeg=128\n- **Channel names**: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128\n- **Montage**: standard_1020\n- **Hardware**: BrainAmp\n- **Reference**: Cz\n- **Line frequency**: 50.0 Hz\n- **Online filters**: {'highpass_time_constant_s': 10}\n- **Impedance threshold**: 10 kOhm\n## Participants\n- **Number of subjects**: 10\n- **Health status**: healthy\n- **Age**: mean=25.6, std=2.5\n- **Gender distribution**: male=8, female=2\n- **Handedness**: {'right': 8}\n- **BCI experience**: mixed\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Task type**: motor_imagery\n- **Number of classes**: 2\n- **Class labels**: right_hand, left_hand\n- **Trial duration**: 10 s\n- **Tasks**: motor_imagery\n- **Study design**: two-class motor imagery with arrow cues\n- **Feedback type**: none\n- **Stimulus type**: arrow_cue\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Instructions**: Subjects were instructed to perform haptic motor imagery of the left or the right hand during display of the arrow, as indicated by the direction of the arrow\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  right_hand\n    ├─ Sensory-event\n    │  ├─ Experimental-stimulus\n    │  ├─ Visual-presentation\n    │  └─ Rightward, Arrow\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n  left_hand\n    ├─ Sensory-event\n    │  ├─ Experimental-stimulus\n    │  ├─ Visual-presentation\n    │  └─ 
Leftward, Arrow\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: left_hand, right_hand\n- **Cue duration**: 7.0 s\n- **Imagery duration**: 7.0 s\n## Data Structure\n- **Trials**: 150\n- **Trials context**: per_class\n## Preprocessing\n- **Data state**: preprocessed\n- **Preprocessing applied**: True\n- **Artifact methods**: none\n- **Re-reference**: car\n- **Notes**: No trials were rejected and no artifact correction was performed. Data were re-referenced to common average reference offline.\n## Signal Processing\n- **Classifiers**: Logistic Regression\n- **Feature extraction**: CSP, Beamforming, Laplacian, Bandpower\n- **Frequency bands**: analyzed=[7.0, 30.0] Hz\n- **Spatial filters**: CSP, Beamforming, Laplacian\n## Cross-Validation\n- **Method**: bootstrapping\n- **Evaluation type**: within_subject\n## BCI Application\n- **Applications**: motor_control\n- **Environment**: shielded_room\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Motor\n## Documentation\n- **DOI**: 10.1109/TBME.2008.2009768\n- **License**: CC-BY-4.0\n- **Investigators**: Moritz Grosse-Wentrup, Christian Liefhold, Klaus Gramann, Martin Buss\n- **Senior author**: Martin Buss\n- **Contact**: moritzgw@ieee.org\n- **Institution**: Technische Universität München\n- **Department**: Institute of Automatic Control Engineering (LSR)\n- **Country**: DE\n- **Repository**: Zenodo\n- **Publication year**: 2009\n- **Keywords**: Beamforming, brain-computer interfaces, common spatial patterns, electroencephalography, motor imagery, spatial filtering\n## References\nGrosse-Wentrup, Moritz, et al. \"Beamforming in noninvasive brain–computer interfaces.\" IEEE Transactions on Biomedical Engineering 56.4 (2009): 1209-1219.\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":5824656927,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000145","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:40.349228+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-29T22:30:10Z"},"total_files":10,"computed_title":"Munich Motor Imagery dataset","nchans_counts":[{"val":128,"count":10}],"sfreq_counts":[{"val":500.0,"count":10}],"stats_computed_at":"2026-05-01T13:49:34.645126+00:00","total_duration_s":30257.3,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"e0846a1f7cd10ad8","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Closest few-shot match is the \"EEG Motor Movement/Imagery Dataset\" example (Schalk et al., EEGMMIDB), which is also a motor execution/imagery paradigm and is labeled Type=Motor. That example uses visual targets/cues to drive motor imagery/execution, consistent with mapping motor imagery studies to Type=Motor. For Modality, the few-shot example is labeled Visual, reflecting that the dominant stimulus channel is the on-screen cue/target, not the imagined movement itself; this guides selecting Visual here because the dataset explicitly uses arrow cues as the stimulus.","metadata_analysis":"Key explicit metadata facts:\n1) Pathology/participants: \"Health status: healthy\" and also \"- **Number of subjects**: 10\".\n2) Task/type: \"- **Task type**: motor_imagery\" and \"Subjects were instructed to perform haptic motor imagery of the left or the right hand during display of the arrow\".\n3) Stimulus modality: \"- **Stimulus type**: arrow_cue\" plus \"- **Stimulus modalities**: visual\" and \"- **Primary modality**: visual\".\nThese directly specify a healthy cohort performing a two-class left vs right hand motor imagery task cued visually by arrows.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\".\n- Few-shot pattern suggests: motor imagery benchmark datasets are typically Healthy (e.g., EEGMMIDB example labeled Healthy).\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"Stimulus modalities: visual\" and \"Primary modality: visual\" and \"Stimulus type: arrow_cue\".\n- Few-shot pattern suggests: for motor imagery tasks with on-screen cues, Modality is labeled Visual (see EEG Motor Movement/Imagery Dataset example labeled Visual).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"Task type: motor_imagery\" and participants \"perform haptic motor imagery of the left or the right hand\".\n- Few-shot pattern suggests: motor imagery/execution paradigms map to Type=Motor (see EEGMMIDB example).\n- Alignment: ALIGN.","decision_summary":"Top-2 candidates and decision per category:\n\n1) Pathology\n- Candidate A: Healthy — supported by quotes: \"Health status: healthy\"; \"Number of subjects: 10\" (no clinical recruitment mentioned).\n- Candidate B: Unknown — would apply only if health status were not stated.\nDecision: Healthy (metadata explicitly states 
healthy).\nConfidence basis: 1 direct explicit quote naming healthy population.\n\n2) Modality\n- Candidate A: Visual — supported by quotes: \"Stimulus modalities: visual\"; \"Primary modality: visual\"; \"Stimulus type: arrow_cue\".\n- Candidate B: Motor — plausible because the task is motor imagery, but modality is defined as stimulus/input channel, not imagined movement.\nDecision: Visual (dominant stimulus channel is the arrow cue on screen).\nConfidence basis: 3 explicit metadata statements about visual stimulus modality.\n\n3) Type\n- Candidate A: Motor — supported by quotes: \"Task type: motor_imagery\"; \"perform haptic motor imagery of the left or the right hand\"; plus HED shows \"Agent-action -> Imagine -> Move\".\n- Candidate B: Perception — possible if the main goal were arrow-direction discrimination, but the protocol states motor imagery as the task construct.\nDecision: Motor.\nConfidence basis: 2+ explicit motor-imagery/task-purpose statements (and consistent HED annotation)."}},"canonical_name":null,"name_confidence":0.85,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"GrosseWentrup2009"}}
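
A minimal usage sketch, not part of the record above: the embedded README says it was generated by MOABB 1.5.0 and gives the dataset code `GrosseWentrup2009`, so the data should be loadable through MOABB's dataset/paradigm API. The class name, the `LeftRightImagery` paradigm, and the 7.0-30.0 Hz band below are taken from the record's "Code", "Class labels", and "Frequency bands" fields; whether the class is exposed as `GrosseWentrup2009` depends on the installed MOABB version (earlier releases named it `MunichMI`).

```python
# Sketch: load the Munich Motor Imagery dataset via MOABB,
# assuming a MOABB version that exposes the GrosseWentrup2009 class
# (older releases name the same dataset MunichMI).
from moabb.datasets import GrosseWentrup2009
from moabb.paradigms import LeftRightImagery

dataset = GrosseWentrup2009()                     # record's "Code": GrosseWentrup2009
paradigm = LeftRightImagery(fmin=7.0, fmax=30.0)  # "Frequency bands: analyzed=[7.0, 30.0] Hz"

# X: (n_trials, n_channels, n_samples) array of band-passed epochs;
# y: "left_hand"/"right_hand" labels; meta: subject/session/run per trial.
X, y, meta = paradigm.get_data(dataset=dataset, subjects=[1])
print(X.shape, sorted(set(y)))
```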