{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c8d","dataset_id":"nm000173","associated_paper_doi":null,"authors":["Patrick Ofner","Andreas Schwarz","Joana Pereira","Gernot R. Müller-Putz"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":15,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000173","osf_url":null,"github_url":null,"paper_url":null},"funding":["H2020-643955 MoreGrasp","ERC Consolidator Grant ERC-681231 Feel Your Reach"],"ingestion_fingerprint":"2e7ea5ddd5ceebdd021484543dbec2021ef3f473104bf2314f40a29d6967ec70","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Motor Imagery ataset from Ofner et al 2017","readme":"# Motor Imagery ataset from Ofner et al 2017\nMotor Imagery ataset from Ofner et al 2017.\n## Dataset Overview\n- **Code**: Ofner2017\n- **Paradigm**: imagery\n- **DOI**: 10.1371/journal.pone.0182578\n- **Subjects**: 15\n- **Sessions per subject**: 2\n- **Events**: right_elbow_flexion=1536, right_elbow_extension=1537, right_supination=1538, right_pronation=1539, right_hand_close=1540, right_hand_open=1541, rest=1542\n- **Trial interval**: [0, 3] s\n- **Runs per session**: 10\n- **Session IDs**: movement_execution, motor_imagery\n- **File format**: gdf\n## Acquisition\n- **Sampling rate**: 512.0 Hz\n- **Number of channels**: 61\n- **Channel types**: eeg=61, eog=3, misc=32\n- **Channel names**: C1, C2, C3, C4, C5, C6, CCP1h, CCP2h, CCP3h, CCP4h, CCP5h, CCP6h, CP1, CP2, CP3, CP4, CP5, CP6, CPP1h, CPP2h, CPP3h, CPP4h, CPP5h, CPP6h, CPz, Cz, F1, F2, F3, F4, FC1, FC2, FC3, FC4, FC5, FC6, FCC1h, FCC2h, FCC3h, FCC4h, FCC5h, FCC6h, FCz, FFC1h, FFC2h, FFC3h, FFC4h, FFC5h, FFC6h, FTT7h, FTT8h, Fz, P1, P2, P3, P4, PPO1h, PPO2h, Pz, TTP7h, TTP8h, armeodummy-0, armeodummy-1, armeodummy-10, armeodummy-11, armeodummy-12, armeodummy-2, armeodummy-3, armeodummy-4, armeodummy-5, armeodummy-6, armeodummy-7, armeodummy-8, armeodummy-9, eog-l, eog-m, eog-r, gesture, index_far, index_middle, index_near, litte_far, litte_near, middle_far, middle_near, middle_ring, pitch, ring_far, ring_little, ring_near, roll, thumb_far, thumb_index, thumb_near, thumb_palm, wrist_bend\n- **Montage**: standard_1005\n- **Hardware**: g.tec medical engineering GmbH\n- **Reference**: right mastoid\n- **Ground**: AFz\n- **Sensor type**: active\n- **Line frequency**: 50.0 Hz\n- **Online filters**: 0.01-200 Hz bandpass (8th order Chebyshev), 50 Hz notch\n## Participants\n- **Number of subjects**: 15\n- **Health status**: healthy\n- **Age**: mean=27.0, std=5.0, min=22.0, max=40.0\n- **Gender distribution**: female=9, male=6\n- **Handedness**: {'right': 14, 'left': 1}\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 7\n- **Class labels**: right_elbow_flexion, right_elbow_extension, right_supination, right_pronation, right_hand_close, right_hand_open, rest\n- **Study design**: Trial-based paradigm with sustained movements/motor imagery. Each trial: fixation cross at 0s, cue presentation at 2s, sustained movement/MI execution. 
Subjects performed both movement execution (ME) and motor imagery (MI) in separate sessions.\n- **Feedback type**: none\n- **Stimulus type**: visual cue\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Training/test split**: False\n- **Instructions**: Subjects were instructed to execute sustained movements in the ME session and to perform kinesthetic motor imagery in the MI session. For the rest class, subjects were instructed to avoid any movement and to stay in the starting position.\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  right_elbow_flexion\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Flex\n          └─ Right, Elbow\n  right_elbow_extension\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Stretch\n          └─ Right, Elbow\n  right_supination\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Turn\n          ├─ Right, Forearm\n          └─ Label/supination\n  right_pronation\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Turn\n          ├─ Right, Forearm\n          └─ Label/pronation\n  right_hand_close\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Close\n          └─ Right, Hand\n  right_hand_open\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Open\n          └─ Right, Hand\n  rest\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Rest\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: elbow_flexion, elbow_extension, forearm_supination, forearm_pronation, hand_open, hand_close\n## Data Structure\n- **Trials**: 420\n- **Trials per class**: elbow_flexion=60, elbow_extension=60, forearm_supination=60, forearm_pronation=60, hand_open=60, hand_close=60, rest=60\n- **Trials context**: per_session\n## Preprocessing\n- **Preprocessing applied**: False\n## Signal Processing\n- **Classifiers**: sLDA\n- **Feature extraction**: time-domain signals, discriminative spatial patterns (DSP)\n- **Frequency bands**: analyzed=[0.3, 3.0] Hz\n- **Spatial filters**: sLORETA source localization\n## Cross-Validation\n- **Method**: 10x10-fold cross-validation\n- **Folds**: 10\n- **Evaluation type**: within-session\n## Performance (Original Study)\n- **Mov Vs Mov Me**: 55.0\n- **Mov Vs Rest Me**: 87.0\n- **Mov Vs Mov Mi**: 27.0\n- **Mov Vs Rest Mi**: 73.0\n## BCI Application\n- **Applications**: neuroprosthesis, robotic_arm\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Motor Imagery, Motor Execution\n## Documentation\n- **DOI**: 10.1371/journal.pone.0182578\n- **Associated paper DOI**: 10.1371/journal.pone.0182578\n- **License**: CC-BY-4.0\n- **Investigators**: Patrick Ofner, Andreas Schwarz, Joana Pereira, Gernot R. Müller-Putz\n- **Senior author**: Gernot R. Müller-Putz\n
- **Contact**: gernot.mueller@tugraz.at\n- **Institution**: Graz University of Technology\n- **Department**: Institute of Neural Engineering, BCI-Lab\n- **Country**: AT\n- **Repository**: BNCI Horizon 2020\n- **Data URL**: https://bnci-horizon-2020.eu/database/data-sets\n- **Publication year**: 2017\n- **Funding**: H2020-643955 MoreGrasp; ERC Consolidator Grant ERC-681231 Feel Your Reach\n- **Ethics approval**: Medical University of Graz, approval number 28-108 ex 15/16\n- **Acknowledgements**: Data are available from the BNCI Horizon 2020 database at http://bnci-horizon-2020.eu/database/data-sets (accession number 001-2017) and from Zenodo at DOI 10.5281/zenodo.834976\n- **Keywords**: upper limb movements, EEG, motor imagery, movement execution, low-frequency, time-domain, BCI, neuroprosthesis\n## Abstract\nHow neural correlates of movements are represented in the human brain is of ongoing interest and has been researched with invasive and non-invasive methods. In this study, we analyzed the encoding of single upper limb movements in the time-domain of low-frequency electroencephalography (EEG) signals. Fifteen healthy subjects executed and imagined six different sustained upper limb movements. We classified these six movements and a rest class and obtained significant average classification accuracies of 55% (movement vs movement) and 87% (movement vs rest) for executed movements, and 27% and 73%, respectively, for imagined movements. Furthermore, we analyzed the classifier patterns in the source space and located the brain areas conveying discriminative movement information. The classifier patterns indicate that mainly premotor areas, primary motor cortex, somatosensory cortex and posterior parietal cortex convey discriminative movement information. The decoding of single upper limb movements is especially interesting in the context of a more natural non-invasive control of e.g., a motor neuroprosthesis or a robotic arm in highly motor disabled persons.\n## Methodology\nSubjects performed 6 sustained upper limb movements (elbow flexion/extension, forearm supination/pronation, hand open/close) plus rest in two separate sessions (movement execution and motor imagery). EEG was recorded from 61 channels, filtered to 0.3-3 Hz, and classified using shrinkage LDA with discriminative spatial patterns. Source localization was performed using sLORETA. Classification employed both single time-point and time-window approaches with 10x10-fold cross-validation.\n## References\nOfner, P., Schwarz, A., Pereira, J. and Müller-Putz, G.R. (2017). Upper limb movements can be decoded from the time-domain of low-frequency EEG. PLoS ONE, 12(8), e0182578. https://doi.org/10.1371/journal.pone.0182578\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0execution","1imagination"],"size_bytes":9172751830,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000173","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:48.103929+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-30T13:04:45Z"},"total_files":300,"computed_title":"Motor Imagery Dataset from Ofner et al 2017","nchans_counts":[{"val":61,"count":300}],"sfreq_counts":[{"val":512.0,"count":300}],"stats_computed_at":"2026-05-01T13:49:34.645391+00:00","total_duration_s":97570.4140625,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"7711fcca4f8549ad","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example is **“EEG Motor Movement/Imagery Dataset”** (Schalk et al.): it is a trial-based motor execution + motor imagery paradigm with **visual targets/cues on a screen**, labeled **Pathology=Healthy, Modality=Visual, Type=Motor**. The present dataset is likewise motor execution + motor imagery with explicit **visual cue** presentation, so the same convention applies. Other few-shot examples (e.g., gambling/oddball/digit-span) are less similar because they are not motor imagery/execution paradigms.","metadata_analysis":"Key quoted metadata facts:\n1) Population: \"**Health status**: healthy\" and the abstract states \"**Fifteen healthy subjects** executed and imagined six different sustained upper limb movements.\" Also provided tags include \"**Pathology**: Healthy\".\n2) Task/construct: \"**Paradigm**: imagery\" and \"Subjects performed both **movement execution (ME)** and **motor imagery (MI)** in separate sessions.\" Events are movement/imagery classes: \"right_elbow_flexion... right_hand_open... **rest**\".\n3) Stimulus modality: \"**Stimulus type: visual cue**\" and protocol includes \"**fixation cross**\" and \"**cue presentation**\". HED annotations repeatedly include \"**Visual-presentation**\".","paper_abstract_analysis":"Abstract is included in the README. It reinforces the task and population: \"**Fifteen healthy subjects executed and imagined** six different sustained upper limb movements\" and frames the study goal as decoding/representation of movements from EEG (motor domain). 
No contradiction with metadata.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\"; \"Fifteen healthy subjects\"; tag \"Pathology: Healthy\".\n- Few-shot suggests: Motor imagery/execution benchmark datasets are typically labeled Healthy unless a disorder is stated (e.g., EEGMMIDB example is Healthy).\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"Stimulus type: visual cue\"; includes \"fixation cross\" and \"cue presentation\"; HED includes \"Visual-presentation\".\n- Few-shot suggests: In motor imagery tasks with on-screen targets/cues, label Modality as Visual (EEG Motor Movement/Imagery Dataset example).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"movement execution (ME) and motor imagery (MI)\"; events are upper-limb movements; described as decoding \"upper limb movements\".\n- Few-shot suggests: Motor imagery/execution datasets are labeled Type=Motor (EEG Motor Movement/Imagery Dataset example).\n- Alignment: ALIGN.","decision_summary":"Top-2 comparative selections:\n\nPathology:\n1) Healthy (WIN) — evidence: \"Health status: healthy\"; \"Fifteen healthy subjects\"; tag \"Pathology: Healthy\".\n2) Unknown (runner-up) — would apply if health status were not specified.\nDecision: Healthy. Alignment status: ALIGN. Confidence justification: 3 explicit metadata/README quotes + strong few-shot analog.\n\nModality:\n1) Visual (WIN) — evidence: \"Stimulus type: visual cue\"; protocol includes \"fixation cross\"/\"cue presentation\"; HED: \"Visual-presentation\"; plus few-shot EEGMMIDB convention.\n2) Motor (runner-up) — plausible because the paradigm is motor imagery/execution, but modality is defined by presented stimuli; here cues are explicitly visual.\nDecision: Visual. Alignment status: ALIGN. Confidence justification: 3 explicit visual-stimulus indicators + strong few-shot analog.\n\nType:\n1) Motor (WIN) — evidence: \"movement execution (ME) and motor imagery (MI)\"; paradigm \"imagery\"; movement-class events (elbow/forearm/hand + rest) and decoding movement information.\n2) Perception (runner-up) — would fit if the main goal were sensory discrimination; not supported here.\nDecision: Motor. Alignment status: ALIGN. Confidence justification: multiple explicit motor-imagery/execution statements + strong few-shot analog."}},"canonical_name":null,"name_confidence":0.84,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Ofner2017"}}
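The record's dataset code (`Ofner2017`) and its "Generated by MOABB 1.5.0" footer point to MOABB as the intended loading path. Below is a minimal sketch of how the record's key fields (two sessions per subject, 7 classes, the [0, 3] s trial interval, and the 0.3-3.0 Hz analysis band) would map onto a MOABB loading call; it assumes `moabb` is installed, and the `imagined`/`executed` flags and `MotorImagery` keywords reflect the MOABB API at the time of writing and may differ across versions.

```python
# Minimal sketch (not part of the record): loading the dataset through MOABB,
# the toolkit named in the README footer. Assumes `pip install moabb`; the
# `imagined`/`executed` flags and paradigm keywords may differ across versions.
from moabb.datasets import Ofner2017
from moabb.paradigms import MotorImagery

# The record lists two sessions per subject (movement execution and motor
# imagery); here we request only the imagery session.
dataset = Ofner2017(imagined=True, executed=False)

# Mirror the analysis parameters stated in the README: 0.3-3.0 Hz band, the
# [0, 3] s trial interval, and all 7 classes (6 movements + rest).
paradigm = MotorImagery(n_classes=7, fmin=0.3, fmax=3.0, tmin=0.0, tmax=3.0)

# X: (n_trials, n_channels, n_samples) epochs; y: class labels; meta: per-trial info.
X, y, meta = paradigm.get_data(dataset=dataset, subjects=[1])
print(X.shape, sorted(set(y)))
```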
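The README's Signal Processing and Cross-Validation fields summarize the original evaluation: time-domain features classified with shrinkage LDA (sLDA) under 10x10-fold cross-validation. The following rough sketch runs that evaluation on the epochs from the block above, using scikit-learn's Ledoit-Wolf shrinkage LDA as a stand-in for the paper's sLDA; flattening each epoch into one feature vector is an illustrative simplification, not the authors' exact DSP/time-window feature extraction.

```python
# Rough sketch (illustrative, not the authors' exact pipeline): "sLDA +
# 10x10-fold CV" using scikit-learn's shrinkage LDA as a stand-in.
# Reuses X, y from the MOABB loading sketch above.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

clf = make_pipeline(
    # Flatten (n_channels, n_samples) epochs into one feature vector per trial.
    FunctionTransformer(lambda E: E.reshape(len(E), -1)),
    # solver="lsqr" with shrinkage="auto" gives Ledoit-Wolf shrinkage LDA.
    LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
)

# 10 folds repeated 10 times, matching the record's "10x10-fold cross-validation".
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=42)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```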