{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cd4","dataset_id":"nm000311","associated_paper_doi":null,"authors":["Ji-Hoon Jeong","Jeong-Hyun Cho","Kyung-Hwan Shim","Byoung-Hee Kwon","Byeong-Hoo Lee","Do-Yeun Lee","Dae-Hyeok Lee","Seong-Whan Lee"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"10.82901/nemar.nm000311","datatypes":["eeg"],"demographics":{"subjects_count":25,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":{"r":25}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000311","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"f5cec11f982d50b86acc2ae94b1f1b4d1cff9e5da4c673c2cc6b858eb607771c","license":"CC0-1.0","n_contributing_labs":null,"name":"Multimodal upper-limb MI/ME EEG (Jeong et al. 2020)","readme":"[![DOI](https://img.shields.io/badge/DOI-10.82901%2Fnemar.nm000311-blue)](https://doi.org/10.82901/nemar.nm000311)\nJeong2020\n=========\nMultimodal MI+ME dataset from Jeong et al 2020.\nDataset Overview\n----------------\n  Code: Jeong2020\n  Paradigm: imagery\n  DOI: 10.1093/gigascience/giaa098\n  Subjects: 25\n  Sessions per subject: 3\n  Events: reach_forward=1, reach_backward=2, reach_left=3, reach_right=4, reach_up=5, reach_down=6, grasp_cup=7, grasp_ball=8, grasp_card=9, twist_pronation=10, twist_supination=11\n  Trial interval: [0, 4] s\n  Runs per session: 3\n  File format: BrainVision\nAcquisition\n-----------\n  Sampling rate: 1000.0 Hz\n  Number of channels: 71\n  Channel types: eeg=60, eog=4, emg=7\n  Channel names: Fp1, AF7, AF3, AFz, F7, F5, F3, F1, Fz, FT7, FC5, FC3, FC1, T7, C5, C3, C1, Cz, TP7, CP5, CP3, CP1, CPz, P7, P5, P3, P1, Pz, PO7, PO3, POz, Fp2, AF4, AF8, F2, F4, F6, F8, FC2, FC4, FC6, FT8, C2, C4, C6, T8, CP2, CP4, CP6, TP8, P2, P4, P6, P8, PO4, PO8, O1, Oz, O2, Iz\n  Montage: standard_1005\n  Hardware: BrainAmp (BrainProducts GmbH)\n  Reference: FCz\n  Ground: Fpz\n  Sensor type: actiCap\n  Line frequency: 60.0 Hz\n  Online filters: {'highpass': 0.016, 'lowpass': 1000}\nParticipants\n------------\n  Number of subjects: 25\n  Health status: healthy\n  Age: min=24.0, max=32.0\n  Gender distribution: female=10, male=15\n  Handedness: right-handed\n  BCI experience: naive\n  Species: human\nExperimental Protocol\n---------------------\n  Paradigm: imagery\n  Number of classes: 11\n  Class labels: reach_forward, reach_backward, reach_left, reach_right, reach_up, reach_down, grasp_cup, grasp_ball, grasp_card, twist_pronation, twist_supination\n  Trial duration: 4.0 s\n  Study design: 11 intuitive upper-limb movement tasks: 6 reaching + 3 grasping + 2 wrist twisting. 
MI and real movement conditions, 3 sessions.\n  Feedback type: none\n  Stimulus type: text cues\n  Stimulus modalities: visual\n  Primary modality: visual\n  Synchronicity: synchronous\n  Mode: offline\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  reach_forward\n    ├─ Sensory-event\n    └─ Label/reach_forward\n  reach_backward\n    ├─ Sensory-event\n    └─ Label/reach_backward\n  reach_left\n    ├─ Sensory-event\n    └─ Label/reach_left\n  reach_right\n    ├─ Sensory-event\n    └─ Label/reach_right\n  reach_up\n    ├─ Sensory-event\n    └─ Label/reach_up\n  reach_down\n    ├─ Sensory-event\n    └─ Label/reach_down\n  grasp_cup\n    ├─ Sensory-event\n    └─ Label/grasp_cup\n  grasp_ball\n    ├─ Sensory-event\n    └─ Label/grasp_ball\n  grasp_card\n    ├─ Sensory-event\n    └─ Label/grasp_card\n  twist_pronation\n    ├─ Sensory-event\n    └─ Label/twist_pronation\n  twist_supination\n    ├─ Sensory-event\n    └─ Label/twist_supination\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: motor_imagery\n  Imagery tasks: reach_forward, reach_backward, reach_left, reach_right, reach_up, reach_down, grasp_cup, grasp_ball, grasp_card, twist_pronation, twist_supination\n  Imagery duration: 4.0 s\nData Structure\n--------------\n  Trials: 41250\n  Trials context: 25 subjects x 3 sessions x 550 trials (300 reaching + 150 grasping + 100 twisting)\nSignal Processing\n-----------------\n  Classifiers: CSP+RLDA\n  Feature extraction: CSP\n  Frequency bands: mu_beta=[8.0, 30.0] Hz\n  Spatial filters: CSP\nCross-Validation\n----------------\n  Method: 10x10-fold\n  Folds: 10\n  Evaluation type: within_session\nBCI Application\n---------------\n  Applications: motor_control, prosthetics\n  Environment: laboratory\n  Online feedback: False\nTags\n----\n  Pathology: Healthy\n  Modality: Motor\n  Type: Research\nDocumentation\n-------------\n  DOI: 10.1093/gigascience/giaa098\n  License: CC0-1.0\n  Investigators: Ji-Hoon Jeong, Jeong-Hyun Cho, Kyung-Hwan Shim, Byoung-Hee Kwon, Byeong-Hoo Lee, Do-Yeun Lee, Dae-Hyeok Lee, Seong-Whan Lee\n  Institution: Korea University\n  Country: KR\n  Data URL: https://zenodo.org/records/19021436\n  Publication year: 2020\nReferences\n----------\nJeong, J.-H., Cho, J.-H., Shim, K.-H., et al. (2020). Multimodal signal dataset for 11 intuitive movement tasks from single upper extremity during multiple recording sessions. GigaScience, 9(10), giaa098. https://doi.org/10.1093/gigascience/giaa098\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1","2"],"size_bytes":95161245507,"source":"nemar","storage":{"backend":"s3","base":"s3://nemar/nm000311","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-22T12:30:55.983030+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-03T17:45:52Z"},"total_files":213,"computed_title":"Multimodal upper-limb MI/ME EEG (Jeong et al. 2020)","nchans_counts":[{"val":71,"count":213}],"sfreq_counts":[{"val":1000.0,"count":213}],"stats_computed_at":"2026-04-22T23:16:00.314500+00:00","total_duration_s":446631.785,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"04a13e19dd2b5b46","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.85,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot is **EEG Motor Movement/Imagery Dataset** (Schalk et al., BCI2000): it involves motor execution/imagery cued by on-screen visual targets, and is labeled **Modality=Visual** and **Type=Motor**. This guides the convention that for motor-imagery paradigms, the *stimulus channel* is typically labeled Visual (because cues are visual), while the *research construct* is Motor. The present dataset is likewise an upper-limb MI/ME (motor imagery + motor execution) dataset with visual text cues, so the same mapping applies.","metadata_analysis":"Key population facts: \"Health status: healthy\" and also \"Tags\\n----\\n  Pathology: Healthy\".\nKey task/type facts: \"Paradigm: imagery\", \"Detected paradigm: motor_imagery\", and \"Study design: 11 intuitive upper-limb movement tasks: 6 reaching + 3 grasping + 2 wrist twisting. MI and real movement conditions\".\nKey stimulus/modality facts: \"Stimulus type: text cues\", \"Stimulus modalities: visual\", and \"Primary modality: visual\".","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says \"Health status: healthy\" and explicitly tags \"Pathology: Healthy\". Few-shot pattern for non-clinical volunteer MI/ME datasets also suggests Healthy. ALIGN.\nModality: Metadata explicitly says \"Stimulus type: text cues\" and \"Stimulus modalities: visual\" (\"Primary modality: visual\"). Few-shot convention (e.g., EEG Motor Movement/Imagery Dataset labeled Modality=Visual despite motor actions) suggests using the cue modality (visual) rather than motor output. ALIGN.\nType: Metadata emphasizes MI/ME of upper-limb actions (\"motor_imagery\"; \"upper-limb movement tasks\"), matching few-shot motor imagery/execution datasets labeled Type=Motor. ALIGN.","decision_summary":"Top-2 candidates per category:\n- Pathology: (1) Healthy vs (2) Other. Winner Healthy due to explicit recruitment/annotation: \"Health status: healthy\"; \"Pathology: Healthy\". Alignment: align.\n- Modality: (1) Visual vs (2) Motor. Winner Visual because stimuli are visual cues: \"Stimulus type: text cues\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\". Few-shot motor imagery example uses same convention (Modality=Visual). Alignment: align.\n- Type: (1) Motor vs (2) Attention/Other. 
Winner Motor because purpose is motor imagery/execution classification: \"Detected paradigm: motor_imagery\"; \"MI and real movement conditions\"; movement classes (reach/grasp/twist). Few-shot motor imagery example labeled Type=Motor. Alignment: align.\nConfidence justification: Pathology has 2+ explicit metadata statements including an explicit pathology tag; Modality has 3 explicit visual-stimulus statements; Type has multiple explicit motor-imagery/movement-task statements plus a strong few-shot analog."}},"canonical_name":null,"name_confidence":0.78,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Jeong2020"}}
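
The record above is the `data` payload of a standard success envelope (`{"success": ..., "database": ..., "data": ...}`). Below is a minimal sketch of consuming it over HTTP; the endpoint URL is hypothetical, since the record does not state the eegdash REST route, but the field names match the payload shown above.

```python
# Sketch: fetch and read the dataset record. BASE_URL is an assumption,
# not a documented eegdash endpoint.
import requests

BASE_URL = "https://eegdash.example.org/api/datasets"  # hypothetical

resp = requests.get(f"{BASE_URL}/nm000311", timeout=30)
resp.raise_for_status()
payload = resp.json()
assert payload["success"] and payload["database"] == "eegdash"

record = payload["data"]
print(record["name"])                            # Multimodal upper-limb MI/ME EEG (Jeong et al. 2020)
print(record["demographics"]["subjects_count"])  # 25
print(record["sessions"])                        # ['0', '1', '2']
print(f'{record["size_bytes"] / 1e9:.1f} GB')    # 95.2 GB
```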
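The `storage` block names an S3 backend with base `s3://nemar/nm000311`, a raw key, and three dependent keys. A sketch of pulling those sidecar files with boto3 follows, assuming the bucket permits anonymous (unsigned) reads; if it does not, AWS credentials would be required.

```python
# Sketch: fetch the sidecar files listed under "storage". Anonymous access
# is an assumption about the nemar bucket's policy.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
BUCKET, PREFIX = "nemar", "nm000311"

# raw_key plus dep_keys, exactly as listed in the record
for key in ["dataset_description.json", "README.md",
            "participants.json", "participants.tsv"]:
    obj = s3.get_object(Bucket=BUCKET, Key=f"{PREFIX}/{key}")
    body = obj["Body"].read()
    print(f"{key}: {len(body)} bytes")
```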
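The README documents eleven event codes, BrainVision source files, a [0, 4] s trial interval, and a standard_1005 montage. A sketch of epoching one recording with MNE-Python under those parameters; the file path is a hypothetical BIDS-style name, and the annotation-to-code mapping returned by the reader may need reconciling against the documented codes for real files.

```python
# Sketch: load one BrainVision recording and epoch it on the documented
# trial interval. The .vhdr path is hypothetical.
import mne

EVENT_ID = {  # codes exactly as documented in the README
    "reach_forward": 1, "reach_backward": 2, "reach_left": 3,
    "reach_right": 4, "reach_up": 5, "reach_down": 6,
    "grasp_cup": 7, "grasp_ball": 8, "grasp_card": 9,
    "twist_pronation": 10, "twist_supination": 11,
}

raw = mne.io.read_raw_brainvision(
    "sub-01/ses-0/eeg/sub-01_ses-0_task-imagery_eeg.vhdr",  # hypothetical
    preload=True,
)
raw.set_montage("standard_1005", on_missing="ignore")

events, ann_ids = mne.events_from_annotations(raw)
# ann_ids maps this file's annotation descriptions to integer codes;
# reconcile it against EVENT_ID before trusting class labels downstream.
epochs = mne.Epochs(raw, events, event_id=ann_ids,
                    tmin=0.0, tmax=4.0, baseline=None, preload=True)
```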
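The Signal Processing and Cross-Validation sections specify CSP features in the 8-30 Hz mu/beta band, an "RLDA" classifier, and 10x10-fold within-session evaluation. One plausible reading of that recipe, sketched with MNE and scikit-learn: the shrinkage LDA used as the regularized-LDA stand-in and the CSP component count are assumptions, not the authors' exact configuration.

```python
# Sketch: band-pass, CSP, regularized LDA, 10x10-fold CV. n_components=8
# and the shrinkage-LDA choice are assumptions.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

# `epochs` from the previous sketch; keep EEG channels only for CSP
epochs_mu_beta = epochs.copy().pick("eeg").filter(8.0, 30.0)
X = epochs_mu_beta.get_data()      # shape: (n_trials, n_channels, n_times)
y = epochs_mu_beta.events[:, -1]   # integer class codes

clf = make_pipeline(
    CSP(n_components=8),
    LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, n_jobs=-1)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

The record marks the evaluation type as within_session, so a faithful reproduction would run this scoring separately per subject and session rather than on pooled trials.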