{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c80","dataset_id":"nm000152","associated_paper_doi":null,"authors":["Xin Zhang","Xinyi Yong","Carlo Menon"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":12,"ages":[27,28,24,21,31,21,30,26,20,33,23,33],"age_min":20,"age_max":33,"age_mean":26.416666666666668,"species":null,"sex_distribution":{"m":10,"f":2},"handedness_distribution":{"r":11,"l":1}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000152","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"a13ea48f22d06eb5d4b366286bf0b9e40990df6323a6c83e2d58765397eb492c","license":"CC BY 4.0","n_contributing_labs":null,"name":"Upper-limb elbow-centered motor imagery dataset (10 classes)","readme":"# Upper-limb elbow-centered motor imagery dataset (10 classes)\nUpper-limb elbow-centered motor imagery dataset (10 classes).\n## Dataset Overview\n- **Code**: Zhang2017\n- **Paradigm**: imagery\n- **DOI**: 10.1371/journal.pone.0188293\n- **Subjects**: 12\n- **Sessions per subject**: 1\n- **Events**: rest=1, elbow_flexion=2, drawer=3, soup=4, weight_lifting=5, door=6, plate_cleaning=7, combing=8, pizza_cutting=9, pick_and_place=10\n- **Trial interval**: [0, 4] s\n- **Runs per session**: 15\n- **File format**: BCI2000\n## Acquisition\n- **Sampling rate**: 1000.0 Hz\n- **Number of channels**: 17\n- **Channel types**: eeg=17\n- **Hardware**: EGI Geodesic Net Amps 400 series (N400)\n- **Software**: BCI2000 (Stimulus Presentation mode)\n- **Reference**: Cz\n- **Ground**: COM\n- **Sensor type**: Ag/AgCl sponge\n- **Line frequency**: 60.0 Hz\n- **Online filters**: {'bandpass': [0.1, 40]}\n## Participants\n- **Number of subjects**: 12\n- **Health status**: healthy\n- **Age**: min=20, max=33\n- **Gender distribution**: male=10, female=2\n- **Handedness**: {'right': 11, 'left': 1}\n- **BCI experience**: naive\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 10\n- **Class labels**: rest, elbow_flexion, drawer, soup, weight_lifting, door, plate_cleaning, combing, pizza_cutting, pick_and_place\n- **Trial duration**: 5.0 s\n- **Study design**: Upper-limb elbow-centered motor imagery with 9 goal-directed tasks plus rest. 
Each trial: 4-6 s cue (randomized) then 4-6 s rest (randomized).\n- **Feedback type**: none\n- **Stimulus type**: picture cues\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Instructions**: Participants were asked to repetitively perform the kinesthetic motor imagery task displayed on the screen without actually moving.\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  rest\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Rest\n  elbow_flexion\n    ├─ Sensory-event\n    └─ Label/elbow_flexion\n  drawer\n    ├─ Sensory-event\n    └─ Label/drawer\n  soup\n    ├─ Sensory-event\n    └─ Label/soup\n  weight_lifting\n    ├─ Sensory-event\n    └─ Label/weight_lifting\n  door\n    ├─ Sensory-event\n    └─ Label/door\n  plate_cleaning\n    ├─ Sensory-event\n    └─ Label/plate_cleaning\n  combing\n    ├─ Sensory-event\n    └─ Label/combing\n  pizza_cutting\n    ├─ Sensory-event\n    └─ Label/pizza_cutting\n  pick_and_place\n    ├─ Sensory-event\n    └─ Label/pick_and_place\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: elbow_flexion, drawer, soup, weight_lifting, door, plate_cleaning, combing, pizza_cutting, pick_and_place\n- **Cue duration**: 5.0 s\n- **Imagery duration**: 5.0 s\n## Data Structure\n- **Trials**: 330\n- **Trials context**: 15 runs of 22 trials each (4 rest + 2 each of the 9 MI tasks). Total: 60 rest + 30 per MI task = 330.\n## Preprocessing\n- **Data state**: raw\n- **Preprocessing applied**: False\n## Signal Processing\n- **Classifiers**: LDA, DAL\n- **Feature extraction**: bandpower, CSP, FBCSP\n- **Frequency bands**: bandpass=[6.0, 35.0] Hz; mu=[7.0, 13.0] Hz; beta=[13.0, 30.0] Hz\n- **Spatial filters**: CSP, FBCSP\n## Cross-Validation\n- **Method**: 5x5-fold\n- **Folds**: 5\n- **Evaluation type**: within_subject\n## BCI Application\n- **Applications**: motor_control, rehabilitation\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Research\n## Documentation\n- **DOI**: 10.1371/journal.pone.0188293\n- **License**: CC BY 4.0\n- **Investigators**: Xin Zhang, Xinyi Yong, Carlo Menon\n- **Senior author**: Carlo Menon\n- **Institution**: Simon Fraser University\n- **Department**: School of Engineering Science\n- **Country**: CA\n- **Repository**: Figshare\n- **Data URL**: https://doi.org/10.6084/m9.figshare.5579461.v1\n- **Publication year**: 2017\n- **Keywords**: motor imagery, upper limb, elbow, BCI, EEG, kinesthetic imagery\n## References\nX. Zhang, X. Yong, and C. Menon, \"Evaluating the versatility of EEG models generated from motor imagery tasks: An exploratory investigation on upper-limb elbow-centered motor imagery tasks,\" PLoS ONE, vol. 12, no. 11, e0188293, 2017. DOI: 10.1371/journal.pone.0188293\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). 
EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":1727306574,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000152","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:42.284196+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-29T22:40:33Z"},"total_files":180,"computed_title":"Upper-limb elbow-centered motor imagery dataset (10 classes)","nchans_counts":[{"val":17,"count":180}],"sfreq_counts":[{"val":1000.0,"count":180}],"stats_computed_at":"2026-05-01T13:49:34.645223+00:00","total_duration_s":33282.9,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"91b82cc41527c2b1","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Closest few-shot match is the \"EEG Motor Movement/Imagery Dataset\" example (Schalk et al.), which is a motor imagery/movement paradigm where cues are shown on a screen. In that example, the catalog convention labels Modality as \"Visual\" (because cues are visual) and Type as \"Motor\" (because motor execution/imagery is the construct). This directly guides mapping here: motor imagery with picture cues → Modality=Visual, Type=Motor.","metadata_analysis":"Key facts from provided metadata/readme: (1) Population: \"Health status: healthy\" and also \"Subjects: 12\" with no clinical recruitment stated. (2) Task/paradigm: \"Paradigm: imagery\" and \"Upper-limb elbow-centered motor imagery with 9 goal-directed tasks plus rest\" and \"Participants were asked to repetitively perform the kinesthetic motor imagery task displayed on the screen without actually moving.\" (3) Stimulus/input channel: \"Stimulus type: picture cues\" plus \"Stimulus modalities: visual\" and \"Primary modality: visual.\"","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says \"Health status: healthy\" (ALIGNS with few-shot conventions for non-clinical cohorts → Healthy). Modality: Metadata says \"Stimulus type: picture cues\" and \"Stimulus modalities: visual\" / \"Primary modality: visual\"; few-shot motor imagery example also uses Modality=Visual due to screen cues (ALIGN). Type: Metadata says \"Upper-limb elbow-centered motor imagery\" and participants performed \"kinesthetic motor imagery\"; few-shot convention maps motor imagery studies to Type=Motor (ALIGN). No conflicts detected; no need to override few-shot patterns with contrary metadata facts.","decision_summary":"Top-2 candidates per category and selection.\n\nPathology:\n- Healthy (winner): explicit \"Health status: healthy\"; no diagnosis-based recruitment.\n- Unknown (runner-up): would apply if health status absent/unclear.\nAlignment: Align. 
Final: Healthy.\n\nModality:\n- Visual (winner): \"Stimulus type: picture cues\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\"; matches few-shot motor imagery convention (visual cueing).\n- Motor (runner-up): motor imagery involves motor system, but modality is defined as stimulus input channel, not response/imagery domain.\nAlignment: Align. Final: Visual.\n\nType:\n- Motor (winner): \"motor imagery dataset\"; \"kinesthetic motor imagery\"; multiple motor-imagery classes (e.g., \"elbow_flexion\").\n- Perception (runner-up): possible if primary goal were visual cue processing, but metadata emphasizes motor imagery/BCI.\nAlignment: Align. Final: Motor.\n\nConfidence justification quotes/features: Pathology supported by 1 strong explicit quote; Modality supported by 3 explicit quotes; Type supported by 2+ explicit quotes plus strong few-shot analog."}},"canonical_name":null,"name_confidence":0.74,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Zhang2017"}}
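
The record above is one minified JSON document, so its internal arithmetic (age statistics, trial counts, per-file figures) can be checked mechanically. Below is a minimal Python sketch of such a check; the filename `nm000152.json` is a hypothetical local copy of this response, and every field name comes from the record itself.

```python
import json

# Load a locally saved copy of the API response (hypothetical filename).
with open("nm000152.json") as f:
    record = json.load(f)["data"]

demo = record["demographics"]
ages = demo["ages"]
assert len(ages) == demo["subjects_count"] == 12
assert min(ages) == demo["age_min"] and max(ages) == demo["age_max"]
assert abs(sum(ages) / len(ages) - demo["age_mean"]) < 1e-9  # 317 / 12 = 26.4166...

# Trial bookkeeping from the readme: 15 runs x 22 trials
# = 60 rest trials + 30 trials for each of the 9 imagery tasks = 330.
assert 15 * (4 + 2 * 9) == 60 + 9 * 30 == 330

# All 180 recordings report 17 channels at 1000 Hz.
assert record["nchans_counts"] == [{"val": 17, "count": 180}]
assert record["sfreq_counts"] == [{"val": 1000.0, "count": 180}]
print(record["total_duration_s"] / record["total_files"], "s of EEG per file on average")
```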
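The storage block points at a BIDS 1.9.0 dataset (`s3://nemar/nm000152`) with a single session `"0"` and one `imagery` task. The sketch below shows how one might open a recording with MNE-BIDS, assuming the dataset has first been downloaded from NEMAR to a local folder; the subject label `"01"` is hypothetical, since the record lists only `subjects_count`, not subject IDs, and the on-disk extension after BIDS conversion of the original BCI2000 files is not stated (`read_raw_bids` dispatches on whatever extension it finds).

```python
import mne
from mne_bids import BIDSPath, read_raw_bids

bids_root = "path/to/nm000152"  # assumed local copy of s3://nemar/nm000152

bids_path = BIDSPath(
    subject="01",    # hypothetical label; the record only gives subjects_count=12
    session="0",     # "sessions": ["0"] in the record
    task="imagery",  # "tasks": ["imagery"]
    datatype="eeg",
    root=bids_root,
)

raw = read_raw_bids(bids_path)  # expect 17 EEG channels sampled at 1000 Hz
events, event_id = mne.events_from_annotations(raw)  # cue labels as listed in the readme
```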
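The readme's Signal Processing and Cross-Validation sections name CSP/FBCSP features, LDA and DAL classifiers, and 5x5-fold within-subject evaluation. The sketch below is one plausible reading of that recipe (plain CSP + LDA under repeated stratified 5-fold), not the paper's exact pipeline (which also used bandpower features and DAL); `X` and `y` are assumed to be pre-epoched trials already band-passed to the record's 6-35 Hz range.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

def within_subject_score(X, y, n_folds=5, n_repeats=5):
    """5x5-fold CV for one subject.

    X: (n_trials, n_channels, n_times) epochs, band-passed to 6-35 Hz.
    y: (n_trials,) integer labels, e.g. 1 (rest) .. 10 (pick_and_place).
    """
    clf = make_pipeline(CSP(n_components=6, log=True),  # log-variance CSP features
                        LinearDiscriminantAnalysis())
    repeat_means = []
    for seed in range(n_repeats):  # 5 repeats of 5-fold = the record's "5x5-fold"
        cv = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
        repeat_means.append(cross_val_score(clf, X, y, cv=cv).mean())
    return float(np.mean(repeat_means))
```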