{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c7f","dataset_id":"nm000151","associated_paper_doi":null,"authors":["Mojgan Tavakolan","Zack Frehlick","Xinyi Yong","Carlo Menon"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":"10.82901/nemar.nm000151","datatypes":["eeg"],"demographics":{"subjects_count":12,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000151","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"fd9debbb3fb8ca541b768064afd3db8618ac26c384400da47ed484d1977aa4de","license":"CC0-1.0","n_contributing_labs":null,"name":"Motor imagery dataset for three imaginary states of the same upper extremity","readme":"[![DOI](https://img.shields.io/badge/DOI-10.82901%2Fnemar.nm000151-blue)](https://doi.org/10.82901/nemar.nm000151)\n# Motor imagery dataset for three imaginary states of the same upper extremity\nMotor imagery dataset for three imaginary states of the same upper extremity.\n## Dataset Overview\n- **Code**: Tavakolan2017\n- **Paradigm**: imagery\n- **DOI**: 10.1371/journal.pone.0174161\n- **Subjects**: 12\n- **Sessions per subject**: 4\n- **Events**: rest=1, right_hand=2, right_elbow_flexion=3\n- **Trial interval**: [0, 3] s\n- **File format**: BCI2000\n## Acquisition\n- **Sampling rate**: 1000.0 Hz\n- **Number of channels**: 32\n- **Channel types**: eeg=32\n- **Montage**: GSN-HydroCel-32\n- **Hardware**: EGI Geodesic Net Amps 400 series\n- **Reference**: Cz\n- **Sensor type**: Ag/AgCl sponge\n- **Line frequency**: 60.0 Hz\n- **Online filters**: {'bandpass': [0.1, 100]}\n- **Impedance threshold**: 50 kOhm\n## Participants\n- **Number of subjects**: 12\n- **Health status**: healthy\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 3\n- **Class labels**: rest, right_hand, right_elbow_flexion\n- **Trial duration**: 3.0 s\n- **Study design**: Three-class motor imagery of the same upper extremity: rest, grasping (MI-GRASP), and elbow flexion (MI-ELBOW). 20 trials per class per session, 4 sessions per subject.\n- **Feedback type**: none\n- **Stimulus type**: visual cue\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Instructions**: REST: relax without movement. MI-GRASP: imagine opening and closing all fingers to grab an object. 
MI-ELBOW: imagine moving the forearm up and down.\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  rest\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Rest\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n  right_elbow_flexion\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Flex\n          └─ Right, Elbow\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: rest, right_hand, right_elbow_flexion\n- **Cue duration**: 3.0 s\n- **Imagery duration**: 3.0 s\n## Data Structure\n- **Trials**: 2880\n- **Trials per class**: rest=20, right_hand=20, right_elbow_flexion=20\n- **Trials context**: 12 subjects x 4 sessions x 60 trials (20 per class)\n## Preprocessing\n- **Data state**: continuous\n## Signal Processing\n- **Classifiers**: SVM-RBF\n- **Feature extraction**: autoregressive_coefficients, waveform_length, root_mean_square\n- **Frequency bands**: bandpass=[6.0, 35.0] Hz\n## Cross-Validation\n- **Method**: 10x10-fold\n- **Folds**: 10\n- **Evaluation type**: within_subject\n## BCI Application\n- **Applications**: motor_control, rehabilitation\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Research\n## Documentation\n- **DOI**: 10.1371/journal.pone.0174161\n- **License**: CC0-1.0\n- **Investigators**: Mojgan Tavakolan, Zack Frehlick, Xinyi Yong, Carlo Menon\n- **Senior author**: Carlo Menon\n- **Institution**: Simon Fraser University\n- **Department**: MENRVA Research Group, Schools of Mechatronic Systems Engineering and Engineering Science\n- **Country**: CA\n- **Repository**: Zenodo\n- **Data URL**: https://zenodo.org/records/18967205\n- **Publication year**: 2017\n- **Ethics approval**: Simon Fraser University Office of Research Ethics\n- **Keywords**: motor imagery, EEG, upper extremity, same limb, time-domain features, SVM, BCI\n## References\nM. Tavakolan, Z. Frehlick, X. Yong, and C. Menon, \"Classifying three imaginary states of the same upper extremity using time-domain features,\" PLoS ONE, vol. 12, no. 3, e0174161, 2017. DOI: 10.1371/journal.pone.0174161\nM. Tavakolan, Z. Frehlick, X. Yong, and C. Menon, \"Data from: Classifying three imaginary states of the same upper extremity using time-domain features,\" Dryad, 2017. DOI: 10.5061/dryad.6qs86\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1","2","3"],"size_bytes":3435352514,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000151","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:42.017884+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-17T13:40:38Z"},"total_files":46,"computed_title":"Motor imagery dataset for three imaginary states of the same upper extremity","nchans_counts":[{"val":32,"count":46}],"sfreq_counts":[{"val":1000.0,"count":46}],"stats_computed_at":"2026-05-01T13:49:34.645206+00:00","total_duration_s":35644.474,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"94e249e83e0489f2","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example: \"EEG Motor Movement/Imagery Dataset\" (Schalk et al.). It is a motor execution/imagery BCI paradigm where \"A target appears...\" on a screen, and it is labeled Modality=Visual and Type=Motor. This guides the convention that (1) motor imagery paradigms map to Type=Motor, and (2) the stimulus/input channel for cueing the imagery (screen targets/cues) determines Modality (often Visual), even though the imagined/attempted action is motor-related.","metadata_analysis":"Key metadata facts:\n- Population/clinical status: \"Health status: healthy\" and also \"Tags - Pathology: Healthy\".\n- Task/purpose: \"Paradigm: imagery\" and \"Study design: Three-class motor imagery... rest, grasping (MI-GRASP), and elbow flexion (MI-ELBOW).\" Also instructions: \"MI-GRASP: imagine opening and closing all fingers... 
MI-ELBOW: imagine moving the forearm...\".\n- Stimulus modality: \"Stimulus type: visual cue\" plus \"Stimulus modalities: visual\" and \"Primary modality: visual\".\nThese collectively indicate a healthy cohort performing visually-cued motor imagery, typically used for BCI/motor-control applications (\"BCI Application - Applications: motor_control, rehabilitation\").","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\" (and \"Tags - Pathology: Healthy\").\n- Few-shot suggests: motor imagery datasets commonly use healthy volunteers unless otherwise specified (e.g., Schalk dataset labeled Healthy).\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"Stimulus type: visual cue\", \"Stimulus modalities: visual\", \"Primary modality: visual\".\n- Few-shot pattern suggests: for motor imagery tasks that are cued by on-screen targets/cues, Modality is labeled Visual (see Schalk motor movement/imagery example).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"Three-class motor imagery of the same upper extremity\" with explicit imagined movements (grasp, elbow flexion).\n- Few-shot pattern suggests: movement execution/imagery as the research focus maps to Type=Motor.\n- Alignment: ALIGN.","decision_summary":"Top-2 comparative selections:\n\nPathology:\n1) Healthy (WINNER) — Evidence: \"Health status: healthy\"; \"Tags - Pathology: Healthy\"; \"Subjects: 12\" with no clinical recruitment described.\n2) Unknown (RUNNER-UP) — Would apply if health status were not stated.\nAlignment status: aligned with few-shot convention. Confidence is high because there are 2 explicit metadata statements indicating healthy participants.\n\nModality:\n1) Visual (WINNER) — Evidence: \"Stimulus type: visual cue\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\".\n2) Motor (RUNNER-UP) — Plausible because the task is motor imagery, but modality definition prioritizes stimulus/input channel, not the imagined action.\nAlignment status: aligned with few-shot (motor imagery + on-screen cue => Visual modality). Confidence high due to 3 explicit modality statements.\n\nType:\n1) Motor (WINNER) — Evidence: \"Paradigm: imagery\"; \"Three-class motor imagery... grasping (MI-GRASP), and elbow flexion (MI-ELBOW)\"; instructions explicitly describe imagining limb movements.\n2) Perception (RUNNER-UP) — Would only fit if the primary aim were sensory discrimination; here the core construct is motor imagery/BCI control.\nAlignment status: aligned with few-shot. Confidence high due to multiple explicit motor-imagery descriptions."}},"canonical_name":null,"name_confidence":0.84,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Tavakolan2017"}}
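
The HED tree in the readme is a rendering of per-event annotation strings. The dictionary below is one plausible flattening of that tree into HED 8.4.0 strings, keyed the way a BIDS events.json sidecar keys them (event value to HED string); the exact grouping is an assumption, not copied from the dataset's actual sidecar.

```
# One plausible flattening of the HED tree above (an assumption, not the
# dataset's actual events.json): HED strings are comma-separated tags,
# with parentheses marking tag groups.
hed_annotations = {
    "rest": "Sensory-event, Experimental-stimulus, Visual-presentation, Rest",
    "right_hand": "Sensory-event, Experimental-stimulus, Visual-presentation, "
                  "(Agent-action, (Imagine, Move, (Right, Hand)))",
    "right_elbow_flexion": "Sensory-event, Experimental-stimulus, Visual-presentation, "
                           "(Agent-action, (Imagine, Flex, (Right, Elbow)))",
}
```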
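The Signal Processing block names time-domain features (autoregressive coefficients, waveform length, root mean square) feeding an RBF-kernel SVM on 6-35 Hz bandpassed epochs. A minimal sketch of that feature pipeline, assuming epochs arrive as (n_channels, n_samples) arrays and an AR order of 4 (the record does not state the order); this is illustrative, not the authors' code:

```
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

AR_ORDER = 4  # assumption; the record lists "autoregressive_coefficients" without an order

def ar_coefficients(x, order=AR_ORDER):
    # Least-squares fit of x[t] on x[t-order..t-1]; one of several ways to
    # estimate AR coefficients (Yule-Walker would also do).
    lagged = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    coef, *_ = np.linalg.lstsq(lagged, x[order:], rcond=None)
    return coef

def trial_features(epoch):
    # epoch: (n_channels, n_samples) array, already bandpassed to 6-35 Hz.
    feats = []
    for ch in epoch:
        feats.append(np.sum(np.abs(np.diff(ch))))  # waveform length
        feats.append(np.sqrt(np.mean(ch ** 2)))    # root mean square
        feats.extend(ar_coefficients(ch))          # AR coefficients
    return np.asarray(feats)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # the record's SVM-RBF
```

Under these assumptions each channel contributes 2 + AR_ORDER values, so a 32-channel trial yields 32 x 6 = 192 features.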
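The Cross-Validation block ("10x10-fold", within_subject) maps naturally onto repeated stratified k-fold evaluation run separately per subject. A sketch with synthetic stand-in data shaped like one subject's 240 trials (4 sessions x 60 trials, 80 per class), reusing the 192-feature layout assumed above:

```
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((240, 192))  # placeholder features; real ones come from trial_features
y = np.repeat([1, 2, 3], 80)         # rest=1, right_hand=2, right_elbow_flexion=3

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)  # 10x10-fold
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=cv)  # 100 accuracy values for this "subject"
print(f"{scores.mean():.3f} +/- {scores.std():.3f}")
```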
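The storage block points at s3://nemar/nm000151 with dataset_description.json as the raw key. A sketch of fetching that file with boto3, assuming the bucket allows anonymous reads and that object keys are joined as <dataset_id>/<key>; neither detail is stated in the record:

```
import json

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Assumptions: the "nemar" bucket permits unsigned reads, and object keys
# live under the dataset_id prefix. Adjust if the real layout differs.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
obj = s3.get_object(Bucket="nemar", Key="nm000151/dataset_description.json")
desc = json.loads(obj["Body"].read())
print(desc.get("Name"), desc.get("BIDSVersion"))  # this record reports BIDS 1.9.0
```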