{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cdd","dataset_id":"nm000339","associated_paper_doi":null,"authors":["James R. Stieger","Stephen A. Engel","Bin He"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.1038/s41597-021-00883-1","datatypes":["eeg"],"demographics":{"subjects_count":62,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":{"r":62}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000339","osf_url":null,"github_url":null,"paper_url":null},"funding":["NIH AT009263","NIH EB021027","NIH NS096761","NIH MH114233","NIH EB029354"],"ingestion_fingerprint":"2116e9e1a9c66d8e164f870738d921e192b7ee47a588eb6c191d45ac0493f53b","license":"CC-BY-NC-4.0","n_contributing_labs":null,"name":"Stieger et al. 2021 — Continuous sensorimotor rhythm based brain computer interface learning in a large population","readme":"Stieger2021\n===========\nMotor Imagery dataset from Stieger et al. 2021 [1]_.\nDataset Overview\n----------------\n  Code: Stieger2021\n  Paradigm: imagery\n  DOI: 10.1038/s41597-021-00883-1\n  Subjects: 62\n  Sessions per subject: 11\n  Events: right_hand=1, left_hand=2, both_hand=3, rest=4\n  Trial interval: [0, 3] s\n  File format: MAT\nAcquisition\n-----------\n  Sampling rate: 1000.0 Hz\n  Number of channels: 62\n  Channel types: eeg=62\n  Channel names: AF3, AF4, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO5, PO6, PO7, PO8, POz, Pz, T7, T8, TP7, TP8\n  Montage: 10-10\n  Hardware: Neuroscan SynAmps RT amplifiers\n  Software: Neuroscan\n  Sensor type: EEG\n  Line frequency: 60.0 Hz\n  Online filters: 0.1 to 200 Hz with 60 Hz notch filter\n  Impedance threshold: 5.0 kOhm\n  Cap manufacturer: Neuroscan\n  Cap model: Quik-Cap\nParticipants\n------------\n  Number of subjects: 62\n  Health status: healthy\n  Age: min=18, max=63\n  Gender distribution: male=13, female=49\n  Handedness: mostly right-handed\n  Species: human\nExperimental Protocol\n---------------------\n  Paradigm: imagery\n  Number of classes: 4\n  Class labels: right_hand, left_hand, both_hand, rest\n  Tasks: LR, UD, 2D\n  Study design: longitudinal training study with intervention\n  Feedback type: visual\n  Stimulus type: target_bar\n  Stimulus modalities: visual\n  Primary modality: visual\n  Mode: online\n  Instructions: Imagine your left (right) hand opening and closing to move the cursor left (right). Imagine both hands opening and closing to move the cursor up. 
Finally, to move the cursor down, voluntarily rest; in other words, clear your mind.\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n  left_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  both_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine, Move, Hand\n  rest\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Rest\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: motor_imagery\n  Imagery tasks: left_hand, right_hand, both_hands, rest\n  Cue duration: 2.0 s\n  Imagery duration: 6.0 s\nData Structure\n--------------\n  Trials: 450\n  Blocks per session: 18\n  Trials context: per_session\nPreprocessing\n-------------\n  Data state: raw\n  Preprocessing applied: False\nSignal Processing\n-----------------\n  Feature extraction: ERD, ERS, autoregressive model, power spectrum\n  Frequency bands: alpha=[10.5, 13.5] Hz; mu=[8, 14] Hz\n  Spatial filters: Laplacian (C3/C4 with 4 surrounding electrodes)\nCross-Validation\n----------------\n  Evaluation type: cross_session\nPerformance (Original Study)\n----------------------------\n  Accuracy: 70.0%\n  PVC 1D Threshold: 70.0\n  PVC 2D Threshold: 40.0\nBCI Application\n---------------\n  Applications: cursor_control\n  Environment: laboratory\n  Online feedback: True\nTags\n----\n  Pathology: Healthy\n  Modality: Motor\n  Type: Active\nDocumentation\n-------------\n  Description: Continuous sensorimotor rhythm based brain computer interface learning in a large population\n  DOI: 10.1038/s41597-021-00883-1\n  License: CC-BY-NC-4.0\n  Investigators: James R. Stieger, Stephen A. Engel, Bin He\n  Senior author: Bin He\n  Contact: bhe1@andrew.cmu.edu\n  Institution: Carnegie Mellon University, University of Minnesota\n  Department: Carnegie Mellon University, Pittsburgh, PA, USA; University of Minnesota, Minneapolis, MN, USA\n  Address: Pittsburgh, PA, USA; Minneapolis, MN, USA\n  Country: US\n  Repository: GitHub\n  Data URL: https://doi.org/10.6084/m9.figshare.13123148.v1\n  Publication year: 2021\n  Funding: NIH AT009263; NIH EB021027; NIH NS096761; NIH MH114233; NIH EB029354\n  Ethics approval: University of Minnesota IRB; Carnegie Mellon University IRB\n  Keywords: BCI, sensorimotor rhythm, motor imagery, EEG, longitudinal, learning\nAbstract\n--------\nBrain computer interfaces (BCIs) are valuable tools that expand the nature of communication by bypassing traditional neuromuscular pathways. The non-invasive, intuitive, and continuous nature of sensorimotor rhythm (SMR) based BCIs enables individuals to control computers, robotic arms, wheelchairs, and even drones by decoding motor imagination from electroencephalography (EEG). Large and uniform datasets are needed to design, evaluate, and improve BCI algorithms. In this work, we release a large and longitudinal dataset collected during a study that examined how individuals learn to control SMR-BCIs. The dataset contains over 600 hours of EEG recordings collected during online and continuous BCI control from 62 healthy, (mostly) right-hand-dominant adults across (up to) 11 training sessions per participant. 
The data record consists of 598 recording sessions and over 250,000 trials of 4 different motor-imagery-based BCI tasks.\nMethodology\n-----------\nParticipants completed 7-11 online BCI training sessions. Each session consisted of 450 trials across 3 tasks (LR, UD, 2D) with 6 runs total. Each trial: 2s inter-trial interval, 2s target presentation, up to 6s feedback control. Online control used spatial filtering (Laplacian around C3/C4), autoregressive model (order 16) for spectrum estimation, alpha power (12 Hz ± 1.5 Hz) for control signal. Horizontal motion controlled by lateralized alpha power (C4-C3), vertical motion by total alpha power (C4+C3). Control signals normalized to zero mean and unit variance. Cursor position updated every 40 ms.\nReferences\n----------\nStieger, J. R., Engel, S. A., & He, B. (2021). Continuous sensorimotor rhythm based brain computer interface learning in a large population. Scientific Data, 8(1), 98. https://doi.org/10.1038/s41597-021-00883-1\nNotes\n-----\n.. versionadded:: 1.1.0\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["1","10","11","2","3","4","5","6","7","8","9"],"size_bytes":398845922151,"source":"nemar","storage":{"backend":"s3","base":"s3://openneuro.org/nm000339","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-22T12:52:27.805088+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":598,"computed_title":"Stieger et al. 2021 — Continuous sensorimotor rhythm based brain computer interface learning in a large population","nchans_counts":[{"val":60,"count":598}],"sfreq_counts":[{"val":1000.0,"count":598}],"stats_computed_at":"2026-04-22T23:16:00.314615+00:00","total_duration_s":2215269.402,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"698e2121df972330","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Learning"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot example by paradigm is the 'EEG Motor Movement/Imagery Dataset' (Schalk et al.), which maps a motor imagery BCI-style task with on-screen targets to Modality='Visual' and Type='Motor'. This guides the convention that motor imagery tasks often still have Visual modality when cues/feedback are visually presented (stimulus channel determines modality). 
However, unlike that example, this dataset’s stated research purpose is explicitly a longitudinal BCI training/learning study, which makes Type='Learning' a strong contender.","metadata_analysis":"Pathology/population facts: (1) 'Health status: healthy' (Participants section). (2) 'The dataset contains ... EEG recordings ... from 62 healthy adults' (Abstract section). (3) 'Tags\\n----\\n  Pathology: Healthy' (Tags section).\n\nModality/stimulus facts: (1) 'Feedback type: visual' (Experimental Protocol). (2) 'Stimulus modalities: visual' and 'Primary modality: visual' (Experimental Protocol). (3) 'Stimulus type: target_bar' (Experimental Protocol).\n\nType/purpose facts: (1) 'Study design: longitudinal training study with intervention' (Experimental Protocol). (2) 'examined how individuals learn to control SMR-BCIs' (Abstract section). (3) Documentation description: 'Continuous sensorimotor rhythm based brain computer interface learning in a large population' (Documentation). Task mechanics also clearly involve motor imagery: 'Paradigm: imagery' and instructions 'Imagine your left (right) hand opening and closing...' (Experimental Protocol).","paper_abstract_analysis":"Useful paper information is included inline under 'Abstract'. It emphasizes learning as the study goal: 'examined how individuals learn to control SMR-BCIs' and describes a 'longitudinal dataset ... across (up to) 11 training sessions per participant.' This supports Type='Learning' (learning across training) while still being grounded in motor imagery BCI tasks.","evidence_alignment_check":"Pathology: Metadata says 'Health status: healthy' and '62 healthy adults' (ALIGN with few-shot conventions where non-clinical cohorts are labeled Healthy). No conflict.\n\nModality: Metadata explicitly says 'Stimulus modalities: visual', 'Primary modality: visual', and 'Feedback type: visual'. Few-shot motor imagery example also uses Visual modality for on-screen target paradigms. ALIGN.\n\nType: Metadata says both (a) motor imagery task mechanics ('Paradigm: imagery', 'Imagine your left (right) hand...') and (b) learning/training purpose ('longitudinal training study', 'learn to control SMR-BCIs', 'BCI learning'). Few-shot motor imagery example would suggest Type='Motor' by convention, but the explicit stated study purpose here emphasizes learning. Partial tension, resolved by choosing the label that best matches the primary research purpose stated in metadata/abstract (Learning).","decision_summary":"Top-2 candidates per category:\n\nPathology:\n- Healthy (WIN): 'Health status: healthy'; '62 healthy adults'; 'Pathology: Healthy'.\n- Unknown (runner-up): not supported; metadata is explicit.\nAlignment: aligned. Confidence justified by 3 explicit quotes.\n\nModality:\n- Visual (WIN): 'Stimulus modalities: visual'; 'Primary modality: visual'; 'Feedback type: visual'.\n- Motor (runner-up): motor imagery is performed, but modality is defined by stimulus channel, not response/imagery.\nAlignment: aligned with few-shot motor imagery convention (Visual modality when visually cued). 
Confidence justified by 3 explicit quotes + few-shot analog.\n\nType:\n- Learning (WIN): 'longitudinal training study with intervention'; 'examined how individuals learn to control SMR-BCIs'; 'brain computer interface learning in a large population'.\n- Motor (runner-up): motor imagery is central to the task ('Paradigm: imagery'; 'Imagine your left (right) hand...').\nAlignment: slight tension with the closest few-shot motor imagery example (often Type=Motor), but metadata/abstract explicitly frame the dataset’s purpose as learning across training sessions; therefore Learning is selected. Confidence justified by 3 explicit learning-related quotes, but reduced due to plausible Motor alternative."}},"canonical_name":null,"name_confidence":0.82,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Stieger2021"}}
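The record's storage block points the raw data at s3://openneuro.org/nm000339, with dataset_description.json as the raw key. A minimal sketch of enumerating and fetching those objects with boto3, assuming the OpenNeuro bucket permits anonymous (unsigned) access:

```python
# Sketch: list the dataset's files from the S3 location in the "storage" block.
# Assumes the openneuro.org bucket allows unsigned public reads.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
resp = s3.list_objects_v2(Bucket="openneuro.org", Prefix="nm000339/", MaxKeys=20)
for obj in resp.get("Contents", []):
    print(f'{obj["Size"]:>12}  {obj["Key"]}')

# Fetch the raw_key named in the record's storage block.
s3.download_file("openneuro.org", "nm000339/dataset_description.json",
                 "dataset_description.json")
```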
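The embedded readme is MOABB-generated and lists the dataset code as Stieger2021 with a four-class imagery paradigm. A minimal sketch of loading epoched trials through MOABB's paradigm API, assuming a Stieger2021 dataset class is exposed under that code name in moabb.datasets (the .. versionadded:: 1.1.0 note suggests MOABB >= 1.1):

```python
# Sketch: load Stieger2021 trials via MOABB's MotorImagery paradigm.
# Assumes moabb >= 1.1 ships a Stieger2021 dataset class matching the
# readme's code name; event labels follow the readme's event mapping.
from moabb.datasets import Stieger2021
from moabb.paradigms import MotorImagery

dataset = Stieger2021()
# Restrict to two of the four classes listed in the readme.
paradigm = MotorImagery(events=["right_hand", "left_hand"], n_classes=2)

# X: trials x channels x samples; y: labels; meta: subject/session/run table.
X, y, meta = paradigm.get_data(dataset=dataset, subjects=[1])
print(X.shape, sorted(meta["session"].unique()))
```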
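The Methodology text describes the online control law: Laplacian-filtered C3/C4, an order-16 autoregressive spectrum, alpha power at 12 Hz ± 1.5 Hz, horizontal control from lateralized alpha power (C4 minus C3), vertical control from total alpha power (C4 plus C3), each normalized to zero mean and unit variance. A minimal offline sketch of that computation; Welch's method stands in here for the AR spectral estimator, and small_laplacian/alpha_power are illustrative names, not code released with the study:

```python
# Sketch of the control-signal computation described under Methodology.
import numpy as np
from scipy.signal import welch

SFREQ = 1000.0        # readme: Sampling rate: 1000.0 Hz
ALPHA = (10.5, 13.5)  # readme: alpha=[10.5, 13.5] Hz, i.e. 12 Hz +/- 1.5 Hz

def small_laplacian(center, neighbors):
    """Laplacian spatial filter: electrode minus the mean of its 4 neighbors."""
    return center - neighbors.mean(axis=0)

def alpha_power(x):
    """Mean power in the alpha control band (Welch stand-in for order-16 AR)."""
    f, pxx = welch(x, fs=SFREQ, nperseg=len(x))
    band = (f >= ALPHA[0]) & (f <= ALPHA[1])
    return pxx[band].mean()

# Dummy 1 s segments standing in for C3/C4 and their 4 surrounding electrodes.
rng = np.random.default_rng(0)
c3 = small_laplacian(rng.standard_normal(1000), rng.standard_normal((4, 1000)))
c4 = small_laplacian(rng.standard_normal(1000), rng.standard_normal((4, 1000)))

horizontal = alpha_power(c4) - alpha_power(c3)  # lateralized alpha power
vertical = alpha_power(c4) + alpha_power(c3)    # total alpha power
# Online, each signal was z-scored (zero mean, unit variance) and the cursor
# position was updated every 40 ms.
print(horizontal, vertical)
```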