{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c85","dataset_id":"nm000162","associated_paper_doi":null,"authors":["Nitikorn Srisrisawang","Gernot R Müller-Putz"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":20,"ages":[26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26],"age_min":26,"age_max":26,"age_mean":26.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000162","osf_url":null,"github_url":null,"paper_url":null},"funding":["Royal Thai Government (scholar funding for N.S.)","BioTechMed Graz"],"ingestion_fingerprint":"c2bc3aefa757ba63344632f6205f03e1a19aa26d9425310f047af251b225105a","license":"CC-BY-4.0","n_contributing_labs":null,"name":"BNCI 2025-001 Motor Kinematics Reaching dataset","readme":"# BNCI 2025-001 Motor Kinematics Reaching dataset\nBNCI 2025-001 Motor Kinematics Reaching dataset.\n## Dataset Overview\n- **Code**: BNCI2025-001\n- **Paradigm**: imagery\n- **DOI**: 10.1088/1741-2552/ada0ea\n- **Subjects**: 20\n- **Sessions per subject**: 1\n- **Events**: up_slow_near=1, up_slow_far=2, up_fast_near=3, up_fast_far=4, down_slow_near=5, down_slow_far=6, down_fast_near=7, down_fast_far=8, left_slow_near=9, left_slow_far=10, left_fast_near=11, left_fast_far=12, right_slow_near=13, right_slow_far=14, right_fast_near=15, right_fast_far=16\n- **Trial interval**: [0, 4] s\n- **File format**: EEG (BrainAmp)\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 500.0 Hz\n- **Number of channels**: 67\n- **Channel types**: eeg=67, eog=4\n- **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EOGL1, EOGL2, EOGL3, EOGR1, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fz, O1, O2, Oz, P1, P2, P3, 
P4, P5, P6, P7, P8, PO3, PO4, PO7, PO8, POz, PPO1h, PPO2h, Pz, T7, T8, TP7, TP8, targetPosX, targetPoxY, validity, vx, vy, x, y\n- **Montage**: af7 af3 afz af4 af8 f7 f5 f3 f1 fz f2 f4 f6 f8 ft7 fc5 fc3 fc1 fcz fc2 fc4 fc6 ft8 t7 c5 c3 c1 cz c2 c4 c6 t8 tp7 cp5 cp3 cp1 cpz cp2 cp4 cp6 tp8 p7 p5 p3 p1 pz p2 p4 p6 p8 ppo1h ppo2h po7 po3 poz po4 po8 o1 oz o2\n- **Hardware**: BrainAmp\n- **Software**: EEGLAB\n- **Reference**: common average\n- **Sensor type**: EEG\n- **Line frequency**: 50.0 Hz\n- **Online filters**: 50 Hz notch\n- **Cap manufacturer**: Zebris Medical GmbH\n- **Cap model**: ELPOS\n- **Auxiliary channels**: EOG (4 ch, horizontal, vertical)\n## Participants\n- **Number of subjects**: 20\n- **Health status**: patients\n- **Clinical population**: Healthy\n- **Age**: mean=26.1, std=4.1\n- **Gender distribution**: male=12, female=8\n- **Handedness**: {'right': 17, 'left': 3}\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Task type**: discrete reaching\n- **Number of classes**: 16\n- **Class labels**: up_slow_near, up_slow_far, up_fast_near, up_fast_far, down_slow_near, down_slow_far, down_fast_near, down_fast_far, left_slow_near, left_slow_far, left_fast_near, left_fast_far, right_slow_near, right_slow_far, right_fast_near, right_fast_far\n- **Tasks**: discrete reaching\n- **Study design**: Four-direction center-out reaching task with varying speeds (quick/slow) and distances (near/far) following visual cue, self-paced execution with eye fixation on cue\n- **Feedback type**: visual (cue color: green for correct, red for incorrect direction)\n- **Stimulus type**: visual cue\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: cue-paced\n- **Mode**: both\n- **Instructions**: Follow cue with eyes, wait at least 1s after cue stops, mimic movement while fixating eyes on cue, move smoothly with whole arm avoiding wrist rotation\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: 
https://www.hedtags.org/hed-schema-browser\n```\n  up_slow_near\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Reach\n          ├─ Upward\n          ├─ Label/slow\n          └─ Label/near\n  up_slow_far\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Reach\n          ├─ Upward\n          ├─ Label/slow\n          └─ Label/far\n  up_fast_near\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Reach\n          ├─ Upward\n          ├─ Label/fast\n          └─ Label/near\n  up_fast_far\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Reach\n          ├─ Upward\n          ├─ Label/fast\n          └─ Label/far\n  down_slow_near\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Reach\n          ├─ Downward\n          ├─ Label/slow\n          └─ Label/near\n  down_slow_far\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Reach\n          ├─ Downward\n          ├─ Label/slow\n          └─ Label/far\n  down_fast_near\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Reach\n          ├─ Downward\n          ├─ Label/fast\n          └─ Label/near\n  down_fast_far\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Reach\n          ├─ Downward\n          ├─ Label/fast\n          └─ Label/far\n  left_slow_near\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Reach\n          ├─ Left\n          ├─ Label/slow\n          └─ Label/near\n  left_slow_far\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Reach\n          ├─ Left\n          ├─ Label/slow\n          └─ Label/far\n  left_fast_near\n    ├─ 
Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Reach\n          ├─ Left\n          ├─ Label/fast\n          └─ Label/near\n  left_fast_far\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Reach\n          ├─ Left\n          ├─ Label/fast\n          └─ Label/far\n  right_slow_near\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Reach\n          ├─ Right\n          ├─ Label/slow\n          └─ Label/near\n  right_slow_far\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Reach\n          ├─ Right\n          ├─ Label/slow\n          └─ Label/far\n  right_fast_near\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Reach\n          ├─ Right\n          ├─ Label/fast\n          └─ Label/near\n  right_fast_far\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Reach\n          ├─ Right\n          ├─ Label/fast\n          └─ Label/far\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Number of targets**: 4\n- **Imagery tasks**: right_hand_reaching\n## Data Structure\n- **Trials**: 960\n- **Trials per class**: up=240, down=240, left=240, right=240\n- **Blocks per session**: 10\n- **Block duration**: 1200.0 s\n- **Trials context**: per_participant (before rejection)\n## Preprocessing\n- **Data state**: preprocessed with eye artifact correction\n- **Preprocessing applied**: True\n- **Steps**: low-pass filter at 100 Hz, notch filter at 50 Hz, downsampling to 200 Hz, bad channel rejection and interpolation, bandpass filter 0.3-80 Hz, eye artifact correction via SGEYESUB, ICA with FastICA algorithm, IC artifact removal, low-pass filter at 3 Hz, downsampling to 10 Hz, bad trial rejection, common average reference\n- **Highpass filter**: 0.3 Hz\n- **Lowpass filter**: 
100.0 Hz\n- **Bandpass filter**: {'low_cutoff_hz': 0.3, 'high_cutoff_hz': 80.0}\n- **Notch filter**: [50] Hz\n- **Filter type**: Butterworth\n- **Filter order**: 2\n- **Artifact methods**: ICA, SGEYESUB (Sparse Generalized Eye Artifact Subspace Subtraction), IClabel plugin\n- **Re-reference**: common average\n- **Downsampled to**: 200.0 Hz\n- **Epoch window**: [-3.0, 4.0]\n- **Notes**: Frontal channels (AF7, AF3, AFz, AF4, AF8) and EOG removed prior to CAR to reduce residual eye artifacts. Final analysis used 55 channels. Eye blocks recorded separately for SGEYESUB model training. Bad trials rejected based on amplitude >200 µV or standard deviation >5SD. Movement-related bad trials rejected for incorrect direction, no movement, duration <0.2s or >4s, or movement initiated <0.5s after cue stop.\n## Signal Processing\n- **Classifiers**: sLDA (shrinkage Linear Discriminant Analysis)\n- **Feature extraction**: Low-frequency EEG (0.3-3 Hz), Source localization (sLORETA), ICA, ROI-based features\n- **Frequency bands**: delta=[0.3, 3.0] Hz; analyzed=[0.3, 100.0] Hz\n- **Spatial filters**: Common Average Reference, Source-space projection\n## Cross-Validation\n- **Method**: stratified k-fold\n- **Folds**: 10\n- **Evaluation type**: within_session\n## Performance (Original Study)\n- **Direction Accuracy Cstp**: 39.75\n- **Direction Accuracy Mon**: 42.42\n- **Speed Accuracy Cstp**: 66.03\n- **Speed Accuracy Mon**: 70.49\n- **Distance Accuracy Cstp**: 60.83\n- **Distance Accuracy Mon**: 55.41\n- **Quick Direction Accuracy Cstp**: 44.12\n- **Quick Direction Accuracy Mon**: 49.67\n- **Slow Direction Accuracy Cstp**: 37.42\n- **Slow Direction Accuracy Mon**: 35.89\n## BCI Application\n- **Applications**: motor_control, rehabilitation\n- **Environment**: laboratory\n- **Online feedback**: True\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Visual\n- **Type**: Motor\n## Documentation\n- **Description**: EEG dataset investigating simultaneous encoding of speed, distance, and 
direction in discrete hand reaching movements using a four-direction center-out task\n- **DOI**: 10.1088/1741-2552/ada0ea\n- **License**: CC-BY-4.0\n- **Investigators**: Nitikorn Srisrisawang, Gernot R Müller-Putz\n- **Senior author**: Gernot R Müller-Putz\n- **Contact**: gernot.mueller@tugraz.at\n- **Institution**: Institute of Neural Engineering, Graz University of Technology\n- **Department**: Institute of Neural Engineering\n- **Address**: Stremayrgasse 16/IV, 8010 Graz, Austria\n- **Country**: Austria\n- **Repository**: GitHub\n- **Data URL**: https://github.com/rkobler/eyeartifactcorrection\n- **Publication year**: 2024\n- **Funding**: Royal Thai Government (scholar funding for N.S.); BioTechMed Graz\n- **Ethics approval**: Ethical committee at the Graz University of Technology (EK-28/2024); Declaration of Helsinki\n- **Acknowledgements**: Members of the Graz BCI team, especially Markus Crell for providing motion capture software\n- **Keywords**: electroencephalography, brain–computer interface, source localization, discrete reaching, center-out task\n## Abstract\nObjective. The complicated processes of carrying out a hand reach are still far from fully understood. In order to further the understanding of the kinematics of hand movement, the simultaneous representation of speed, distance, and direction in the brain is explored. Approach. We utilized electroencephalography (EEG) signals and hand position recorded during a four-direction center-out reaching task with either quick or slow speed, near and far distance. Linear models were employed in two modes: decoding and encoding. First, to test the discriminability of speed, distance, and direction. Second, to find the contribution of the cortical sources via the source localization. Additionally, we compared the decoding accuracy when using features obtained from EEG signals and source-localized EEG signals based on the results from the encoding model. Main results. 
Speed, distance, and direction can be classified better than chance. The accuracy of the speed was also higher than the distance, indicating a stronger representation of the speed than the distance. The speed and distance showed similar significant sources in the central regions related to the movement initiation, while the direction indicated significant sources in the parieto-occipital regions related to the movement preparation. The combination of the features from EEG and source localized signals improved the classification. Significance. Directional and non-directional information are represented in two separate networks. The quick movement resulted in improvement in the direction classification. Our results enhance our understanding of hand movement in the brain and help us make informed decisions when designing an improved paradigm in the future.\n## Methodology\nParticipants performed discrete reaching movements in four directions (up, down, left, right) with two speeds (quick: 0.4-0.8s cue duration, slow: 1.2-2.4s cue duration) and two distances (near: ~5cm/8.7cm actual, far: ~10cm/15.6cm actual). Each trial consisted of outward and inward movements. Visual cue moved from center to target position. Participants waited ≥1s after cue stop before mimicking movement with eyes fixated on cue. Hand position tracked via camera with pink marker on right index finger. 32 conditions (2 speed × 2 distance × 4 direction × 2 inward/outward) with 30 trials per class = 960 trials total per participant. After rejection, ~852 trials remained. EEG processed with EEGLAB on MATLAB R2019b. Signals epoched in two alignments: cue stop aligned (CStp: -3 to 4s) and movement onset aligned (MOn: -3 to 3s). 
Analysis included MRCP analysis, point-wise classification with instantaneous and windowed (500ms) features, encoding model using GLM, source localization using BEM with ICBM152 template and sLORETA inverse solution via Brainstorm, and source-space classification using data-driven ROIs derived from encoding model. Classification performed with shrinkage LDA. Permutation testing (1000 repetitions) used for significance. FDR controlled using the Benjamini-Hochberg procedure.\n## References\nSrisrisawang, N., & Müller-Putz, G. R. (2024). Simultaneous encoding of speed, distance, and direction in discrete reaching: an EEG study. Journal of Neural Engineering, 21(6). https://doi.org/10.1088/1741-2552/ada0ea\n## Notes\n.. versionadded:: 1.3.0\nThis dataset is notable for its multi-parameter kinematic design, enabling study of how multiple movement parameters are represented simultaneously in EEG activity. The paradigm uses movement execution rather than motor imagery, making it complementary to MI datasets.\nThe data is compatible with the MOABB motor imagery paradigm for processing purposes, though the underlying task is movement execution.\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":16132666242,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000162","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:45.444139+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-02T21:18:31Z"},"total_files":20,"computed_title":"BNCI 2025-001 Motor Kinematics Reaching dataset","nchans_counts":[{"val":67,"count":20}],"sfreq_counts":[{"val":500.0,"count":20}],"stats_computed_at":"2026-05-01T13:49:34.645284+00:00","total_duration_s":160014.72,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"eaf95e7c2021db6e","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.7,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot by task paradigm is the 'EEG Motor Movement/Imagery Dataset' example (Healthy / Visual / Motor). That example uses visually presented targets and motor execution/imagery, and is labeled Type=Motor even though stimuli are visual. 
This guides the convention that Modality reflects the stimulus channel (visual cue/targets), while Type reflects the motor construct (movement/imagery/kinematics).","metadata_analysis":"Key population and task facts from metadata:\n- Population: \"Clinical population: Healthy\" and \"Number of subjects: 20\".\n- Conflicting health-status field: \"Health status: patients\" (but no diagnosis given anywhere else).\n- Motor task: \"Task type: discrete reaching\" and \"Study design: Four-direction center-out reaching task with varying speeds (quick/slow) and distances (near/far)\".\n- Visual stimulus: \"Stimulus type: visual cue\" plus \"Stimulus modalities: visual\" and \"Primary modality: visual\".\n- Additional motor framing: \"Paradigm: imagery\" and later clarification: \"The paradigm uses movement execution rather than motor imagery\".","paper_abstract_analysis":"The included abstract supports a motor-kinematics/movement study: \"hand reach\" and \"four-direction center-out reaching task\" and \"representation of speed, distance, and direction\". It does not indicate any clinical recruitment beyond healthy participants.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Clinical population: Healthy\" (explicit).\n- Few-shot pattern suggests: motor reaching/imagery datasets are typically Healthy unless a disorder is stated (as in Parkinson's/TBI examples).\n- Alignment: PARTIAL (there is a conflicting metadata field \"Health status: patients\", but it is not a diagnosis and is contradicted by the explicit \"Clinical population: Healthy\"). 
Per rules, explicit clinical-population fact wins.\n\nModality:\n- Metadata says: \"Stimulus type: visual cue\" and \"Stimulus modalities: visual\" / \"Primary modality: visual\".\n- Few-shot pattern suggests: motor tasks with screen cues are labeled Visual for Modality (see motor movement/imagery example).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"Task type: discrete reaching\" and describes reaching movements/kinematics; also notes \"movement execution rather than motor imagery\".\n- Few-shot pattern suggests: movement execution/imagery studies are Type=Motor.\n- Alignment: ALIGN.","decision_summary":"Top-2 comparative selections:\n\nPathology candidates:\n1) Healthy (SELECTED): supported by explicit quote \"Clinical population: Healthy\"; also no disorder named anywhere.\n2) Unknown: plausible only because of conflicting line \"Health status: patients\" without diagnosis.\nDecision: Healthy is stronger because it is an explicit clinical-population statement; \"patients\" appears to be a misfilled field rather than a recruitment diagnosis.\nConfidence basis: 1 strong explicit quote for Healthy plus overall absence of any disorder label.\n\nModality candidates:\n1) Visual (SELECTED): \"Stimulus type: visual cue\", \"Stimulus modalities: visual\", \"Primary modality: visual\".\n2) Motor: could be considered if focusing on action rather than stimulus, but labeling convention uses stimulus channel.\nDecision: Visual clearly dominates with multiple explicit stimulus-modality fields.\nConfidence basis: 3 explicit modality quotes.\n\nType candidates:\n1) Motor (SELECTED): \"Task type: discrete reaching\", center-out reaching design, and motor-kinematics abstract; plus explicit note that it is movement execution.\n2) Perception: less plausible because visual cues are instrumental, but the study goal is movement parameter encoding/decoding.\nDecision: Motor is stronger because the primary construct is reaching/kinematics, not sensory discrimination.\nConfidence 
basis: multiple explicit motor-task descriptions plus abstract support."}},"canonical_name":null,"name_confidence":0.72,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Srisrisawang2025"}}
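A record like the one above can be condensed into a few headline figures with standard-library JSON handling. The sketch below is illustrative only: the field names (`success`, `data`, `dataset_id`, `demographics`, `sfreq_counts`, `total_duration_s`, `size_bytes`) are taken from the response itself, `summarize` is a hypothetical helper, and the inline `resp` dict is an abridged stand-in for the full payload.

```python
import json

def summarize(response: dict) -> dict:
    """Condense an eegdash dataset record into a few headline figures.

    Field names follow the response shown above; many fields may be null,
    so defensive .get() lookups with defaults are used throughout.
    """
    if not response.get("success"):
        raise ValueError("API call did not succeed")
    data = response["data"]
    demo = data.get("demographics") or {}
    sfreqs = data.get("sfreq_counts") or []
    return {
        "dataset_id": data.get("dataset_id"),
        "name": data.get("name"),
        "subjects": demo.get("subjects_count"),
        # Distinct sampling rates observed across recordings.
        "sampling_rates_hz": sorted({s["val"] for s in sfreqs}),
        "total_hours": round((data.get("total_duration_s") or 0) / 3600, 1),
        "size_gb": round((data.get("size_bytes") or 0) / 1e9, 1),
    }

# Abridged stand-in for the full response above (values copied from it).
resp = {
    "success": True,
    "data": {
        "dataset_id": "nm000162",
        "name": "BNCI 2025-001 Motor Kinematics Reaching dataset",
        "demographics": {"subjects_count": 20},
        "sfreq_counts": [{"val": 500.0, "count": 20}],
        "total_duration_s": 160014.72,
        "size_bytes": 16132666242,
    },
}
print(json.dumps(summarize(resp), indent=2))
```

Nullable fields such as `demographics` and `size_bytes` are common in these records (compare `canonical_name: null` above), so the guards against `None` are doing real work rather than being defensive boilerplate.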