{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c8a","dataset_id":"nm000170","associated_paper_doi":null,"authors":["Hannah S Pulferer","Brynja Ásgeirsdóttir","Valeria Mondini","Andreea I Sburlea","Gernot R Müller-Putz"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":10,"ages":[24,24,24,24,24,24,24,24,24,24],"age_min":24,"age_max":24,"age_mean":24.0,"species":null,"sex_distribution":null,"handedness_distribution":{"r":10}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000170","osf_url":null,"github_url":null,"paper_url":null},"funding":["European Research Council ERC-CoG 2015 681231 'Feel Your Reach'","NTU-TUG joint PhD program"],"ingestion_fingerprint":"c2b334b3ec07f88f2ecff0a2be6ce172621da52d02fc0ff2ec7dbcfb2e0252db","license":"CC-BY-4.0","n_contributing_labs":null,"name":"BNCI 2025-002 Continuous 2D Trajectory Decoding dataset","readme":"# BNCI 2025-002 Continuous 2D Trajectory Decoding dataset\nBNCI 2025-002 Continuous 2D Trajectory Decoding dataset.\n## Dataset Overview\n- **Code**: BNCI2025-002\n- **Paradigm**: imagery\n- **DOI**: 10.1088/1741-2552/ac689f\n- **Subjects**: 10\n- **Sessions per subject**: 3\n- **Events**: snakerun=1, freerun=2, eyerun=3\n- **Trial interval**: [0, 8] s\n- **Runs per session**: 3\n- **File format**: gdf\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 200.0 Hz\n- **Number of channels**: 60\n- **Channel types**: eeg=60, eog=4\n- **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fz, HEOG1, HEOG2, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO7, PO8, POz, PPO1h, PPO2h, Pz, T7, T8, TP7, TP8, VEOG1, VEOG2\n- **Montage**: af7 af3 afz af4 af8 f7 f5 f3 f1 fz f2 f4 f6 f8 ft7 fc5 fc3 
fc1 fcz fc2 fc4 fc6 ft8 t7 c5 c3 c1 cz c2 c4 c6 t8 tp7 cp5 cp3 cp1 cpz cp2 cp4 cp6 tp8 p7 p5 p3 p1 pz p2 p4 p6 p8 ppo1h ppo2h po7 po3 poz po4 po8 o1 oz o2\n- **Hardware**: actiCAP, Brain Products GmbH\n- **Software**: MATLAB 2015b, Psychtoolbox, EEGLAB\n- **Reference**: right mastoid\n- **Ground**: Fpz\n- **Sensor type**: EEG\n- **Line frequency**: 50.0 Hz\n- **Online filters**: anti-aliasing 25 Hz, notch 50 Hz\n- **Auxiliary channels**: EOG (4 ch, horizontal, vertical)\n## Participants\n- **Number of subjects**: 10\n- **Health status**: patients\n- **Clinical population**: Healthy (able-bodied participants) + 1 SCI participant\n- **Age**: mean=24.0, std=5.0\n- **Gender distribution**: male=5, female=5\n- **Handedness**: {'right': 10}\n- **BCI experience**: naive BCI users in terms of motor decoding; 4 had previous EEG experience\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Task type**: continuous 2D trajectory decoding\n- **Number of classes**: 3\n- **Class labels**: snakerun, freerun, eyerun\n- **Trial duration**: 23.0 s\n- **Study design**: Attempted movement paradigm: participants instructed to attempt lower arm movement as if wielding a computer mouse while arm was strapped to armrest. Two task types: snakeruns (target tracking) and freeruns (self-paced shape tracing). 
Offline calibration followed by online feedback in 50% and 100% EEG feedback conditions.\n- **Feedback type**: visual (green dot showing EEG-decoded trajectory position)\n- **Stimulus type**: visual targets (white snake/shapes on black screen)\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: continuous\n- **Mode**: attempted movement\n- **Training/test split**: True\n- **Instructions**: Track snake with gaze and simultaneously attempt movement of strapped lower arm/hand as if wielding computer mouse; for freeruns: trace static shapes at own pace\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  snakerun\n    ├─ Experiment-structure\n    └─ Label/snakerun\n  freerun\n    ├─ Experiment-structure\n    └─ Label/freerun\n  eyerun\n    ├─ Experiment-structure\n    └─ Label/eyerun\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: attempted arm/hand movement (2D continuous trajectory)\n## Data Structure\n- **Trials**: {'calibration_eyeruns': 38, 'calibration_snakeruns': 48, '50%_EEG_feedback_snakeruns': 36, '100%_EEG_feedback_snakeruns': 36, 'freeruns': 9}\n- **Trials context**: per_paradigm_type\n## Preprocessing\n- **Data state**: preprocessed\n- **Preprocessing applied**: True\n- **Steps**: anti-aliasing filter (25 Hz), notch filter (50 Hz), downsampling to 100 Hz, bad channel interpolation, eye artifact subtraction (SGEYESUB algorithm), removal of frontal (AF) row channels, high-pass filter (0.18 Hz), common average re-reference, pops and drifts attenuation (HEAR algorithm), low-pass filter (3 Hz), downsampling to 20 Hz\n- **Highpass filter**: 0.18 Hz\n- **Lowpass filter**: 3.0 Hz\n- **Notch filter**: [50] Hz\n- **Filter type**: Not specified\n- **Artifact methods**: SGEYESUB (eye artifact subtraction), HEAR (pops and drifts removal)\n- **Re-reference**: common average reference\n- **Downsampled to**: 20.0 Hz\n## Signal 
Processing\n- **Classifiers**: PLS regression with UKF smoothing\n- **Feature extraction**: Temporal features (7 time points × 55 channels = 385 features), sLORETA (source localization)\n- **Spatial filters**: Minimum norm imaging\n## Cross-Validation\n- **Method**: across-session\n- **Evaluation type**: within-subject, learning effects over sessions\n## Performance (Original Study)\n- **Normalized Correlation Mean**: 0.31\n- **Normalized Correlation Std**: 0.02\n- **Correlation Range Rc**: 0.4-0.5\n- **Nrmse Calibration**: 0.1\n- **Nrmse 100% Feedback**: 0.12\n## BCI Application\n- **Applications**: neuroprosthesis, robotic arm control, upper limb restoration\n- **Environment**: laboratory\n- **Online feedback**: True\n## Tags\n- **Pathology**: Healthy, Spinal cord injury\n- **Modality**: Visual\n- **Type**: Motor attempt, Continuous decoding\n## Documentation\n- **Description**: Continuous 2D trajectory decoding from attempted movement: across-session performance in able-bodied and feasibility in a spinal cord injured participant\n- **DOI**: 10.1088/1741-2552/ac689f\n- **License**: CC-BY-4.0\n- **Investigators**: Hannah S Pulferer, Brynja Ásgeirsdóttir, Valeria Mondini, Andreea I Sburlea, Gernot R Müller-Putz\n- **Senior author**: Gernot R Müller-Putz\n- **Contact**: gernot.mueller@tugraz.at\n- **Institution**: Institute of Neural Engineering, Graz University of Technology\n- **Address**: Stremayrgasse 16/IV, 8010 Graz, Austria\n- **Country**: Austria\n- **Repository**: GitHub\n- **Data URL**: https://github.com/sccn/labstreaminglayer\n- **Publication year**: 2022\n- **Funding**: European Research Council ERC-CoG 2015 681231 'Feel Your Reach'; NTU-TUG joint PhD program\n- **Ethics approval**: Medical University of Graz, votum number 32–583 ex 19/20\n- **Keywords**: electroencephalography, trajectory decoding, learning effects, source localization, motor control, neuroplasticity, brain-computer interface\n## References\nKobler, R. J., Almeida, I., Sburlea, A. 
I., & Müller-Putz, G. R. (2022). Continuous 2D trajectory decoding from attempted movement: across-session performance in able-bodied and feasibility in a spinal cord injured participant. Journal of Neural Engineering, 19(3), 036005. https://doi.org/10.1088/1741-2552/ac689f\nNotes\n.. versionadded:: 1.3.0\nThis dataset is designed for continuous decoding research, specifically for predicting 2D hand movement trajectories from EEG. Unlike classification-based motor imagery datasets, this dataset contains continuous trajectory labels suitable for regression-based decoders.\nThe paradigm \"imagery\" is used for compatibility with MOABB's motor imagery processing pipelines, though the actual task involves attempted (rather than imagined) movements.\nSee Also\nBNCI2014_001 : 4-class motor imagery dataset\nBNCI2014_004 : 2-class motor imagery dataset\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0ses1","1ses2","2ses3"],"size_bytes":3669196979,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000170","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:46.948185+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-30T13:04:48Z"},"total_files":90,"computed_title":"BNCI 2025-002 Continuous 2D Trajectory Decoding dataset","nchans_counts":[{"val":60,"count":180}],"sfreq_counts":[{"val":200.0,"count":180}],"stats_computed_at":"2026-05-01T13:49:34.645353+00:00","total_duration_s":202885.45,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"825956b043f6aab4","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Other"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.75,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example by paradigm is the “EEG Motor Movement/Imagery Dataset” (motor/imagery tasks with visual cues), labeled Modality=Visual and Type=Motor. This guides mapping attempted/imagined movement decoding paradigms to Type=Motor even if there is on-screen visual cueing. 
However, unlike that example (Healthy volunteers only), this dataset explicitly includes a spinal cord injury (SCI) participant, so the Pathology cannot be assumed Healthy from the motor-imagery convention alone.","metadata_analysis":"Key population/task/stimulus facts from metadata:\n- Population: “Clinical population: Healthy (able-bodied participants) + 1 SCI participant” and “Health status: patients” and “Tags - Pathology: Healthy, Spinal cord injury”.\n- Task/motor construct: “Attempted movement paradigm: participants instructed to attempt lower arm movement as if wielding a computer mouse while arm was strapped to armrest.” Also “Mode: attempted movement” and “Detected paradigm: motor_imagery” and “Task type: continuous 2D trajectory decoding”.\n- Visual stimulus/feedback: “Stimulus type: visual targets (white snake/shapes on black screen)”, “Feedback type: visual (green dot showing EEG-decoded trajectory position)”, and “Primary modality: visual”.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n1) Metadata says: “Clinical population: Healthy (able-bodied participants) + 1 SCI participant” and “Tags - Pathology: Healthy, Spinal cord injury”.\n2) Few-shot pattern suggests: motor imagery/attempt datasets are often labeled Healthy (e.g., EEG Motor Movement/Imagery example).\n3) Alignment: PARTIAL CONFLICT (metadata explicitly includes an SCI participant, not purely healthy).\n4) Resolution: metadata clinical-population fact wins; since SCI is a clinical condition and not an allowed specific label, map to Pathology=Other.\n\nModality:\n1) Metadata says: “Stimulus type: visual targets…”, “Feedback type: visual…”, “Primary modality: visual”.\n2) Few-shot pattern suggests: motor/BCI tasks with on-screen targets are Visual modality.\n3) Alignment: ALIGN.\n\nType:\n1) Metadata says: “Attempted movement paradigm… attempt lower arm movement…”, “Mode: attempted movement”, “Detected paradigm: motor_imagery”, “continuous 2D 
trajectory decoding”.\n2) Few-shot pattern suggests: motor imagery/movement decoding paradigms map to Type=Motor.\n3) Alignment: ALIGN.","decision_summary":"Top-2 comparative selection:\n\nPathology:\n- Candidate 1: Other — supported by “Clinical population: Healthy (able-bodied participants) + 1 SCI participant” and “Tags - Pathology: Healthy, Spinal cord injury”, indicating recruitment includes a clinical condition (SCI) that is not an allowed specific label.\n- Candidate 2: Healthy — supported by “Healthy (able-bodied participants)” and n=10 with only 1 SCI, suggesting predominantly healthy participants.\nHead-to-head: Other is stronger because metadata explicitly includes SCI (a recruited clinical population), and allowed labels lack an SCI option. Evidence alignment: few-shot convention (Healthy for motor imagery) is overridden by explicit metadata clinical-population fact.\nConfidence (0.75) justified by 3 explicit population quotes but remaining ambiguity due to mixed cohort composition.\n\nModality:\n- Candidate 1: Visual — supported by “Stimulus type: visual targets…”, “Feedback type: visual…”, “Primary modality: visual”.\n- Candidate 2: Motor — possible because motor attempt is central, but modality is defined as stimulus channel, not response.\nHead-to-head: Visual clearly dominates because stimuli/feedback are visual. Confidence (0.9) from 3 explicit visual-stimulus quotes + strong few-shot analog.\n\nType:\n- Candidate 1: Motor — supported by “Attempted movement paradigm…”, “Mode: attempted movement”, “Detected paradigm: motor_imagery”, and trajectory decoding for movement control.\n- Candidate 2: Perception — possible because of visual tracking targets, but the research purpose is motor decoding/BCI control.\nHead-to-head: Motor is clearly primary (attempted movement decoding). 
Confidence (0.9) from 3+ explicit motor/attempted-movement quotes + strong few-shot analog."}},"canonical_name":null,"name_confidence":0.78,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Pulferer2025"}}