{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c84","dataset_id":"nm000161","associated_paper_doi":null,"authors":["Markus R. Crell","Gernot R. Müller-Putz"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":20,"ages":[27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27],"age_min":27,"age_max":27,"age_mean":27.0,"species":null,"sex_distribution":null,"handedness_distribution":{"r":20}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000161","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"2b4d44a2fe6aae51440af29800ebc3877b16fc07a88a713b53202f2d0600480e","license":"CC-BY-4.0","n_contributing_labs":null,"name":"BNCI 2024-001 Handwritten Character Classification dataset","readme":"# BNCI 2024-001 Handwritten Character Classification dataset\nBNCI 2024-001 Handwritten Character Classification dataset.\n## Dataset Overview\n- **Code**: BNCI2024-001\n- **Paradigm**: imagery\n- **DOI**: 10.1016/j.compbiomed.2024.109132\n- **Subjects**: 20\n- **Sessions per subject**: 1\n- **Events**: letter_a=1, letter_d=2, letter_e=3, letter_f=4, letter_j=5, letter_n=6, letter_o=7, letter_s=8, letter_t=9, letter_v=10\n- **Trial interval**: [0, 3] s\n- **Runs per session**: 2\n- **File format**: MAT\n## Acquisition\n- **Sampling rate**: 500.0 Hz\n- **Number of channels**: 60\n- **Channel types**: eeg=60, eog=4\n- **Channel names**: AF3, AF4, AF7, AF8, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EOG1, EOG2, EOG3, EOG4, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FT10, FT7, FT8, FT9, Fp1, Fp2, Fpz, Fz, M1, M2, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, POz, Pz, T7, T8, TP7, TP8\n- **Montage**: eogl1 eogl2 eogl3 eogr1 af7 af3 afz af4 af8 f7 f5 f3 f1 fz f2 f4 f6 f8 ft7 fc5 fc3 fc1 fcz fc2 fc4 fc6 ft8 t7 c5 c3 c1 cz c2 c4 c6 t8 tp7 cp5 cp3 cp1 cpz cp2 cp4 cp6 tp8 p7 p5 p3 p1 pz p2 p4 p6 p8 ppo1h ppo2h po7 po3 poz po4 po8 o1 oz o2\n- **Hardware**: BrainVision\n- **Software**: EEGLAB\n- **Reference**: right mastoid\n- **Sensor type**: active electrodes\n- **Line frequency**: 50.0 Hz\n- **Online filters**: 50 Hz notch\n- **Cap manufacturer**: Brain Products GmbH\n- **Auxiliary channels**: EOG (4 ch, horizontal, vertical)\n## Participants\n- **Number of subjects**: 20\n- **Health status**: healthy\n- **Age**: mean=27.5, std=3.92\n- **Gender distribution**: male=11, female=11\n- **Handedness**: {'right': 22}\n- **BCI experience**: not specified\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Task type**: handwriting\n- **Number of classes**: 10\n- **Class labels**: letter_a, letter_d, letter_e, letter_f, letter_j, letter_n, letter_o, letter_s, letter_t, letter_v\n- **Trial duration**: 8.5 s\n- **Study design**: Handwritten character task with 10 letters (a,d,e,f,j,n,o,s,t,v) using right index finger. Letters fade in (2s), remain visible (0.5s), fade out (2s), then 4s writing phase. 
Each letter written 60 times across 15 runs.\n- **Feedback type**: Training included visual feedback showing finger position; main paradigm had no feedback during writing (only fixation cross)\n- **Stimulus type**: letter cue\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Mode**: offline\n- **Training/test split**: True\n- **Instructions**: Start movement when letter fades out completely; write letter during 4s writing phase; stop hand at last position until next letter appears; execute home movement during fade-in to return to comfortable starting position\n- **Stimulus presentation**: fade_in_duration=2.0s, visible_duration=0.5s, fade_out_duration=2.0s, writing_duration=4.0s, total_trial_duration=8.5s\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  letter_a\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Write\n          ├─ Hand\n          └─ Label/a\n  letter_d\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Write\n          ├─ Hand\n          └─ Label/d\n  letter_e\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Write\n          ├─ Hand\n          └─ Label/e\n  letter_f\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Write\n          ├─ Hand\n          └─ Label/f\n  letter_j\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Write\n          ├─ Hand\n          └─ Label/j\n  letter_n\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Write\n          ├─ Hand\n          └─ Label/n\n  letter_o\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Write\n          ├─ Hand\n          └─ Label/o\n  letter_s\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Write\n          ├─ Hand\n          └─ Label/s\n  letter_t\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Write\n          ├─ Hand\n          └─ Label/t\n  letter_v\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Write\n          ├─ Hand\n          └─ Label/v\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: handwriting of letters a, d, e, f, j, n, o, s, t, v\n- **Cue duration**: 2.0 s\n- **Imagery duration**: 4.0 s\n## Data Structure\n- **Trials**: 60\n- **Blocks per session**: 15\n- **Block duration**: 340 s\n- **Trials context**: per_class\n## Preprocessing\n- **Data state**: raw\n- **Preprocessing applied**: True\n- **Steps**: notch filtering, bandpass filtering, bad channel interpolation, EOG artifact correction (SGEYESUB), ICA for artifact removal, re-referencing to CAR, bad segment rejection, lowpass filtering, downsampling, epoching\n- **Highpass filter**: 0.3 Hz\n- **Lowpass filter**: 70.0 Hz\n- **Bandpass filter**: {'low_cutoff_hz': 0.3, 'high_cutoff_hz': 70.0}\n- **Notch filter**: [50] Hz\n- **Filter type**: Butterworth\n- **Filter order**: 4\n- **Artifact methods**: ICA, SGEYESUB\n- **Re-reference**: car\n- **Downsampled to**: 128 Hz\n- **Epoch window**: [-4.5, 4.0]\n- **Notes**: Two datasets created: dataset 1 (0.3-3 Hz, 10 Hz sampling) and dataset 2 (0.3-40 Hz, 128 Hz sampling). 
Bad segments rejected if exceeding ±120 μV or kurtosis/probability > 7 SD from mean.\n## Signal Processing\n- **Classifiers**: Shrinkage Linear Discriminant Analysis (sLDA), EEGNet CNN\n- **Feature extraction**: low-frequency EEG, broadband EEG, continuous kinematics decoding\n- **Frequency bands**: analyzed=[0.3, 70.0] Hz\n- **Spatial filters**: CAR\n## Cross-Validation\n- **Method**: 2-times repeated 5-fold cross-validation\n- **Folds**: 5\n- **Evaluation type**: cross_session\n## Performance (Original Study)\n- **Accuracy**: 26.2%\n- **10 Letters Direct Lowfreq**: 23.1\n- **10 Letters Twostep**: 26.2\n- **5 Letters Direct Lowfreq**: 39.0\n- **5 Letters Twostep**: 46.7\n- **Kinematics Correlation Range**: 0.10-0.57\n- **Chance Level Correlation**: 0.04\n## BCI Application\n- **Applications**: communication, character_selection\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Motor\n## Documentation\n- **Description**: Classification of handwritten letters from EEG through continuous kinematic decoding\n- **DOI**: 10.1016/j.compbiomed.2024.109132\n- **License**: CC-BY-4.0\n- **Investigators**: Markus R. Crell, Gernot R. Müller-Putz\n- **Senior author**: Gernot R. Müller-Putz\n- **Contact**: gernot.mueller@tugraz.at\n- **Institution**: Graz University of Technology\n- **Department**: Institute of Neural Engineering\n- **Address**: Graz, Austria\n- **Country**: Austria\n- **Repository**: BNCI Horizon 2020\n- **Data URL**: https://bnci-horizon-2020.eu/database/data-sets\n- **Publication year**: 2024\n- **Ethics approval**: Ethics Committee at Graz University of Technology\n- **Keywords**: Brain-computer interface (BCI), Electroencephalography (EEG), Handwriting, Continuous movement decoding, Non-invasive\n## Abstract\nThis study explores the classification of ten letters (a,d,e,f,j,n,o,s,t,v) from non-invasive neural signals of 20 participants. Letters were classified with direct classification from low-frequency and broadband EEG, and a two-step approach comprising continuous decoding of hand kinematics followed by classification. The two-step approach yielded significantly higher performances of 26.2% for ten letters and 46.7% for five letters. Hand kinematics could be reconstructed with correlation of 0.10 to 0.57 (average chance level: 0.04). Results suggest movement speed as the most informative kinematic for decoding short hand movements.\n## Methodology\nParticipants wrote 10 letters using right index finger with motion capture tracking (30 Hz, 2D positions). Two-round session with 7 runs (round 1) and 8 runs (round 2), 40 trials per run, 8.5s per trial. Training phase included 4 steps: observation, guided following, unguided following, and execution without feedback. Classification using sliding-window approach with sLDA and EEGNet CNN. Trajectory decoding using EEGNet architecture adapted for regression of position-based (px, py, vx, vy), distance-based (d, ḋ, θ, θ̇), and speed-based (s) kinematics.\n## References\nCrell, M. R., & Muller-Putz, G. R. (2024). Handwritten character classification from EEG through continuous kinematic decoding. Computers in Biology and Medicine, 182, 109132. https://doi.org/10.1016/j.compbiomed.2024.109132\nNotes\n.. versionadded:: 1.3.0\nThis dataset is notable for exploring non-invasive EEG-based handwritten character classification, which could enable communication for individuals with limited movement capacity. 
The study demonstrated that handwritten characters can be classified from non-invasive EEG and that decoding movement kinematics prior to classification improves performance.\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":10921575525,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000161","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:45.424785+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-02T21:18:30Z"},"total_files":40,"computed_title":"BNCI 2024-001 Handwritten Character Classification dataset","nchans_counts":[{"val":60,"count":40}],"sfreq_counts":[{"val":500.0,"count":40}],"stats_computed_at":"2026-05-01T13:49:34.645270+00:00","total_duration_s":121020.8,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"7973968f17ed8782","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.9,"modality":0.85,"type":0.9},"reasoning":{"few_shot_analysis":"Closest few-shot match is **EEG Motor Movement/Imagery Dataset** (Healthy, Visual, Motor). That example shows the catalog convention that for motor imagery/movement tasks where cues are shown on a screen, **Modality** is labeled by the stimulus channel (Visual) while **Type** captures the research focus (Motor). The current dataset is also an imagery-based BCI-like motor task (handwriting) with visual letter cues, so the same mapping applies.","metadata_analysis":"Pathology/population: explicitly healthy participants: (1) \"Health status: healthy\"; (2) \"This study explores the classification ... from non-invasive neural signals of 20 participants\" with no patient group; (3) \"Number of subjects: 20\".\n\nTask/type: clear motor/handwriting imagery/execution focus: (1) \"Paradigm: imagery\"; (2) \"Task type: handwriting\"; (3) \"Participants wrote 10 letters using right index finger\"; (4) \"write letter during 4s writing phase\".\n\nStimulus modality: explicitly visual cueing: (1) \"Stimulus type: letter cue\"; (2) \"Stimulus modalities: visual\"; (3) \"Letters fade in (2s), remain visible (0.5s), fade out (2s)\" and then writing phase.\n\nNote: metadata also contains internal tags \"## Tags - **Modality**: Motor - **Type**: Motor\" which conflicts with the catalog definition of Modality (stimulus/input channel).","paper_abstract_analysis":"No separate paper abstract field beyond the dataset-provided abstract in the README/metadata. 
The included abstract supports a motor/BCI goal: \"classification of ten letters ...\" and \"continuous decoding of hand kinematics\".","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\".\n- Few-shot pattern suggests: similar to motor imagery dataset with volunteers -> Healthy.\n- ALIGN.\n\nModality:\n- Metadata says: \"Stimulus modalities: visual\" and describes visually presented letters (fade in/out).\n- Few-shot pattern suggests: as in the Motor Movement/Imagery example, cue-on-screen tasks are labeled Visual for Modality.\n- Potential CONFLICT: dataset 'Tags' line suggests \"Modality: Motor\".\n- Resolution: choose **Visual** because the instruction defines Modality as the dominant sensory/input channel of stimuli; explicit metadata facts about visual cues override internal tagging inconsistencies.\n\nType:\n- Metadata says: \"Paradigm: imagery\", \"Task type: handwriting\", \"Participants wrote 10 letters using right index finger\", and focuses on \"continuous kinematic decoding\".\n- Few-shot pattern suggests: motor imagery/movement paradigms map to Type=Motor.\n- ALIGN.","decision_summary":"Top-2 candidates with head-to-head selection:\n\n1) Pathology:\n- Candidate A: Healthy (evidence: \"Health status: healthy\"; \"Subjects: 20\"; no clinical recruitment described).\n- Candidate B: Unknown (would apply if no health info).\n- Winner: **Healthy**. Alignment: aligns with few-shot motor imagery example.\n- Confidence basis: multiple explicit statements of healthy/non-clinical cohort.\n\n2) Modality:\n- Candidate A: Visual (evidence: \"Stimulus modalities: visual\"; \"Stimulus type: letter cue\"; \"Letters fade in... remain visible... fade out...\").\n- Candidate B: Motor (evidence: internal tags \"Modality: Motor\"; motor-writing phase is central but is response/imagery, not stimulus channel).\n- Winner: **Visual** because Modality is defined by stimulus/input channel and the letter cues are explicitly visual.\n- Confidence basis: 3 explicit visual-stimulus descriptions; one conflicting internal tag lowers certainty slightly.\n\n3) Type:\n- Candidate A: Motor (evidence: \"Paradigm: imagery\"; \"Task type: handwriting\"; \"Participants wrote 10 letters using right index finger\"; \"continuous decoding of hand kinematics\").\n- Candidate B: Perception/Other (would apply if focus were visual letter perception rather than writing/imagery).\n- Winner: **Motor** since the construct is movement/imagery/kinematics decoding.\n- Confidence basis: multiple explicit motor/imagery/handwriting statements + strong few-shot analog."}},"canonical_name":null,"name_confidence":0.7,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Crell2024"}}
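As a rough illustration only (not part of the API response), the sketch below parses this record with the Python standard library and reconstructs the event-code mapping and trial interval that the embedded README lists ("Events: letter_a=1, ..., letter_v=10", "Trial interval: [0, 3] s") in the form MNE-Python expects. The file name `record.json`, the `raw` object, and the annotation-based event extraction are assumptions for the sake of the example, not something the record guarantees.

```python
# Minimal sketch, assuming this JSON response has been saved to "record.json"
# (a hypothetical path, not part of the record).
import json

with open("record.json") as f:
    record = json.load(f)["data"]          # top-level keys: success, database, data

print(record["dataset_id"])                           # "nm000161"
print(record["demographics"]["subjects_count"])       # 20

# Event codes and trial window exactly as listed in the embedded README.
event_id = {f"letter_{c}": i + 1 for i, c in enumerate("adefjnostv")}
tmin, tmax = 0.0, 3.0                                  # "Trial interval: [0, 3] s"

# If a recording is loaded with MNE-Python as `raw` (500 Hz, 60 EEG channels per
# the record) and its annotations carry the same labels, epoching could look like:
#   events, _ = mne.events_from_annotations(raw, event_id=event_id)
#   epochs = mne.Epochs(raw, events, event_id=event_id, tmin=tmin, tmax=tmax,
#                       baseline=None, preload=True)
# This assumes annotation names match the README labels, which is not verified here.
```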