{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c76","dataset_id":"nm000141","associated_paper_doi":null,"authors":["Maitreyee Wairagkar","Yoshikatsu Hayashi","Slawomir J. Nasuto"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":"10.82901/nemar.nm000141","datatypes":["eeg"],"demographics":{"subjects_count":14,"ages":[26,26,26,26,26,26,26,26,26,26,26,26,26,26],"age_min":26,"age_max":26,"age_mean":26.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000141","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"a619ef96d75f27821e994c722213f11626286a68cf8894a106ff7028ddf845da","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Motor execution dataset from Wairagkar et al 2018","readme":"[![DOI](https://img.shields.io/badge/DOI-10.82901%2Fnemar.nm000141-blue)](https://doi.org/10.82901/nemar.nm000141)\n# Motor execution dataset from Wairagkar et al 2018\nMotor execution dataset from Wairagkar et al 2018.\n## Dataset Overview\n- **Code**: Wairagkar2018\n- **Paradigm**: imagery\n- **DOI**: 10.1371/journal.pone.0193722\n- **Subjects**: 14\n- **Sessions per subject**: 1\n- **Events**: right_hand=1, rest=2, left_hand=3\n- **Trial interval**: [0, 3] s\n- **File format**: MAT\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 1024.0 Hz\n- **Number of channels**: 19\n- **Channel types**: eeg=19\n- **Channel names**: Fp1, Fp2, F7, F3, Fz, F4, F8, T7, C3, Cz, C4, T8, P7, P3, Pz, P4, P8, O1, O2\n- **Montage**: standard_1020\n- **Hardware**: Deymed TruScan 32\n- **Reference**: FCz\n- **Ground**: AFz\n- **Sensor type**: Ag/AgCl ring\n- **Line frequency**: 50.0 Hz\n- **Online filters**: {'highpass': 0.5, 'lowpass': 60, 'notch_hz': 50}\n## Participants\n- **Number of subjects**: 14\n- **Health status**: healthy\n- **Age**: mean=26.0, std=4.0\n- **Gender distribution**: female=8, male=6\n- **Handedness**: mixed (12 right, 2 left)\n- **BCI experience**: naive\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 3\n- **Class labels**: right_hand, rest, left_hand\n- **Trial duration**: 6.0 s\n- **Study design**: Asynchronous voluntary finger tapping: right tap, left tap, and resting state\n- **Feedback type**: none\n- **Stimulus type**: text cues\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: asynchronous\n- **Mode**: offline\n- **Instructions**: Participants were asked to tap their index finger at a self-chosen time within a 10-second window after the cue\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n  rest\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Rest\n  left_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: right_hand, left_hand, rest\n## Data Structure\n- **Trials**: 1665\n- **Trials context**: 14 subjects x 120 trials (40 per condition), except subject 2 with 105 trials (35 per condition)\n## 
Preprocessing\n- **Data state**: preprocessed\n- **Preprocessing applied**: True\n- **Steps**: DC offset removal, 0.5 Hz high-pass filter, 50 Hz notch filter, 60 Hz low-pass filter, ICA artifact removal (EEGLAB infomax), trial segmentation (-3 to +3 s around movement onset)\n- **Highpass filter**: 0.5 Hz\n- **Lowpass filter**: 60.0 Hz\n- **Notch filter**: 50.0 Hz\n## Signal Processing\n- **Classifiers**: LDA\n- **Feature extraction**: autocorrelation_relaxation_time, ERD\n- **Frequency bands**: broadband=[0.5, 30.0] Hz; mu=[8.0, 13.0] Hz; beta=[13.0, 30.0] Hz; low=[0.5, 8.0] Hz\n- **Spatial filters**: bipolar_montage\n## Cross-Validation\n- **Method**: 10x10-fold\n- **Folds**: 10\n- **Evaluation type**: within_subject\n## BCI Application\n- **Applications**: motor_control\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Research\n## Documentation\n- **DOI**: 10.1371/journal.pone.0193722\n- **License**: CC-BY-4.0\n- **Investigators**: Maitreyee Wairagkar, Yoshikatsu Hayashi, Slawomir J. Nasuto\n- **Senior author**: Slawomir J. Nasuto\n- **Institution**: University of Reading\n- **Department**: Brain Embodiment Lab, Biomedical Engineering\n- **Country**: GB\n- **Repository**: University of Reading Research Data Archive\n- **Data URL**: https://researchdata.reading.ac.uk/117/\n- **Publication year**: 2018\n## References\nWairagkar, M., Hayashi, Y., & Nasuto, S. J. (2018). Exploration of neural correlates of movement intention based on characterisation of temporal dependencies in electroencephalography. PLOS ONE, 13(3), e0193722. https://doi.org/10.1371/journal.pone.0193722\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.4.3 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":599447842,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000141","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:39.977396+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-17T14:13:35Z"},"total_files":14,"computed_title":"Motor execution dataset from Wairagkar et al 2018","nchans_counts":[{"val":19,"count":14}],"sfreq_counts":[{"val":1024.0,"count":14}],"stats_computed_at":"2026-05-01T13:49:34.645069+00:00","total_duration_s":10097.705078125,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"dbeb8426e2bb07ff","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example: 'EEG Motor Movement/Imagery Dataset' (Schalk et al.). It uses visually presented cues/targets to elicit motor execution/imagery and is labeled Modality='Visual' and Type='Motor'. This guides the convention that motor/imagery paradigms are Type='Motor' while the stimulus channel (cues on screen) determines Modality='Visual'.","metadata_analysis":"Key population facts: \"Health status: healthy\" and also \"Tags\\n- **Pathology**: Healthy\".\nKey task/stimulus facts: \"Stimulus type: text cues\", \"Stimulus modalities: visual\", and \"Primary modality: visual\".\nKey construct facts: \"Study design: Asynchronous voluntary finger tapping: right tap, left tap, and resting state\" and \"Detected paradigm: motor_imagery\" (also events: \"right_hand=1, rest=2, left_hand=3\").","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says participants are healthy (\"Health status: healthy\"; \"Pathology: Healthy\"). Few-shot pattern suggests that when no disorder is recruited, Pathology='Healthy'. ALIGN.\nModality: Metadata explicitly states visual stimulus channel (\"Stimulus type: text cues\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\"). Few-shot convention (motor imagery dataset example) labels Modality as 'Visual' when cues are on-screen. ALIGN.\nType: Metadata indicates a motor execution/imagery paradigm (\"Asynchronous voluntary finger tapping\"; \"Detected paradigm: motor_imagery\"; class labels right/left hand/rest). Few-shot convention labels such paradigms as Type='Motor'. ALIGN.","decision_summary":"Top-2 candidates per category:\n- Pathology: (1) Healthy vs (2) Unknown. Healthy wins due to explicit metadata: \"Health status: healthy\" and \"Pathology: Healthy\".\n- Modality: (1) Visual vs (2) Motor. Visual wins because modality is defined by stimulus input channel and metadata explicitly says: \"Stimulus type: text cues\", \"Stimulus modalities: visual\", \"Primary modality: visual\"; also matches the motor imagery few-shot example labeled Visual.\n- Type: (1) Motor vs (2) Resting-state. 
Motor wins because the main paradigm is movement/imagery classification (\"motor_imagery\"; \"finger tapping\"), while 'rest' is only one class/condition rather than the overall study purpose."}},"canonical_name":null,"name_confidence":0.86,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Wairagkar2018"}}
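Below is a minimal sketch of how a client might unpack the key structured fields of the response above. Only the field names visible in the payload (`data.name`, `demographics`, `nchans_counts`, `sfreq_counts`, `total_duration_s`, `storage`, ...) are taken from the record itself; the local filename `nm000141.json` is a hypothetical stand-in for however the response body was saved, and this is not an official eegdash client.

```python
import json

# Hypothetical local copy of the API response shown above
# (the filename is an assumption; the key structure follows the record itself).
with open("nm000141.json", "r", encoding="utf-8") as f:
    resp = json.load(f)

# Top-level envelope: {"success": ..., "database": ..., "data": {...}}
assert resp["success"], "API call did not report success"
record = resp["data"]

# Basic descriptors
print(record["name"])                          # "Motor execution dataset from Wairagkar et al 2018"
print(record["dataset_id"], record["license"]) # "nm000141", "CC-BY-4.0"

# Demographics summary
demo = record["demographics"]
print(f"{demo['subjects_count']} subjects, ages {demo['age_min']}-{demo['age_max']}")

# Channel-count and sampling-rate tallies are stored as lists of {val, count} pairs
for entry in record["nchans_counts"]:
    print(f"{entry['count']} recordings with {entry['val']} channels")
for entry in record["sfreq_counts"]:
    print(f"{entry['count']} recordings at {entry['val']} Hz")

# Total recording time is reported in seconds
print(f"total duration: {record['total_duration_s'] / 3600:.2f} h")

# Storage pointers for locating the underlying BIDS files
storage = record["storage"]
print(storage["base"], storage["raw_key"], storage["dep_keys"])
```

Note that the human-readable acquisition and protocol details (montage, events, preprocessing, references) live inside the escaped markdown string in `record["readme"]`, not in separate structured fields, so anything beyond the summary counts above has to be parsed out of that text.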