{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c73","dataset_id":"nm000138","associated_paper_doi":null,"authors":["Alexandre Barachant"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":"10.82901/nemar.nm000138","datatypes":["eeg"],"demographics":{"subjects_count":8,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000138","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"d4c048158e7c3b0935293c82b7b0703976d057c49cb9e576db75b60b60850075","license":"CC-BY-SA-4.0","n_contributing_labs":null,"name":"Alex Motor Imagery dataset","readme":"[![DOI](https://img.shields.io/badge/DOI-10.82901%2Fnemar.nm000138-blue)](https://doi.org/10.82901/nemar.nm000138)\n# Alex Motor Imagery dataset\nAlex Motor Imagery dataset.\n## Dataset Overview\n- **Code**: AlexandreMotorImagery\n- **Paradigm**: imagery\n- **DOI**: 10.5281/zenodo.806022\n- **Subjects**: 8\n- **Sessions per subject**: 1\n- **Events**: right_hand=2, feet=3, rest=4\n- **Trial interval**: [0, 3] s\n- **File format**: fif\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 512.0 Hz\n- **Number of channels**: 16\n- **Channel types**: eeg=16\n- **Channel names**: Fpz, F7, F3, Fz, F4, F8, T7, C3, Cz, C4, T8, P7, P3, Pz, P4, P8\n- **Montage**: standard_1005\n- **Hardware**: g.tec g.USBamp\n- **Software**: Matlab/Simulink\n- **Reference**: earlobe\n- **Sensor type**: EEG\n- **Line frequency**: 50.0 Hz\n## Participants\n- **Number of subjects**: 8\n- **Health status**: healthy\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 3\n- **Class labels**: right_hand, feet, rest\n- **Trial duration**: 3.0 s\n- **Study design**: Cue-based motor imagery paradigm (Step B of Brain Switch campaign) for familiarization and algorithm development\n- **Feedback type**: none\n- **Stimulus type**: visual cue\n- **Stimulus modalities**: visual, auditory\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Instructions**: Cue-based paradigm without feedback. Subjects perform 20 imagined movements per class (right hand, feet, rest) following a visual cue, lasting 3 seconds each. Total duration approximately 10 minutes.\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n  feet\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine, Move, Foot\n  rest\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Rest\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: right_hand, feet, rest\n- **Cue duration**: 1.0 s\n- **Imagery duration**: 3.0 s\n## Data Structure\n- **Trials**: 60\n- **Trials per class**: right_hand=20, feet=20, rest=20\n- **Trials context**: 20 trials per class, 3 second duration each\n## Preprocessing\n- **Re-reference**: earlobe\n## Signal Processing\n- **Classifiers**: LDA, SVM, MDM, Riemannian, kNN, Naive Bayes, Logistic Regression\n- **Feature extraction**: CSP, FBCSP, ERD, ERS, PSD, Covariance/Riemannian, AR, ICA\n- **Frequency bands**: alpha=[8.0, 12.0] Hz; mu=[8.0, 12.0] Hz\n- **Spatial filters**: CSP, Geodesic filtering\n## Cross-Validation\n- **Method**: cross-validation\n- **Evaluation type**: within_session\n## BCI Application\n- **Applications**: motor_control\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Research\n## Documentation\n- **Description**: Motor imagery dataset from the PhD dissertation of A. Barachant. Contains EEG recordings from 8 subjects performing motor imagination tasks (right hand, feet, or rest). Used to validate robust control of an effector via asynchronous EEG-based brain-machine interface.\n- **DOI**: 10.5281/zenodo.806022\n- **Associated paper DOI**: tel-01196752v1\n- **License**: CC-BY-SA-4.0\n- **Investigators**: Alexandre Barachant\n- **Senior author**: Alexandre Barachant\n- **Contact**: alexandre.barachant@gmail.com\n- **Institution**: Université de Grenoble\n- **Department**: Laboratoire Électronique et système pour la santé CEA-LETI\n- **Address**: CEA-LETI Grenoble, France\n- **Country**: France\n- **Repository**: Zenodo\n- **Data URL**: https://zenodo.org/record/806023\n- **Publication year**: 2012\n- **Keywords**: brain-computer interface, motor imagery, EEG, Riemannian geometry, asynchronous BCI, brain-switch, covariance matrices, Common Spatial Pattern\n## Abstract\nMotor imagery dataset from the PhD thesis on robust control of an effector via asynchronous EEG brain-machine interface (Barachant, 2012). This shared dataset corresponds to Step B (cue-based imagery without feedback) of the Brain Switch campaign. Contains recordings from 8 subjects performing 3 motor imagery tasks (right hand, feet, rest) with 20 trials per class.\n## Methodology\nCue-based paradigm without feedback (Step B of Brain Switch campaign). EEG recorded at 512 Hz with 16 active electrodes using a g.tec g.USBamp amplifier. Reference electrode placed on the ear. Subjects performed imagined movements following visual cues: right hand, feet, and rest, 20 trials per class, 3 seconds each. Recorded in standard office conditions (not shielded laboratory). Software: Matlab/Simulink with g.tec drivers.\n## References\nBarachant, A., 2012. Commande robuste d'un effecteur par une interface cerveau machine EEG asynchrone (Doctoral dissertation, Université de Grenoble). https://tel.archives-ouvertes.fr/tel-01196752\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.4.3 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":104575336,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000138","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:39.202729+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-17T13:49:45Z"},"total_files":8,"computed_title":"Alex Motor Imagery dataset","nchans_counts":[{"val":16,"count":8}],"sfreq_counts":[{"val":512.0,"count":8}],"stats_computed_at":"2026-05-01T13:49:34.645026+00:00","total_duration_s":3973.375,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"e03da7eba92b82bb","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot is the 'EEG Motor Movement/Imagery Dataset' example (Schalk et al.), which is also a cue-based motor movement/imagery paradigm and is labeled with Type=Motor and Modality=Visual (because the stimulus is a visual target/cue, even though the cognitive/behavioral domain is motor imagery). This strongly guides mapping Alex Motor Imagery to Type=Motor and Modality=Visual. The same few-shot also uses Pathology=Healthy for volunteer datasets without a clinical recruitment criterion, matching this dataset’s explicit 'Health status: healthy'.","metadata_analysis":"Key population/task/stimulus facts from the dataset metadata/readme:\n- Population: \"Health status\": \"healthy\" and also \"Tags\\n- **Pathology**: Healthy\".\n- Task paradigm/domain: \"Paradigm: imagery\", \"Study design: Cue-based motor imagery paradigm\", and \"Subjects perform 20 imagined movements per class (right hand, feet, rest) following a visual cue\".\n- Stimulus/input modality: \"Stimulus type: visual cue\", \"Primary modality: visual\", and \"Stimulus modalities: visual, auditory\".\nThese indicate a healthy cohort performing cue-based motor imagery, with visual cues as the dominant presented stimulus channel.","paper_abstract_analysis":"No useful paper information (no separate paper abstract provided beyond the dataset’s own embedded 'Abstract' text in the readme).","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status\": \"healthy\"; and \"Tags ... Pathology: Healthy\".\n- Few-shot pattern suggests: non-clinical volunteer motor-imagery datasets are labeled Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"Stimulus type: visual cue\" and \"Primary modality: visual\" (also notes \"Stimulus modalities: visual, auditory\").\n- Few-shot pattern suggests: motor imagery tasks with on-screen targets/cues map to Modality=Visual (stimulus-channel-based), not Motor.\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"Cue-based motor imagery paradigm\"; \"imagined movements per class (right hand, feet, rest)\".\n- Few-shot pattern suggests: motor movement/imagery paradigms map to Type=Motor.\n- Alignment: ALIGN.","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n1) Healthy (selected) — supported by \"Health status: healthy\" and \"Tags ... Pathology: Healthy\".\n2) Unknown — only if population health were unspecified (not the case).\nAlignment status: aligned with few-shot conventions.\nConfidence basis: 2 explicit metadata quotes support Healthy; little ambiguity.\n\nModality:\n1) Visual (selected) — supported by \"Stimulus type: visual cue\", \"Primary modality: visual\", and the cue-based motor-imagery few-shot convention mapping similar paradigms to Visual.\n2) Motor — plausible if one mistakenly labels by effector/imagery rather than stimulus channel, but guidelines prioritize stimulus modality.\nAlignment status: aligned with few-shot and metadata.\nConfidence basis: 3 explicit metadata quotes + strong few-shot analog.\n\nType:\n1) Motor (selected) — supported by \"Cue-based motor imagery paradigm\" and \"imagined movements ... (right hand, feet, rest)\" and detected \"motor_imagery\".\n2) Perception — weak alternative if focusing on cue perception, but the research purpose is motor imagery/BCI control.\nAlignment status: aligned with few-shot and metadata.\nConfidence basis: multiple explicit metadata statements + strong few-shot analog."}},"canonical_name":null,"name_confidence":0.74,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Barachant2012"}}