{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cc5","dataset_id":"nm000250","associated_paper_doi":null,"authors":["Pauline Dreyer","Aline Roc","Léa Pillette","Sébastien Rimbert","Fabien Lotte"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":87,"ages":[29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29],"age_min":29,"age_max":29,"age_mean":29.0,"species":null,"sex_distribution":{"m":45,"f":42},"handedness_distribution":{"r":87}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000250","osf_url":null,"github_url":null,"paper_url":null},"funding":["European Research Council (ERC Starting Grant project BrainConquest, grant ERC-2016-STG-714567)"],"ingestion_fingerprint":"7cdc28322cfdca2e86fceb010a61aa9ce1fb4f6f0db04e41cf4e33b2adac889f","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Dreyer et al. 2023 — A large EEG database with users' profile information for motor imagery brain-computer interface research","readme":"# Class for Dreyer2023 dataset management. MI dataset\nClass for Dreyer2023 dataset management. MI dataset.\n## Dataset Overview\n- **Code**: Dreyer2023\n- **Paradigm**: imagery\n- **DOI**: 10.1038/s41597-023-02445-z\n- **Subjects**: 87\n- **Sessions per subject**: 1\n- **Events**: left_hand=1, right_hand=2\n- **Trial interval**: [0, 5] s\n- **Runs per session**: 6\n- **Session IDs**: calibration, online_training\n- **File format**: GDF\n- **Contributing labs**: Inria Bordeaux\n## Acquisition\n- **Sampling rate**: 512.0 Hz\n- **Number of channels**: 27\n- **Channel types**: eeg=27, emg=2, eog=3\n- **Channel names**: C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EMGd, EMGg, EOG1, EOG2, EOG3, F3, F4, FC1, FC2, FC3, FC4, FC5, FC6, FCz, Fz, P3, P4, Pz\n- **Montage**: 10-20\n- **Hardware**: g.USBAmp (g.tec, Austria)\n- **Software**: OpenViBE 2.1.0 (Dataset A) / OpenViBE 2.2.0 (Dataset B and C)\n- **Reference**: left earlobe\n- **Ground**: FPz\n- **Sensor type**: active electrodes\n- **Line frequency**: 50.0 Hz\n- **Online filters**: none (raw signals recorded without hardware filters)\n- **Cap manufacturer**: g.tec\n- **Auxiliary channels**: EOG (3 ch, horizontal, vertical), EMG (2 ch), gsr\n## Participants\n- **Number of subjects**: 87\n- **Health status**: healthy\n- **Age**: mean=29.0, min=19, max=59\n- **Gender distribution**: female=41, male=46\n- **Handedness**: right\n- **BCI experience**: naive\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 2\n- **Class labels**: left_hand, right_hand\n- **Trial duration**: 8.0 s\n- **Tasks**: right_hand_MI, left_hand_MI, resting_state\n- **Study design**: Graz protocol\n- **Feedback type**: continuous visual\n- **Stimulus type**: blue bar varying in length\n- **Stimulus modalities**: visual, auditory\n- **Primary modality**: visual\n- **Synchronicity**: cue-based\n- **Mode**: online\n- **Training/test split**: True\n- **Instructions**: Participants were encouraged to perform kinesthetic imagination and leave them free to choose their mental imagery strategy. 
\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  left_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: right_hand, left_hand\n- **Cue duration**: 1.25 s\n- **Imagery duration**: 3.75 s\n## Data Structure\n- **Trials**: 240\n- **Trials per class**: right_hand=120, left_hand=120\n- **Blocks per session**: 6\n- **Block duration**: 420.0 s\n- **Trials context**: per subject (120 per class)\n## Preprocessing\n- **Data state**: raw\n- **Preprocessing applied**: False\n- **Bandpass filter**: [5.0, 35.0] Hz\n- **Filter type**: Butterworth\n- **Filter order**: 5\n- **Artifact methods**: visual inspection\n- **Re-reference**: Laplacian (C3, C4 for feature extraction)\n- **Notes**: The raw signals were recorded without any hardware filters. For online processing, a fifth-order Butterworth filter was applied in a participant-specific discriminant frequency band selected within the 5-35 Hz range at 0.5 Hz resolution. Impedance could not be measured with active electrodes; EEG signals were visually checked and regularly re-checked to ensure good signal quality.\n## Signal Processing\n- **Classifiers**: LDA\n- **Feature extraction**: CSP, Bandpower\n- **Frequency bands**: analyzed=[5.0, 35.0] Hz; alpha=[8.0, 13.0] Hz; mu=[8.0, 13.0] Hz; beta=[13.0, 30.0] Hz\n- **Spatial filters**: CSP, Laplacian\n## Cross-Validation\n- **Method**: calibration-feedback (train on the calibration runs, evaluate on the online runs)\n- **Evaluation type**: within_session
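\nThe processing chain above (participant-specific band-pass, CSP, LDA, calibration-then-online evaluation) can be sketched with MNE and scikit-learn. This is a minimal illustration, not the study's code: the fixed 8-30 Hz band stands in for the participant-specific discriminant band, and the arrays are random placeholders shaped like this dataset (27 EEG channels, 5 s trials at 512 Hz):\n```python\n# Sketch: 5th-order Butterworth band-pass + CSP (3 filter pairs) + LDA.\nimport numpy as np\nfrom scipy.signal import butter, sosfiltfilt\nfrom mne.decoding import CSP\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nfrom sklearn.pipeline import make_pipeline\n\ndef bandpass(X, lo=8.0, hi=30.0, fs=512.0, order=5):\n    # Zero-phase Butterworth band-pass along the time axis.\n    sos = butter(order, [lo, hi], btype='bandpass', fs=fs, output='sos')\n    return sosfiltfilt(sos, X, axis=-1)\n\n# Placeholder data: 80 calibration trials (2 runs x 40) and 160 online trials (4 runs x 40).\nrng = np.random.default_rng(0)\nX_cal = rng.standard_normal((80, 27, 2560)); y_cal = rng.integers(0, 2, 80)\nX_onl = rng.standard_normal((160, 27, 2560)); y_onl = rng.integers(0, 2, 160)\n\n# n_components=6 keeps 3 pairs of CSP spatial filters, as in the original study.\nclf = make_pipeline(CSP(n_components=6, log=True), LinearDiscriminantAnalysis())\nclf.fit(bandpass(X_cal), y_cal)\nprint('online accuracy:', clf.score(bandpass(X_onl), y_onl))\n```\nOn placeholder noise this scores near chance; on real calibration/online runs it mirrors the calibration-feedback evaluation described above.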
\n## Performance (Original Study)\n- **Accuracy**: 63.35%\n- **Mean Accuracy Std**: 17.36\n- **Mean Accuracy R3**: 63.14%\n- **Mean Accuracy R4**: 64.82%\n- **Chance Level Individual**: 58.7%\n- **Chance Level Database**: 51.0%\n## BCI Application\n- **Applications**: rehabilitation, assistive_technology, neurofeedback, user_training\n- **Environment**: laboratory\n- **Online feedback**: True\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Motor Imagery\n## Documentation\n- **Description**: A large EEG database with users' profile information for motor imagery brain-computer interface research. Contains electroencephalographic signals from 87 human participants, collected during a single day of brain-computer interface (BCI) experiments, organized into 3 datasets (A, B, and C) that were all recorded using the same protocol: right and left hand motor imagery (MI).\n- **DOI**: 10.1038/s41597-023-02445-z\n- **Associated paper DOI**: 10.1038/s41597-023-02445-z\n- **License**: CC-BY-4.0\n- **Investigators**: Pauline Dreyer, Aline Roc, Léa Pillette, Sébastien Rimbert, Fabien Lotte\n- **Senior author**: Fabien Lotte\n- **Contact**: fabien.lotte@inria.fr\n- **Institution**: Centre Inria de l'université de Bordeaux\n- **Department**: LaBRI (Univ. Bordeaux/CNRS/Bordeaux INP)\n- **Address**: Talence, 33405, France\n- **Country**: FR\n- **Repository**: Zenodo\n- **Data URL**: https://doi.org/10.5281/zenodo.8089820\n- **Publication year**: 2023\n- **Funding**: European Research Council (ERC Starting Grant project BrainConquest, grant ERC-2016-STG-714567)\n- **Ethics approval**: Inria's ethics committee, the COERLE (Approval number: 2018-13)\n- **Keywords**: motor imagery, brain-computer interface, EEG, BCI illiteracy, user training, personality profile, cognitive traits, user profile\n## Abstract\nWe present and share a large database containing electroencephalographic signals from 87 human participants, collected during a single day of brain-computer interface (BCI) experiments, organized into 3 datasets (A, B, and C) that were all recorded using the same protocol: right and left hand motor imagery (MI). Each session contains 240 trials (120 per class), which represents more than 20,800 trials, or approximately 70 hours of recording time. It includes the performance of the associated BCI users; detailed information about their demographics, personality profiles, and some cognitive traits; and the experimental instructions and code (executed in the open-source platform OpenViBE). Such a database could prove useful for various studies, including but not limited to: (1) studying the relationships between BCI users' profiles and their BCI performances, (2) studying how EEG signal properties vary across users' profiles and MI tasks, (3) using the large number of participants to design cross-user BCI machine learning algorithms, or (4) incorporating users' profile information into the design of EEG signal classification algorithms.\n## Methodology\nParticipants performed a Graz-protocol MI-BCI task with 6 runs (2 calibration runs with sham feedback, 4 online training runs with real feedback). Each run consisted of 40 trials (20 per MI task) with an 8 s trial duration. Trial structure: green cross (t=0 s), acoustic signal (t=2 s), red arrow cue (t=3 s, 1.25 s duration), continuous visual feedback (t=4.25 s, 3.75 s duration), inter-trial interval (1.5-3.5 s). Signal processing used participant-specific Most Discriminant Frequency Band (MDFB) selection (5-35 Hz range), fifth-order Butterworth filtering, Common Spatial Pattern (CSP) with 3 pairs of spatial filters, and a Linear Discriminant Analysis (LDA) classifier trained on calibration data. Participants completed 6 questionnaires assessing demographics, personality (16PF5), cognitive traits, spatial abilities (Mental Rotation test), learning style (ILS), and pre/post-experiment states (NeXT questionnaire).
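\nFor working directly with the raw recordings rather than through MOABB, the trial structure above maps onto a straightforward MNE-Python epoching step. A minimal, untested sketch follows; the file name is hypothetical, and the annotation-to-class mapping printed by events_from_annotations must be checked against the left_hand=1 / right_hand=2 convention before use:\n```python\n# Sketch: cut cue-aligned epochs from one GDF run of this dataset.\nimport mne\n\n# Hypothetical file name; the actual run files live in the Zenodo archive.\nraw = mne.io.read_raw_gdf('sub-01_task-imagery_run-1_eeg.gdf', preload=True)\n\n# GDF triggers arrive as annotations; inspect the mapping before trusting it.\nevents, event_id = mne.events_from_annotations(raw)\nprint(event_id)\n\n# Keep EEG only (the montage also carries EMG/EOG channels) and cut the\n# [0, 5] s trial interval listed in the Dataset Overview.\nepochs = mne.Epochs(raw, events, event_id=event_id, tmin=0.0, tmax=5.0,\n                    picks='eeg', baseline=None, preload=True)\nX = epochs.get_data()  # (n_trials, n_eeg_channels, n_samples) at 512 Hz\n```\nThe resulting array plugs directly into the earlier band-pass/CSP/LDA sketch.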
\n## References\nPillette, L., Roc, A., N'Kaoua, B., & Lotte, F. (2021). Experimenters' influence on mental-imagery based brain-computer interface user training. International Journal of Human-Computer Studies, 149, 102603.\nBenaroch, C., Yamamoto, M. S., Roc, A., Dreyer, P., Jeunet, C., & Lotte, F. (2022). When should MI-BCI feature optimization include prior knowledge, and which one? Brain-Computer Interfaces, 9(2), 115-128.\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":9489875769,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000250","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:09:40.975391+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-11T20:58:45Z"},"total_files":520,"computed_title":"Dreyer et al. 2023 — A large EEG database with users' profile information for motor imagery brain-computer interface research","nchans_counts":[{"val":27,"count":520}],"sfreq_counts":[{"val":512.0,"count":520}],"stats_computed_at":"2026-05-01T13:49:34.646356+00:00","total_duration_s":228474.984375,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"f4de10d359c03b12","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Closest few-shot match is **\"EEG Motor Movement/Imagery Dataset\"** (Healthy, Visual, Motor): it labels MI paradigms as **Type=Motor** and **Modality=Visual** when trials are cued by on-screen visual targets/cues. This Dreyer2023 dataset is likewise a Graz-protocol motor-imagery BCI with explicit visual cueing/feedback (and an additional auditory cue), so the same convention applies. A secondary, weaker match is the Parkinson's cross-modal oddball example (Multisensory), used only to illustrate that when auditory and visual stimuli are both task-relevant, Modality can become Multisensory; here, metadata explicitly states a primary modality (visual), pushing toward Visual rather than Multisensory.","metadata_analysis":"Key facts from provided metadata/readme:\n- Population: \"**Health status: healthy**\" and \"**Number of subjects: 87**\".\n- Paradigm/task: \"**MI dataset**\", \"**Paradigm: imagery**\", \"**Study design: Graz protocol**\", and tasks include \"**right_hand_MI, left_hand_MI**\".\n- Stimuli/cues: \"**Feedback type: continuous visual**\", \"**Stimulus type: blue bar varying in length**\", \"**Stimulus modalities: visual, auditory**\" with \"**Primary modality: visual**\".\n- Trial structure reiterates sensory cueing: \"**acoustic signal (t=2s)**\" and \"**red arrow cue (t=3s ...)**\" plus \"**continuous visual feedback**\".","paper_abstract_analysis":"Useful paper-like summary is embedded in the dataset text under Abstract/Methodology: \"organized into 3 datasets ... recorded using the same protocol: **right and left hand motor imagery (MI)**\" and \"Participants performed a **Graz protocol MI-BCI task** ... 4 online training runs with real feedback\". 
This supports Type=Motor and confirms the MI-BCI nature of the task.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\".\n- Few-shot suggests: Motor imagery datasets with volunteers are typically labeled Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"Stimulus modalities: visual, auditory\" and explicitly \"Primary modality: visual\"; also \"Feedback type: continuous visual\" and \"Stimulus type: blue bar\".\n- Few-shot pattern suggests: MI datasets with on-screen cues/targets are labeled Visual (see EEG Motor Movement/Imagery Dataset).\n- Alignment: ALIGN (despite presence of an acoustic signal, primary modality is explicitly visual).\n\nType:\n- Metadata says: \"MI dataset\", \"Detected paradigm: motor_imagery\", tasks \"right_hand_MI, left_hand_MI\".\n- Few-shot pattern suggests: motor imagery / movement paradigms are labeled Type=Motor.\n- Alignment: ALIGN.","decision_summary":"Top-2 comparative selections:\n\n1) Pathology candidates: (A) Healthy vs (B) Unknown\n- Evidence for Healthy: \"Health status: healthy\"; participants are \"human\" with no disorder recruitment stated.\n- Evidence for Unknown: none beyond generic possibility.\n- Winner: Healthy (explicit metadata). Alignment: ALIGN.\n- Confidence support: 1 strong explicit quote (healthy) + consistent context.\n\n2) Modality candidates: (A) Visual vs (B) Multisensory\n- Evidence for Visual: \"Primary modality: visual\"; \"Feedback type: continuous visual\"; \"Stimulus type: blue bar\"; \"red arrow cue\".\n- Evidence for Multisensory: \"Stimulus modalities: visual, auditory\" and \"acoustic signal\" in trial structure.\n- Head-to-head: Visual wins because metadata explicitly designates the *primary* modality as visual and most task-relevant information/feedback is visual.\n- Confidence support: 3+ explicit quotes for visual dominance.\n\n3) Type candidates: (A) Motor vs (B) Attention\n- Evidence for Motor: \"MI dataset\"; \"Detected paradigm: motor_imagery\"; class labels \"left_hand, right_hand\"; tasks \"right_hand_MI, left_hand_MI\".\n- Evidence for Attention: cue-based paradigm could involve attention, but not the study aim.\n- Winner: Motor (explicit MI-BCI focus).\n- Confidence support: 3+ explicit MI/motor-imagery quotes."}},"canonical_name":null,"name_confidence":0.82,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Dreyer2023"}}