{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4ca3","dataset_id":"nm000206","associated_paper_doi":null,"authors":["Marcel F. Hinss","Emilie S. Jahanpour","Bertille Somon","Lou Pluchon","Frédéric Dehais","Raphaëlle N. Roy"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":15,"ages":[23,23,23,23,23,23,23,23,23,23,23,23,23,23,23],"age_min":23,"age_max":23,"age_mean":23.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000206","osf_url":null,"github_url":null,"paper_url":null},"funding":["ERASMUS program","ANITI (Artificial and Natural Intelligence Toulouse Institute)"],"ingestion_fingerprint":"9984b113d2e2b889c28d0f0b97c9d882e898b74f944af484b23407b41c4cb727","license":"CC-BY-SA-4.0","n_contributing_labs":null,"name":"Neuroergonomic 2021 dataset","readme":"# Neuroergonomic 2021 dataset\nNeuroergonomic 2021 dataset.\n## Dataset Overview\n- **Code**: Hinss2021\n- **Paradigm**: rstate\n- **DOI**: 10.1038/s41597-022-01898-y\n- **Subjects**: 15\n- **Sessions per subject**: 2\n- **Events**: rs=1, easy=2, medium=3, diff=4\n- **Trial interval**: [0, 2] s\n- **File format**: set\n## Acquisition\n- **Sampling rate**: 500.0 Hz\n- **Number of channels**: 62\n- **Channel types**: eeg=62\n- **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT10, FT7, FT8, FT9, Fp1, Fp2, Fz, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP7, TP8\n- **Montage**: standard_1020\n- **Hardware**: ActiCHamp (Brain Products GmbH)\n- **Reference**: Fpz\n- **Sensor type**: active Ag/AgCl\n- **Line frequency**: 50.0 Hz\n- **Impedance threshold**: 25 kOhm\n- **Auxiliary channels**: ecg\n## 
Participants\n- **Number of subjects**: 15\n- **Health status**: healthy\n- **Age**: mean=23.9\n- **Gender distribution**: female=11, male=18\n## Experimental Protocol\n- **Paradigm**: rstate\n- **Number of classes**: 4\n- **Class labels**: rs, easy, medium, diff\n- **Study design**: Passive BCI neuroergonomics dataset with resting state and 3 difficulty levels of MATB-II task (easy, medium, difficult). The MOABB loader provides resting state and MATB conditions only.\n- **Feedback type**: none\n- **Stimulus type**: visual display\n- **Training/test split**: False\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  rs\n    ├─ Experiment-structure\n    └─ Rest\n  easy\n    ├─ Experiment-structure\n    └─ Label/easy\n  medium\n    ├─ Experiment-structure\n    └─ Label/medium\n  diff\n    ├─ Experiment-structure\n    └─ Label/difficult\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: resting_state\n## Data Structure\n- **Trials**: 90\n- **Trials context**: total\n## Preprocessing\n- **Data state**: raw\n- **Preprocessing applied**: False\n## Signal Processing\n- **Classifiers**: MDM, Riemannian\n- **Feature extraction**: Bandpower, Covariance/Riemannian, ICA\n- **Frequency bands**: alpha=[8.0, 13.0] Hz; theta=[4.0, 8.0] Hz\n## Cross-Validation\n- **Method**: 5-fold\n- **Folds**: 5\n- **Evaluation type**: cross_subject, cross_session, transfer_learning\n## Performance (Original Study)\n- **Accuracy**: 70.67%\n## BCI Application\n- **Applications**: neuroergonomics, mental_workload_estimation\n- **Environment**: laboratory\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Cognitive\n- **Type**: Research\n## Documentation\n- **DOI**: 10.1038/s41597-022-01898-y\n- **License**: CC-BY-SA-4.0\n- **Investigators**: Marcel F. Hinss, Emilie S. Jahanpour, Bertille Somon, Lou Pluchon, Frédéric Dehais, Raphaëlle N. Roy\n- **Senior author**: Raphaëlle N. 
Roy\n- **Contact**: marcel.hinss@isae-supaero.fr\n- **Institution**: ISAE-SUPAERO, Université de Toulouse\n- **Department**: Department of Information Processing and Systems\n- **Address**: Toulouse, France\n- **Country**: FR\n- **Repository**: Zenodo\n- **Data URL**: https://doi.org/10.5281/zenodo.6874128\n- **Publication year**: 2023\n- **Funding**: ERASMUS program; ANITI (Artificial and Natural Intelligence Toulouse Institute)\n- **Ethics approval**: Comité d'Éthique de la Recherche (CER), Université de Toulouse (CER number 2021-342)\n- **Acknowledgements**: This research was supported in part by the ERASMUS program (which funded Mr Hinss' internship), and by ANITI (Artificial and Natural Intelligence Toulouse Institute), Toulouse, France.\n- **How to acknowledge**: Please cite: Hinss et al. (2023). Open multi-session and multi-task EEG cognitive dataset for passive brain-computer interface applications. Scientific Data, 10, 85. https://doi.org/10.1038/s41597-022-01898-y\n## References\n.. [Hinss2021] M. Hinss, B. Somon, F. Dehais & R. N. Roy (2021) Open EEG Datasets for Passive Brain-Computer Interface Applications: Lacks and Perspectives. 10th International IEEE/EMBS Conference on Neural Engineering (NER).\n.. [Hinss2023] M. F. Hinss, et al. (2023) An EEG dataset for cross-session mental workload estimation: Passive BCI competition of the Neuroergonomics Conference 2021. Scientific Data, 10, 85. https://doi.org/10.1038/s41597-022-01898-y\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). 
EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["1","2"],"size_bytes":1310956227,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000206","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["rstate"],"timestamps":{"digested_at":"2026-04-30T14:09:01.869968+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-24T01:53:29Z"},"total_files":30,"computed_title":"Neuroergonomic 2021 dataset","nchans_counts":[{"val":61,"count":30}],"sfreq_counts":[{"val":500.0,"count":30}],"stats_computed_at":"2026-05-01T13:49:34.645656+00:00","total_duration_s":14309.94,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"697f40b36eda9937","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.9,"modality":0.8,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot conventions:\n- The resting-state example “A Resting-state EEG Dataset for Sleep Deprivation” maps an eyes-open/closed resting paradigm to Modality=Resting State and Type=Resting-state. 
This indicates that when a dataset is purely resting-state, those labels are used.\n- The cognitive-control/attention example “EEG: DPX Cog Ctl Task in Acute Mild TBI” shows that when the purpose is cognitive control/monitoring during an active task (with cues/probes), Type can map to Attention rather than Perception or Motor.\nFor the present dataset, metadata explicitly states both resting-state segments and an active task with difficulty manipulations for workload estimation; following the few-shot style, we prioritize the active, difficulty-manipulated workload task for Type (Attention) and the stated stimulus channel (Visual) for Modality, while still acknowledging that resting blocks are included.","metadata_analysis":"Key explicit metadata facts:\n- Population/health: “Health status: healthy” and also “Tags - Pathology: Healthy” and “Participants: ... Health status: healthy”.\n- Task/paradigm: “Passive BCI neuroergonomics dataset with resting state and 3 difficulty levels of MATB-II task (easy, medium, difficult).”\n- Stimulus channel: “Stimulus type: visual display”.\n- Study aim: “Applications: neuroergonomics, mental_workload_estimation” and “Number of classes: 4 ... 
rs, easy, medium, diff”.\nThese indicate a healthy cohort performing/resting with an emphasis on workload levels during a visually presented MATB-II task plus resting baseline.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: “Health status: healthy”, “Tags - Pathology: Healthy”.\n- Few-shot pattern suggests: healthy participant datasets are labeled Pathology=Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: “Stimulus type: visual display” (MATB-II) and also includes “resting state”.\n- Few-shot pattern suggests: pure resting datasets use Modality=Resting State; active tasks use the stimulus channel (e.g., Visual/Auditory).\n- Alignment: PARTIAL; dataset contains both rest and task, but explicit stimulus channel for the task is visual. Choose Visual as dominant/input modality for the workload task rather than Resting State.\n\nType:\n- Metadata says: “mental_workload_estimation” and “3 difficulty levels of MATB-II task (easy, medium, difficult)” (workload manipulation), alongside a resting condition.\n- Few-shot pattern suggests: difficulty/cognitive control manipulations map to Attention; pure rest maps to Resting-state.\n- Alignment: PARTIAL; both Attention (workload during task) and Resting-state (rs blocks) are plausible, but the stated application is workload estimation, favoring Attention.","decision_summary":"Top-2 candidates with head-to-head selection:\n\nPathology:\n1) Healthy — Evidence: “Health status: healthy”; “Tags - Pathology: Healthy”; “Participants ... Health status: healthy”.\n2) Unknown — would apply if no recruitment info was given.\nDecision: Healthy (explicitly stated). 
Alignment: aligned with few-shot conventions.\n\nModality:\n1) Visual — Evidence: “Stimulus type: visual display”; “MATB-II task (easy, medium, difficult)” (presented via display).\n2) Resting State — Evidence: “resting state” condition and class label “rs”.\nDecision: Visual, because the key manipulated experimental conditions (easy/medium/diff workload) are driven by a visual task; resting appears as a baseline condition.\n\nType:\n1) Attention — Evidence: “mental_workload_estimation”; “3 difficulty levels of MATB-II task (easy, medium, difficult)”; “Passive BCI neuroergonomics dataset”.\n2) Resting-state — Evidence: inclusion of “resting state” / label “rs”.\nDecision: Attention, because the primary research purpose is workload estimation across difficulty levels (a sustained attention/cognitive load construct), with resting used as a comparison baseline.\n\nConfidence justifications:\n- Pathology confidence high due to 3 explicit metadata statements of healthy status.\n- Modality confidence moderate-high because “visual display” is explicit, but the dataset includes resting blocks.\n- Type confidence moderate-high because workload estimation/difficulty manipulation is explicit, but a Resting-state label remains a plausible runner-up due to the rs condition."}},"canonical_name":null,"name_confidence":0.64,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Hinss2021_Neuroergonomic"}}