{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cc4","dataset_id":"nm000249","associated_paper_doi":null,"authors":["Ping-Keng Jao","Ricardo Chavarriaga","Jose del R. Millan"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":13,"ages":[22,22,22,22,22,22,22,22,22,22,22,22,22],"age_min":22,"age_max":22,"age_mean":22.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000249","osf_url":null,"github_url":null,"paper_url":null},"funding":["Swiss National Centres of Competence in Research (NCCR) Robotics"],"ingestion_fingerprint":"de9426892be1531e43d22b9900010940ff61d3027a0fa16efb1ea60a557c8b61","license":"CC-BY-4.0","n_contributing_labs":null,"name":"BNCI 2022-001 EEG Correlates of Difficulty Level dataset","readme":"# BNCI 2022-001 EEG Correlates of Difficulty Level dataset\nBNCI 2022-001 EEG Correlates of Difficulty Level dataset.\n## Dataset Overview\n- **Code**: BNCI2022-001\n- **Paradigm**: imagery\n- **DOI**: 10.1109/THMS.2020.3038339\n- **Subjects**: 13\n- **Sessions per subject**: 1\n- **Events**: trajectory_start=1, waypoint_miss=16, waypoint_hit=48, trajectory_end=255\n- **Trial interval**: [0, 90] s\n- **Session IDs**: offline, online_session_2, online_session_3\n- **File format**: gdf\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 256.0 Hz\n- **Number of channels**: 64\n- **Channel types**: eeg=64, eog=3\n- **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EOG1, EOG2, EOG3, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, Iz, O1, O2, Oz, P1, P10, P2, P3, P4, P5, P6, P7, P8, P9, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP7, TP8\n- **Montage**: 10-10\n- **Hardware**: Biosemi ActiveTwo\n- **Software**: EEGLAB\n- **Reference**: car\n- **Sensor type**: active\n- **Line frequency**: 50.0 Hz\n- **Auxiliary channels**: EOG (3 ch, horizontal, vertical), ppg\n## Participants\n- **Number of subjects**: 13\n- **Health status**: patients\n- **Clinical population**: normal or corrected-to-normal vision, no history of motor or neurological disease (one subject with history of vasovagal syncope)\n- **Age**: mean=22.6, std=1.04\n- **Gender distribution**: female=8, male=5\n- **Handedness**: {'right': 12, 'left': 1}\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 4\n- **Class labels**: trajectory_start, waypoint_miss, waypoint_hit, trajectory_end\n- **Trial duration**: 90.0 s\n- **Study design**: Subjects piloted a simulated drone through circular waypoints using a flight joystick, controlling roll and pitch while the drone maintained constant velocity. In offline session: 32 trajectories each with constant difficulty level (v-shape design from level 16 to 1 and back to 16), each trajectory had 32 waypoints and lasted ~90 seconds. In online sessions: each condition consisted of 12 trajectories with 33 waypoints and 8 decision points per trajectory.\n- **Feedback type**: visual\n- **Stimulus type**: visual\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: cue-based\n- **Mode**: both\n- **Instructions**: Subjects piloted a simulated drone through a series of circular waypoints. 
\n## Acquisition\n- **Sampling rate**: 256.0 Hz\n- **Number of channels**: 64\n- **Channel types**: eeg=64, eog=3\n- **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EOG1, EOG2, EOG3, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, Iz, O1, O2, Oz, P1, P10, P2, P3, P4, P5, P6, P7, P8, P9, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP7, TP8\n- **Montage**: 10-10\n- **Hardware**: Biosemi ActiveTwo\n- **Software**: EEGLAB\n- **Reference**: car\n- **Sensor type**: active\n- **Line frequency**: 50.0 Hz\n- **Auxiliary channels**: EOG (3 ch, horizontal, vertical), PPG\n## Participants\n- **Number of subjects**: 13\n- **Health status**: patients\n- **Clinical population**: normal or corrected-to-normal vision, no history of motor or neurological disease (one subject with history of vasovagal syncope)\n- **Age**: mean=22.6, std=1.04\n- **Gender distribution**: female=8, male=5\n- **Handedness**: right=12, left=1\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 4\n- **Class labels**: trajectory_start, waypoint_miss, waypoint_hit, trajectory_end\n- **Trial duration**: 90.0 s\n- **Study design**: Subjects piloted a simulated drone through circular waypoints using a flight joystick, controlling roll and pitch while the drone maintained constant velocity. In the offline session: 32 trajectories, each with a constant difficulty level (v-shape design from level 16 to 1 and back to 16); each trajectory had 32 waypoints and lasted ~90 seconds. In the online sessions: each condition consisted of 12 trajectories with 33 waypoints and 8 decision points per trajectory.\n- **Feedback type**: visual\n- **Stimulus type**: visual\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: cue-based\n- **Mode**: both\n- **Instructions**: Subjects piloted a simulated drone through a series of circular waypoints, controlling roll and pitch while the drone maintained a constant velocity of 11.8 arbitrary units per second when flying straight. They were instructed to press a button whenever the current level felt easy, either to provide ground truth for the decoder or to advance the difficulty in self-paced learning.\n- **Stimulus presentation**: screen_size=twenty-inch screen, screen_resolution=1680x1050, input_device=Logitech Extreme 3D Pro joystick, waypoint_colors=green (current), blue (next), yellow (decision point), waypoint_distance_pitch=32 A.U. (at least 2.7 seconds), waypoint_distance_roll=24 A.U. (at least 2.0 seconds)\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  trajectory_start\n    ├─ Experiment-structure\n    └─ Label/trajectory_start\n  waypoint_miss\n    ├─ Experiment-structure\n    └─ Label/waypoint_miss\n  waypoint_hit\n    ├─ Experiment-structure\n    └─ Label/waypoint_hit\n  trajectory_end\n    ├─ Experiment-structure\n    └─ Label/trajectory_end\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: right_hand, left_hand, feet\n## Data Structure\n- **Trials**: offline session: 32 trajectories of 32 waypoints each (~90 seconds per trajectory); online session (per condition): 12 trajectories of 33 waypoints each with 8 decision points\n- **Blocks per session**: 2\n- **Trials context**: Offline session: v-shape difficulty design (level 16→1→16). Online sessions: each condition had 12 trajectories; the first started at level 1, and each subsequent trajectory started 4 levels below the final level of the previous one. Average 10.3 seconds per decision group (4 waypoints).\n## Preprocessing\n- **Data state**: preprocessed\n- **Preprocessing applied**: True\n- **Steps**: downsampling from 2048 Hz to 256 Hz, causal bandpass filtering between 1 and 40 Hz, SPHARA 20th-order spatial low-pass filter for interpolation and artifact reduction, common-average re-referencing, ICA for EOG artifact removal, peripheral electrodes removed (25 central channels kept), artifact rejection: windows with peak value > 50 µV rejected\n- **Highpass filter**: 1.0 Hz\n- **Lowpass filter**: 40.0 Hz\n- **Bandpass filter**: [1.0, 40.0]\n- **Filter type**: Butterworth\n- **Filter order**: 14\n- **Artifact methods**: ICA, SPHARA, amplitude thresholding\n- **Re-reference**: car\n- **Downsampled to**: 256.0 Hz\n- **Notes**: Out of 39 recordings, electrode P2 was removed in two recordings (from offline or online sessions) due to a short circuit with the CMS or DRL electrode. On average, 15.8 ICA components were returned and 1.07 components were removed during construction of the online decoders (correlation > 0.7 with EOG).
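\nThe following MNE-Python sketch approximates these steps on a recording loaded as in the sketch above. It is not the authors' exact pipeline: SPHARA has no MNE equivalent and is omitted, the ICA component count and the `measure='correlation'` / `phase='forward'` options are assumptions about a recent MNE version, and the public files are already downsampled and filtered.\n```\nimport mne\n\n# Causal Butterworth bandpass, 1-40 Hz, order 14 (as listed above)\nraw.filter(l_freq=1.0, h_freq=40.0, method='iir', phase='forward',\n           iir_params=dict(order=14, ftype='butter'))\n\n# Common-average re-reference\nraw.set_eeg_reference('average')\n\n# ICA-based EOG removal; 0.7 mirrors the correlation threshold in the Notes\nica = mne.preprocessing.ICA(n_components=20, random_state=0)  # 20 is a guess\nica.fit(raw)\n# pass ch_name='EOG1' if the EOG channels are not typed as 'eog'\neog_idx, _ = ica.find_bads_eog(raw, measure='correlation', threshold=0.7)\nica.exclude = eog_idx\nica.apply(raw)\n\n# The >50 µV amplitude criterion can be enforced at epoching time, e.g.\n# mne.Epochs(raw, events, ..., reject=dict(eeg=50e-6))\n```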
\n## Signal Processing\n- **Classifiers**: LDA, Generalized Linear Model with elastic net regularization\n- **Feature extraction**: PSD, ICA, log-PSD\n- **Frequency bands**: analyzed=[2.0, 28.0] Hz; theta=[4.0, 8.0] Hz; alpha=[10.5, 13.0] Hz\n- **Spatial filters**: SPHARA, common-average reference\n## Cross-Validation\n- **Method**: leave-one-pair-out cross-validation (4x or 64x depending on class balance)\n- **Folds**: 4\n- **Evaluation type**: within_subject, cross_session\n## Performance (Original Study)\n- **Accuracy**: 76.7%\n- **Offline Validation Accuracy Mean**: 76.7\n- **Offline Validation Accuracy Std**: 5.1\n- **Online Session 2 Accuracy Mean**: 56.2\n- **Online Session 2 Accuracy Std**: 8.6\n- **Online Session 3 Accuracy Mean**: 54.7\n- **Online Session 3 Accuracy Std**: 11.0\n- **Online Above Chance Recordings**: 16 out of 26 (~62%)\n## BCI Application\n- **Applications**: drone control, adaptive learning, difficulty regulation, visuomotor learning\n- **Environment**: indoor laboratory\n- **Online feedback**: True\n## Tags\n- **Pathology**: Healthy\n- **Modality**: EEG\n- **Type**: Experimental/Research\n## Documentation\n- **DOI**: 10.1109/TAFFC.2021.3059688\n- **Associated paper DOI**: 10.1109/THMS.2020.3038339\n- **License**: CC-BY-4.0\n- **Investigators**: Ping-Keng Jao, Ricardo Chavarriaga, Jose del R. Millan\n- **Senior author**: Jose del R. Millan\n- **Contact**: ping-keng.jao@alumni.epfl.ch; ricardo.chavarriaga@zhaw.ch; jose.millan@austin.utexas.edu\n- **Institution**: Ecole Polytechnique Federale de Lausanne\n- **Address**: 1015 Lausanne, Switzerland\n- **Country**: Switzerland\n- **Repository**: BNCI Horizon\n- **Publication year**: 2021\n- **Funding**: Swiss National Centres of Competence in Research (NCCR) Robotics\n- **Acknowledgements**: The authors would like to thank Alexander Cherpillod for his help in the implementation of the simulator and Ruslan Aydarkhanov for his suggestions in designing the protocol. Some figures were drawn with the Gramm MATLAB toolbox.\n- **Keywords**: EEG, real-time decoding of difficulty, closed-loop adaptation, (simulated) flying, workload, challenge point, brain-machine interface\n## Abstract\nAdaptively increasing the difficulty level during learning was shown to be more beneficial than increasing the level at fixed time intervals. To efficiently adapt the level, we aimed at decoding the subjective difficulty level based on Electroencephalography (EEG) signals. We designed a visuomotor learning task in which one needed to pilot a simulated drone through a series of waypoints of different sizes, in order to investigate the effectiveness of the EEG decoder. The EEG decoder was compared with another condition in which the subjects decided when to increase the difficulty level. We examined the decoding performance together with behavioral outcomes. The online accuracies were higher than the chance level for 16 out of 26 cases, and the behavioral results of the EEG condition, such as task scores, skill curves, and learning patterns, were similar to those of the condition based on manual regulation of difficulty.\n## Methodology\nThe study compared two conditions for difficulty regulation during a simulated drone piloting task: (1) EEG-based automatic difficulty adjustment using real-time decoding of perceived difficulty, and (2) Manual self-paced adjustment, where subjects pressed a button when they found the level easy. Each subject participated in one offline session (for building subject-specific decoders) and two online sessions (each containing both EEG and Manual conditions in counterbalanced order). The task involved piloting a drone through circular waypoints with 16 difficulty levels defined by waypoint radius. Features were extracted using Thomson's multitaper algorithm with 2-second sliding windows, and classification used generalized linear models with elastic net regularization followed by LDA. The study evaluated both decoding accuracy and behavioral outcomes (task scores, skill curves, learning patterns).
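\nA self-contained sketch of this feature/classification chain under stated assumptions: dummy random arrays stand in for real 2-second EEG windows, the easy/hard labels and fold groups are hypothetical, and elastic-net logistic regression stands in for the paper's GLM.\n```\nimport numpy as np\nfrom mne.time_frequency import psd_array_multitaper\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.feature_selection import SelectFromModel\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nfrom sklearn.model_selection import LeaveOneGroupOut, cross_val_score\n\nrng = np.random.default_rng(0)\nwindows = rng.standard_normal((64, 64, 512))  # dummy: 64 windows x 64 ch x 2 s at 256 Hz\ny = rng.integers(0, 2, 64)                    # hypothetical easy/hard labels\ngroups = np.repeat(np.arange(4), 16)          # hypothetical pairs for leave-one-pair-out\n\n# Thomson multitaper log-PSD over the analyzed 2-28 Hz band\npsds, freqs = psd_array_multitaper(windows, sfreq=256.0, fmin=2.0, fmax=28.0)\nX = np.log(psds).reshape(len(psds), -1)\n\n# Elastic-net GLM for feature selection, then LDA, as in the Methodology\nclf = make_pipeline(\n    StandardScaler(),\n    SelectFromModel(LogisticRegression(penalty='elasticnet', solver='saga',\n                                       l1_ratio=0.5, max_iter=5000)),\n    LinearDiscriminantAnalysis())\nscores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())\nprint(scores.mean())\n```\nLeaveOneGroupOut here only stands in for the paper's leave-one-pair-out scheme; the real fold structure depends on how easy/hard trajectory pairs were matched.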
\n## References\nJao, P.-K., Chavarriaga, R., & Millan, J. d. R. (2021). EEG Correlates of Difficulty Levels in Dynamical Transitions of Simulated Flying and Mapping Tasks. IEEE Transactions on Human-Machine Systems, 51(2), 99-108. https://doi.org/10.1109/THMS.2020.3038339\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n## Notes\n.. versionadded:: 1.3.0\nThis dataset is designed for cognitive workload assessment and difficulty level detection. Unlike motor imagery datasets, the task involves actual motor control while the cognitive state (perceived difficulty) varies.\nThe public release contains only the first session (offline) data. Additional behavioral data and online sessions with closed-loop difficulty adaptation are not included. The paradigm \"imagery\" is used for compatibility, though the actual task involves motor execution with cognitive load variations.\n## See Also\n- BNCI2015_004 : Multi-class mental task dataset with imagery and cognitive tasks\n- BNCI2014_001 : 4-class motor imagery dataset\n## Examples\n```\n>>> from moabb.datasets import BNCI2022_001\n>>> dataset = BNCI2022_001()\n>>> dataset.subject_list\n[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]\n```
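\nBuilding on the doctest above, a short sketch of the container `get_data` returns, assuming MOABB's usual subject → session → run nesting (the session and run names are whatever the loader assigns):\n```\nfrom moabb.datasets import BNCI2022_001\n\ndataset = BNCI2022_001()\ndata = dataset.get_data(subjects=[1])\nfor session, runs in data[1].items():\n    for run, raw in runs.items():\n        # Acquisition above lists 256 Hz and 64 EEG (+3 EOG) channels\n        print(session, run, raw.info['sfreq'], len(raw.ch_names))\n```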
\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0task"],"size_bytes":3203167296,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000249","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:09:40.837372+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-26T07:30:44Z"},"total_files":13,"computed_title":"BNCI 2022-001 EEG Correlates of Difficulty Level dataset","nchans_counts":[{"val":64,"count":13}],"sfreq_counts":[{"val":256.0,"count":13}],"stats_computed_at":"2026-05-01T13:49:34.646342+00:00","total_duration_s":58287.94921875,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"0994584d80d12257","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.7,"modality":0.9,"type":0.75},"reasoning":{"few_shot_analysis":"Most similar few-shot examples by labeling convention:\n- EEG: DPX Cog Ctl Task in Acute Mild TBI (labeled Type=Attention): it is a cognitive-control/effort-demanding task where the scientific target is cognitive control/attention-related processes rather than low-level perception. This guides mapping “workload / difficulty / cognitive effort” paradigms to Type=Attention.\n- EEG Motor Movement/Imagery Dataset (labeled Type=Motor): shows that when movement/imagery is the main research focus, Type=Motor is used. In the target dataset, however, the README explicitly states the goal is decoding perceived difficulty/workload, so Motor is a plausible runner-up but not the best match by stated purpose.\n- Several examples with explicit stimulus modality mapping (e.g., visual discrimination labeled Modality=Visual) support assigning Modality based on stimulus channel (screen-based waypoints/visual feedback).","metadata_analysis":"Key metadata facts (quotes):\n- Population/health: \"Clinical population: normal or corrected-to-normal vision, no history of motor or neurological disease\" and \"Age: mean=22.6\".\n- But there is a conflicting field: \"Health status: patients\".\n- Stimulus modality: \"Stimulus type: visual\", \"Stimulus modalities: visual\", and \"Feedback type: visual\"; also \"Stimulus presentation: screen_size=twenty-inch screen\".\n- Research aim / construct: \"This dataset is designed for cognitive workload assessment and difficulty level detection.\" and \"we aimed at decoding the subjective difficulty level based on Electroencephalography (EEG) signals.\" Also: \"We designed a visuomotor learning task\" (learning context, but the measured construct is perceived difficulty/workload).","paper_abstract_analysis":"Useful paper-like abstract text is included in the metadata.
It reinforces the main construct: \"decoding the subjective difficulty level\" during a \"visuomotor learning task\" with adaptive difficulty regulation; no additional pathology information is provided.","evidence_alignment_check":"Pathology:\n- Metadata says: \"no history of motor or neurological disease\" (implies non-clinical recruitment) but also says \"Health status: patients\".\n- Few-shot pattern suggests: when participants are non-clinical/volunteers, label Healthy.\n- Alignment/conflict: CONFLICT due to the single word \"patients\" vs explicit exclusion of neurological/motor disease. Resolution: choose Healthy because the explicit recruitment criterion \"no history of motor or neurological disease\" is the stronger factual indicator of a normative cohort; \"patients\" appears to be a metadata template/annotation artifact.\n\nModality:\n- Metadata says: \"Stimulus type: visual\" and \"Stimulus modalities: visual\".\n- Few-shot pattern suggests: screen-based tasks/discrimination map to Modality=Visual.\n- Alignment/conflict: ALIGN.\n\nType:\n- Metadata says: \"cognitive workload assessment and difficulty level detection\" and \"decoding the subjective difficulty level\".\n- Few-shot pattern suggests: effort/cognitive control demand paradigms are labeled Attention (e.g., DPX cognitive control example), while pure movement-focus datasets are Motor.\n- Alignment/conflict: PARTIAL ALIGN (task has motor execution), but stated purpose prioritizes workload/difficulty decoding over movement physiology; thus Attention wins over Motor/Learning.","decision_summary":"Top-2 candidates and final selections:\n\nPathology:\n1) Healthy (selected)\n  - Evidence: \"no history of motor or neurological disease\"; \"normal or corrected-to-normal vision\".\n2) Unknown\n  - Evidence: conflicting tag \"Health status: patients\" without a named disorder.\nDecision: Healthy, because explicit inclusion/exclusion criteria indicate a normative cohort; the term \"patients\" is inconsistent with the rest of the participant description.\nConfidence basis: 2 supporting quotes for Healthy, but 1 conflicting field.\n\nModality:\n1) Visual (selected)\n  - Evidence: \"Stimulus type: visual\"; \"Stimulus modalities: visual\"; \"Feedback type: visual\"; screen presentation details.\n2) Motor\n  - Evidence: joystick control is central, but this is response/effector rather than stimulus channel per the labeling rules.\nDecision: Visual.\nConfidence basis: 3+ explicit modality statements.\n\nType:\n1) Attention (selected)\n  - Evidence: \"cognitive workload assessment\"; \"decoding the subjective difficulty level\"; adaptive difficulty regulation (mental effort/attentional demand).\n2) Learning\n  - Evidence: \"visuomotor learning task\" and difficulty adaptation during learning.\nDecision: Attention, because the primary measured construct is workload/difficulty (cognitive effort) rather than learning per se.\nConfidence basis: 2 explicit purpose statements; some ambiguity with learning/motor aspects reduces confidence."}},"canonical_name":null,"name_confidence":0.72,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Jao2022"}}