{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c88","dataset_id":"nm000168","associated_paper_doi":null,"authors":["Ricardo Chavarriaga","José del R. Millán"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":6,"ages":[27,27,27,27,27,27],"age_min":27,"age_max":27,"age_mean":27.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000168","osf_url":null,"github_url":null,"paper_url":null},"funding":["EC under Contract BACS FP6-IST-027140"],"ingestion_fingerprint":"712b320d4b251915d567c38e1270d8dccab8da05a24193fac4b66209f0be2340","license":"CC-BY-NC-ND-4.0","n_contributing_labs":null,"name":"BNCI 2015-013 Error-Related Potentials dataset","readme":"# BNCI 2015-013 Error-Related Potentials dataset\nBNCI 2015-013 Error-Related Potentials dataset.\n## Dataset Overview\n- **Code**: BNCI2015-013\n- **Paradigm**: p300\n- **DOI**: 10.1109/TNSRE.2010.2053387\n- **Subjects**: 6\n- **Sessions per subject**: 20\n- **Events**: Target=1, NonTarget=2\n- **Trial interval**: [0, 0.6] s\n- **File format**: matlab\n## Acquisition\n- **Sampling rate**: 512.0 Hz\n- **Number of channels**: 64\n- **Channel types**: eeg=64\n- **Channel names**: Fp1, AF7, AF3, F1, F3, F5, F7, FT7, FC5, FC3, FC1, C1, C3, C5, T7, TP7, CP5, CP3, CP1, P1, P3, P5, P7, P9, PO7, PO3, O1, Iz, Oz, POz, Pz, CPz, Fpz, Fp2, AF8, AF4, AFz, Fz, F2, F4, F6, F8, FT8, FC6, FC4, FC2, FCz, Cz, C2, C4, C6, T8, TP8, CP6, CP4, CP2, P2, P4, P6, P8, P10, PO8, PO4, O2\n- **Montage**: standard_1020\n- **Hardware**: Biosemi ActiveTwo\n- **Sensor type**: active\n- **Line frequency**: 50.0 Hz\n## Participants\n- **Number of subjects**: 6\n- **Health status**: patients\n- **Clinical population**: Healthy\n- **Age**: mean=27.83, std=2.23\n- **Gender distribution**: male=5, 
female=1\n- **Handedness**: not reported\n- **BCI experience**: not reported\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: p300\n- **Task type**: monitoring\n- **Number of classes**: 2\n- **Class labels**: Target, NonTarget\n- **Trial duration**: 2.0 s\n- **Study design**: Error-related potential (ErrP) monitoring task where subjects observe a cursor moving towards a target. The cursor moves autonomously with 20% or 40% error probability. Subjects monitor performance without control.\n- **Feedback type**: visual\n- **Stimulus type**: cursor_movement\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Training/test split**: True\n- **Instructions**: Subjects sit in front of a computer screen and monitor a moving cursor (green square) and target location (blue for left, red for right). They have no control over cursor movement and only assess whether it performs properly, while fixating the center of the screen.\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: p300\n## Data Structure\n- **Trials**: ~50 trials per block, ~64 trials per block for error_prob=0.20\n- **Blocks per session**: 10\n- **Block duration**: 180.0 s\n- **Trials context**: per_block\n## Preprocessing\n- **Data state**: raw\n- **Preprocessing applied**: False\n## Signal Processing\n- **Classifiers**: Gaussian classifier\n- **Feature extraction**: event-related potentials\n- **Frequency bands**: analyzed=[1.0, 10.0] Hz\n## Cross-Validation\n- **Method**: train-test split\n- **Evaluation type**: cross_session\n## Performance (Original Study)\n- **Accuracy**: 75.8%\n- **Correct Recognition Rate**: 63.2\n- 
**Error Recognition Rate**: 75.8\n## BCI Application\n- **Applications**: error_detection\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Cognitive\n- **Type**: ErrP\n## Documentation\n- **Description**: Dataset on EEG error-related potentials (ErrPs) elicited when users monitor the behavior of an external autonomous agent. One of the first studies showing that error correlates can be observed and decoded during monitoring of external agents without user control.\n- **DOI**: 10.1109/TNSRE.2010.2053387\n- **License**: CC-BY-NC-ND-4.0\n- **Investigators**: Ricardo Chavarriaga, José del R. Millán\n- **Senior author**: José del R. Millán\n- **Contact**: ricardo.chavarriaga@epfl.ch; jose.millan@epfl.ch\n- **Institution**: Ecole Polytechnique Fédérale de Lausanne\n- **Department**: Defitech Chair in Brain-Machine Interface, CNBI, Center for Neuroprosthetics\n- **Country**: CH\n- **Repository**: BNCI Horizon\n- **Publication year**: 2010\n- **Funding**: EC under Contract BACS FP6-IST-027140\n- **Keywords**: error-related potentials, ErrP, brain-computer interface, reinforcement learning, monitoring, error detection\n## References\nChavarriaga, R., & Millán, J. D. R. (2010). Learning from EEG error-related potentials in noninvasive brain-computer interfaces. IEEE Trans. Neural Syst. Rehabil. Eng., 18(4), 381-388. https://doi.org/10.1109/TNSRE.2010.2053387\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). 
EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1","10","11","12","13","14","15","16","17","18","19","2","3","4","5","6","7","8","9"],"size_bytes":2176361584,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000168","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:08:46.077995+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-02T21:18:34Z"},"total_files":120,"computed_title":"BNCI 2015-013 Error-Related Potentials dataset","nchans_counts":[{"val":64,"count":120}],"sfreq_counts":[{"val":512.0,"count":120}],"stats_computed_at":"2026-05-01T13:49:34.645325+00:00","total_duration_s":21927.765625,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"eac5bbe4ee349a05","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.7},"reasoning":{"few_shot_analysis":"Closest few-shot conventions: (1) The TBI DPX dataset labeled Type=Attention uses a cognitive-control/monitoring style paradigm (cue/probe expectancy, performance monitoring with feedback), showing that when the core construct is monitoring/attentional control rather than sensory discrimination, Type tends to map to Attention rather than Perception. 
(2) The Parkinson cross-modal oddball example shows that oddball/ERP paradigms are labeled by the dominant stimulus modality (Multisensory there; Visual/Auditory when single-channel) and not by the response device; this guides choosing Modality=Visual here because the task is screen-based cursor/target monitoring.","metadata_analysis":"Key metadata facts: (a) Population: readme lists \"Clinical population: Healthy\" and also a tag \"Pathology: Healthy\"; however it also contains a conflicting field \"Health status: patients\". (b) Stimulus/modality: \"Stimulus modalities: visual\", \"Primary modality: visual\", and \"Feedback type: visual\"; instructions specify participants \"monitor a moving cursor (green square) and target location\" on a screen. (c) Research purpose/task: described as an \"Error-related potential (ErrP) monitoring task\" where \"Subjects monitor performance without control\" and application is \"error_detection\" with keywords \"error-related potentials\" and \"monitoring\".","paper_abstract_analysis":"No useful paper information (no abstract text provided in the input; only a citation).","evidence_alignment_check":"Pathology — Metadata says: \"Clinical population: Healthy\" and \"Tags - Pathology: Healthy\" (but also \"Health status: patients\"). Few-shot pattern suggests using explicitly named diagnoses when present; here no disorder is named and metadata explicitly states Healthy. ALIGN overall (treat \"patients\" as a metadata inconsistency).\nModality — Metadata says: \"Stimulus modalities: visual\", \"Primary modality: visual\", and cursor/target are shown on a \"computer screen\". Few-shot convention labels modality by stimulus channel (e.g., oddball example). ALIGN.\nType — Metadata says: \"Error-related potential (ErrP) monitoring task\" and \"Applications: error_detection\" while subjects only observe/monitor. 
Few-shot convention (e.g., DPX labeled Attention) suggests mapping monitoring/cognitive control tasks to Attention rather than Perception/Decision-making. Mostly ALIGN, but this is still a somewhat imperfect fit because ErrP/performance monitoring is not a dedicated allowed Type label; closest mapping is Attention vs Other.","decision_summary":"Top-2 candidates per category:\nPathology: (1) Healthy — supported by \"Clinical population: Healthy\" and \"Tags - Pathology: Healthy\". (2) Unknown/Other — supported only by conflicting phrase \"Health status: patients\" without any diagnosis. Winner: Healthy.\nModality: (1) Visual — supported by \"Stimulus modalities: visual\", \"Primary modality: visual\", \"Feedback type: visual\", and visual cursor/target instructions. (2) Other — weak alternative if treating it as abstract monitoring rather than sensory stimulation; not supported. Winner: Visual.\nType: (1) Attention — supported by \"monitoring\" framing (\"subjects observe a cursor\", \"monitor performance\") and few-shot convention that monitoring/cognitive-control-like paradigms map to Attention. (2) Other — because ErrP/performance monitoring is not explicitly an allowed Type. Winner: Attention.\nConfidence justification: Pathology has 2 explicit Healthy statements vs 1 contradictory generic word; Modality has 3+ explicit visual statements plus HED visual presentation; Type relies on 1–2 task-description quotes plus convention-based mapping."}},"canonical_name":null,"name_confidence":0.72,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Chavarriaga2015"}}