{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cb0","dataset_id":"nm000219","associated_paper_doi":null,"authors":["Christoph Reichert","Igor Fabian Tellez Ceja","Catherine M. Sweeney-Reed","Hans-Jochen Heinze","Hermann Hinrichs","Stefan Dürschmid"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":18,"ages":[26,29,19,31,18,34,25,27,19,24,37,38,25,27,38,27,27,25],"age_min":18,"age_max":38,"age_mean":27.555555555555557,"species":null,"sex_distribution":{"m":8,"f":10},"handedness_distribution":{"r":17,"l":1}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000219","osf_url":null,"github_url":null,"paper_url":null},"funding":["German Ministry of Education and Research (BMBF) within the Research Campus STIMULATE under grant number 13GW0095D"],"ingestion_fingerprint":"fadb25bf39aede6770e925cdc21ef52a66a88109a01f8881bfe13a3d3923dd1f","license":"CC-BY-4.0","n_contributing_labs":null,"name":"BNCI 2020-002 Attention Shift (Covert Spatial Attention) dataset","readme":"# BNCI 2020-002 Attention Shift (Covert Spatial Attention) dataset\nBNCI 2020-002 Attention Shift (Covert Spatial Attention) dataset.\n## Dataset Overview\n- **Code**: BNCI2020-002\n- **Paradigm**: p300\n- **DOI**: 10.3389/fnins.2020.591777\n- **Subjects**: 18\n- **Sessions per subject**: 1\n- **Events**: NonTarget=1, Target=2\n- **Trial interval**: [0, 16] s\n- **File format**: MAT\n## Acquisition\n- **Sampling rate**: 250.0 Hz\n- **Number of channels**: 30\n- **Channel types**: eeg=30, eog=2\n- **Channel names**: C3, C4, CP1, CP2, Cz, F3, F4, F7, F8, FC1, FC2, Fp1, Fp2, Fz, HEOG, IZ, LMAST, O10, O9, Oz, P3, P4, P7, P8, PO3, PO4, PO7, PO8, Pz, T7, T8, VEOG\n- **Montage**: extended 10-20\n- **Hardware**: BrainAmp DC Amplifier\n- **Reference**: right mastoid\n- **Sensor type**: Ag/AgCl electrodes\n- **Line frequency**: 50.0 Hz\n- **Online filters**: 0.1 Hz highpass\n- **Cap manufacturer**: Brain Products GmbH\n- **Auxiliary channels**: EOG (2 ch, horizontal, vertical)\n## Participants\n- **Number of subjects**: 18\n- **Health status**: healthy\n- **Age**: mean=27.0, min=19.0, max=38.0\n- **Gender distribution**: male=8, female=10\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: p300\n- **Task type**: binary decision\n- **Number of classes**: 2\n- **Class labels**: NonTarget, Target\n- **Feedback type**: visual (yes/no text)\n- **Stimulus type**: colored crosses (green + and red x)\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: online\n- **Training/test split**: True\n- **Instructions**: Respond to yes/no questions by shifting attention to green cross (yes) or red cross (no) while maintaining central gaze fixation\n- **Stimulus presentation**: duration_ms=250, soa_ms=850 (jittered by 0-250 ms), stimuli_per_trial=10\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: p300\n- **Number of targets**: 2\n- **Number of repetitions**: 10\n- **Stimulus onset asynchrony**: 850.0 ms\n## Data Structure\n- **Trials**: 24\n- **Blocks per session**: 7\n- 
**Trials context**: per_block\n## Preprocessing\n- **Data state**: raw\n- **Preprocessing applied**: False\n- **Steps (original study; see the MNE sketch under Examples)**: re-referenced to average of left and right mastoid, 4th-order zero-phase IIR Butterworth bandpass filter (1.0-12.5 Hz), resampled to 50 Hz, epoched from stimulus onset to 750 ms after\n- **Highpass filter**: 1.0 Hz\n- **Lowpass filter**: 12.5 Hz\n- **Bandpass filter**: [1.0, 12.5] Hz\n- **Filter type**: Butterworth IIR\n- **Filter order**: 4\n- **Re-reference**: average of left and right mastoid\n- **Downsampled to**: 50.0 Hz\n- **Epoch window**: [0.0, 0.75] s\n## Signal Processing\n- **Classifiers**: Canonical Correlation Analysis (CCA; see the decoding sketch under Examples)\n- **Feature extraction**: N2pc, ERP, Canonical difference waves\n- **Spatial filters**: CCA spatial filters\n## Cross-Validation\n- **Method**: leave-one-out cross-validation (LOOCV)\n- **Evaluation type**: within_subject\n## Performance (Original Study)\n- **Accuracy**: 88.5%\n- **ITR**: 3.02 bits/min\n- **Accuracy (std)**: 7.8%\n- **Accuracy (min)**: 70.8%\n- **Accuracy (max)**: 90.3%\n## BCI Application\n- **Applications**: communication, binary decision\n- **Environment**: laboratory\n- **Online feedback**: True\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Visual\n- **Type**: Attention\n## Documentation\n- **Description**: Gaze-independent brain-computer interface based on covert spatial attention shifts for binary (yes/no) communication\n- **DOI**: 10.3389/fnins.2020.591777\n- **Associated paper DOI**: 10.3389/fnins.2020.591777\n- **License**: CC-BY-4.0\n- **Investigators**: Christoph Reichert, Igor Fabian Tellez Ceja, Catherine M. Sweeney-Reed, Hans-Jochen Heinze, Hermann Hinrichs, Stefan Dürschmid\n- **Senior author**: Stefan Dürschmid\n- **Contact**: christoph.reichert@lin-magdeburg.de\n- **Institution**: Leibniz Institute for Neurobiology\n- **Department**: Department of Behavioral Neurology\n- **Address**: Magdeburg, Germany\n- **Country**: Germany\n- **Repository**: BNCI Horizon\n- **Data URL**: http://bnci-horizon-2020.eu/database/data-sets\n- **Publication year**: 2020\n- **Funding**: German Ministry of Education and Research (BMBF) within the Research Campus STIMULATE under grant number 13GW0095D\n- **Ethics approval**: Ethics Committee of the Otto-von-Guericke University, Magdeburg\n- **Keywords**: visual spatial attention, brain-computer interface, stimulus features, N2pc, canonical correlation analysis, gaze-independent, BCI\n## References\nReichert, C., Tellez Ceja, I. F., Sweeney-Reed, C. M., Heinze, H.-J., Hinrichs, H., & Dürschmid, S. (2020). Impact of Stimulus Features on the Performance of a Gaze-Independent Brain-Computer Interface Based on Covert Spatial Attention Shifts. Frontiers in Neuroscience, 14, 591777. https://doi.org/10.3389/fnins.2020.591777\n## Notes\nAdded in MOABB 1.3.0.\nThis dataset uses a covert spatial attention paradigm with N2pc ERP detection, which differs from traditional P300 or motor imagery paradigms. The paradigm is designed for gaze-independent BCI control, making it suitable for users who cannot control eye movements.\n## See Also\n- BNCI2015_009 : AMUSE auditory spatial P300 paradigm\n- BNCI2015_010 : RSVP visual P300 paradigm\n## Examples\n```python\n>>> from moabb.datasets import BNCI2020_002\n>>> dataset = BNCI2020_002()\n>>> dataset.subject_list\n[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]\n```
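\nThe preprocessing listed above maps onto MNE-Python in a straightforward way. The following is a minimal sketch, not the authors' code: the input file name, the restored reference-channel name ('RMAST'), and the annotation-to-event-code mapping are assumptions (in practice the recordings ship as MAT files and are most easily loaded through MOABB, as in the doctest above).\n```python\nimport mne\n\n# Hypothetical Raw file; convert the BNCI MAT files or load via MOABB first.\nraw = mne.io.read_raw_fif('sub-01_task-p300_raw.fif', preload=True)\n\n# Re-reference to the average of both mastoids: 'LMAST' is a recorded channel,\n# while the online reference (right mastoid) is restored as a flat channel.\nraw = mne.add_reference_channels(raw, ['RMAST'])  # channel name assumed\nraw.set_eeg_reference(['LMAST', 'RMAST'])\n\n# 4th-order Butterworth band-pass, 1.0-12.5 Hz; MNE applies IIR filters\n# forward and backward by default, giving the zero-phase response.\nraw.filter(l_freq=1.0, h_freq=12.5, method='iir', iir_params=dict(order=4, ftype='butter'))\n\nraw.resample(50.0)  # downsample to 50 Hz\n\n# Epoch from stimulus onset to 750 ms, no baseline correction; the mapping\n# NonTarget=1, Target=2 follows the dataset's event scheme but is assumed\n# to match what events_from_annotations assigns.\nevents, _ = mne.events_from_annotations(raw)\nepochs = mne.Epochs(raw, events, event_id={'NonTarget': 1, 'Target': 2}, tmin=0.0, tmax=0.75, baseline=None, preload=True)\n```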
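\nSimilarly, a compact stand-in for the CCA classifier and LOOCV evaluation described under Signal Processing and Cross-Validation. The array shapes, the class-template construction, and the correlation-based decision rule are illustrative assumptions; the published pipeline builds canonical difference waves from N2pc/ERP features rather than plain class averages.\n```python\nimport numpy as np\nfrom sklearn.cross_decomposition import CCA\nfrom sklearn.model_selection import LeaveOneOut\n\n# Hypothetical epochs: 24 trials x 30 channels x 38 samples (0-750 ms at 50 Hz);\n# labels follow the dataset's event codes (1=NonTarget, 2=Target).\nrng = np.random.default_rng(0)\nX = rng.standard_normal((24, 30, 38))\ny = rng.integers(1, 3, size=24)\n\nacc = []\nfor train, test in LeaveOneOut().split(X):\n    # Class-average ERP templates from the training fold, shaped (times, channels)\n    tpl = {c: X[train][y[train] == c].mean(axis=0).T for c in (1, 2)}\n    # Fit CCA between stacked training trials and their matching class templates;\n    # the learned weights act as data-driven spatial filters.\n    X_stack = np.vstack([X[i].T for i in train])\n    Y_stack = np.vstack([tpl[y[i]] for i in train])\n    cca = CCA(n_components=1).fit(X_stack, Y_stack)\n    # Classify the held-out trial by which class template yields the\n    # higher canonical correlation.\n    trial = X[test[0]].T\n    corr = {}\n    for c in (1, 2):\n        xs, ys = cca.transform(trial, tpl[c])\n        corr[c] = np.corrcoef(xs.ravel(), ys.ravel())[0, 1]\n    acc.append(max(corr, key=corr.get) == y[test[0]])\nprint(f'LOOCV accuracy: {np.mean(acc):.2f}')  # near chance on this random stand-in\n```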
\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":1073288045,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000219","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:09:08.987378+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-24T08:26:07Z"},"total_files":18,"computed_title":"BNCI 2020-002 Attention Shift (Covert Spatial Attention) dataset","nchans_counts":[{"val":30,"count":18}],"sfreq_counts":[{"val":250.0,"count":18}],"stats_computed_at":"2026-05-01T13:49:34.645831+00:00","total_duration_s":47615.928,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"071cdd68550f07d4","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.9,"modality":0.95,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot convention is the Cross-modal Oddball Task example (Parkinson's; Modality=Multisensory; Type=Clinical/Intervention) and the TBI auditory oddball example (Modality=Auditory). These examples show that (i) P300/oddball-style paradigms are labeled by the stimulus modality (auditory/visual/multisensory) rather than response mechanics, and (ii) Type should reflect the cognitive construct (e.g., attention/oddball-related target detection) unless the primary goal is clinical characterization. Here, the dataset is explicitly a covert spatial attention BCI paradigm with visual stimuli, aligning best with Type=Attention and Modality=Visual. For pathology, the few-shot set includes multiple datasets explicitly labeled/quoted as healthy cohorts, matching this dataset’s explicit 'Health status: healthy'.","metadata_analysis":"Key explicit metadata facts:\n1) Population: \"Health status: healthy\" and \"Subjects: 18\".\n2) Modality/stimuli: \"Stimulus type: colored crosses (green + and red x)\", \"Stimulus modalities: visual\", and \"Primary modality: visual\".\n3) Construct/task aim: \"Respond to yes/no questions by shifting attention to green cross (yes) or red cross (no) while maintaining central gaze fixation\" and \"Gaze-independent brain-computer interface based on covert spatial attention shifts\". 
Also, the note: \"covert spatial attention paradigm with N2pc ERP detection\".","paper_abstract_analysis":"No useful paper information (abstract text not provided in the input; only a citation/DOI is listed).","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\".\n- Few-shot pattern suggests: when metadata explicitly indicates healthy volunteers, use Pathology=Healthy (seen across multiple few-shot healthy datasets).\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"Stimulus modalities: visual\", \"Primary modality: visual\", and \"Stimulus type: colored crosses\".\n- Few-shot pattern suggests: label modality by stimulus channel (e.g., oddball labeled Auditory when tones; Multisensory when audio+visual).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"Attention Shift (Covert Spatial Attention)\", \"shifting attention... while maintaining central gaze fixation\", and \"gaze-independent brain-computer interface based on covert spatial attention shifts\".\n- Few-shot pattern suggests: oddball/P300-like target detection and attention paradigms map to Type=Attention when the construct is attentional selection/target detection rather than learning/memory/motor.\n- Alignment: ALIGN.","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n- Candidate 1: Healthy (evidence: \"Health status: healthy\"; also dataset tags list \"Pathology: Healthy\").\n- Candidate 2: Unknown (only if health status were missing).\nHead-to-head: Healthy wins due to explicit population statement.\nConfidence basis: 2+ explicit quotes (\"Health status: healthy\"; \"Pathology: Healthy\").\n\nModality:\n- Candidate 1: Visual (evidence: \"Stimulus modalities: visual\"; \"Primary modality: visual\"; \"Stimulus type: colored crosses\").\n- Candidate 2: Other (only if stimulus channel were unclear).\nHead-to-head: Visual wins with multiple explicit modality fields.\nConfidence basis: 3+ explicit quotes naming visual modality.\n\nType:\n- Candidate 1: Attention (evidence: title \"Attention Shift (Covert Spatial Attention)\"; instruction \"shifting attention...\"; documentation \"covert spatial attention shifts\"; feature \"N2pc\").\n- Candidate 2: Perception (because there are visual targets/non-targets), but the primary aim is attentional shifting/selection for BCI control.\nHead-to-head: Attention wins because covert spatial attention and N2pc are explicitly central.\nConfidence basis: 3+ explicit quotes indicating covert spatial attention/N2pc and gaze-independent attention shifting."}},"canonical_name":null,"name_confidence":0.76,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Reichert2020"}}