{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4ca9","dataset_id":"nm000212","associated_paper_doi":null,"authors":["Sulamith Schaeff","Matthias Sebastian Treder","Bastian Venthur","Benjamin Blankertz"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":16,"ages":[23,23,23,23,23,23,23,23,23,23,23,23,23,23,23,23],"age_min":23,"age_max":23,"age_mean":23.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000212","osf_url":null,"github_url":null,"paper_url":null},"funding":["DFG grant","grant nos s","BMBF grant","grant no MU MU"],"ingestion_fingerprint":"1fe48578b6bdc31b82b021197a1ea289455876a1356e1bfb5ae150536122589c","license":"CC-BY-NC-ND-4.0","n_contributing_labs":null,"name":"BNCI 2015-007 Motion VEP (mVEP) Speller dataset","readme":"# BNCI 2015-007 Motion VEP (mVEP) Speller dataset\nBNCI 2015-007 Motion VEP (mVEP) Speller dataset.\n## Dataset Overview\n- **Code**: BNCI2015-007\n- **Paradigm**: p300\n- **DOI**: 10.1088/1741-2560/9/4/045006\n- **Subjects**: 16\n- **Sessions per subject**: 1\n- **Events**: Target=1, NonTarget=2\n- **Trial interval**: [0, 0.7] s\n- **Runs per session**: 2\n- **Session IDs**: practice, calibration, copy_spelling, free_spelling\n- **File format**: gdf\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 100.0 Hz\n- **Number of channels**: 63\n- **Channel types**: eeg=63\n- **Channel names**: Fp1, Fp2, AF3, AF4, AF7, AF8, Fz, F1, F2, F3, F4, F5, F6, F7, F8, F9, F10, FCz, FC1, FC2, FC3, FC4, FC5, FC6, FT7, FT8, T7, T8, Cz, C1, C2, C3, C4, C5, C6, TP7, TP8, CPz, CP1, CP2, CP3, CP4, CP5, CP6, Pz, P1, P2, P3, P4, P5, P6, P7, P8, P9, P10, POz, PO3, PO4, PO7, PO8, Oz, O1, O2\n- **Montage**: 10-10\n- **Hardware**: BrainAmp EEG amplifier\n- **Software**: Pyff, VisionEgg, 
MATLAB\n- **Reference**: linked mastoids\n- **Ground**: forehead\n- **Sensor type**: active electrode\n- **Line frequency**: 50.0 Hz\n- **Online filters**: hardware bandpass filter 0.016–250 Hz\n- **Impedance threshold**: 10.0 kOhm\n- **Cap manufacturer**: Brain Products\n- **Electrode type**: actiCap active electrode system\n## Participants\n- **Number of subjects**: 16\n- **Health status**: patients\n- **Clinical population**: Healthy\n- **Age**: mean=23.8, min=21, max=30\n- **Gender distribution**: male=10, female=6\n- **Vision**: normal or corrected-to-normal\n- **BCI experience**: naive\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: p300\n- **Task type**: visual_speller\n- **Number of classes**: 2\n- **Class labels**: Target, NonTarget\n- **Trial duration**: 30.0 s\n- **Study design**: Three different Cake Speller modifications: Overt Cake Speller (gaze toward target), Covert Cake Speller (central fixation, covert attention), Motion Center Speller (foveal stimulation). Two-level selection (group-level and symbol-level) from 30 symbols.\n- **Study domain**: gaze-independent communication\n- **Feedback type**: visual\n- **Stimulus type**: motion VEP (mVEP)\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: online\n- **Training/test split**: True\n- **Instructions**: Copy-spelling and free-spelling with attention to target symbols. 
Participants counted moving bar/pattern presentations in target location.\n- **Stimulus presentation**: soa_ms=200 ms (Cake Spellers) or 266 ms (Motion Center Speller), stimulus_duration_ms=100 ms, isi_ms=100 ms, repetitions=10 repetitions per level, total_presentations=120 per selection (2 levels × 10 repetitions × 6 groups/symbols)\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: p300\n- **Number of targets**: 6\n- **Number of repetitions**: 10\n- **Inter-stimulus interval**: 100.0 ms\n- **Stimulus onset asynchrony**: 200.0 ms\n## Data Structure\n- **Trials**: 120\n- **Blocks per session**: 4\n- **Trials context**: per_selection (2 levels × 10 repetitions × 6 groups/symbols)\n## Preprocessing\n- **Data state**: filtered\n- **Preprocessing applied**: True\n- **Steps**: downsampling, low-pass filter, baseline correction, artifact rejection\n- **Highpass filter**: 0.016 Hz\n- **Lowpass filter**: 250.0 Hz\n- **Bandpass filter**: {'low_cutoff_hz': 0.016, 'high_cutoff_hz': 250.0}\n- **Filter type**: hardware bandpass, Chebyshev low-pass for offline\n- **Artifact methods**: min-max criterion (70 μV), variance criterion\n- **Re-reference**: linked mastoids\n- **Downsampled to**: 100.0 Hz\n- **Epoch window**: [-0.2, 1.0]\n- **Notes**: For offline analysis: downsampled to 200 Hz, low-pass filtered (42 Hz passband, 49 Hz stopband). For online: downsampled to 100 Hz. Artifact rejection: min-max ≥70 μV. 
Nontarget epochs filtered to avoid overlap with targets (3 preceding and 4 following stimuli must be nontargets).\n## Signal Processing\n- **Classifiers**: LDA with shrinkage of covariance matrix\n- **Feature extraction**: signed square values of point-biserial correlation coefficients\n- **Analysis window**: [100.0, 800.0] ms post-stimulus\n- **Spatial filters**: LDA spatial filter\n## Cross-Validation\n- **Method**: train on calibration, test on copy-spelling and free-spelling\n- **Evaluation type**: within_session\n## Performance (Original Study)\n- **N200 Latency Overt Ms**: 164.0\n- **N200 Latency Covert Ms**: 180.0\n- **N200 Latency Motion Center Ms**: 198.0\n- **P300 Latency Range Ms**: 300-500\n- **N200 Latency Range Ms**: 100-250\n## BCI Application\n- **Applications**: speller, communication\n- **Environment**: laboratory\n- **Online feedback**: True\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Visual\n- **Type**: P300, VEP\n## Documentation\n- **Description**: Exploring motion VEPs for gaze-independent communication\n- **DOI**: 10.1088/1741-2560/9/4/045006\n- **Associated paper DOI**: 10.1088/1741-2560/11/2/026009\n- **License**: CC-BY-NC-ND-4.0\n- **Investigators**: Sulamith Schaeff, Matthias Sebastian Treder, Bastian Venthur, Benjamin Blankertz\n- **Senior author**: Benjamin Blankertz\n- **Contact**: benjamin.blankertz@tu-berlin.de\n- **Institution**: Berlin Institute of Technology\n- **Department**: Neurotechnology Group\n- **Country**: Germany\n- **Repository**: BNCI Horizon\n- **Publication year**: 2012\n- **Funding**: DFG grant; grant nos s; BMBF grant; grant no MU MU\n- **Ethics approval**: Declaration of Helsinki\n- **Keywords**: motion visually evoked potentials, mVEP, BCI, speller, gaze-independent, covert attention, P300, N200\n## References\nTreder, M. S., Purwins, H., Miklody, D., Sturm, I., & Blankertz, B. (2014). Decoding auditory attention to instruments in polyphonic music using single-trial EEG classification. 
Journal of Neural Engineering, 11(2), 026009. https://doi.org/10.1088/1741-2560/11/2/026009\nNotes\n.. versionadded:: 1.2.0\nSee Also\nBNCI2015_008 : Center Speller P300 dataset (gaze-independent)\nBNCI2015_009 : AMUSE auditory spatial P300 dataset\nBNCI2015_010 : RSVP visual speller (gaze-independent visual paradigm)\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":1372907711,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000212","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:09:05.966280+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-24T04:17:09Z"},"total_files":32,"computed_title":"BNCI 2015-007 Motion VEP (mVEP) Speller 
dataset","nchans_counts":[{"val":63,"count":32}],"sfreq_counts":[{"val":100.0,"count":32}],"stats_computed_at":"2026-05-01T13:49:34.645745+00:00","total_duration_s":71836.22,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"c9338fa19d629ad2","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.7,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot reference is the “Cross-modal Oddball Task” example (Parkinson’s) because it shares a classic target vs non-target oddball structure that elicits P300-like responses. That example’s convention is to label oddball/target-detection paradigms under an attention/cognitive-control style Type (even though that particular dataset’s final Type is driven by the clinical focus). This guides mapping the present Target/NonTarget P300-speller paradigm to Type=Attention (rather than Motor). The schizophrenia visual-discrimination example (Type=Perception) is less similar because it is a perceptual decision task without the oddball target-detection structure.","metadata_analysis":"Key task/stimulus facts are explicit in the README: (1) oddball-like P300 speller structure: “Paradigm: p300” and “Events: Target=1, NonTarget=2”. (2) visual stimulation: “Task type: visual_speller”, “Stimulus modalities: visual”, and the HED tags include “Visual-presentation”. (3) attention to targets: “Instructions: Copy-spelling and free-spelling with attention to target symbols. Participants counted moving bar/pattern presentations in target location.”\n\nPopulation facts: metadata contains conflicting fields: “Health status: patients” but also “Clinical population: Healthy” and demographics are typical young adults (e.g., “Age: mean=23.8, min=21, max=30”). 
No disorder is named as a recruitment criterion.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology — Metadata says: “Clinical population: Healthy” (but also “Health status: patients”). Few-shot pattern suggests: for BCI/P300 datasets without an explicitly named diagnosis, use Healthy. ALIGN overall (minor internal metadata inconsistency; no explicit clinical condition is provided).\n\nModality — Metadata says: “Stimulus modalities: visual” / “Primary modality: visual” and HED includes “Visual-presentation”. Few-shot pattern suggests: speller/oddball with screen stimuli -> Visual. ALIGN.\n\nType — Metadata says: “Paradigm: p300”, “Events: Target=1, NonTarget=2”, and instructions emphasize “attention to target symbols”. Few-shot pattern suggests: target-detection/oddball-like P300 paradigms map to Attention (as in the oddball example’s convention). ALIGN.","decision_summary":"Top-2 candidates per category:\n\nPathology:\n1) Healthy (selected) — Evidence: “Clinical population: Healthy”; no diagnosis named; young adult sample “Age: mean=23.8…”.\n2) Unknown — Competing evidence: contradictory field “Health status: patients” without any named disorder.\nDecision: Healthy, because explicit “Clinical population: Healthy” and absence of any recruitment diagnosis outweigh the ambiguous “patients” wording.\n\nModality:\n1) Visual (selected) — Evidence: “Stimulus modalities: visual”, “Primary modality: visual”, “Task type: visual_speller”, HED “Visual-presentation”.\n2) Multisensory — Only weakly plausible (no evidence of auditory/tactile stimuli).\nDecision: Visual.\n\nType:\n1) Attention (selected) — Evidence: “Paradigm: p300”; “Target=1, NonTarget=2”; “attention to target symbols”; covert attention variant described (“Covert Cake Speller … covert attention”). 
Few-shot analog: oddball Target/NonTarget convention maps to attention-focused constructs.\n2) Perception — Alternative supported by “Stimulus type: motion VEP (mVEP)”, but the core experimental contrast is target detection in a speller.\nDecision: Attention.\n\nConfidence justification:\n- Pathology 0.7: one strong explicit quote (“Clinical population: Healthy”) but internal conflict (“Health status: patients”).\n- Modality 0.9: multiple explicit modality quotes + HED visual tag.\n- Type 0.8: multiple explicit task-structure/attention quotes + strong few-shot analog to oddball/P300 conventions."}},"canonical_name":null,"name_confidence":0.77,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Schaeff2015"}}