{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cbc","dataset_id":"nm000240","associated_paper_doi":null,"authors":["Álvaro Fernández-Rodríguez","Víctor Martínez-Cagigal","Eduardo Santamaría-Vázquez","Ricardo Ron-Angevin","Roberto Hornero"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":16,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000240","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"5170763a24e88df9a3f9cf1cc79e587473d7b3f22c6612fe4c0c2b60756b441a","license":"CC-BY-NC-SA-4.0","n_contributing_labs":null,"name":"Checkerboard m-sequence-based c-VEP dataset from","readme":"# Checkerboard m-sequence-based c-VEP dataset from\nCheckerboard m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2025) and Fernández-Rodríguez et al. (2023).\n## Dataset Overview\n- **Code**: MartinezCagigal2023Checkercvep\n- **Paradigm**: cvep\n- **DOI**: https://doi.org/10.71569/7c67-v596\n- **Subjects**: 16\n- **Sessions per subject**: 8\n- **Events**: 0.0=100, 1.0=101\n- **Trial interval**: (0, 1) s\n- **Runs per session**: 3\n## Acquisition\n- **Sampling rate**: 256.0 Hz\n- **Number of channels**: 16\n- **Channel types**: eeg=16\n- **Montage**: standard_1005\n- **Line frequency**: 50.0 Hz\n## Participants\n- **Number of subjects**: 16\n- **Health status**: healthy\n## Experimental Protocol\n- **Paradigm**: cvep\n- **Number of classes**: 2\n- **Class labels**: 0.0, 1.0\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  0.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_0_0\n  1.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_1_0\n```\n## Documentation\n- **DOI**: 10.71569/7c67-v596\n- **Associated paper DOI**: 10.3389/fnhum.2023.1288438\n- **License**: CC-BY-NC-SA-4.0\n- **Investigators**: Álvaro Fernández-Rodríguez, Víctor Martínez-Cagigal, Eduardo Santamaría-Vázquez, Ricardo Ron-Angevin, Roberto Hornero\n- **Senior author**: Roberto Hornero\n- **Contact**: victor.martinez@gib.tel.uva.es\n- **Institution**: University of Valladolid\n- **Department**: Biomedical Engineering Group, ETSIT\n- **Address**: Paseo de Belén, 15, 47011, Valladolid, Spain\n- **Country**: ES\n- **Repository**: U Valladoid\n- **Data URL**: https://doi.org/10.71569/7c67-v596\n- **Publication year**: 2023\n- **Ethics approval**: Approved by the local ethics committee; all participants provided informed consent\n- **How to acknowledge**: Please cite: Fernández-Rodríguez et al. (2023). Influence of spatial frequency in visual stimuli for cVEP-based BCIs: evaluation of performance and user experience. Frontiers in Human Neuroscience, 17, 1288438. https://doi.org/10.3389/fnhum.2023.1288438\n## References\nMartínez Cagigal, V. (2025). Dataset: Influence of spatial frequency in visual stimuli for cVEP-based BCIs: evaluation of performance and user experience. https://doi.org/10.71569/7c67-v596\nFernández-Rodríguez, Á., Martínez-Cagigal, V., Santamaría-Vázquez, E., Ron-Angevin, R., & Hornero, R. (2023). 
Influence of spatial frequency in visual stimuli for cVEP-based BCIs: evaluation of performance and user experience. Frontiers in Human Neuroscience, 17, 1288438. https://doi.org/10.3389/fnhum.2023.1288438\nSantamaría-Vázquez, E., Martínez-Cagigal, V., Marcos-Martínez, D., Rodríguez-González, V., Pérez-Velasco, S., Moreno-Calderón, S., & Hornero, R. (2023). MEDUSA©: A novel Python-based software ecosystem to accelerate brain–computer interface and cognitive neuroscience research. Computer Methods and Programs in Biomedicine, 230, 107357. https://doi.org/10.1016/j.cmpb.2023.107357\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n## Notes\nAlthough the dataset was recorded in a single session, each condition is stored as a separate session to match the MOABB structure. Within each session, three runs are available (two for training, one for testing).\n*Added in MOABB 1.2.0.*\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0c1","1c2","2c3","3c4","4c5","5c6","6c7","7c8"],"size_bytes":668711084,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000240","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["cvep"],"timestamps":{"digested_at":"2026-04-30T14:09:34.034777+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-25T20:28:15Z"},"total_files":383,"computed_title":"Checkerboard m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2025) and Fernández-Rodríguez et al. (2023)","nchans_counts":[{"val":16,"count":383}],"sfreq_counts":[{"val":256.0,"count":383}],"stats_computed_at":"2026-05-01T13:49:34.645995+00:00","total_duration_s":48270.50390625,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"b462bda93587db7a","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.7},"reasoning":{"few_shot_analysis":"Most similar few-shot conventions by paradigm/stimulus: (1) The visual discrimination dataset (Meta-rdk) is labeled Visual + Perception; it shows that visually driven stimulus-response paradigms that primarily probe sensory/evoked processing map to Type=Perception. (2) The auditory brainstem response dataset is labeled Auditory + Perception; it provides a close analog in that the primary goal is evoked responses to sensory stimulation (here auditory; in our dataset visual c-VEP). By contrast, the Motor Movement/Imagery BCI dataset is labeled Type=Motor because the cognitive construct is movement/imagery; this helps rule out Motor here because the present dataset is c-VEP (visually evoked), not movement/imagery.","metadata_analysis":"Key metadata facts: (1) Population: readme explicitly states \"Health status: healthy\" and \"Subjects: 16\". 
(2) Stimulus modality: the readme includes HED tags \"Visual-presentation\" under both event codes (0.0 and 1.0) and the title/readme specify \"Checkerboard m-sequence-based c-VEP\", i.e., a visual evoked potential paradigm. (3) Research aim/paradigm: readme states \"Paradigm: cvep\" and cites the associated paper \"Influence of spatial frequency in visual stimuli for cVEP-based BCIs: evaluation of performance and user experience\", indicating a visually evoked-potential BCI paradigm built on visual stimulation parameters (spatial frequency).","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: (1) Metadata says: \"Health status: healthy\". (2) Few-shot pattern suggests that when participants are healthy controls/volunteers, label Pathology=Healthy (seen in multiple few-shots labeled Healthy). (3) ALIGN.\n\nModality: (1) Metadata says: HED annotations include \"Visual-presentation\" and the dataset is \"Checkerboard ... c-VEP\". (2) Few-shot pattern suggests stimulus channel determines Modality (e.g., visual discrimination -> Visual; ABR/music/speech -> Auditory). (3) ALIGN.\n\nType: (1) Metadata says: \"Paradigm: cvep\" and paper title indicates \"visual stimuli for cVEP-based BCIs\" (visually evoked responses driven by stimulus properties). (2) Few-shot pattern suggests evoked responses to sensory stimuli map to Type=Perception (visual discrimination; auditory evoked/ABR). A competing few-shot convention is that BCI datasets can map to Motor when the construct is movement/imagery, but that does not apply here because the paradigm is visual evoked potentials rather than motor imagery. (3) Mostly ALIGN with Perception; slight ambiguity with Attention (cVEP BCIs often require attending to one of multiple visual codes), but attention is not explicitly stated in the provided metadata.","decision_summary":"Top-2 candidates per category with head-to-head selection:\n\nPathology candidates: (A) Healthy vs (B) Unknown. Evidence: \"Health status: healthy\" strongly supports Healthy; no clinical recruitment described. Winner: Healthy. Alignment: aligned with few-shot Healthy conventions. Confidence supported by explicit quote.\n\nModality candidates: (A) Visual vs (B) Other. Evidence: \"Checkerboard ... c-VEP\" and HED \"Visual-presentation\" directly indicate visual stimulation. Winner: Visual. Alignment: aligned with few-shot stimulus-channel convention. Confidence supported by multiple explicit cues.\n\nType candidates: (A) Perception vs (B) Attention. Evidence for Perception: \"c-VEP\" (visual evoked potential) + \"Influence of spatial frequency in visual stimuli\" implies sensory/evoked response characterization. Evidence for Attention: plausible in cVEP BCI use-cases, but not explicitly stated. Winner: Perception because the metadata emphasizes visually evoked potentials and stimulus parameters rather than explicit attentional manipulation. Confidence moderate due to remaining ambiguity."}},"canonical_name":null,"name_confidence":0.86,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"FernandezRodriguez2025"}}
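The record above is plain JSON, so its fields can be pulled out with standard tooling, and the README's **Code** field points at a MOABB dataset class. Below is a minimal sketch, assuming the response was saved to a local file (the path is hypothetical) and that the installed MOABB version (the README notes the dataset was added in 1.2.0) actually exposes `MartinezCagigal2023Checkercvep` and the `CVEP` paradigm under these names; treat it as a starting point, not a verified recipe.

```python
import json

# Sketch imports: the dataset class name is taken from the README's
# "Code" field and may differ (or be absent) in your MOABB install.
from moabb.datasets import MartinezCagigal2023Checkercvep
from moabb.paradigms import CVEP

# Parse the API response shown above (hypothetical saved-response path).
with open("nm000240.json") as f:
    record = json.load(f)["data"]
print(record["dataset_id"])                      # "nm000240"
print(record["demographics"]["subjects_count"])  # 16
print(record["total_duration_s"] / 3600)         # total EEG duration in hours (~13.4)

dataset = MartinezCagigal2023Checkercvep()
paradigm = CVEP()

# X: epoched EEG (n_trials, 16 channels, samples at 256 Hz);
# y: event labels (0.0 / 1.0 per the README); metadata: subject/session/run
# table, with 8 sessions per subject and 3 runs per session (2 train, 1 test).
X, y, metadata = paradigm.get_data(dataset=dataset, subjects=[1])
print(X.shape, sorted(set(y)))
```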