{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cbb","dataset_id":"nm000239","associated_paper_doi":null,"authors":["Víctor Martínez-Cagigal","Eduardo Santamaría-Vázquez","Sergio Pérez-Velasco","Diego Marcos-Martínez","Selene Moreno-Calderón","Roberto Hornero"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":16,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000239","osf_url":null,"github_url":null,"paper_url":null},"funding":["Ministerio de Ciencia e Innovación/Agencia Estatal de Investigación and ERDF (TED2021-129915B-I00, RTC2019-007350-1, PID2020-115468RB-I00)","CIBER-BBN through Instituto de Salud Carlos III"],"ingestion_fingerprint":"da2f8f575b1c0f3f3f955a6bfabf580d38a0606aa9ff5e3019267ceea9b41ca2","license":"CC-BY-NC-SA-4.0","n_contributing_labs":null,"name":"P-ary m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2023)","readme":"# P-ary m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2023)\nP-ary m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2023)\n## Dataset Overview\n- **Code**: MartinezCagigal2023Parycvep\n- **Paradigm**: cvep\n- **DOI**: https://doi.org/10.71569/025s-eq10\n- **Subjects**: 16\n- **Sessions per subject**: 5\n- **Events**: 0.0=100, 1.0=101, 2.0=102, 3.0=103, 4.0=104, 5.0=105, 6.0=106, 7.0=107, 8.0=108, 9.0=109, 10.0=110\n- **Trial interval**: (0, 1) s\n- **Runs per session**: 8\n## Acquisition\n- **Sampling rate**: 256.0 Hz\n- **Number of channels**: 16\n- **Channel types**: eeg=16\n- **Montage**: standard_1005\n- **Line frequency**: 50.0 Hz\n## Participants\n- **Number of subjects**: 16\n- **Health status**: healthy\n## Experimental Protocol\n- **Paradigm**: cvep\n- **Number of classes**: 11\n- **Class labels**: 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  0.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_0_0\n  1.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_1_0\n  2.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_2_0\n  3.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_3_0\n  4.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_4_0\n  5.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_5_0\n  6.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_6_0\n  7.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_7_0\n  8.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_8_0\n  9.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_9_0\n  10.0\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/intensity_10_0\n```\n## Documentation\n- **DOI**: 10.71569/025s-eq10\n- **Associated paper DOI**: 
10.1016/j.eswa.2023.120815\n- **License**: CC-BY-NC-SA-4.0\n- **Investigators**: Víctor Martínez-Cagigal, Eduardo Santamaría-Vázquez, Sergio Pérez-Velasco, Diego Marcos-Martínez, Selene Moreno-Calderón, Roberto Hornero\n- **Senior author**: Roberto Hornero\n- **Contact**: victor.martinez@gib.tel.uva.es\n- **Institution**: University of Valladolid\n- **Department**: Biomedical Engineering Group, ETSIT\n- **Address**: Paseo de Belén, 15, 47011, Valladolid, Spain\n- **Country**: ES\n- **Repository**: University of Valladolid\n- **Data URL**: https://doi.org/10.71569/025s-eq10\n- **Publication year**: 2023\n- **Funding**: Ministerio de Ciencia e Innovación/Agencia Estatal de Investigación and ERDF (TED2021-129915B-I00, RTC2019-007350-1, PID2020-115468RB-I00); CIBER-BBN through Instituto de Salud Carlos III\n- **Ethics approval**: Approved by the local ethics committee; all participants provided informed consent\n- **Acknowledgements**: This study was partially funded by Ministerio de Ciencia e Innovación/Agencia Estatal de Investigación and ERDF, and CIBER-BBN through Instituto de Salud Carlos III.\n- **How to acknowledge**: Please cite: Martínez-Cagigal et al. (2023). Non-binary m-sequences for more comfortable brain-computer interfaces based on c-VEPs. Expert Systems with Applications, 232, 120815. https://doi.org/10.1016/j.eswa.2023.120815\n## Notes\nAlthough the dataset was recorded in a single session, each condition is stored as a separate session to match the MOABB structure. Within each session, eight runs are available (six for training, two for testing). Added in MOABB 1.2.0.\n## References\nMartínez-Cagigal, V., Santamaría-Vázquez, E., Pérez-Velasco, S., Marcos-Martínez, D., Moreno-Calderón, S., & Hornero, R. (2023). Non-binary m-sequences for more comfortable brain-computer interfaces based on c-VEPs. *Expert Systems with Applications, 232*, 120815. https://doi.org/10.1016/j.eswa.2023.120815\nMartínez-Cagigal, V., Thielen, J., Santamaría-Vázquez, E., Pérez-Velasco, S., Desain, P., & Hornero, R. (2021). Brain-computer interfaces based on code-modulated visual evoked potentials (c-VEP): A literature review. *Journal of Neural Engineering*, 18(6), 061002. https://doi.org/10.1088/1741-2552/ac38cf\nMartínez-Cagigal, V. (2025). Dataset: Non-binary m-sequences for more comfortable brain-computer interfaces based on c-VEPs. https://doi.org/10.35376/10324/70945\nSantamaría-Vázquez, E., Martínez-Cagigal, V., Marcos-Martínez, D., Rodríguez-González, V., Pérez-Velasco, S., Moreno-Calderón, S., & Hornero, R. (2023). MEDUSA©: A novel Python-based software ecosystem to accelerate brain-computer interface and cognitive neuroscience research. *Computer Methods and Programs in Biomedicine, 230*, 107357. https://doi.org/10.1016/j.cmpb.2023.107357\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. *Journal of Open Source Software, 4*(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. *Scientific Data, 6*, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0base2","1base3","2base5","3base7","4base11"],"size_bytes":821069280,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000239","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["cvep"],"timestamps":{"digested_at":"2026-04-30T14:09:31.072497+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-25T20:28:01Z"},"total_files":640,"computed_title":"P-ary m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2023)","nchans_counts":[{"val":16,"count":640}],"sfreq_counts":[{"val":256.0,"count":608},{"val":600.0,"count":32}],"stats_computed_at":"2026-05-01T13:49:34.645983+00:00","total_duration_s":54342.57166666667,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"1e6c352530e7723b","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.7,"modality":0.9,"type":0.75},"reasoning":{"few_shot_analysis":"Most similar few-shot conventions are the visual discrimination/perception example (Meta-rdk) and other visually driven paradigms labeled as Visual+Perception/Motor depending on the scientific target. Like the Meta-rdk example (visual dot-motion discrimination labeled Modality=Visual, Type=Perception), this dataset is driven by visual stimuli (c-VEP/coded visual stimulation) and aims to measure/decipher evoked visual responses for BCI classification. Unlike the motor imagery dataset (EEG Motor Movement/Imagery Dataset labeled Type=Motor), there is no movement-execution/imagery construct as the primary focus; the primary signal of interest is a visually evoked potential, which aligns better with Perception/visual evoked processing labeling convention.","metadata_analysis":"Key explicit metadata facts:\n1) Population/health: \"Health status: healthy\" and \"Number of subjects: 16\".\n2) Stimulus modality: repeated HED annotations show \"Visual-presentation\" under each class label, and the dataset title/paradigm states \"c-VEP\" (code-modulated visual evoked potentials): \"Paradigm: cvep\" and HED tree includes \"Sensory-event -> Experimental-stimulus -> Visual-presentation\".\n3) Study purpose context: the cited paper indicates a BCI based on visual evoked potentials: \"Non-binary m-sequences for more comfortable brain-computer interfaces based on c-VEPs\".","paper_abstract_analysis":"No useful paper abstract text was provided in the dataset metadata (only the associated paper DOI and citation are listed).","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\".\n- Few-shot pattern suggests: datasets explicitly stating healthy participants are labeled \"Healthy\".\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: HED annotations repeatedly include \"Visual-presentation\" and paradigm is \"cvep\" (visual evoked potentials).\n- Few-shot pattern suggests: visual stimulus tasks are labeled \"Visual\" (e.g., visual discrimination in Meta-rdk).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"c-VEP\" / \"code-modulated visual evoked potentials\" and BCI-focused citation \"brain-computer interfaces based on c-VEPs\".\n- Few-shot pattern suggests: when the main measured construct is sensory-evoked responses to 
stimuli (rather than motor control), label as \"Perception\".\n- Alignment: Mostly ALIGN; minor ambiguity because BCI datasets could be considered \"Other\" if the aim were purely engineering/benchmarking, but the core construct is still visually evoked perceptual processing.","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n- Candidate 1: Healthy (evidence: \"Health status: healthy\").\n- Candidate 2: Unknown (would apply if no recruitment info).\nHead-to-head: Healthy wins due to explicit recruitment/health statement. (ALIGN)\n\nModality:\n- Candidate 1: Visual (evidence: \"Paradigm: cvep\"; HED: \"Visual-presentation\"; title includes \"c-VEP\").\n- Candidate 2: Other (if modality were unspecified).\nHead-to-head: Visual wins with multiple explicit visual-stimulus indicators. (ALIGN)\n\nType:\n- Candidate 1: Perception (evidence: visual evoked potentials: \"c-VEP\"; HED: \"Sensory-event\" + \"Visual-presentation\"; stimulus classes/labels indicate sensory stimulation decoding).\n- Candidate 2: Other (because it is a BCI dataset and could be framed as methodological/benchmark).\nHead-to-head: Perception wins because the primary cognitive/neural target is visually evoked response decoding rather than motor control, learning, or resting. (Mostly ALIGN)\n\nConfidence justification (quotes/features): Pathology supported by 1 explicit quote; Modality supported by 3+ explicit cues (cvep + repeated \"Visual-presentation\" + title); Type supported by c-VEP/visual-evoked/BCI phrasing but with some ambiguity between Perception vs Other."}},"canonical_name":null,"name_confidence":0.74,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"MartinezCagigal2023"}}
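The record above is plain JSON, so a few lines of standard-library Python suffice to pull out the fields worth checking before any analysis: dataset size, total duration, and the per-file sampling-rate tally (which shows 32 files at 600 Hz alongside the 608 files at the 256 Hz rate the README advertises). A minimal sketch, assuming the response has been saved to a hypothetical local file `nm000239.json`:

```python
import json

# Hypothetical local copy of the API response shown above.
with open("nm000239.json", encoding="utf-8") as f:
    record = json.load(f)

data = record["data"]

# Identity and bulk statistics, read straight from the record.
print(data["dataset_id"], "-", data["computed_title"])
print(f"{data['size_bytes'] / 1e9:.2f} GB across {data['total_files']} files")
print(f"{data['total_duration_s'] / 3600:.1f} hours of EEG in total")

# Per-file sampling-rate tally: 608 files at 256 Hz, 32 at 600 Hz.
for entry in data["sfreq_counts"]:
    print(f"{entry['count']} files at {entry['val']} Hz")

# Session names encode the m-sequence base (binary through base-11),
# since each stimulation condition is stored as its own session.
print("sessions:", ", ".join(data["sessions"]))
```

Inspecting `sfreq_counts` before any fixed-rate epoching is the main point of the sketch: the README reports only the nominal 256.0 Hz, so the 32 files at 600 Hz would otherwise go unnoticed.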