{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c6c","dataset_id":"nm000131","associated_paper_doi":null,"authors":["Lu Wang","Zhenhao Zhang","Dan Han","Zhijun Zhang","Zhifang Liu","Wei Liu"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":8,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000131","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"9f216b445b452ac7334ab6acd77ba18b6f3a6e6d4650f3ae068395b9b8c8146c","license":"CC BY 4.0","n_contributing_labs":null,"name":"Wang2021 – Combined SSVEP dataset with single stimulus location for two inputs","readme":"# Combined SSVEP dataset with single stimulus location for two inputs\nCombined SSVEP dataset with single stimulus location for two inputs.\n## Dataset Overview\n- **Code**: Wang2021Combined\n- **Paradigm**: ssvep\n- **DOI**: 10.1111/ejn.15030\n- **Subjects**: 8\n- **Sessions per subject**: 1\n- **Events**: 14.17=1, 12.14=2, 9.44=3, 7.73=4\n- **Trial interval**: [0.0, 5.0] s\n- **File format**: CNT\n## Acquisition\n- **Sampling rate**: 1000.0 Hz\n- **Number of channels**: 31\n- **Channel types**: eeg=31, eog=2\n- **Montage**: standard_1005\n- **Hardware**: eego mylab (ANT Neuro)\n- **Line frequency**: 50.0 Hz\n## Participants\n- **Number of subjects**: 8\n- **Health status**: healthy\n## Experimental Protocol\n- **Paradigm**: ssvep\n- **Task type**: covert_attention\n- **Number of classes**: 4\n- **Class labels**: 14.17, 12.14, 9.44, 7.73\n- **Trial duration**: 5.0 s\n- **Study design**: One-to-two combined SSVEP with overlapping stimuli\n- **Feedback type**: none\n- **Stimulus type**: overlapping SSVEP arrows (CRT 85 Hz)\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  14.17\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14_17\n  12.14\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_14\n  9.44\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9_44\n  7.73\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/7_73\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: ssvep\n- **Stimulus frequencies**: [14.17, 12.14, 9.44, 7.73] Hz\n## Data Structure\n- **Blocks per session**: 2\n## BCI Application\n- **Environment**: lab\n## Tags\n- **Pathology**: healthy\n- **Modality**: visual\n- **Type**: perception\n## Documentation\n- **DOI**: 10.1111/ejn.15030\n- **License**: CC BY 4.0\n- **Investigators**: Lu Wang, Zhenhao Zhang, Dan Han, Zhijun Zhang, Zhifang Liu, Wei Liu\n- **Senior author**: Zhijun Zhang\n- **Institution**: Shandong University\n- **Country**: CN\n- **Repository**: Zenodo\n- **Data URL**: https://zenodo.org/records/18873228\n- **Publication year**: 2021\n## References\nL. Wang, Z. Zhang, D. Han, Z. Zhang, Z. Liu, and W. Liu, \"Single stimulus location for two inputs: A combined brain-computer interface based on Steady-State Visual Evoked Potential (SSVEP),\" European Journal of Neuroscience, vol. 53, no. 3, pp. 
861-875, 2021. DOI: 10.1111/ejn.15030\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.4.3 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":2757289649,"source":"nemar","storage":{"backend":"s3","base":"s3://nemar/nm000131","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["ssvep"],"timestamps":{"digested_at":"2026-04-06T13:13:15.300407+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-18T00:40:53Z"},"total_files":22,"computed_title":"Wang2021 – Combined SSVEP dataset with single stimulus location for two inputs","nchans_counts":[{"val":31,"count":22}],"sfreq_counts":[{"val":1000.0,"count":22}],"stats_computed_at":"2026-04-04T21:29:34.904748+00:00","total_duration_s":22181.697,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"4c399546f5e80882","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.85,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"No few-shot example is an exact SSVEP/covert-attention BCI match. Closest conventions: (1) the DPX cognitive control dataset labeled Type=Attention uses a visually cued task designed around attentional control (“Dot Probe Continuous Performance Task...”), showing that when the paradigm is explicitly about attention allocation/control, Type=Attention is used. (2) the schizophrenia moving-dots dataset labeled Type=Perception shows that purely sensory discrimination is labeled Perception. This dataset explicitly states “Task type: covert_attention”, which follows the Attention convention more than Perception.","metadata_analysis":"Key population/task/stimulus facts from metadata/readme:\n- Population: “Health status: healthy” and also “Tags\\n- **Pathology**: healthy”.\n- Stimulus modality: “Stimulus modalities: visual” and “Primary modality: visual”, plus “Stimulus type: overlapping SSVEP arrows”. 
HED annotations also include “Visual-presentation”.\n- Task purpose/construct: “Task type: covert_attention” and “Study design: One-to-two combined SSVEP with overlapping stimuli” (SSVEP selection via attentional focus on a target frequency/location).","paper_abstract_analysis":"No useful paper information (only DOI/citation provided, no abstract included in the metadata).","evidence_alignment_check":"Pathology:\n- Metadata says: “Health status: healthy”; “Tags - Pathology: healthy”.\n- Few-shot suggests: when participants are not recruited for a disorder, label Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: “Stimulus modalities: visual”; “Primary modality: visual”; HED includes “Visual-presentation”.\n- Few-shot suggests: visual stimulus tasks map to Modality=Visual.\n- Alignment: ALIGN.\n\nType:\n- Metadata says: “Task type: covert_attention” (attention allocation is explicit) and SSVEP BCI selection with “overlapping stimuli”.\n- Few-shot suggests: explicit attentional-control paradigms are labeled Type=Attention (e.g., DPX task). Pure sensory discrimination would map to Perception (moving-dot discrimination example).\n- Alignment: PARTIAL—SSVEP could be construed as perceptual/evoked-response work, but the explicit “covert_attention” phrasing aligns better with Attention; no conflict with an explicit metadata fact about Type, because metadata does not explicitly assert a cognitive-construct label beyond “covert_attention”.","decision_summary":"Top-2 candidates per category with head-to-head selection:\n\nPathology:\n1) Healthy (WINNER) — Evidence: “Health status: healthy”; “Tags - Pathology: healthy”; dataset description indicates healthy participants.\n2) Unknown (runner-up) — would apply if no population info were given.\nDecision: Healthy. Evidence alignment: aligned with few-shot conventions.\nConfidence basis: 2+ explicit quotes + repeated consistent tagging.\n\nModality:\n1) Visual (WINNER) — Evidence: “Stimulus modalities: visual”; “Primary modality: visual”; “Stimulus type: overlapping SSVEP arrows”; HED “Visual-presentation”.\n2) Other (runner-up) — only if stimulus modality were ambiguous.\nDecision: Visual. Evidence alignment: aligned with few-shot conventions.\nConfidence basis: 3+ explicit visual-stimulus quotes/features.\n\nType:\n1) Attention (WINNER) — Evidence: explicit “Task type: covert_attention”; SSVEP BCI with overlapping stimuli implies selective attention to a target frequency.\n2) Perception (runner-up) — plausible because SSVEP is a visually evoked response paradigm and sometimes framed as perceptual coding.\nDecision: Attention because the metadata explicitly foregrounds covert attention as the task type.\nConfidence basis: 2 explicit task-design quotes pointing to attention; runner-up remains plausible, lowering confidence slightly."}},"canonical_name":[],"name_confidence":0.8,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Wang2021"}}
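Below is a minimal sketch of how the record above could be consumed programmatically. It assumes the JSON response has been saved to a local file named `nm000131.json` (a hypothetical filename chosen for illustration); every field it reads (`dataset_id`, `demographics.subjects_count`, `sfreq_counts`, `nchans_counts`, `total_duration_s`, `size_bytes`, `storage`, `tags`, and so on) appears verbatim in the record, but the snippet is not an official eegdash client, just plain-stdlib parsing.

```python
import json

# Sketch: parse the eegdash record shown above and print a short summary.
# Assumes the API response was saved to "nm000131.json" (hypothetical path);
# all key names come directly from the record itself.
with open("nm000131.json", encoding="utf-8") as f:
    record = json.load(f)

assert record["success"], "query did not succeed"
ds = record["data"]

# Sampling rates and channel counts are stored as lists of {"val", "count"}
# pairs (value -> number of files with that value).
sfreqs = {e["val"]: e["count"] for e in ds["sfreq_counts"]}
nchans = {e["val"]: e["count"] for e in ds["nchans_counts"]}

print(f"{ds['dataset_id']}: {ds['name']}")
print(f"  subjects: {ds['demographics']['subjects_count']}")
print(f"  tasks:    {', '.join(ds['tasks'])}")
print(f"  sampling: {sfreqs} Hz (rate -> file count)")
print(f"  channels: {nchans} (channel count -> file count)")
print(f"  duration: {ds['total_duration_s'] / 3600:.2f} h across {ds['total_files']} files")
print(f"  size:     {ds['size_bytes'] / 1e9:.2f} GB, license: {ds['license']}")
print(f"  storage:  {ds['storage']['base']} ({ds['storage']['backend']})")
print(f"  tags:     {ds['tags']['pathology']} / {ds['tags']['modality']} / {ds['tags']['type']}")
```

The same field layout would presumably apply to other records returned by this endpoint, so the sketch generalizes beyond `nm000131`, but that is an assumption based only on the single record shown here.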