{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c5f","dataset_id":"nm000118","associated_paper_doi":null,"authors":["Masaki Nakanishi","Yijun Wang","Yu-Te Wang","Tzyy-Ping Jung"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":9,"ages":[28,28,28,28,28,28,28,28,28],"age_min":28,"age_max":28,"age_mean":28.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000118","osf_url":null,"github_url":null,"paper_url":null},"funding":["Swartz Foundation gift fund","U.S. Office of Naval Research (N00014-08-1215)","Army Research Office (W911NF-09-1-0510)","Army Research Laboratory (W911NF-10-2-0022)","DARPA (USDI D11PC20183)","UC Proof of Concept Grant Award (269228)","NIH Grant (1R21EY025056-01)","Recruitment Program for Young Professionals"],"ingestion_fingerprint":"96262fdd943ad8e5a8966163b8bcf0e333318dc20eb6238b363387dc40ea2093","license":"Unknown","n_contributing_labs":null,"name":"Nakanishi2015 – SSVEP Nakanishi 2015 dataset","readme":"# SSVEP Nakanishi 2015 dataset\nSSVEP Nakanishi 2015 dataset.\n## Dataset Overview\n- **Code**: Nakanishi2015\n- **Paradigm**: ssvep\n- **DOI**: 10.1371/journal.pone.0140703\n- **Subjects**: 9\n- **Sessions per subject**: 1\n- **Events**: 9.25=1, 11.25=2, 13.25=3, 9.75=4, 11.75=5, 13.75=6, 10.25=7, 12.25=8, 14.25=9, 10.75=10, 12.75=11, 14.75=12\n- **Trial interval**: [0.15, 4.3] s\n- **File format**: mat\n- **Data preprocessed**: True\n## Acquisition\n- **Sampling rate**: 256.0 Hz\n- **Number of channels**: 8\n- **Channel types**: eeg=8\n- **Channel names**: PO7, PO3, POz, PO4, PO8, O1, Oz, O2\n- **Montage**: standard_1020\n- **Hardware**: Biosemi ActiveTwo\n- **Reference**: CMS/DRL\n- **Sensor type**: EEG\n- **Line frequency**: 60.0 Hz\n## Participants\n- **Number of subjects**: 9\n- **Health status**: healthy\n- **Age**: mean=28.0\n- **Gender distribution**: male=9, female=1\n- **BCI experience**: not specified\n## Experimental Protocol\n- **Paradigm**: ssvep\n- **Number of classes**: 12\n- **Class labels**: 9.25, 11.25, 13.25, 9.75, 11.75, 13.75, 10.25, 12.25, 14.25, 10.75, 12.75, 14.75\n- **Trial duration**: 4.0 s\n- **Study design**: 12-class SSVEP target identification task with joint frequency and phase coding\n- **Feedback type**: none\n- **Stimulus type**: flickering\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Training/test split**: False\n- **Instructions**: Subjects were asked to gaze at one of the visual stimuli indicated by the stimulus program in a random order for 4s. At the beginning of each trial, a red square appeared for 1s at the position of the target stimulus. Subjects were asked to shift their gaze to the target within the same 1s duration. 
After that, all stimuli started to flicker simultaneously for 4s.\n- **Stimulus presentation**: SoftwareName=MATLAB with Psychophysics Toolbox, monitor=ASUS VG278 27-inch LCD, refresh_rate=60Hz, resolution=1280x800 pixels, stimulus_size=6x6 cm each, viewing_distance=60cm, arrangement=4x3 matrix virtual keypad\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  9.25\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9_25\n  11.25\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11_25\n  13.25\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13_25\n  9.75\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/9_75\n  11.75\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/11_75\n  13.75\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/13_75\n  10.25\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_25\n  12.25\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_25\n  14.25\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14_25\n  10.75\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_75\n  12.75\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_75\n  14.75\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/14_75\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: ssvep\n- **Stimulus frequencies**: [9.25, 9.75, 10.25, 10.75, 11.25, 11.75, 12.25, 12.75, 13.25, 13.75, 14.25, 14.75] Hz\n- **Frequency resolution**: 0.5 Hz\n- **Code type**: joint frequency and phase coding\n- **Number of targets**: 12\n## Data Structure\n- **Trials**: 180\n- **Blocks per session**: 15\n- **Trials context**: 15 blocks x 12 trials per block = 180 trials total per subject\n## Preprocessing\n- **Preprocessing applied**: True\n- **Steps**: downsampling, bandpass filtering\n- **Bandpass filter**: {'low_cutoff_hz': 6.0, 'high_cutoff_hz': 80.0}\n- **Filter type**: IIR\n- **Downsampled to**: 256.0 Hz\n- **Epoch window**: [0.135, 4.135]\n- **Notes**: Zero-phase forward and reverse IIR filtering was implemented using the filtfilt() function in MATLAB. 
Data epochs were extracted with a 135-ms latency delay considering the visual system delay.\n## Signal Processing\n- **Classifiers**: CCA, IT-CCA, MwayCCA, L1-MCCA, MsetCCA, CACC, Combination Method\n- **Feature extraction**: CCA, canonical correlation\n- **Spatial filters**: CCA\n## Cross-Validation\n- **Method**: leave-one-block-out\n- **Folds**: 15\n- **Evaluation type**: cross_validation\n## Performance (Original Study)\n- **Accuracy**: 92.78%\n- **Itr**: 91.68 bits/min\n- **R Square**: 0.87\n- **Combination Method Accuracy 1S**: 92.78\n- **Combination Method Itr 1S**: 91.68\n- **Standard Cca Accuracy 1S**: 55.0\n- **Standard Cca Itr 2S**: 50.4\n## BCI Application\n- **Applications**: communication\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Visual\n- **Type**: Research\n## Documentation\n- **Description**: A comparison study of canonical correlation analysis based methods for detecting steady-state visual evoked potentials. This study performed a comparison of existing CCA-based SSVEP detection methods using a 12-class SSVEP dataset recorded from 10 subjects in a simulated online BCI experiment.\n- **DOI**: 10.1371/journal.pone.0140703\n- **License**: Unknown\n- **Investigators**: Masaki Nakanishi, Yijun Wang, Yu-Te Wang, Tzyy-Ping Jung\n- **Contact**: wangyj@semi.ac.cn\n- **Institution**: University of California San Diego\n- **Department**: Swartz Center for Computational Neuroscience, Institute for Neural Computation; Center for Advanced Neurological Engineering, Institute of Engineering in Medicine\n- **Country**: US\n- **Repository**: Github\n- **Data URL**: https://github.com/mnakanishi/12JFPM_SSVEP/raw/master/data/\n- **Publication year**: 2015\n- **Funding**: Swartz Foundation gift fund; U.S. Office of Naval Research (N00014-08-1215); Army Research Office (W911NF-09-1-0510); Army Research Laboratory (W911NF-10-2-0022); DARPA (USDI D11PC20183); UC Proof of Concept Grant Award (269228); NIH Grant (1R21EY025056-01); Recruitment Program for Young Professionals\n- **Ethics approval**: Human Research Protections Program of the University of California San Diego\n- **Keywords**: SSVEP, BCI, CCA, canonical correlation analysis, brain-computer interface, steady-state visual evoked potentials\n## Abstract\nCanonical correlation analysis (CCA) has been widely used in the detection of the steady-state visual evoked potentials (SSVEPs) in brain-computer interfaces (BCIs). The standard CCA method, which uses sinusoidal signals as reference signals, was first proposed for SSVEP detection without calibration. However, the detection performance can be deteriorated by the interference from the spontaneous EEG activities. Recently, various extended methods have been developed to incorporate individual EEG calibration data in CCA to improve the detection performance. Although advantages of the extended CCA methods have been demonstrated in separate studies, a comprehensive comparison between these methods is still missing. This study performed a comparison of the existing CCA-based SSVEP detection methods using a 12-class SSVEP dataset recorded from 10 subjects in a simulated online BCI experiment. Classification accuracy and information transfer rate (ITR) were used for performance evaluation. The results suggest that individual calibration data can significantly improve the detection performance. 
Furthermore, the results showed that the combination method based on the standard CCA and the individual template based CCA (IT-CCA) achieved the highest performance.\n## Methodology\nA simulated online BCI experiment was conducted with 10 subjects. Each subject completed 15 blocks, with each block containing 12 trials (one for each of the 12 targets). Visual stimuli were presented as a 4x3 matrix on a 27-inch LCD monitor at 60Hz refresh rate. The 12 targets used joint frequency and phase coding (frequencies: 9.25-14.75Hz with 0.5Hz intervals; phases: 0 to 5.5π with 0.5π intervals). Each trial began with a 1s cue (red square) followed by 4s of flickering stimulation. EEG was recorded from 8 occipital electrodes at 2048Hz and downsampled to 256Hz for analysis. Seven CCA-based methods were compared using leave-one-block-out cross-validation (14 blocks for training, 1 for testing). Performance was evaluated using classification accuracy and ITR.\n## References\nMasaki Nakanishi, Yijun Wang, Yu-Te Wang and Tzyy-Ping Jung, \"A Comparison Study of Canonical Correlation Analysis Based Methods for Detecting Steady-State Visual Evoked Potentials,\" PLoS One, vol.10, no.10, e140703, 2015. `<http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0140703>`_\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.4.3 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":68534649,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000118","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["ssvep"],"timestamps":{"digested_at":"2026-04-30T14:08:33.562305+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-29T22:39:18Z"},"total_files":9,"computed_title":"Nakanishi2015 – SSVEP Nakanishi 2015 dataset","nchans_counts":[{"val":8,"count":9}],"sfreq_counts":[{"val":256.0,"count":9}],"stats_computed_at":"2026-05-01T13:49:34.644632+00:00","total_duration_s":7682.30859375,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"b0a8fcc467550ddb","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot by stimulus modality + goal is the schizophrenia visual discrimination dataset (Meta-rdk), labeled (Visual, Perception). While the paradigm differs (motion discrimination vs SSVEP flicker), the convention is to label visually driven perceptual/evoked-potential paradigms as Type=Perception when the main experimental manipulation is visual stimulus properties and the measured response is a sensory evoked EEG signal. 
This guides mapping the SSVEP target-identification (gaze-at-flicker) paradigm to Type=Perception rather than Motor.","metadata_analysis":"Key population facts: (1) \"Health status: healthy\". (2) \"Tags\\n- **Pathology**: Healthy\". (3) \"A simulated online BCI experiment was conducted with 10 subjects\" (no clinical recruitment mentioned).\n\nKey stimulus/modality facts: (1) \"Stimulus modalities: visual\" and \"Primary modality: visual\". (2) \"Stimulus type: flickering\". (3) \"Subjects were asked to gaze at one of the visual stimuli ... After that, all stimuli started to flicker simultaneously for 4s.\".\n\nKey task/construct facts: (1) \"Study design: 12-class SSVEP target identification task\". (2) \"A comparison study of canonical correlation analysis based methods for detecting steady-state visual evoked potentials\". (3) HED annotations repeatedly mark each class as \"Sensory-event\" and \"Visual-presentation\".","paper_abstract_analysis":"The included abstract/summary reinforces the construct as visual evoked-potential detection: \"detecting steady-state visual evoked potentials (SSVEPs) in brain-computer interfaces (BCIs)\" and describes comparing SSVEP detection methods, consistent with a visual perception/evoked-response purpose rather than memory, affect, or motor control.","evidence_alignment_check":"Pathology: Metadata says \"Health status: healthy\" and tags list \"Pathology: Healthy\". Few-shot pattern suggests when participants are non-clinical volunteers, label as Healthy. ALIGN.\n\nModality: Metadata explicitly states \"Stimulus modalities: visual\" / \"Primary modality: visual\" and describes \"visual stimuli\" that \"flicker\". Few-shot conventions label based on stimulus channel (e.g., visual tasks -> Visual). ALIGN.\n\nType: Metadata centers on SSVEP as a visually evoked response: \"SSVEP target identification task\" and \"detecting steady-state visual evoked potentials\". Few-shot convention (e.g., visual discrimination labeled Perception) suggests sensory-evoked/visual discrimination paradigms map to Perception when the core construct is stimulus-driven sensory processing. ALIGN (though Attention is a plausible alternative because gaze/focus is required, the explicit focus on evoked potential detection favors Perception).","decision_summary":"Top-2 candidates per category:\n\nPathology:\n1) Healthy (WIN) — supported by \"Health status: healthy\", \"Tags - Pathology: Healthy\", and no mention of any diagnosed clinical recruitment.\n2) Unknown (runner-up) — would apply only if health status were not stated.\nAlignment: few-shot and metadata align. Confidence basis: 2 explicit quotes + consistent context.\n\nModality:\n1) Visual (WIN) — supported by \"Stimulus modalities: visual\", \"Primary modality: visual\", and \"Stimulus type: flickering\" / \"gaze at one of the visual stimuli\".\n2) Other (runner-up) — only if stimulus channel were unspecified.\nAlignment: few-shot and metadata align. Confidence basis: 3+ explicit quotes.\n\nType:\n1) Perception (WIN) — supported by \"SSVEP target identification task\", \"detecting steady-state visual evoked potentials\", and HED \"Visual-presentation\" sensory-event structure; matches few-shot convention mapping visual stimulus-driven EEG paradigms to Perception.\n2) Attention (runner-up) — because the task requires sustained gaze/attentional focus on a target.\nAlignment: largely aligns; Perception is stronger because the dataset’s stated goal is SSVEP (sensory evoked response) detection. 
Confidence basis: 2 explicit quotes + strong few-shot analog."}},"canonical_name":null,"name_confidence":0.9,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Nakanishi2015"}}
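The README's closing line credits MOABB as its generator, and this dataset is exposed there as a dataset class. A minimal sketch of loading it through MOABB, assuming the moabb package is installed; the SSVEP paradigm keywords shown (fmin, fmax, n_classes) mirror the 6-80 Hz bandpass and 12 classes listed above, but the exact names should be checked against the installed release.

```python
# Minimal sketch: load the Nakanishi2015 record through MOABB (the toolkit
# named at the end of the README). Keyword names for the SSVEP paradigm are
# assumptions to be verified against the installed MOABB version.
from moabb.datasets import Nakanishi2015
from moabb.paradigms import SSVEP

dataset = Nakanishi2015()                             # 9 subjects, 8 channels, 256 Hz
paradigm = SSVEP(fmin=6.0, fmax=80.0, n_classes=12)   # mirrors the 6-80 Hz bandpass

# X: (n_trials, n_channels, n_samples); y: stimulus-frequency labels
X, y, meta = paradigm.get_data(dataset=dataset, subjects=[1])
print(X.shape, sorted(set(y)))
```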
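The Preprocessing notes describe zero-phase forward and reverse IIR bandpass filtering (6-80 Hz, MATLAB filtfilt()) followed by epoching with a 135 ms visual-latency shift. A sketch of equivalent steps with SciPy; the Butterworth order and the synthetic input array are assumptions, as the record does not state them.

```python
# Sketch of the zero-phase IIR bandpass and latency-shifted epoching
# described under Preprocessing. Filter order is an illustrative choice.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256.0  # Hz, sampling rate after downsampling
b, a = butter(N=4, Wn=[6.0, 80.0], btype="bandpass", fs=FS)

rng = np.random.default_rng(0)
raw = rng.standard_normal((8, int(4.3 * FS)))   # stand-in for one 8-channel trial
filtered = filtfilt(b, a, raw, axis=-1)          # zero-phase, like MATLAB filtfilt()

# Epoching with the 135 ms visual-latency shift ([0.135, 4.135] s window):
start = int(0.135 * FS)
epoch = filtered[:, start:start + int(4.0 * FS)]
```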
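Signal Processing lists standard CCA as the baseline detector: each trial is correlated against sine/cosine reference templates at every candidate stimulus frequency, and the best-correlated frequency wins. A self-contained sketch of that technique; FS, FREQS, reference_signals, and classify_trial are illustrative names rather than identifiers from the original MATLAB code, and the harmonic count is an assumption.

```python
# Standard CCA SSVEP detection with sinusoidal references (illustrative sketch).
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 256.0
FREQS = [9.25, 9.75, 10.25, 10.75, 11.25, 11.75,
         12.25, 12.75, 13.25, 13.75, 14.25, 14.75]

def reference_signals(freq, n_samples, n_harmonics=3):
    """Sine/cosine templates at freq and its harmonics: (n_samples, 2*n_harmonics)."""
    t = np.arange(n_samples) / FS
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def classify_trial(trial):
    """trial: (n_channels, n_samples) epoch; returns the detected frequency."""
    scores = []
    for f in FREQS:
        refs = reference_signals(f, trial.shape[1])
        cca = CCA(n_components=1)
        x_c, y_c = cca.fit_transform(trial.T, refs)
        # First canonical correlation = score for this candidate frequency
        scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
    return FREQS[int(np.argmax(scores))]
```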
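Cross-Validation specifies leave-one-block-out over the 15 blocks of 12 trials, so each of the 15 folds trains on 14 blocks (168 trials) and tests on 1 block (12 trials). A sketch using scikit-learn's LeaveOneGroupOut with block indices as the grouping variable.

```python
# Leave-one-block-out folds matching "Folds: 15" above.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

n_blocks, n_targets = 15, 12
blocks = np.repeat(np.arange(n_blocks), n_targets)   # block index per trial (180 total)
labels = np.tile(np.arange(n_targets), n_blocks)     # target index per trial

for train_idx, test_idx in LeaveOneGroupOut().split(labels, labels, groups=blocks):
    assert len(test_idx) == n_targets                # one complete block held out
```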
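The ITR figures are in bits/min, which for a fixed-accuracy selector is conventionally computed with the Wolpaw formula ITR = (60/T) * (log2 N + P log2 P + (1 - P) log2((1 - P)/(N - 1))). The per-selection time T is not stated in this record (results are quoted for 1 s and 2 s windows, and each trial also includes a 1 s gaze-shift cue), so T is left as a free parameter in the sketch below.

```python
# Wolpaw ITR in bits/min; selection_time_s is an assumption, not a value
# taken from this record.
import math

def itr_bits_per_min(n_classes: int, accuracy: float, selection_time_s: float) -> float:
    p, n = accuracy, n_classes
    if p <= 1.0 / n:
        return 0.0                      # at or below chance, report zero
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_time_s

# Example: 12 classes at 92.78% accuracy with an assumed 2 s selection
# (1 s data window + 1 s gaze shift) gives roughly 88.8 bits/min.
print(round(itr_bits_per_min(12, 0.9278, 2.0), 2))
```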