{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c61","dataset_id":"nm000120","associated_paper_doi":null,"authors":["Vangelis P. Oikonomou","Georgios Liaros","Kostantinos Georgiadis","Elisavet Chatzilari","Katerina Adam","Spiros Nikolopoulos","Ioannis Kompatsiaris"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":11,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000120","osf_url":null,"github_url":null,"paper_url":null},"funding":["H2020-ICT-2014-644780"],"ingestion_fingerprint":"99f7bac106780e93313af92677e2afb5c38382443bcb01782c52d03d59db9cca","license":"ODC-By-1.0","n_contributing_labs":null,"name":"Oikonomou2016 – SSVEP MAMEM 2 dataset","readme":"# SSVEP MAMEM 2 dataset\nSSVEP MAMEM 2 dataset.\n## Dataset Overview\n- **Code**: MAMEM2\n- **Paradigm**: ssvep\n- **DOI**: 10.48550/arXiv.1602.00904\n- **Subjects**: 11\n- **Sessions per subject**: 1\n- **Events**: 6.66=1, 7.50=2, 8.57=3, 10.00=4, 12.00=5\n- **Trial interval**: [1, 4] s\n- **Runs per session**: 5\n- **File format**: MAT\n## Acquisition\n- **Sampling rate**: 250.0 Hz\n- **Number of channels**: 256\n- **Channel types**: eeg=256\n- **Channel names**: E1, E10, E100, E101, E102, E103, E104, E105, E106, E107, E108, E109, E11, E110, E111, E112, E113, E114, E115, E116, E117, E118, E119, E12, E120, E121, E122, E123, E124, E125, E126, E127, E128, E129, E13, E130, E131, E132, E133, E134, E135, E136, E137, E138, E139, E14, E140, E141, E142, E143, E144, E145, E146, E147, E148, E149, E15, E150, E151, E152, E153, E154, E155, E156, E157, E158, E159, E16, E160, E161, E162, E163, E164, E165, E166, E167, E168, E169, E17, E170, E171, E172, E173, E174, E175, E176, E177, E178, E179, E18, E180, E181, E182, E183, E184, E185, E186, E187, E188, E189, E19, E190, E191, E192, E193, E194, E195, E196, E197, E198, E199, E2, E20, E200, E201, E202, E203, E204, E205, E206, E207, E208, E209, E21, E210, E211, E212, E213, E214, E215, E216, E217, E218, E219, E22, E220, E221, E222, E223, E224, E225, E226, E227, E228, E229, E23, E230, E231, E232, E233, E234, E235, E236, E237, E238, E239, E24, E240, E241, E242, E243, E244, E245, E246, E247, E248, E249, E25, E250, E251, E252, E253, E254, E255, E256, E26, E27, E28, E29, E3, E30, E31, E32, E33, E34, E35, E36, E37, E38, E39, E4, E40, E41, E42, E43, E44, E45, E46, E47, E48, E49, E5, E50, E51, E52, E53, E54, E55, E56, E57, E58, E59, E6, E60, E61, E62, E63, E64, E65, E66, E67, E68, E69, E7, E70, E71, E72, E73, E74, E75, E76, E77, E78, E79, E8, E80, E81, E82, E83, E84, E85, E86, E87, E88, E89, E9, E90, E91, E92, E93, E94, E95, E96, E97, E98, E99\n- **Montage**: GSN-HydroCel-256\n- **Hardware**: EGI 300 Geodesic EEG System (GES 300)\n- **Reference**: Cz\n- **Line frequency**: 50.0 Hz\n- **Impedance threshold**: 80.0 kOhm\n- **Cap manufacturer**: EGI\n- **Cap model**: HydroCel Geodesic Sensor Net (HCGSN)\n## Participants\n- **Number of subjects**: 11\n- **Health status**: healthy\n- **Age**: min=24, max=39\n- **Gender distribution**: male=8, female=3\n- **Handedness**: {'right': 10, 'left': 1}\n## Experimental Protocol\n- **Paradigm**: ssvep\n- **Number of classes**: 5\n- **Class labels**: 6.66, 7.50, 8.57, 10.00, 12.00\n- **Trial duration**: 5.0 s\n- **Study design**: Subjects focus attention on visual stimuli 
flickering at different frequencies (6.66, 7.50, 8.57, 10.00, 12.00 Hz) to select commands. Each stimulus is presented for 5 seconds, followed by 5 seconds of rest.\n- **Feedback type**: none\n- **Stimulus type**: flickering box\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Stimulus presentation**: SoftwareName=Microsoft Visual Studio 2010 with OpenGL, device=22 inch LCD monitor, refresh_rate=60 Hz, resolution=1680x1080\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  6.66\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/6_66\n  7.50\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/7_50\n  8.57\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8_57\n  10.00\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_00\n  12.00\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_00\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: ssvep\n- **Stimulus frequencies**: [6.66, 7.5, 8.57, 10.0, 12.0] Hz\n- **Number of targets**: 5\n- **Number of repetitions**: 3\n## Data Structure\n- **Trials**: 1104\n- **Trials context**: Each session includes 23 trials (8 adaptation trials excluded from analysis). 5 sessions per subject (with exceptions: S001=3 sessions, S003=4 sessions, S004=4 sessions). Total: 1104 trials of 5 seconds each.\n## Preprocessing\n- **Data state**: raw\n- **Preprocessing applied**: False\n## Signal Processing\n- **Classifiers**: LDA, SVM, Random Forest, kNN, Naive Bayes, AdaBoost, Decision Trees, CCA\n- **Feature extraction**: PWelch, Periodogram, FFT, Goertzel, PYULEAR (Yule-AR), STFT, DWT, PSD, Wavelet, Spectrogram\n- **Frequency bands**: analyzed=[5.0, 48.0] Hz\n- **Spatial filters**: CAR, CSP, Minimum Energy\n## Cross-Validation\n- **Method**: leave-one-subject-out\n- **Evaluation type**: cross_subject\n## Performance (Original Study)\n- **Accuracy**: 74.42%\n- **Mean Accuracy Default Config**: 72.47\n- **Mean Accuracy Optimal Config**: 74.42\n- **Processing Time Msec**: 68\n## BCI Application\n- **Applications**: command_selection\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Visual\n- **Type**: Research\n## Documentation\n- **DOI**: 10.48550/arXiv.1602.00904\n- **Associated paper DOI**: arXiv:1602.00904v2\n- **License**: ODC-By-1.0\n- **Investigators**: Vangelis P. Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos, Ioannis Kompatsiaris\n- **Institution**: Centre for Research and Technology Hellas (CERTH)\n- **Country**: GR\n- **Repository**: GitHub\n- **Data URL**: https://figshare.com/articles/dataset/3153409\n- **Publication year**: 2016\n- **Funding**: H2020-ICT-2014-644780\n- **Ethics approval**: Approved by the ethics committee of the Centre for Research and Technology Hellas, date 3/7/2015, grant H2020-ICT-2014-644780\n- **Keywords**: SSVEP, BCI, brain-computer interface, EEG, visual evoked potentials, signal processing, feature extraction, classification\n## Abstract\nBrain-computer interfaces (BCIs) have been gaining momentum in making human-computer interaction more natural, especially for people with neuro-muscular disabilities. 
This study focuses on SSVEP-based BCIs and performs a comparative evaluation of state-of-the-art algorithms for filtering, artifact removal, feature extraction, feature selection and classification. The dataset consists of 256-channel EEG signals from 11 subjects with 5 flickering frequencies (6.66, 7.50, 8.57, 10.00, 12.00 Hz).\n## Methodology\nLeave-one-subject-out cross-validation was used to evaluate a general-purpose BCI system without subject-specific training. The study systematically compared algorithms across all signal processing stages: (1) Signal filtering: FIR vs IIR filters; (2) Artifact removal: AMUSE vs FastICA; (3) Feature extraction: PWelch, Periodogram, PYULEAR, DWT, STFT, Goertzel; (4) Feature selection: entropy-based methods and PCA/SVD; (5) Classification: SVM, LDA, KNN, Naive Bayes, Random Forest, AdaBoost. The optimal configuration achieved 74.42% mean accuracy using an IIR-Elliptic filter, AMUSE artifact removal, PWelch feature extraction with nfft=512, segment length=350, overlap=0.75, and channel-138.\n## References\nOikonomou, V. P., Liaros, G., Georgiadis, K., Chatzilari, E., Adam, K., Nikolopoulos, S., & Kompatsiaris, I. (2016). Comparative evaluation of state-of-the-art algorithms for SSVEP-based BCIs. arXiv preprint arXiv:1602.00904.\nMAMEM Steady State Visually Evoked Potential EEG Database: https://archive.physionet.org/physiobank/database/mssvepdb/\nS. Nikolopoulos (2016). DataAcquisitionDetails.pdf: https://figshare.com/articles/dataset/MAMEM_EEG_SSVEP_Dataset_II_256_channels_11_subjects_5_frequencies_presented_simultaneously_/3153409?file=4911931\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.4.3 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":4726017199,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000120","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["ssvep"],"timestamps":{"digested_at":"2026-04-30T14:08:33.932545+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-29T22:39:07Z"},"total_files":55,"computed_title":"Oikonomou2016 – SSVEP MAMEM 2 dataset","nchans_counts":[{"val":256,"count":55}],"sfreq_counts":[{"val":250.0,"count":55}],"stats_computed_at":"2026-05-01T13:49:34.644733+00:00","total_duration_s":18393.036,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"726ab304c583ae3e","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.85,"modality":0.9,"type":0.75},"reasoning":{"few_shot_analysis":"No few-shot example is explicitly SSVEP/BCI. 
The closest convention match is the few-shot \"EEG Motor Movement/Imagery Dataset\" which treats BCI-oriented paradigms by labeling Modality from the stimulus channel (visual targets) rather than responses, and Type by the primary cognitive construct (Motor for movement/imagery). For the current dataset, the paradigm is SSVEP with flickering visual stimuli and explicit instruction to focus attention on one target; by this convention, the best candidate Types are Attention (covert attentional selection of a flicker target) vs Perception (visual evoked responses).","metadata_analysis":"Key metadata facts:\n- Population/health: \"Health status: healthy\" and \"Number of subjects: 11\".\n- Stimulus modality: \"Stimulus modalities: visual\" and \"Stimulus type: flickering box\".\n- Task demand: \"Subjects focus attention on visual stimuli flickering at different frequencies ... to select commands.\" Also \"Paradigm: ssvep\" and the events are the flicker frequencies (\"6.66\", \"7.50\", \"8.57\", \"10.00\", \"12.00\").","paper_abstract_analysis":"The included abstract emphasizes an engineering/BCI benchmarking goal: \"This study focuses on SSVEP-based BCIs\" and describes the dataset as \"256-channel EEG signals from 11 subjects with 5 flickering frequencies\". It also mentions BCIs as useful \"especially for people with neuro-muscular disabilities,\" but this is contextual motivation and does not state that the recruited participants had such a condition; metadata explicitly says participants were healthy.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\".\n- Few-shot pattern suggests: when explicitly healthy/controls with no disorder recruitment focus, label Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"Stimulus modalities: visual\" and \"flickering box\".\n- Few-shot pattern suggests: label Modality by the stimulus channel (e.g., visual tasks -> Visual).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"Subjects focus attention on visual stimuli flickering at different frequencies ... to select commands.\" Also paradigm is \"ssvep\" (steady-state visually evoked potentials).\n- Few-shot pattern suggests: perceptual discrimination/evoked sensory paradigms often map to Perception; tasks requiring attentional selection/control map to Attention.\n- Alignment: PARTIAL (both Attention and Perception plausible); no conflict with explicit facts, just ambiguity in construct emphasis. Chosen label reflects the explicit \"focus attention\" instruction typical of SSVEP target-selection BCI.","decision_summary":"Top-2 candidates per category (with head-to-head comparison):\n\nPathology:\n1) Healthy (WINNER) — evidence: \"Health status: healthy\"; also \"Subjects: 11\" with no diagnostic groups.\n2) Unknown (runner-up) — would apply if health status were not stated.\nDecision: Healthy.\n\nModality:\n1) Visual (WINNER) — evidence: \"Stimulus modalities: visual\", \"Stimulus type: flickering box\", and SSVEP flicker frequencies listed as events.\n2) Other (runner-up) — only if stimulus modality were unclear.\nDecision: Visual.\n\nType:\n1) Attention (WINNER) — evidence: explicit instruction \"Subjects focus attention on visual stimuli ... 
to select commands\"; SSVEP target selection typically driven by selective attention to one flicker.\n2) Perception (runner-up) — evidence: paradigm \"ssvep\" and steady-state visually evoked potentials are sensory-evoked responses to flicker.\nDecision: Attention, because the metadata foregrounds attentional selection (focus attention) as the operative construct rather than mere passive perception.\n\nConfidence justification:\n- Pathology confidence is high due to explicit statement \"Health status: healthy\".\n- Modality confidence is high due to multiple explicit visual-stimulus lines.\n- Type confidence is moderate-high because both Attention and Perception are plausible, but \"focus attention\" provides a direct cue favoring Attention."}},"canonical_name":null,"name_confidence":0.78,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Oikonomou2016_MAMEM2"}}
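The record's feature-extraction list names the Goertzel algorithm, which evaluates a single DFT bin in O(n) per frequency without computing a full FFT; with only the five flicker frequencies (6.66, 7.50, 8.57, 10.00, 12.00 Hz) to score at fs = 250 Hz, that is a natural fit. A minimal sketch, assuming one raw single-channel trial as input; the nearest-bin rounding and the absence of windowing are simplifications, not the original study's implementation:

```
import math

FS = 250.0                                     # sampling rate from the record
STIM_FREQS = [6.66, 7.50, 8.57, 10.00, 12.00]  # flicker frequencies (Hz)

def goertzel_power(x, f0, fs=FS):
    """Unnormalized power at the DFT bin nearest f0 (Hz), via the
    Goertzel recursion in a single pass over the samples."""
    n = len(x)
    k = round(n * f0 / fs)                     # nearest integer DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for sample in x:
        s1, s2 = sample + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def goertzel_features(x):
    """One spectral-power feature per candidate flicker frequency."""
    return {f0: goertzel_power(x, f0) for f0 in STIM_FREQS}
```

The Methodology field also reports the best-performing spectral recipe: PWelch with nfft=512, segment length 350, and overlap 0.75 at 250 Hz, i.e. a bin spacing of 250/512, roughly 0.49 Hz. The sketch below applies those parameters with scipy.signal.welch and classifies a trial by nearest-bin peak picking; the argmax rule and the synthetic test signal are assumptions for illustration, and the IIR-elliptic filtering, AMUSE artifact removal, and "channel-138" selection reported in the record are omitted:

```
import numpy as np
from scipy.signal import welch

FS = 250.0                                     # sampling rate from the record
STIM_FREQS = [6.66, 7.50, 8.57, 10.00, 12.00]  # flicker frequencies (Hz)

def classify_trial(x, fs=FS):
    """Welch PSD with the record's reported parameters (nperseg=350,
    75% overlap, nfft=512); return the flicker frequency whose nearest
    PSD bin carries the most power."""
    f, pxx = welch(x, fs=fs, nperseg=350, noverlap=int(0.75 * 350), nfft=512)
    scores = {f0: pxx[np.argmin(np.abs(f - f0))] for f0 in STIM_FREQS}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # Synthetic 5 s single-channel trial (the record's trial duration):
    # a 10 Hz "SSVEP" component in noise stands in for real data.
    rng = np.random.default_rng(0)
    t = np.arange(int(5 * FS)) / FS
    trial = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)
    print(classify_trial(trial))               # expected: 10.0
```

In the original study these spectral features feed the listed classifiers (LDA, SVM, Random Forest, and so on) under leave-one-subject-out cross-validation; the bare argmax above is only the simplest stand-in.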