{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c60","dataset_id":"nm000119","associated_paper_doi":null,"authors":["Vangelis P. Oikonomou","Georgios Liaros","Kostantinos Georgiadis","Elisavet Chatzilari","Katerina Adam","Spiros Nikolopoulos","Ioannis Kompatsiaris"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":11,"ages":[24,37,25,37,39,31,27,28,26,31,29],"age_min":24,"age_max":39,"age_mean":30.363636363636363,"species":null,"sex_distribution":null,"handedness_distribution":{"r":10,"l":1}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000119","osf_url":null,"github_url":null,"paper_url":null},"funding":["H2020-ICT-2014-644780"],"ingestion_fingerprint":"dc9b175242fe7cd91b2a565997dd7eecde06527774c78f11a2c471d6bbe8933a","license":"ODC-By-1.0","n_contributing_labs":null,"name":"Oikonomou2016 – SSVEP MAMEM 1 dataset","readme":"# SSVEP MAMEM 1 dataset\nSSVEP MAMEM 1 dataset.\n## Dataset Overview\n- **Code**: MAMEM1\n- **Paradigm**: ssvep\n- **DOI**: 10.48550/arXiv.1602.00904\n- **Subjects**: 11\n- **Sessions per subject**: 1\n- **Events**: 6.66=1, 7.50=2, 8.57=3, 10.00=4, 12.00=5\n- **Trial interval**: [1, 4] s\n- **File format**: MATLAB .mat\n## Acquisition\n- **Sampling rate**: 250.0 Hz\n- **Number of channels**: 256\n- **Channel types**: eeg=256\n- **Channel names**: E1, E10, E100, E101, E102, E103, E104, E105, E106, E107, E108, E109, E11, E110, E111, E112, E113, E114, E115, E116, E117, E118, E119, E12, E120, E121, E122, E123, E124, E125, E126, E127, E128, E129, E13, E130, E131, E132, E133, E134, E135, E136, E137, E138, E139, E14, E140, E141, E142, E143, E144, E145, E146, E147, E148, E149, E15, E150, E151, E152, E153, E154, E155, E156, E157, E158, E159, E16, E160, E161, E162, E163, E164, E165, E166, E167, E168, E169, E17, E170, E171, E172, E173, E174, E175, E176, E177, E178, E179, E18, E180, E181, E182, E183, E184, E185, E186, E187, E188, E189, E19, E190, E191, E192, E193, E194, E195, E196, E197, E198, E199, E2, E20, E200, E201, E202, E203, E204, E205, E206, E207, E208, E209, E21, E210, E211, E212, E213, E214, E215, E216, E217, E218, E219, E22, E220, E221, E222, E223, E224, E225, E226, E227, E228, E229, E23, E230, E231, E232, E233, E234, E235, E236, E237, E238, E239, E24, E240, E241, E242, E243, E244, E245, E246, E247, E248, E249, E25, E250, E251, E252, E253, E254, E255, E256, E26, E27, E28, E29, E3, E30, E31, E32, E33, E34, E35, E36, E37, E38, E39, E4, E40, E41, E42, E43, E44, E45, E46, E47, E48, E49, E5, E50, E51, E52, E53, E54, E55, E56, E57, E58, E59, E6, E60, E61, E62, E63, E64, E65, E66, E67, E68, E69, E7, E70, E71, E72, E73, E74, E75, E76, E77, E78, E79, E8, E80, E81, E82, E83, E84, E85, E86, E87, E88, E89, E9, E90, E91, E92, E93, E94, E95, E96, E97, E98, E99\n- **Montage**: GSN-HydroCel-256\n- **Hardware**: EGI 300 Geodesic EEG System (GES 300)\n- **Line frequency**: 50.0 Hz\n- **Impedance threshold**: 80.0 kOhm\n- **Cap manufacturer**: EGI\n- **Cap model**: HydroCel Geodesic Sensor Net (HCGSN)\n## Participants\n- **Number of subjects**: 11\n- **Health status**: healthy\n- **Clinical population**: able-bodied subjects without any known neuro-muscular or mental disorders\n- **Age**: min=24, max=39\n- **Gender distribution**: male=8, female=3\n- **Handedness**: {'right': 10, 'left': 1}\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: ssvep\n- **Number of classes**: 5\n- 
**Class labels**: 6.66, 7.50, 8.57, 10.00, 12.00\n- **Trial duration**: 5.0 s\n- **Study design**: Subjects focus attention on a single violet box flickering at different frequencies (6.66, 7.50, 8.57, 10.00, 12.00 Hz) presented sequentially. Each frequency is presented for 5 seconds (trial) followed by 5 seconds rest, repeated 3 times per frequency, with 30 seconds rest between different frequencies.\n- **Feedback type**: none\n- **Stimulus type**: flickering box\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n- **Instructions**: Subjects were instructed to focus attention on the flickering box, limit movements, and avoid swallowing or blinking during visual stimulation\n- **Stimulus presentation**: SoftwareName=Microsoft Visual Studio 2010 with OpenGL, monitor=22 inch LCD monitor, refresh_rate=60 Hz, resolution=1680x1080 pixels\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  6.66\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/6_66\n  7.50\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/7_50\n  8.57\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/8_57\n  10.00\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/10_00\n  12.00\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Label/12_00\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: ssvep\n- **Stimulus frequencies**: [6.66, 7.5, 8.57, 10.0, 12.0] Hz\n- **Number of targets**: 5\n- **Number of repetitions**: 3\n## Data Structure\n- **Trials**: 1104\n- **Trials context**: Total 1104 trials across all subjects. Each session includes 23 trials (8 adaptation + 15 main). S001: 3 sessions, S003 and S004: 4 sessions, others: 5 sessions. Some sessions excluded due to technical issues.\n## Preprocessing\n- **Data state**: raw\n- **Preprocessing applied**: False\n## Signal Processing\n- **Classifiers**: LDA, SVM, Random Forest, kNN, Naive Bayes, CCA, AdaBoost, Decision Trees\n- **Feature extraction**: Periodogram, Welch Spectrum, Goertzel algorithm, Yule-AR Spectrum, FFT, PSD, Discrete Wavelet Transform\n- **Frequency bands**: analyzed=[5.0, 48.0] Hz\n- **Spatial filters**: CAR, CSP, Minimum Energy\n## Cross-Validation\n- **Method**: leave-one-subject-out\n- **Evaluation type**: cross_subject\n## Performance (Original Study)\n- **Default Accuracy**: 72.47\n- **Optimal Accuracy**: 79.47\n## BCI Application\n- **Applications**: communication\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Visual\n- **Type**: Perception\n## Documentation\n- **Description**: Comparative evaluation of state-of-the-art algorithms for SSVEP-based BCIs\n- **DOI**: 10.6084/m9.figshare.2068677.v1\n- **Associated paper DOI**: 10.48550/arXiv.1602.00904\n- **License**: ODC-By-1.0\n- **Investigators**: Vangelis P. 
Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos, Ioannis Kompatsiaris\n- **Senior author**: Ioannis Kompatsiaris\n- **Institution**: Centre for Research and Technology Hellas (CERTH)\n- **Country**: GR\n- **Repository**: Figshare\n- **Data URL**: https://dx.doi.org/10.6084/m9.figshare.2068677.v1\n- **Publication year**: 2016\n- **Funding**: H2020-ICT-2014-644780\n- **Ethics approval**: Centre for Research and Technology Hellas ethics committee, dated 3/7/2015, grant H2020-ICT-2014-644780\n- **Keywords**: SSVEP, BCI, EEG, brain-computer interface, comparative evaluation, state-of-the-art algorithms\n## Abstract\nBrain-computer interfaces (BCIs) have been gaining momentum in making human-computer interaction more natural, especially for people with neuro-muscular disabilities. This report focuses on SSVEP-based BCIs and performs a comparative evaluation of the most promising algorithms. A dataset of 256-channel EEG signals from 11 subjects is provided, along with a processing toolbox for reproducing results and supporting further experimentation.\n## Methodology\nEmpirical approach where each signal processing parameter (filtering, artifact removal, feature extraction, feature selection, classification) is studied independently by keeping all other parameters fixed. Leave-one-subject-out cross-validation used to evaluate system without subject-specific training. Multiple algorithms compared for each processing stage to obtain state-of-the-art baseline.\n## References\nOikonomou, V. P., Liaros, G., Georgiadis, K., Chatzilari, E., Adam, K., Nikolopoulos, S., & Kompatsiaris, I. (2016). Comparative evaluation of state-of-the-art algorithms for SSVEP-based BCIs. arXiv preprint arXiv:1602.00904.\nMAMEM Steady State Visually Evoked Potential EEG Database `<https://archive.physionet.org/physiobank/database/mssvepdb/>`_\nS. Nikolopoulos, 2016, DataAcquisitionDetails.pdf `<https://figshare.com/articles/dataset/MAMEM_EEG_SSVEP_Dataset_I_256_channels_11_subjects_5_frequencies_/2068677?file=3793738>`_\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.4.3 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":5751416681,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000119","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["ssvep"],"timestamps":{"digested_at":"2026-04-30T14:08:33.635830+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-29T22:39:13Z"},"total_files":47,"computed_title":"Oikonomou2016 – SSVEP MAMEM 1 dataset","nchans_counts":[{"val":256,"count":47}],"sfreq_counts":[{"val":250.0,"count":47}],"stats_computed_at":"2026-05-01T13:49:34.644703+00:00","total_duration_s":22405.392,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"bce606e142a72e56","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.75},"reasoning":{"few_shot_analysis":"Most similar few-shot conventions are:\n- The schizophrenia visual discrimination dataset is labeled Modality=Visual and Type=Perception for a task where participants view visual stimuli and make perceptual judgments (\"We used a visual discrimination task. Stimuli consisted of ... moving dots...\"). This guides mapping visually driven evoked responses to Perception rather than Motor.\n- The healthy auditory brainstem response dataset is labeled Type=Perception for primarily sensory-evoked responses to auditory stimuli. By analogy, SSVEP (steady-state visually evoked potentials) is a sensory-evoked visual paradigm and typically maps to Perception.\nNo few-shot example directly shows SSVEP/BCI, but the above examples establish the convention: sensory-evoked paradigms (visual/auditory) → Modality by stimulus channel (Visual/Auditory) and Type often → Perception.","metadata_analysis":"Pathology/population facts:\n- \"Health status: healthy\"\n- \"Clinical population: able-bodied subjects without any known neuro-muscular or mental disorders\"\n- \"Subjects: 11\"\n\nStimulus/modality facts:\n- \"Subjects focus attention on a single violet box flickering at different frequencies\"\n- \"Stimulus type: flickering box\"\n- \"Stimulus modalities: visual\" and \"Primary modality: visual\"\n\nTask/purpose facts (construct):\n- \"Paradigm: ssvep\" and \"Comparative evaluation of state-of-the-art algorithms for SSVEP-based BCIs\"\n- HED annotations label each class as \"Sensory-event\" and \"Visual-presentation\"","paper_abstract_analysis":"No separate paper abstract beyond the dataset-embedded abstract text. The embedded abstract emphasizes SSVEP-based BCI algorithm evaluation (\"Comparative evaluation of the most promising algorithms\") rather than a higher-level cognitive manipulation.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\"; \"able-bodied subjects without any known neuro-muscular or mental disorders\".\n- Few-shot pattern suggests: when participants are described as healthy/controls with no disorder focus → Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"Stimulus modalities: visual\"; \"flickering ... 
box\"; HED includes \"Visual-presentation\".\n- Few-shot pattern suggests: modality is determined by stimulus channel (e.g., visual discrimination → Visual; auditory ABR → Auditory).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"Paradigm: ssvep\"; \"focus attention on ... flickering ... frequencies\"; HED marks as \"Sensory-event\" / \"Experimental-stimulus\".\n- Few-shot pattern suggests: sensory-evoked/discrimination paradigms map to Perception; attention-control paradigms (e.g., executive control tasks) map to Attention.\n- Alignment: Mostly ALIGN with Perception (sensory-evoked SSVEP), with mild ambiguity because the instruction includes \"focus attention\" (could suggest Attention). No explicit cognitive-control/attentional manipulation is described beyond attending to the stimulus.","decision_summary":"Top-2 candidate labels and final decisions:\n\nPathology:\n1) Healthy — Supported by \"Health status: healthy\" and \"able-bodied subjects without any known neuro-muscular or mental disorders\".\n2) Unknown — Only if participant health were unspecified (not the case here).\nWinner: Healthy (explicitly stated). Evidence alignment: ALIGN.\n\nModality:\n1) Visual — Supported by \"Stimulus modalities: visual\", \"Primary modality: visual\", and \"flickering box\" / HED \"Visual-presentation\".\n2) Multisensory — Only if multiple stimulus channels were present (not described).\nWinner: Visual. Evidence alignment: ALIGN.\n\nType:\n1) Perception — SSVEP is a visually evoked sensory paradigm (\"Paradigm: ssvep\"; \"flickering ... box\"; HED \"Sensory-event\"), matching few-shot convention for sensory-evoked tasks → Perception.\n2) Attention — Because subjects are instructed to \"focus attention\" on the flicker, but there is no explicit attentional manipulation/construct beyond enabling the SSVEP response.\nWinner: Perception (stronger fit to sensory-evoked SSVEP/visual stimulation purpose). Evidence alignment: mostly ALIGN (minor ambiguity vs Attention).\n\nConfidence justification:\n- Pathology 0.9: 2+ explicit population quotes (\"Health status: healthy\"; \"without any known ... disorders\") plus clear few-shot convention.\n- Modality 0.9: 3+ explicit modality cues (\"Stimulus modalities: visual\"; \"Primary modality: visual\"; \"Visual-presentation\"/\"flickering box\") plus few-shot convention.\n- Type 0.75: explicit SSVEP/flicker sensory paradigm quotes support Perception, but Attention is a plausible runner-up due to \"focus attention\" wording."}},"canonical_name":null,"name_confidence":0.78,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Oikonomou2016_MAMEM1"}}