{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4ccb","dataset_id":"nm000267","associated_paper_doi":null,"authors":["Jaeyoung Shin","Alexander von Lühmann","Benjamin Blankertz","Do-Won Kim","Jichai Jeong","Han-Jeong Hwang","Klaus-Robert Müller"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.1109/TNSRE.2016.2628057","datatypes":["eeg"],"demographics":{"subjects_count":29,"ages":[28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28],"age_min":28,"age_max":28,"age_mean":28.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000267","osf_url":null,"github_url":null,"paper_url":null},"funding":["Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF2014R1A6A3A03057524)","Ministry of Science, ICT & Future Planning (NRF-2015R1C1A1A02037032)","Brain Korea 21 PLUS Program through the NRF funded by the Ministry of Education","Korea University Grant","BMBF (#01GQ0850, Bernstein Focus: Neurotechnology)"],"ingestion_fingerprint":"56c14d85d97e030d792a91df10658db894dec354f4923aa9131e8e4dd7ffacdb","license":"GPL-3.0","n_contributing_labs":null,"name":"Shin et al. 2017 (Experiment A) — Open Access Dataset for EEG+NIRS Single-Trial Classification","readme":"Shin2017A\n=========\nMotor Imagey Dataset from Shin et al 2017.\nDataset Overview\n----------------\n  Code: Shin2017A\n  Paradigm: imagery\n  DOI: 10.1109/TNSRE.2016.2628057\n  Subjects: 29\n  Sessions per subject: 6\n  Events: left_hand=1, right_hand=2, subtraction=3, rest=4\n  Trial interval: [0, 10] s\n  File format: MATLAB\n  Data preprocessed: True\nAcquisition\n-----------\n  Sampling rate: 200.0 Hz\n  Number of channels: 30\n  Channel types: eeg=30, eog=2\n  Channel names: AFF1h, AFF2h, AFF5h, AFF6h, AFp1, AFp2, CCP3h, CCP4h, CCP5h, CCP6h, Cz, F3, F4, F7, F8, FCC3h, FCC4h, FCC5h, FCC6h, HEOG, P3, P4, P7, P8, POO1, POO2, PPO1h, PPO2h, Pz, T7, T8, VEOG\n  Montage: 10-5\n  Hardware: BrainAmp\n  Reference: linked mastoids\n  Ground: Fz\n  Sensor type: active electrodes\n  Line frequency: 50.0 Hz\n  Cap manufacturer: EASYCAP GmbH\n  Cap model: custom-made stretchy fabric cap\n  Auxiliary channels: EOG (4 ch, horizontal, vertical), ecg, respiration\nParticipants\n------------\n  Number of subjects: 29\n  Health status: healthy\n  Age: mean=28.5, std=3.7\n  Gender distribution: male=14, female=15\n  Handedness: {'right': 29, 'left': 1}\n  BCI experience: naive to MI experiment\n  Species: human\nExperimental Protocol\n---------------------\n  Paradigm: imagery\n  Number of classes: 2\n  Class labels: left_hand, right_hand\n  Trial duration: 10.0 s\n  Study design: Dataset A: left vs right hand motor imagery (kinesthetic imagery of opening and closing hands)\n  Feedback type: none\n  Stimulus type: visual arrow and fixation cross\n  Stimulus modalities: visual, auditory\n  Primary modality: visual\n  Synchronicity: cued\n  Mode: offline\n  Instructions: Subjects were instructed to perform kinesthetic MI (i.e., to imagine the opening and closing their hands as they were grabbing a ball) to ensure that actual MI, not visual MI, was performed. 
Subjects were asked to imagine hand gripping (opening and closing their hands) at a 1 Hz pace.\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  left_hand\n    ├─ Sensory-event\n    │  ├─ Experimental-stimulus\n    │  ├─ Visual-presentation\n    │  └─ Leftward, Arrow\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  right_hand\n    ├─ Sensory-event\n    │  ├─ Experimental-stimulus\n    │  ├─ Visual-presentation\n    │  └─ Rightward, Arrow\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n  subtraction\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Think\n          └─ Label/subtraction\n  rest\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Rest\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: motor_imagery\n  Number of repetitions: 20\n  Imagery tasks: left_hand, right_hand\n  Cue duration: 2.0 s\n  Imagery duration: 10.0 s\nData Structure\n--------------\n  Trials: {'per_session': 20, 'per_class_per_session': 10, 'total_per_class': 30}\n  Blocks per session: 10\n  Trials context: 10 blocks per session, each block containing 2 trials (one left, one right hand MI) randomized\nPreprocessing\n-------------\n  Data state: preprocessed\n  Preprocessing applied: True\n  Steps: common average reference, bandpass filtering (0.5-50 Hz), ICA-based EOG rejection, downsampling to 200 Hz\n  Highpass filter: 0.5 Hz\n  Lowpass filter: 50.0 Hz\n  Bandpass filter: [0.5, 50.0]\n  Filter type: Chebyshev type II\n  Filter order: 4\n  Artifact methods: ICA, EOG rejection\n  Re-reference: car\n  Downsampled to: 200.0 Hz\nSignal Processing\n-----------------\n  Classifiers: Shrinkage LDA\n  Feature extraction: CSP, log-variance\n  Frequency bands: mu=[8.0, 12.0] Hz; beta=[12.0, 25.0] Hz; analyzed=[8.0, 25.0] Hz\n  Spatial filters: CSP\nCross-Validation\n----------------\n  Method: 10x5-fold\n  Folds: 5\n  Evaluation type: within_subject\nPerformance (Original Study)\n----------------------------\n  Accuracy: 65.6%\n  EEG accuracy: 65.6%\n  HbR accuracy: 66.5%\n  HbO accuracy: 63.5%\n  EEG+HbR+HbO accuracy: 74.2%\nBCI Application\n---------------\n  Applications: motor_control\n  Environment: laboratory\n  Online feedback: False\nTags\n----\n  Pathology: Healthy\n  Modality: Motor\n  Type: Imagery\nDocumentation\n-------------\n  Description: Open access dataset for hybrid brain-computer interfaces (BCIs) using electroencephalography (EEG) and near-infrared spectroscopy (NIRS). 
Dataset includes two BCI experiments: left versus right hand motor imagery, and mental arithmetic versus resting state.\n  DOI: 10.1109/TNSRE.2016.2628057\n  License: GPL-3.0\n  Investigators: Jaeyoung Shin, Alexander von Lühmann, Benjamin Blankertz, Do-Won Kim, Jichai Jeong, Han-Jeong Hwang, Klaus-Robert Müller\n  Senior author: Klaus-Robert Müller\n  Contact: h2j@kumoh.ac.kr; klaus-robert.mueller@tu-berlin.de\n  Institution: Berlin Institute of Technology\n  Department: Machine Learning Group, Department of Computer Science\n  Address: 10587 Berlin, Germany\n  Country: DE\n  Repository: GitHub\n  Data URL: http://doc.ml.tu-berlin.de/hBCI\n  Publication year: 2017\n  Funding: Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF2014R1A6A3A03057524); Ministry of Science, ICT & Future Planning (NRF-2015R1C1A1A02037032); Brain Korea 21 PLUS Program through the NRF funded by the Ministry of Education; Korea University Grant; BMBF (#01GQ0850, Bernstein Focus: Neurotechnology)\n  Ethics approval: Ethics Committee of the Institute of Psychology and Ergonomics, Technical University of Berlin (approval number: SH_01_20150330); Declaration of Helsinki\n  Keywords: Brain-computer interface (BCI), electroencephalography (EEG), hybrid BCI, mental arithmetic, motor imagery, near-infrared spectroscopy (NIRS), open access dataset\nAbstract\n--------\nWe provide an open access dataset for hybrid brain-computer interfaces (BCIs) using electroencephalography (EEG) and near-infrared spectroscopy (NIRS). For this, we conducted two BCI experiments (left versus right hand motor imagery; mental arithmetic versus resting state). The dataset was validated using baseline signal analysis methods, with which classification performance was evaluated for each modality and a combination of both modalities. As already shown in previous literature, the capability of discriminating different mental states can be enhanced by using a hybrid approach compared to single-modality analyses. This makes the provided data highly suitable for hybrid BCI investigations. Since our open access dataset also comprises motion artifacts and physiological data, we expect that it can be used in a wide range of future validation approaches in multimodal BCI research.\nMethodology\n-----------\nTwenty-nine healthy subjects (all but one right-handed) participated in motor imagery and mental arithmetic tasks. EEG data were recorded at 1000 Hz using 30 active electrodes with a BrainAmp amplifier, referenced to linked mastoids. NIRS data were collected at 12.5 Hz using NIRScout with 14 sources and 16 detectors, resulting in 36 channels. Three sessions were conducted for each paradigm (MI and MA). Each session included 20 trials with 10 s task periods and 15-17 s rest periods. For MI, subjects performed kinesthetic hand gripping imagery at a 1 Hz pace. Visual instructions included arrows for MI and arithmetic problems for MA. Motion artifacts from eye/head movements were also recorded. Signal processing included CSP for spatial filtering, log-variance features, and a shrinkage LDA classifier with 10x5-fold cross-validation.\nReferences\n----------\nShin, J., von Lühmann, A., Blankertz, B., Kim, D.W., Jeong, J., Hwang, H.J. and Müller, K.R., 2017. Open access dataset for EEG+NIRS single-trial classification. 
IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(10), pp.1735-1745.\nGNU General Public License, Version 3 `<https://www.gnu.org/licenses/gpl-3.0.txt>`_\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0imagery","1arithmetic","2imagery","3arithmetic","4imagery","5arithmetic"],"size_bytes":2014054285,"source":"nemar","storage":{"backend":"s3","base":"s3://openneuro.org/nm000267","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-22T12:52:17.832763+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":174,"computed_title":"Shin et al. 2017 (Experiment A) — Open Access Dataset for EEG+NIRS Single-Trial Classification","nchans_counts":[{"val":32,"count":174}],"sfreq_counts":[{"val":200.0,"count":174}],"stats_computed_at":"2026-04-22T23:16:00.314362+00:00","total_duration_s":104520.13,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"254e8ac495066caf","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.9,"modality":0.85,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example by paradigm is the \"EEG Motor Movement/Imagery Dataset\" (Schalk et al.), which is a healthy-volunteer motor imagery dataset with visual cues (targets on a screen) and is labeled Pathology=Healthy, Modality=Visual, Type=Motor. Shin2017A matches this convention: it is explicitly a motor imagery BCI paradigm with visually presented arrows/fixation used as cues. This guides mapping motor imagery studies to Type=Motor (not Perception/Attention) and Modality determined by stimulus channel (visual cues) rather than the imagined movement itself.","metadata_analysis":"Key metadata facts:\n1) Population: \"Health status: healthy\" and also \"Twenty-nine ... healthy subjects participated\".\n2) Stimulus channel: \"Stimulus type: visual arrow and fixation cross\" and \"Primary modality: visual\".\n3) Research construct/task aim: \"Motor Imagery Dataset\" and \"Dataset A: left vs right hand motor imagery\" plus instructions: \"Subjects were instructed to perform kinesthetic MI ... imagine opening and closing their hands\".\nThese directly support Healthy participants, Visual stimulus modality, and Motor (motor imagery/BCI) study type.","paper_abstract_analysis":"Useful paper-like text is included in the README under Abstract/Methodology and aligns with the metadata: \"two BCI experiments (left versus right hand motor imagery; mental arithmetic versus resting state)\" and \"healthy subjects participated in motor imagery...\". 
This reinforces Type=Motor and Pathology=Healthy.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\" / \"healthy subjects participated\".\n- Few-shot pattern suggests: motor imagery datasets in examples are commonly Healthy (e.g., Schalk motor imagery example).\n- ALIGN.\n\nModality:\n- Metadata says: \"Stimulus type: visual arrow and fixation cross\" and \"Primary modality: visual\" (even though it also lists \"Stimulus modalities: visual, auditory\").\n- Few-shot pattern suggests: in motor imagery cueing paradigms, Modality is labeled by stimulus channel (often Visual) rather than Motor.\n- ALIGN (choose dominant/primary modality = Visual).\n\nType:\n- Metadata says: \"Paradigm: imagery\" and \"left vs right hand motor imagery (kinesthetic imagery...)\".\n- Few-shot pattern suggests: imagery/movement-focused datasets map to Type=Motor (see Schalk motor imagery example labeled Motor).\n- ALIGN.","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n1) Healthy (selected): supported by \"Health status: healthy\" and \"healthy subjects\".\n2) Unknown (runner-up): would apply only if no population info, but explicit health status is given.\nEvidence alignment: aligns with few-shot motor imagery conventions.\n\nModality:\n1) Visual (selected): \"Stimulus type: visual arrow and fixation cross\" + \"Primary modality: visual\".\n2) Multisensory (runner-up): metadata mentions \"Stimulus modalities: visual, auditory\", but visual is explicitly primary and the cueing is visual.\nEvidence alignment: aligns with few-shot convention to use stimulus channel.\n\nType:\n1) Motor (selected): \"Dataset A: left vs right hand motor imagery\" and kinesthetic MI instructions.\n2) Attention (runner-up): could be considered due to cued task structure, but study purpose is explicitly motor imagery/BCI classification.\nEvidence alignment: aligns with few-shot motor imagery example labeling.\n\nConfidence justifications:\n- Pathology 0.9: multiple explicit statements (\"Health status: healthy\"; \"healthy subjects participated\") + strong few-shot analog.\n- Modality 0.85: explicit primary modality and stimulus type are visual (2 strong quotes), minor ambiguity due to mention of auditory modality.\n- Type 0.9: multiple explicit MI/BCI statements (\"Motor Imagery Dataset\"; \"left vs right hand motor imagery\"; MI instructions) + strong few-shot analog."}},"canonical_name":null,"name_confidence":0.78,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Shin2017_Shin2017A"}}
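
The Preprocessing section of the embedded readme pins the filter down precisely: Chebyshev type II, order 4, 0.5-50 Hz bandpass, common average reference, 200 Hz output. A minimal SciPy sketch of that chain follows; the stopband attenuation (rs=40 dB) and zero-phase application are assumptions, since the record states neither, and the ICA-based EOG rejection step is omitted::

    import numpy as np
    from scipy.signal import cheby2, sosfiltfilt

    def preprocess(eeg: np.ndarray, fs: float = 200.0) -> np.ndarray:
        """Sketch of the readme's filter chain for data shaped (n_channels, n_samples)."""
        # Common average reference ("Re-reference: car").
        car = eeg - eeg.mean(axis=0, keepdims=True)
        # Chebyshev type II bandpass, order 4, 0.5-50 Hz, per the readme.
        # rs=40 dB stopband attenuation is an assumption; the record omits it.
        sos = cheby2(N=4, rs=40.0, Wn=[0.5, 50.0], btype="bandpass",
                     output="sos", fs=fs)
        # Zero-phase filtering is also an assumption (causal vs. zero-phase
        # is not stated in the record).
        return sosfiltfilt(sos, car, axis=-1)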
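
The Signal Processing and Cross-Validation sections describe the original recipe: CSP spatial filters, log-variance features, a shrinkage LDA classifier, and 10x5-fold within-subject evaluation on the 8-25 Hz band. Since the readme credits MOABB, a sketch of the same recipe on MOABB's Shin2017A loader is shown below; the CSP component count (n_components=6) is an assumption, as the record does not give one::

    from mne.decoding import CSP
    from moabb.datasets import Shin2017A
    from moabb.paradigms import LeftRightImagery
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline

    # Shin2017A requires explicit acceptance of the authors' terms of use.
    dataset = Shin2017A(accept=True)
    # Band-pass to the analyzed band from the readme (8-25 Hz).
    paradigm = LeftRightImagery(fmin=8.0, fmax=25.0)
    X, y, _ = paradigm.get_data(dataset=dataset, subjects=[1])

    # CSP(log=True) yields log-variance features of the CSP components;
    # 'lsqr' with automatic (Ledoit-Wolf) shrinkage gives a shrinkage LDA.
    clf = make_pipeline(
        CSP(n_components=6, log=True),  # component count is an assumption
        LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
    )
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10)  # "10x5-fold"
    scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    print(f"subject 1 mean accuracy: {scores.mean():.3f}")

Mean accuracy near the reported 65.6% EEG figure would be expected, though it depends on the assumed component count and band edges.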
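
The storage block points at s3://openneuro.org/nm000267 with raw_key dataset_description.json plus three sidecar dep_keys. A sketch of fetching those objects with boto3, assuming the bucket allows anonymous (unsigned) reads as public OpenNeuro buckets normally do::

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # Anonymous client: public OpenNeuro objects need no credentials.
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
    bucket, prefix = "openneuro.org", "nm000267"  # from storage.base
    keys = ["dataset_description.json",                     # storage.raw_key
            "README", "participants.json", "participants.tsv"]  # dep_keys
    for key in keys:
        s3.download_file(bucket, f"{prefix}/{key}", key)
        print(f"fetched s3://{bucket}/{prefix}/{key}")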