{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4ccc","dataset_id":"nm000268","associated_paper_doi":null,"authors":["Jaeyoung Shin","Alexander von Lühmann","Benjamin Blankertz","Do-Won Kim","Jichai Jeong","Han-Jeong Hwang","Klaus-Robert Müller"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.1109/TNSRE.2016.2628057","datatypes":["eeg"],"demographics":{"subjects_count":29,"ages":[28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28],"age_min":28,"age_max":28,"age_mean":28.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000268","osf_url":null,"github_url":null,"paper_url":null},"funding":["Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2014R1A6A3A03057524)","Ministry of Science, ICT & Future Planning (NRF-2015R1C1A1A02037032)","Brain Korea 21 PLUS Program through the NRF funded by the Ministry of Education","Korea University Grant","BMBF (#01GQ0850, Bernstein Focus: Neurotechnology)"],"ingestion_fingerprint":"6fba4940f1cafdc1c8fd564d1ee38684b36b27ba6490f4f83fac6b7eaacd0b26","license":"GPL-3.0","n_contributing_labs":null,"name":"Shin et al. 2017 (Experiment B) — Open Access Dataset for EEG+NIRS Single-Trial Classification","readme":"Shin2017B\n=========\nMental Arithmetic Dataset from Shin et al 2017.\nDataset Overview\n----------------\n  Code: Shin2017B\n  Paradigm: imagery\n  DOI: 10.1109/TNSRE.2016.2628057\n  Subjects: 29\n  Sessions per subject: 6\n  Events: left_hand=1, right_hand=2, subtraction=3, rest=4\n  Trial interval: [0, 10] s\n  Session IDs: 1arithmetic, 3arithmetic, 5arithmetic\n  File format: MATLAB\n  Data preprocessed: True\nAcquisition\n-----------\n  Sampling rate: 200.0 Hz\n  Number of channels: 30\n  Channel types: eeg=30, eog=2\n  Channel names: AFF1h, AFF2h, AFF5h, AFF6h, AFp1, AFp2, CCP3h, CCP4h, CCP5h, CCP6h, Cz, F3, F4, F7, F8, FCC3h, FCC4h, FCC5h, FCC6h, HEOG, P3, P4, P7, P8, POO1, POO2, PPO1h, PPO2h, Pz, T7, T8, VEOG\n  Montage: 10-5\n  Hardware: BrainAmp\n  Software: MATLAB R2013b\n  Reference: linked mastoids\n  Ground: Fz\n  Sensor type: active electrodes\n  Line frequency: 50.0 Hz\n  Cap manufacturer: EASYCAP GmbH\n  Cap model: custom-made stretchy fabric cap\n  Auxiliary channels: EOG (4 ch, horizontal, vertical), ecg, respiration\nParticipants\n------------\n  Number of subjects: 29\n  Health status: healthy\n  Age: mean=28.5, std=3.7\n  Gender distribution: male=14, female=15\n  Handedness: {'right': 29, 'left': 1}\n  BCI experience: naive to MI experiment\n  Species: human\nExperimental Protocol\n---------------------\n  Paradigm: imagery\n  Number of classes: 2\n  Class labels: subtraction, rest\n  Trial duration: 10.0 s\n  Trials per class: subtraction=30, rest=30\n  Study design: Dataset B: mental arithmetic (serial subtraction of one-digit number) versus baseline/rest task\n  Feedback type: none\n  Stimulus type: visual instruction (subtraction problem and fixation cross)\n  Stimulus modalities: visual, auditory\n  Primary modality: visual\n  Synchronicity: cued-synchronous\n  Mode: offline\n  Training/test split: False\n  Instructions: For the MA task, subjects memorized an initial subtraction (three-digit minus one-digit) displayed for 2s, then repeatedly subtracted the one-digit number from each result. 
For baseline, subjects rested with no specific thought.\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  left_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n  subtraction\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Think\n          └─ Label/subtraction\n  rest\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Rest\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: motor_imagery\n  Number of repetitions: 20\nData Structure\n--------------\n  Trials: {'per_session': 20, 'per_condition_session': 10, 'per_condition_total': 30}\n  Trials context: Each session: 1 min pre-experiment rest + 20 trials + 1 min post-experiment rest. Trial: 2s visual instruction + 10s task + 15-17s random rest\nPreprocessing\n-------------\n  Data state: preprocessed\n  Preprocessing applied: True\n  Steps: common average reference, bandpass filtering (0.5-50 Hz), ICA-based EOG rejection, downsampling to 200 Hz\n  Highpass filter: 0.5 Hz\n  Lowpass filter: 50.0 Hz\n  Bandpass filter: [0.5, 50.0]\n  Filter type: Chebyshev type II\n  Filter order: 4\n  Artifact methods: EOG correction, ICA\n  Re-reference: CAR\n  Downsampled to: 200.0 Hz\nSignal Processing\n-----------------\n  Classifiers: LDA, Shrinkage LDA\n  Feature extraction: CSP, log-variance\n  Frequency bands: analyzed=[4.0, 35.0] Hz\n  Spatial filters: CSP\nCross-Validation\n----------------\n  Method: 10x5-fold\n  Folds: 5\n  Evaluation type: within_subject\nPerformance (Original Study)\n----------------------------\n  MA EEG max accuracy: 75.9%\n  MA HbR max accuracy: 80.7%\n  MA HbO max accuracy: 83.6%\nBCI Application\n---------------\n  Applications: hybrid_bci_research\n  Environment: laboratory\n  Online feedback: False\nTags\n----\n  Pathology: Healthy\n  Modality: Cognitive\n  Type: Cognitive\nDocumentation\n-------------\n  Description: Open access dataset for hybrid brain-computer interfaces using EEG and NIRS with motor imagery and mental arithmetic tasks\n  DOI: 10.1109/TNSRE.2016.2628057\n  License: GPL-3.0\n  Investigators: Jaeyoung Shin, Alexander von Lühmann, Benjamin Blankertz, Do-Won Kim, Jichai Jeong, Han-Jeong Hwang, Klaus-Robert Müller\n  Senior author: Klaus-Robert Müller\n  Contact: h2j@kumoh.ac.kr; klaus-robert.mueller@tu-berlin.de\n  Institution: Berlin Institute of Technology\n  Department: Department of Computer Science, Machine Learning Group\n  Address: 10587 Berlin, Germany\n  Country: DE\n  Repository: GitHub\n  Data URL: http://doc.ml.tu-berlin.de/hBCI\n  Publication year: 2017\n  Funding: Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2014R1A6A3A03057524); Ministry of Science, ICT & Future Planning (NRF-2015R1C1A1A02037032); Brain Korea 21 PLUS Program through the NRF funded by the Ministry of Education; Korea University Grant; BMBF (#01GQ0850, Bernstein Focus: Neurotechnology)\n  Ethics approval: Ethics Committee of the Institute of Psychology and Ergonomics, Technical University of Berlin (approval number: SH_01_20150330)\n  Keywords: Brain-computer interface, BCI,
electroencephalography, EEG, hybrid BCI, mental arithmetic, motor imagery, near-infrared spectroscopy, NIRS, open access dataset\nAbstract\n--------\nOpen access dataset for hybrid brain-computer interfaces using EEG and NIRS. Includes two experiments: (1) left vs right hand motor imagery, (2) mental arithmetic vs resting state. Dataset validated using baseline signal analyses showing that the hybrid approach enhances discrimination of mental states. Also includes motion artifact and physiological data supporting a wide range of validation approaches.\nMethodology\n-----------\nTwenty-nine subjects performed six sessions alternating between motor imagery (dataset A: left/right hand) and mental arithmetic (dataset B: MA vs rest). Each session: 20 trials with 2s cue, 10s task, 15-17s rest. EEG recorded at 1000 Hz with 30 channels, downsampled to 200 Hz. Preprocessing: CAR, 0.5-50 Hz bandpass (4th-order Chebyshev II), ICA-based EOG rejection. Feature extraction: CSP with log-variance of the first/last 3 components using a 3s moving window (1s step). Classification: shrinkage LDA with 10x5-fold CV. Hybrid analysis combines EEG and NIRS outputs using a meta-classifier.\nReferences\n----------\nShin, J., von Lühmann, A., Blankertz, B., Kim, D.W., Jeong, J., Hwang, H.J. and Müller, K.R., 2017. Open access dataset for EEG+NIRS single-trial classification. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(10), pp.1735-1745.\nGNU General Public License, Version 3 `<https://www.gnu.org/licenses/gpl-3.0.txt>`_\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0imagery","1arithmetic","2imagery","3arithmetic","4imagery","5arithmetic"],"size_bytes":2014018782,"source":"nemar","storage":{"backend":"s3","base":"s3://openneuro.org/nm000268","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-22T12:52:18.342118+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":174,"computed_title":"Shin et al.
2017 (Experiment B) — Open Access Dataset for EEG+NIRS Single-Trial Classification","nchans_counts":[{"val":32,"count":174}],"sfreq_counts":[{"val":200.0,"count":174}],"stats_computed_at":"2026-04-22T23:16:00.314375+00:00","total_duration_s":104520.13,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"a191cf1ed7ebd09b","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Memory"],"confidence":{"pathology":0.85,"modality":0.8,"type":0.7},"reasoning":{"few_shot_analysis":"Closest few-shot convention match is the healthy motor imagery BCI dataset (“EEG Motor Movement/Imagery Dataset”), which maps an imagery-based BCI paradigm in healthy participants to Pathology=Healthy and Type=Motor when movement imagery is the primary construct. In Shin2017B, the paradigm is also BCI/imagery-style and healthy participants, but the main contrast for Dataset B is mental arithmetic vs rest (a cognitive/working-memory load manipulation), so the Type should follow the cognitive construct (working memory/mental calculation) rather than the mechanics of cued trials. For Modality, few-shot examples consistently label by stimulus channel (e.g., digit-span uses auditory digits → Auditory; motor imagery uses visual targets/cues → Visual). Shin2017B states visual instruction as the primary stimulus modality, guiding Modality=Visual.","metadata_analysis":"Key metadata facts:\n- Population: “Health status: healthy” and also “Tags ---- Pathology: Healthy”.\n- Task/construct: “Mental Arithmetic Dataset… Dataset B: mental arithmetic (serial subtraction of one-digit number) versus baseline/rest task” and “Class labels: subtraction, rest”.\n- Stimulus modality: “Stimulus type: visual instruction (subtraction problem and fixation cross)” plus “Primary modality: visual” (even though it also lists “Stimulus modalities: visual, auditory”).","paper_abstract_analysis":"No useful paper information beyond what is already included in the dataset README/embedded abstract text.","evidence_alignment_check":"Pathology:\n- Metadata says: “Health status: healthy” and “Pathology: Healthy”.\n- Few-shot pattern suggests: Imagery/BCI datasets with no diagnosis and explicitly healthy participants → Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: “Stimulus type: visual instruction (subtraction problem and fixation cross)” and “Primary modality: visual” (also notes “Stimulus modalities: visual, auditory”).\n- Few-shot pattern suggests: label modality by dominant stimulus channel; when cues are visual, choose Visual.\n- Alignment: MOSTLY ALIGN (auditory is mentioned, but primary and explicit stimulus description is visual, so Visual wins).\n\nType:\n- Metadata says: “mental arithmetic (serial subtraction…) versus baseline/rest task” and describes memorizing and repeatedly subtracting (“subjects memorized… then repeatedly subtracted…”).\n- Few-shot pattern suggests: when the paradigm is memory load / working memory (e.g., digit span) → Type=Memory; when movement imagery is the focus → Type=Motor.\n- Alignment: ALIGN (this dataset’s Dataset B is cognitive/mental arithmetic rather than motor imagery as the primary construct).","decision_summary":"Top-2 comparative selections:\n\n1) Pathology\n- Candidate A: Healthy\n  Evidence: “Health status: healthy”; “Tags ---- Pathology: Healthy”; “Subjects: 29” with no clinical recruitment.\n- Candidate B: Unknown\n  Evidence: would apply only if health status were not stated.\n- Decision: Healthy 
(metadata explicitly states healthy).\n- Confidence basis: 2+ explicit quotes + clear few-shot convention match.\n\n2) Modality\n- Candidate A: Visual\n  Evidence: “Stimulus type: visual instruction (subtraction problem and fixation cross)”; “Primary modality: visual”; trials include “2s visual instruction”.\n- Candidate B: Multisensory\n  Evidence: “Stimulus modalities: visual, auditory”.\n- Decision: Visual (dominant/primary stimulus channel is explicitly visual; auditory mention appears secondary/unspecified).\n- Confidence basis: multiple explicit quotes favor Visual; minor ambiguity due to mention of auditory.\n\n3) Type\n- Candidate A: Memory\n  Evidence: “mental arithmetic (serial subtraction…) versus baseline/rest task”; “subjects memorized an initial subtraction… then repeatedly subtracted…” (working-memory/mental calculation load).\n- Candidate B: Attention\n  Evidence: mental arithmetic also engages sustained attention/cognitive control; however not explicitly framed as attention.\n- Decision: Memory (closest allowed cognitive-construct label for mental arithmetic/working-memory load).\n- Confidence basis: explicit task description supports working-memory/mental calculation; some overlap with attention reduces certainty."}},"canonical_name":null,"name_confidence":0.84,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Shin2017_Shin2017B"}}
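
The embedded README was generated by MOABB, which ships a Shin2017B dataset class. A minimal loading sketch follows, assuming the current MOABB API still requires explicit acceptance of the dataset's GPL terms via an accept flag and still returns the {subject: {session: {run: Raw}}} layout; both details may differ across MOABB versions.

# Minimal loading sketch using MOABB (the tool that generated the README above).
# The accept=True flag and the nested dict layout are assumptions about the
# current MOABB API and may differ in other versions.
from moabb.datasets import Shin2017B

dataset = Shin2017B(accept=True)          # acknowledge the GPL download terms
data = dataset.get_data(subjects=[1])     # {subject: {session: {run: Raw}}}
for session, runs in data[1].items():
    for run, raw in runs.items():
        print(session, run, raw.info["sfreq"], len(raw.ch_names))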
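The storage block points at the public OpenNeuro S3 bucket (backend "s3", base "s3://openneuro.org/nm000268", raw_key "dataset_description.json"). A hedged fetch sketch with boto3 using anonymous access; the "nm000268" prefix is copied verbatim from the record, and whether it resolves as-is in the bucket is an assumption.

# Fetch the record's raw_key from the public OpenNeuro bucket.
# Bucket and key are taken verbatim from the storage block; unsigned
# (anonymous) access is assumed to be sufficient for this public bucket.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
s3.download_file("openneuro.org", "nm000268/dataset_description.json",
                 "dataset_description.json")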
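The README's Preprocessing section (common average reference, 0.5-50 Hz 4th-order Chebyshev type II bandpass, downsampling from 1000 Hz to 200 Hz) maps onto a short SciPy chain. A sketch under stated assumptions: the 40 dB stopband attenuation is my choice (the metadata gives no value), zero-phase filtering stands in for whatever filter direction the original pipeline used, and the ICA-based EOG rejection step is omitted.

# Rough re-implementation of the described preprocessing chain, not the
# authors' original code. rs=40 dB stopband attenuation is an assumption.
import numpy as np
from scipy import signal

def preprocess(eeg, fs_in=1000.0, fs_out=200.0):
    # Common average reference: subtract the mean across channels.
    eeg = eeg - eeg.mean(axis=0, keepdims=True)
    # 4th-order Chebyshev type II bandpass, 0.5-50 Hz, applied zero-phase.
    sos = signal.cheby2(4, 40, [0.5, 50.0], btype="bandpass",
                        fs=fs_in, output="sos")
    eeg = signal.sosfiltfilt(sos, eeg, axis=1)
    # Downsample 1000 Hz -> 200 Hz (polyphase resampling stands in for
    # whatever decimation the original pipeline used).
    return signal.resample_poly(eeg, int(fs_out), int(fs_in), axis=1)

x = np.random.randn(30, 10000)   # 10 s of 30-channel data at 1000 Hz
print(preprocess(x).shape)       # -> (30, 2000)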
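The Signal Processing and Cross-Validation sections (CSP log-variance features, shrinkage LDA, 10x5-fold within-subject CV) correspond to a standard MNE + scikit-learn pipeline. A minimal sketch on placeholder random epochs; using 6 CSP components approximates the "first/last 3 components" in the methodology, since mne.decoding.CSP orders components by discriminability rather than exposing that split directly.

# CSP + shrinkage LDA with 10x5-fold CV, sketched on synthetic epochs.
# X shape: (n_trials, n_channels, n_samples); y: subtraction-vs-rest labels.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 30, 2000))   # placeholder: 60 trials, 10 s @ 200 Hz
y = np.repeat([0, 1], 30)                 # 30 subtraction + 30 rest trials

clf = make_pipeline(
    CSP(n_components=6, log=True),        # log-power (log-variance) features
    LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),  # shrinkage LDA
)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)  # 10x5-fold
scores = cross_val_score(clf, X, y, cv=cv)
print(f"mean accuracy: {scores.mean():.3f}")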