{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4c9e","dataset_id":"nm000201","associated_paper_doi":null,"authors":["Young-Eun Lee","Gi-Hwan Shin","Minji Lee","Seong-Whan Lee"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":24,"ages":[28,22,29,21,26,27,24,24,22,23,32,24,26,24,21,25,23,23,24,24,22,28,19,27],"age_min":19,"age_max":32,"age_mean":24.5,"species":null,"sex_distribution":{"f":10,"m":14},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000201","osf_url":null,"github_url":null,"paper_url":null},"funding":["IITP No. 2017-0-00451","IITP No. 2015-0-00185","IITP No. 2019-0-00079"],"ingestion_fingerprint":"f8a99dad0592f4a574fd2008c71ebd09f284d14df6816b0ad3e9eebd9d527ef2","license":"CC BY 4.0","n_contributing_labs":null,"name":"ERP paradigm of the Mobile BCI dataset","readme":"# ERP paradigm of the Mobile BCI dataset\nERP paradigm of the Mobile BCI dataset.\n## Dataset Overview\n- **Code**: Lee2021Mobile-ERP\n- **Paradigm**: p300\n- **DOI**: 10.1038/s41597-021-01094-4\n- **Subjects**: 24\n- **Sessions per subject**: 5\n- **Events**: Target=2, NonTarget=1\n- **Trial interval**: [0, 1.0] s\n- **File format**: BrainVision\n## Acquisition\n- **Sampling rate**: 100.0 Hz\n- **Number of channels**: 73\n- **Channel types**: eeg=73\n- **Montage**: standard_1005\n- **Hardware**: BrainAmp (Brain Product GmbH)\n- **Reference**: FCz\n- **Ground**: Fpz\n- **Sensor type**: Ag/AgCl\n- **Line frequency**: 60.0 Hz\n- **Impedance threshold**: 50 kOhm\n- **Electrode material**: Ag/AgCl\n- **Auxiliary channels**: EOG (4 ch, vertical, horizontal)\n## Participants\n- **Number of subjects**: 24\n- **Health status**: healthy\n- **Age**: mean=24.5, std=2.9, min=19, max=32\n- **Gender distribution**: male=14, female=10\n## Experimental Protocol\n- **Paradigm**: p300\n- **Number of classes**: 2\n- **Class labels**: Target, NonTarget\n- **Trial duration**: 1.0 s\n- **Study design**: BCI during motion (standing/walking/running)\n- **Stimulus type**: visual oddball\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: offline\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\n```\n## BCI Application\n- **Environment**: mobile\n- **Online feedback**: False\n## Tags\n- **Pathology**: healthy\n- **Modality**: visual\n- **Type**: perception\n## Documentation\n- **DOI**: 10.1038/s41597-021-01094-4\n- **License**: CC BY 4.0\n- **Investigators**: Young-Eun Lee, Gi-Hwan Shin, Minji Lee, Seong-Whan Lee\n- **Senior author**: Seong-Whan Lee\n- **Institution**: Korea University\n- **Country**: KR\n- **Repository**: OSF\n- **Data URL**: https://osf.io/r7s9b/\n- **Publication year**: 2021\n- **Funding**: IITP No. 2017-0-00451; IITP No. 2015-0-00185; IITP No. 
2019-0-00079\n- **Ethics approval**: Institutional Review Board of Korea University, KUIRB-2019-0194-01\n- **Keywords**: SSVEP, ERP, mobile BCI, ear-EEG, locomotion\n## References\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["1","2","3","4","5"],"size_bytes":5575353587,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000201","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:09:00.481057+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-24T01:37:40Z"},"total_files":113,"computed_title":"ERP paradigm of the Mobile BCI dataset","nchans_counts":[{"val":48,"count":108},{"val":73,"count":5}],"sfreq_counts":[{"val":500.0,"count":108},{"val":100.0,"count":5}],"stats_computed_at":"2026-05-01T13:49:34.645621+00:00","total_duration_s":79682.786,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"1b66b4c18b0ea6f9","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.85,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot example by paradigm is the “Cross-modal Oddball Task” (Parkinson’s), which is explicitly described as an oddball task with standard vs oddball cues. That example shows the convention that oddball/target-vs-nontarget paradigms are categorized by their stimulus modality (visual+auditory there → Multisensory) and by a cognitive construct closely tied to attentional target detection/cognitive control (Type there chosen as Clinical/Intervention due to the clinical focus, but the oddball paradigm itself maps naturally to attentional target detection in healthy cohorts). A second relevant convention is that when metadata explicitly states “healthy” participants, Pathology should be “Healthy” regardless of any other study features (as in several few-shot ‘Healthy’ datasets).","metadata_analysis":"Key population facts: the README states \"**Health status**: healthy\" and \"**Number of subjects**: 24\" with age/gender details.\n\nKey task/stimulus facts: the README states \"**Paradigm**: p300\", \"**Stimulus type**: visual oddball\", and \"**Stimulus modalities**: visual\" / \"**Primary modality**: visual\". 
It also specifies an ERP oddball structure via \"**Events**: Target=2, NonTarget=1\" and \"**Class labels**: Target, NonTarget\".","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\".\n- Few-shot pattern suggests: when participants are not a clinical cohort, label “Healthy”.\n- ALIGN.\n\nModality:\n- Metadata says: \"Stimulus type: visual oddball\" and \"Stimulus modalities: visual\" / \"Primary modality: visual\".\n- Few-shot pattern suggests: oddball modality follows the stimulus channel (e.g., cross-modal oddball → Multisensory; auditory tasks → Auditory).\n- ALIGN.\n\nType:\n- Metadata says: \"Paradigm: p300\" with \"Events: Target... NonTarget\" and \"Stimulus type: visual oddball\".\n- Few-shot pattern suggests: oddball/target-detection paradigms are typically categorized under attention-related constructs (target detection/cognitive control), unless the dataset is primarily clinical (then Type may shift to Clinical/Intervention).\n- ALIGN (this dataset is explicitly healthy and focuses on P300 target detection during motion).","decision_summary":"Top-2 candidates (with head-to-head selection):\n\nPathology:\n1) Healthy — supported by: \"Health status: healthy\"; also consistent with standard non-clinical recruitment.\n2) Unknown — would apply only if health status were not stated.\nWinner: Healthy. Alignment: aligned. Confidence basis: explicit quote(s) naming healthy status.\n\nModality:\n1) Visual — supported by: \"Stimulus type: visual oddball\"; \"Stimulus modalities: visual\"; \"Primary modality: visual\".\n2) Multisensory — possible only if multiple stimulus channels were used (not indicated here).\nWinner: Visual. Alignment: aligned. Confidence basis: multiple explicit modality lines.\n\nType:\n1) Attention — supported by P300/oddball target detection structure: \"Paradigm: p300\" plus \"Events: Target... NonTarget\" and \"Stimulus type: visual oddball\" (classic attentional target detection).\n2) Perception — plausible because oddball involves sensory detection/discrimination, and the README even includes a tag \"Type: perception\"; however the dominant construct in P300 oddball is typically attentional selection to rare targets.\nWinner: Attention. Alignment: aligned. Confidence basis: explicit oddball/P300 paradigm description and target/non-target event structure; runner-up remains plausible due to detection framing and the provided tag."}},"canonical_name":null,"name_confidence":0.68,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Lee2021_ERP"}}
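For orientation, a minimal sketch of pulling the headline fields out of a record shaped like the response above. It assumes the JSON has been saved locally as `record.json` (the filename is hypothetical); every key used below appears verbatim in the record.

```python
# Summarize the headline fields of an eegdash record shaped like the
# response above. The local filename record.json is an assumption.
import json

with open("record.json") as f:
    record = json.load(f)["data"]

demo = record["demographics"]
print(record["dataset_id"], "-", record["computed_title"])
print(f"{demo['subjects_count']} subjects, ages {demo['age_min']}-{demo['age_max']}"
      f" (mean {demo['age_mean']})")
print("sessions:", ", ".join(record["sessions"]))
print(f"size: {record['size_bytes'] / 1e9:.1f} GB across {record['total_files']} files")
print(f"total duration: {record['total_duration_s'] / 3600:.1f} h")
```

Note that `nchans_counts` and `sfreq_counts` show two file populations (108 files at 500 Hz / 48 channels, 5 files at 100 Hz / 73 channels), so per-file metadata should be read from the BIDS sidecars rather than assumed uniform.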
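Since the record describes a BIDS-formatted BrainVision dataset and its README cites MNE-BIDS, here is a minimal sketch of loading one recording and epoching the Target/NonTarget events. It assumes the dataset has been downloaded (e.g. from https://osf.io/r7s9b/ or NEMAR) to a local BIDS root; the path and the subject label are hypothetical, while the session label, task name, event names, and the [0, 1.0] s trial interval come from the record.

```python
# Sketch: read one EEG recording with MNE-BIDS and epoch the P300 events.
import mne
from mne_bids import BIDSPath, read_raw_bids

bids_root = "/data/nm000201"  # hypothetical local path to the downloaded dataset
path = BIDSPath(root=bids_root, subject="01", session="1",  # subject label assumed
                task="p300", datatype="eeg")

raw = read_raw_bids(bids_path=path)

# Events are annotated as Target / NonTarget; the record gives a trial
# interval of [0, 1.0] s, so epoch over that window.
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=0.0, tmax=1.0, baseline=None, preload=True)
print(epochs)
```

With `tmin=0.0` there is no pre-stimulus window, matching the stated trial interval; in practice one might start epochs earlier (e.g. `tmin=-0.2`) to allow baseline correction before ERP analysis.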