{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4ca1","dataset_id":"nm000204","associated_paper_doi":null,"authors":["Jongmin Lee","Minju Kim","Dojin Heo","Jongsu Kim","Min-Ki Kim","Taejun Lee","Jongwoo Park","HyunYoung Kim","Minho Hwang","Laehyun Kim","Sung-Phil Kim"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":14,"ages":[22,22,22,22,22,22,22,22,22,22,22,22,22,22],"age_min":22,"age_max":22,"age_mean":22.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000204","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"a3e4037b9fb6c2e5540ca7b21bc48b295c5996f8047946d3a211c5e38c284d4e","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Bluetooth speaker experiment (14 subjects, 6 classes, 31 EEG ch)","readme":"# Bluetooth speaker experiment (14 subjects, 6 classes, 31 EEG ch)\nBluetooth speaker experiment (14 subjects, 6 classes, 31 EEG ch).\n## Dataset Overview\n- **Code**: Lee2024-BS\n- **Paradigm**: p300\n- **DOI**: 10.3389/fnhum.2024.1320457\n- **Subjects**: 14\n- **Sessions per subject**: 1\n- **Events**: Target=2, NonTarget=1\n- **Trial interval**: [0, 1] s\n- **File format**: MATLAB\n## Acquisition\n- **Sampling rate**: 500.0 Hz\n- **Number of channels**: 31\n- **Channel types**: eeg=31\n- **Channel names**: Fp1, Fpz, Fp2, F7, F3, Fz, F4, F8, FT9, FC5, FC1, FC2, FC6, FT10, T7, C3, Cz, C4, T8, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, O1, Oz, O2\n- **Montage**: standard_1020\n- **Hardware**: actiCHamp (Brain Products)\n- **Reference**: linked mastoids\n- **Sensor type**: active\n- **Line frequency**: 60.0 Hz\n## Participants\n- **Number of subjects**: 14\n- **Health status**: healthy\n- **Age**: mean=22.64, std=3.08\n- **Gender distribution**: male=9, female=5\n- 
**Species**: human\n## Experimental Protocol\n- **Paradigm**: p300\n- **Number of classes**: 2\n- **Class labels**: Target, NonTarget\n- **Trial duration**: 1.0 s\n- **Study design**: P300 BCI for BS home appliance control; 6-class oddball; LCD display\n- **Feedback type**: visual\n- **Stimulus type**: flash\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Mode**: online\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: p300\n- **Stimulus onset asynchrony**: 750.0 ms\n## Data Structure\n- **Trials**: 50 training + 30 testing blocks per subject\n- **Trials context**: per_subject\n## BCI Application\n- **Applications**: home_appliance_control\n- **Environment**: laboratory\n- **Online feedback**: True\n## Tags\n- **Pathology**: Healthy\n- **Modality**: ERP\n- **Type**: P300\n## Documentation\n- **DOI**: 10.3389/fnhum.2024.1320457\n- **License**: CC-BY-4.0\n- **Investigators**: Jongmin Lee, Minju Kim, Dojin Heo, Jongsu Kim, Min-Ki Kim, Taejun Lee, Jongwoo Park, HyunYoung Kim, Minho Hwang, Laehyun Kim, Sung-Phil Kim\n- **Institution**: Ulsan National Institute of Science and Technology\n- **Country**: KR\n- **Data URL**: https://github.com/jml226/Home-Appliance-Control-Dataset\n- **Publication year**: 2024\n## References\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). 
https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":338661221,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000204","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:09:01.324477+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-24T01:46:29Z"},"total_files":420,"computed_title":"Bluetooth speaker experiment (14 subjects, 6 classes, 31 EEG ch)","nchans_counts":[{"val":31,"count":420}],"sfreq_counts":[{"val":500.0,"count":420}],"stats_computed_at":"2026-05-01T13:49:34.645633+00:00","total_duration_s":7031.916,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"0ae9f4e90acec65e","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.7},"reasoning":{"few_shot_analysis":"Closest few-shot paradigms are oddball/P300-like tasks. Example: 'Cross-modal Oddball Task.' (Parkinson's) uses standard vs oddball cues and is labeled Modality=Multisensory and Type=Clinical/Intervention due to the explicit PD clinical cohort focus. Example: 'EEG: Three-Stim Auditory Oddball and Rest in Acute and Chronic TBI' uses auditory oddball stimuli and includes a clinical cohort (TBI), labeled Pathology=TBI and Modality=Auditory. 
These examples guide that (i) oddball/P300 paradigms map modality to the stimulus channel (auditory/visual/multisensory), and (ii) pathology is driven by recruited clinical group (PD/TBI) rather than task. For the present dataset, the task is explicitly 'p300' with 'flash' visual stimuli in healthy participants, so we follow the same convention: Modality=Visual; Pathology=Healthy; and for Type choose a cognitive-construct label consistent with oddball target detection (most consistent with Attention among allowed labels).","metadata_analysis":"Key metadata facts: (1) Population: readme states \"**Health status**: healthy\" and \"**Number of subjects**: 14\". (2) Paradigm/task: \"**Paradigm**: p300\" and \"**Study design**: P300 BCI for BS home appliance control; 6-class oddball\" with events \"Target=2, NonTarget=1\". (3) Stimulus modality: \"**Stimulus type**: flash\" and \"**Stimulus modalities**: visual\" / \"**Primary modality**: visual\" plus HED tags include \"Visual-presentation\" for both Target and NonTarget. These directly support Healthy + Visual + an oddball/P300 target-detection construct.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says \"Health status: healthy\" (ALIGN) with few-shot convention that explicit recruitment condition determines pathology. 
Few-shot patterns do not suggest any clinical group here (ALIGN).\nModality: Metadata says \"Stimulus modalities: visual\", \"Stimulus type: flash\", and HED includes \"Visual-presentation\" (ALIGN) with few-shot convention mapping modality to stimulus channel (e.g., auditory oddball -> Auditory; cross-modal oddball -> Multisensory).\nType: Metadata says \"Paradigm: p300\" and \"6-class oddball\" with Target/NonTarget events (ALIGN) with few-shot understanding of oddball/P300 as target detection; however few-shot Type labels vary because they are driven by study aim/population (e.g., PD oddball labeled Clinical/Intervention due to PD focus; TBI oddball labeled Decision-making). There is no explicit clinical/intervention aim here, so we prioritize task construct (target detection/attentional selection) and select Attention.","decision_summary":"Pathology top-2: (1) Healthy — supported by \"Health status: healthy\" and participant summary; (2) Unknown — only if health were unspecified. Winner: Healthy. Alignment: aligned.\nModality top-2: (1) Visual — supported by \"Stimulus modalities: visual\", \"Primary modality: visual\", \"Stimulus type: flash\", and HED \"Visual-presentation\"; (2) Other — only if stimulus channel were unclear. Winner: Visual. Alignment: aligned.\nType top-2: (1) Attention — supported by \"Paradigm: p300\", \"6-class oddball\", and Target vs NonTarget detection/BCI selection; (2) Perception — plausible because it is stimulus-evoked discrimination, but the paradigm is specifically P300/oddball which is more classically attentional target detection than general perception. Winner: Attention. 
Alignment: mostly aligned (few-shot shows oddball can map to different Types when clinical aim dominates; not the case here).\nConfidence justification: Pathology has 1 strong explicit quote (healthy) plus consistent context; Modality has 3+ explicit cues (visual modality/primary modality/flash/HED); Type has explicit 'p300' and 'oddball' plus event structure Target/NonTarget, but mapping to Attention vs Perception remains somewhat interpretive."}},"canonical_name":null,"name_confidence":0.74,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Lee2024_Bluetooth_speaker_14"}}
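The record above is one JSON document; a minimal sketch of consuming it with Python's standard library follows. The `raw` literal below is a trimmed stand-in with the same field names and values as the full response (most fields omitted for brevity); the sanity checks mirror the record's own invariants (`nchans_counts`/`sfreq_counts` should each cover all `total_files`).

```python
import json

# Trimmed stand-in for the full API response above (same field names,
# values copied verbatim from the record; most fields omitted).
raw = '''
{"success": true,
 "database": "eegdash",
 "data": {"dataset_id": "nm000204",
          "tasks": ["p300"],
          "total_files": 420,
          "total_duration_s": 7031.916,
          "nchans_counts": [{"val": 31, "count": 420}],
          "sfreq_counts": [{"val": 500.0, "count": 420}]}}
'''

doc = json.loads(raw)
assert doc["success"]
data = doc["data"]

# Sanity-check that the channel-count and sampling-rate histograms
# each account for every file in the dataset.
nchan_files = sum(c["count"] for c in data["nchans_counts"])
sfreq_files = sum(c["count"] for c in data["sfreq_counts"])
assert nchan_files == sfreq_files == data["total_files"]

# Average recording length per file, in seconds.
mean_dur = data["total_duration_s"] / data["total_files"]
print(f"{data['dataset_id']}: {data['total_files']} files, "
      f"mean {mean_dur:.1f} s/file at {data['sfreq_counts'][0]['val']} Hz")
```

With the values from this record, the per-file mean works out to roughly 16.7 s, consistent with short single-block P300 recordings.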