{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4ca6","dataset_id":"nm000209","associated_paper_doi":null,"authors":["Dylan Forenzo","Yixuan Liu","Jeehyun Kim","Yidan Ding","Taehyung Yoon","Bin He"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":25,"ages":[25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25,25],"age_min":25,"age_max":25,"age_mean":25.0,"species":null,"sex_distribution":null,"handedness_distribution":{"r":25}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000209","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"78d67bbba14125d35ccab03d370525a3262dfe76388e83bae91b08afd7a64d6d","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Motor imagery + spatial attention dataset from Forenzo & He 2023","readme":"# Motor imagery + spatial attention dataset from Forenzo & He 2023\nMotor imagery + spatial attention dataset from Forenzo & He 2023.\n## Dataset Overview\n- **Code**: Forenzo2023\n- **Paradigm**: imagery\n- **DOI**: 10.1109/TBME.2023.3298957\n- **Subjects**: 25\n- **Sessions per subject**: 5\n- **Events**: left_hand=1, right_hand=2\n- **Trial interval**: [0, 4] s\n- **Runs per session**: 3\n- **File format**: MAT\n## Acquisition\n- **Sampling rate**: 1000.0 Hz\n- **Number of channels**: 64\n- **Channel types**: eeg=64\n- **Montage**: standard_1005\n- **Hardware**: Neuroscan Quik-Cap 64-ch, SynAmps 2/RT\n- **Reference**: between Cz and CPz\n- **Sensor type**: Ag/AgCl\n- **Line frequency**: 60.0 Hz\n- **Online filters**: {'lowpass': 200, 'notch_hz': 60}\n## Participants\n- **Number of subjects**: 25\n- **Health status**: healthy\n- **Age**: mean=25.5\n- **Gender distribution**: female=10, male=15\n- **Handedness**: right-handed (24 of 25)\n- **BCI experience**: mixed (19 naive, 6 experienced)\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Number of classes**: 2\n- **Class labels**: left_hand, right_hand\n- **Trial duration**: 6.0 s\n- **Study design**: 5-session BCI study with motor imagery (MI), overt spatial attention (OSA), and combined (MIOSA) tasks\n- **Feedback type**: cursor\n- **Stimulus type**: continuous pursuit\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Synchronicity**: synchronous\n- **Mode**: online\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  left_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  right_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Right, Hand\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: left_hand, right_hand\n- **Imagery duration**: 6.0 s\n## Data Structure\n- **Trials**: 1875\n- **Trials context**: 25 subjects x 5 sessions x 3 MI runs x 5 trials\n## Signal Processing\n- **Classifiers**: linear_classifier\n- **Feature extraction**: AR_spectral_estimation, alpha_bandpower\n- **Frequency bands**: alpha=[8.0, 13.0] Hz\n- **Spatial filters**: Laplacian\n## Cross-Validation\n- **Evaluation type**: within_subject\n## BCI Application\n- **Applications**: cursor_control\n- **Environment**: laboratory\n- **Online feedback**: 
True\n## Tags\n- **Pathology**: Healthy\n- **Modality**: Motor\n- **Type**: Research\n## Documentation\n- **DOI**: 10.1109/TBME.2023.3298957\n- **License**: CC-BY-4.0\n- **Investigators**: Dylan Forenzo, Yixuan Liu, Jeehyun Kim, Yidan Ding, Taehyung Yoon, Bin He\n- **Institution**: Carnegie Mellon University\n- **Department**: Department of Biomedical Engineering\n- **Country**: US\n- **Data URL**: https://kilthub.cmu.edu/articles/dataset/23677098\n- **Publication year**: 2023\n## References\nForenzo, D., & He, B. (2024). Integrating simultaneous motor imagery and spatial attention for EEG-BCI control. IEEE Trans. Biomed. Eng., 71(1), 282-294. https://doi.org/10.1109/TBME.2023.3298957\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1"],"size_bytes":5253098940,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000209","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:09:04.027828+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-24T03:20:54Z"},"total_files":150,"computed_title":"Motor imagery + spatial attention dataset from Forenzo & He 2023","nchans_counts":[{"val":64,"count":150}],"sfreq_counts":[{"val":1000.0,"count":150}],"stats_computed_at":"2026-05-01T13:49:34.645696+00:00","total_duration_s":27262.769,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"8f58f4d092440cae","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Motor"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot example by paradigm is the \"EEG Motor Movement/Imagery Dataset\" (Schalk et al.), which is a motor imagery paradigm in healthy volunteers and is labeled Pathology=Healthy, Modality=Visual (targets on screen), Type=Motor. This convention guides mapping motor-imagery BCI datasets with on-screen cues/feedback to Modality=Visual (stimulus channel) and Type=Motor (construct studied), even though responses are imagined movements. 
A secondary thematic similarity is the TBI DPX example labeled Type=Attention for an attention/cognitive-control task; this helps as a runner-up for Type because the current dataset explicitly includes \"spatial attention\".","metadata_analysis":"Key facts from metadata:\n- Population/diagnosis: \"Health status: healthy\" and also \"**Subjects**: 25\" with no clinical recruitment.\n- Task/paradigm: \"Study design: 5-session BCI study with motor imagery (MI), overt spatial attention (OSA), and combined (MIOSA) tasks\"; also \"Paradigm-Specific Parameters - Detected paradigm: motor_imagery\".\n- Stimulus modality: explicitly \"Stimulus modalities: visual\" and \"Primary modality: visual\"; also \"Feedback type: cursor\" / \"Stimulus type: continuous pursuit\" indicates visually guided feedback/cueing.\n- Event labels: \"Events: left_hand=1, right_hand=2\" with HED tags including \"Visual-presentation\" and \"Imagine -> Move\".","paper_abstract_analysis":"No useful paper information. (Only a citation is provided; no abstract text included in the metadata.)","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\".\n- Few-shot pattern suggests: motor imagery datasets (e.g., Schalk EEGMMIDB) are typically Healthy unless a disorder is stated.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"Stimulus modalities: visual\" and \"Primary modality: visual\"; HED includes \"Visual-presentation\".\n- Few-shot pattern suggests: motor imagery with on-screen targets/cues is labeled Visual for Modality (see Schalk EEGMMIDB labeled Visual).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"motor imagery (MI), overt spatial attention (OSA), and combined (MIOSA) tasks\" and \"Detected paradigm: motor_imagery\".\n- Few-shot pattern suggests: motor imagery BCI paradigms map to Type=Motor (Schalk EEGMMIDB labeled Motor), while explicitly attention-focused paradigms can map to Type=Attention (e.g., DPX task example).\n- Alignment: PARTIAL (mixed constructs motor imagery + spatial attention). No conflict requiring override; choose the dominant/primary construct indicated by the paradigm detection and class labels (left/right hand imagery).","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n1) Healthy — Evidence: \"Health status: healthy\"; participants described as \"Subjects: 25\" with no diagnosis terms.\n2) Unknown — Would apply if no recruitment/diagnosis info were provided.\nWinner: Healthy (explicit metadata). Confidence based on direct statement \"Health status: healthy\".\n\nModality:\n1) Visual — Evidence: \"Stimulus modalities: visual\"; \"Primary modality: visual\"; \"Feedback type: cursor\" / \"continuous pursuit\"; HED includes \"Visual-presentation\".\n2) Motor — Would apply if the primary input were proprioceptive/movement execution rather than presented stimuli.\nWinner: Visual (multiple explicit metadata lines specifying visual stimuli). Confidence high due to repeated explicit modality statements.\n\nType:\n1) Motor — Evidence: \"Detected paradigm: motor_imagery\"; \"motor imagery (MI)\"; class labels \"left_hand, right_hand\" and HED \"Imagine -> Move\".\n2) Attention — Evidence: explicit inclusion of \"spatial attention\" / \"overt spatial attention (OSA)\" in the study design.\nWinner: Motor, because the dataset is primarily organized as a motor imagery BCI with left/right hand imagery classes and explicit motor_imagery paradigm detection, with spatial attention as an additional integrated component. 
Confidence slightly lower due to the genuine mixed MI+attention design."}},"canonical_name":null,"name_confidence":0.82,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Forenzo2023"}}
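Below is a minimal sketch of consuming this record with the Python standard library, assuming the response has been saved locally as nm000209.json (a hypothetical filename; this is not an official eegdash client). It checks the per-file channel and sampling-rate counts against total_files, rebuilds the README's left_hand/right_hand event mapping, and verifies the stated trial arithmetic; every key it reads appears in the record above.

```python
import json

# Hypothetical local copy of the response above.
with open("nm000209.json") as f:
    resp = json.load(f)

assert resp["success"]
data = resp["data"]

# Identity and provenance.
print(data["dataset_id"], "-", data["computed_title"])
print("source:", data["source"], "| storage base:", data["storage"]["base"])

# Per-file stats: the counts claim all 150 files share 64 channels at 1000 Hz.
(nchans,), (sfreq,) = data["nchans_counts"], data["sfreq_counts"]
assert nchans["count"] == sfreq["count"] == data["total_files"]
print(f'{nchans["val"]} channels at {sfreq["val"]} Hz in every file')

# Event mapping from the README ("Events: left_hand=1, right_hand=2"),
# in the {label: code} form that epoching tools such as MNE expect.
event_id = {"left_hand": 1, "right_hand": 2}

# Trial arithmetic stated in the README:
# 25 subjects x 5 sessions x 3 MI runs x 5 trials = 1875 trials.
assert 25 * 5 * 3 * 5 == 1875

# Average recording length per file: total_duration_s / total_files (~182 s).
print(round(data["total_duration_s"] / data["total_files"], 1), "s per file")
```

One thing such a consumer should not assume is internal consistency: here the computed demographics block (age_mean 25.0, handedness r:25) disagrees with the README text (mean=25.5, right-handed 24 of 25), so the README and the computed stats are best treated as separate provenance sources.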