{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c7d","dataset_id":"nm000148","associated_paper_doi":null,"authors":["David Rozado","Andreas Duenser","Ben Howell"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":true,"dataset_doi":"10.82901/nemar.nm000148","datatypes":["eeg"],"demographics":{"subjects_count":30,"ages":[38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38,38],"age_min":38,"age_max":38,"age_mean":38.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000148","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"89837b7b660e82736789e22ad56a8f8c2f44b2ffa30f45757005082bd8479fe1","license":"CC0 1.0","n_contributing_labs":null,"name":"Motor imagery BCI dataset with pupillometry augmentation","readme":"[![DOI](https://img.shields.io/badge/DOI-10.82901%2Fnemar.nm000148-blue)](https://doi.org/10.82901/nemar.nm000148)\n# Motor imagery BCI dataset with pupillometry augmentation\nMotor imagery BCI dataset with pupillometry augmentation.\n## Dataset Overview\n- **Code**: Rozado2015\n- **Paradigm**: imagery\n- **DOI**: 10.1371/journal.pone.0121262\n- **Subjects**: 30\n- **Sessions per subject**: 1\n- **Events**: left_hand=1, rest=2\n- **Trial interval**: [0.0, 6.0] s\n- **Runs per session**: 2\n- **File format**: XDF\n## Acquisition\n- **Sampling rate**: 512.0 Hz\n- **Number of channels**: 32\n- **Channel types**: eeg=32\n- **Montage**: biosemi32\n- **Hardware**: BioSemi ActiveTwo\n- **Reference**: CMS/DRL\n- **Sensor type**: active\n- **Line frequency**: 50.0 Hz\n- **Cap manufacturer**: BioSemi\n- **Electrode material**: sintered Ag/AgCl\n## Participants\n- **Number of subjects**: 30\n- **Health status**: healthy\n- **Age**: mean=38.0, std=9.69, min=15, max=61\n- **Gender distribution**: male=15, female=15\n- 
**Handedness**: {'right': 27, 'left': 3}\n## Experimental Protocol\n- **Paradigm**: imagery\n- **Task type**: left hand grasping imagery vs rest\n- **Number of classes**: 2\n- **Class labels**: left_hand, rest\n- **Trial duration**: 6.0 s\n- **Study design**: Motor imagery with pupillometry augmentation\n- **Feedback type**: none\n- **Stimulus type**: auditory cue\n- **Stimulus modalities**: auditory\n- **Primary modality**: auditory\n- **Synchronicity**: synchronous\n- **Mode**: offline\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  left_hand\n    ├─ Sensory-event, Experimental-stimulus, Visual-presentation\n    └─ Agent-action\n       └─ Imagine\n          ├─ Move\n          └─ Left, Hand\n  rest\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Rest\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: motor_imagery\n- **Imagery tasks**: left hand grasping, rest\n- **Imagery duration**: 6.0 s\n## Data Structure\n- **Blocks per session**: 2\n- **Block duration**: 300.0 s\n- **Trials context**: 2 experiments of 25 trials each (50 trials total per subject). 
Each experiment is stored as one XDF file.\n## Signal Processing\n- **Classifiers**: LDA\n- **Feature extraction**: CSP, pupil_diameter\n- **Frequency bands**: bandpass=[8.0, 30.0] Hz\n- **Spatial filters**: CSP\n## Cross-Validation\n- **Method**: 10-fold\n- **Folds**: 10\n- **Evaluation type**: within_subject\n## BCI Application\n- **Environment**: lab\n- **Online feedback**: False\n## Tags\n- **Pathology**: healthy\n- **Modality**: auditory\n- **Type**: motor_imagery\n## Documentation\n- **DOI**: 10.1371/journal.pone.0121262\n- **License**: CC0 1.0\n- **Investigators**: David Rozado, Andreas Duenser, Ben Howell\n- **Senior author**: David Rozado\n- **Institution**: CSIRO\n- **Department**: Digital Productivity Flagship\n- **Country**: AU\n- **Repository**: Harvard Dataverse\n- **Data URL**: https://doi.org/10.7910/DVN/28932\n- **Publication year**: 2015\n- **Keywords**: motor imagery, BCI, pupillometry, EEG, brain-computer interface\n## References\nD. Rozado, A. Duenser, and B. Howell, \"Improving the performance of an EEG-based motor imagery brain computer interface using task evoked changes in pupil diameter,\" PLoS ONE, vol. 10, no. 3, e0121262, 2015. DOI: 10.1371/journal.pone.0121262\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A. and Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":1022641604,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000148","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["imagery"],"timestamps":{"digested_at":"2026-04-30T14:08:41.539200+00:00","dataset_created_at":null,"dataset_modified_at":"2026-04-29T22:28:37Z"},"total_files":60,"computed_title":"Motor imagery BCI dataset with pupillometry augmentation","nchans_counts":[{"val":32,"count":60}],"sfreq_counts":[{"val":512.0,"count":60}],"stats_computed_at":"2026-05-01T13:49:34.645175+00:00","total_duration_s":20529.431640625,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"b02546fccb601a7c","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Motor"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example by paradigm is the labeled dataset “EEG Motor Movement/Imagery Dataset” (motor movement/imagery). It maps an explicit motor imagery protocol to Type=“Motor” and Pathology=“Healthy”. This convention guides assigning Type=Motor for a left-hand motor imagery vs rest BCI paradigm. 
Unlike that example (visual on-screen targets), the current dataset explicitly states an auditory cue, so Modality should follow the stimulus channel (Auditory) rather than copying Visual from the example.","metadata_analysis":"Key facts from provided metadata:\n- Population/health: “Health status: healthy” and “Number of subjects: 30”.\n- Task/paradigm: “Paradigm: imagery” and “Task type: left hand grasping imagery vs rest”.\n- Stimulus channel: “Stimulus type: auditory cue”, plus “Stimulus modalities: auditory” and “Primary modality: auditory”.\nThese directly support Pathology=Healthy, Modality=Auditory, and Type=Motor (motor imagery BCI).","paper_abstract_analysis":"No useful paper information (no abstract provided in the dataset metadata; only a citation).","evidence_alignment_check":"Pathology:\n- Metadata says: “Health status: healthy”.\n- Few-shot pattern suggests: motor imagery datasets with volunteer participants map to “Healthy” (e.g., “EEG Motor Movement/Imagery Dataset”).\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: “Stimulus type: auditory cue”, “Stimulus modalities: auditory”, “Primary modality: auditory”.\n- Few-shot pattern suggests: modality follows stimulus channel; motor imagery datasets can be Visual (targets) or other depending on cues.\n- Alignment: ALIGN (few-shot supports rule; metadata pins it specifically to Auditory).\n\nType:\n- Metadata says: “Task type: left hand grasping imagery vs rest”, “Detected paradigm: motor_imagery”, and keywords “motor imagery, BCI”.\n- Few-shot pattern suggests: motor imagery/BCI paradigms map to Type=“Motor” (e.g., “EEG Motor Movement/Imagery Dataset”).\n- Alignment: ALIGN.","decision_summary":"Top-2 candidate selection:\n\nPathology:\n1) Healthy (selected)\n- Evidence: “Health status: healthy”; also “Tags - Pathology: healthy”.\n2) Unknown\n- Would apply if no recruitment/diagnosis info were given, but here it is explicit.\nDecision: Healthy. 
Alignment: aligned with few-shot motor imagery example convention.\n\nModality:\n1) Auditory (selected)\n- Evidence: “Stimulus type: auditory cue”; “Stimulus modalities: auditory”; “Primary modality: auditory”.\n2) Motor\n- Possible confusion because the response/imagery is motoric, but modality is defined by stimulus channel; cues are auditory.\nDecision: Auditory.\n\nType:\n1) Motor (selected)\n- Evidence: “Task type: left hand grasping imagery vs rest”; “Detected paradigm: motor_imagery”; “Keywords: motor imagery, BCI”.\n2) Attention\n- Could be argued due to sustained task engagement, but primary purpose is motor imagery classification for BCI.\nDecision: Motor.\n\nConfidence justification: multiple explicit metadata quotes support each selected label; few-shot match is strong for Type and Pathology."}},"canonical_name":null,"name_confidence":0.86,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Rozado2015"}}