{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4ca2","dataset_id":"nm000205","associated_paper_doi":null,"authors":["Li Zheng","Sen Sun","Hongze Zhao","Weihua Pei","Hongda Chen","Xiaorong Gao","Lijian Zhang","Yijun Wang"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":14,"ages":[24,24,24,24,24,24,24,24,24,24,24,24,24,24],"age_min":24,"age_max":24,"age_mean":24.0,"species":null,"sex_distribution":null,"handedness_distribution":{"r":14}},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000205","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"56f008f2d372d4fdb6e9a4a30ee5a030667bb0bd757c2e2f5b56abc8c866c5f2","license":"CC-BY-4.0","n_contributing_labs":null,"name":"RSVP collaborative BCI dataset from Zheng et al 2020","readme":"# RSVP collaborative BCI dataset from Zheng et al 2020\nRSVP collaborative BCI dataset from Zheng et al 2020.\n## Dataset Overview\n- **Code**: Zheng2020\n- **Paradigm**: p300\n- **DOI**: 10.3389/fnins.2020.579469\n- **Subjects**: 14\n- **Sessions per subject**: 2\n- **Events**: Target=2, NonTarget=1\n- **Trial interval**: [0, 1] s\n- **Runs per session**: 3\n- **File format**: MATLAB\n## Acquisition\n- **Sampling rate**: 1000.0 Hz\n- **Number of channels**: 62\n- **Channel types**: eeg=62\n- **Channel names**: FP1, FPz, FP2, AF3, AF4, F7, F5, F3, F1, Fz, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCz, FC2, FC4, FC6, FT8, T7, C5, C3, C1, Cz, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPz, CP2, CP4, CP6, TP8, P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO7, PO5, PO3, POz, PO4, PO6, PO8, O1, CB1, Oz, O2, CB2\n- **Montage**: standard_1020\n- **Hardware**: Neuroscan Synamps2\n- **Reference**: vertex (Cz)\n- **Line frequency**: 50.0 Hz\n## Participants\n- **Number of subjects**: 14\n- **Health status**: healthy\n- **Age**: mean=24.9, min=23, 
max=29\n- **Gender distribution**: female=10, male=4\n- **Handedness**: all right-handed\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: p300\n- **Number of classes**: 2\n- **Class labels**: Target, NonTarget\n- **Trial duration**: 1.0 s\n- **Study design**: RSVP target detection (human vs non-human images); 14 subjects in 7 pairs, synchronized EEG recording\n- **Feedback type**: visual\n- **Stimulus type**: RSVP images\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Mode**: offline\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: p300\n- **Stimulus onset asynchrony**: 100.0 ms\n## Data Structure\n- **Trials**: {'target': 168, 'nontarget': 4032}\n- **Trials context**: per subject across both sessions\n## Signal Processing\n- **Classifiers**: HDCA\n- **Feature extraction**: SIM, CSP, TRCA, PCA\n- **Frequency bands**: bandpass=[2.0, 30.0] Hz\n- **Spatial filters**: SIM, CSP, PCA, CAR, TRCA\n## Cross-Validation\n- **Method**: holdout\n- **Evaluation type**: within_subject, cross_session\n## BCI Application\n- **Applications**: target_image_detection, collaborative_BCI\n- **Environment**: laboratory\n- **Online feedback**: True\n## Tags\n- **Pathology**: Healthy\n- **Modality**: ERP\n- **Type**: RSVP\n## Documentation\n- **DOI**: 10.3389/fnins.2020.579469\n- **License**: CC-BY-4.0\n- **Investigators**: Li Zheng, Sen Sun, Hongze Zhao, Weihua Pei, Hongda Chen, Xiaorong Gao, Lijian Zhang, Yijun Wang\n- **Institution**: Chinese Academy of Sciences\n- **Country**: CN\n- **Data URL**: https://figshare.com/articles/dataset/12824771\n- **Publication year**: 2020\n## References\nZheng, L., Sun, S., Zhao, H., 
et al. (2020). A Cross-Session Dataset for Collaborative Brain-Computer Interfaces Based on Rapid Serial Visual Presentation. Frontiers in Neuroscience, 14, 579469. https://doi.org/10.3389/fnins.2020.579469\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A. and Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1"],"size_bytes":5686829262,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000205","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:09:01.385475+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-24T01:50:11Z"},"total_files":84,"computed_title":"RSVP collaborative BCI dataset from Zheng et al 
2020","nchans_counts":[{"val":62,"count":84}],"sfreq_counts":[{"val":1000.0,"count":84}],"stats_computed_at":"2026-05-01T13:49:34.645645+00:00","total_duration_s":30463.102,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"d7cccac13c5df848","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot example by paradigm is the “Cross-modal Oddball Task” dataset (Parkinson’s; oddball/target detection structure) which is labeled with an oddball/target-detection convention where the cognitive construct is treated as attention/cognitive control (though that example’s final Type is driven by clinical focus). This guides mapping RSVP P300 target vs non-target detection to an Attention-type construct. By contrast, the schizophrenia moving-dots “visual discrimination task” example maps a stimulus discrimination/choice task to Perception; RSVP P300 is less about sensory discrimination and more about detecting rare targets among frequent non-targets (oddball-like), pushing toward Attention rather than Perception.","metadata_analysis":"Key population facts: (1) “Health status: healthy” and (2) “Subjects: 14” with demographics “Age: mean=24.9, min=23, max=29”.\nKey task/paradigm facts: (1) “Paradigm: p300”, (2) “Study design: RSVP target detection (human vs non-human images)”, and (3) “Events: Target=2, NonTarget=1”.\nKey stimulus modality facts: (1) “Stimulus type: RSVP images”, (2) “Stimulus modalities: visual”, and (3) “Primary modality: visual”.","paper_abstract_analysis":"No useful paper information (abstract text not provided in the metadata fields; only citation/DOI present).","evidence_alignment_check":"Pathology — Metadata says: “Health status: healthy”. Few-shot pattern suggests: when explicitly healthy, label Healthy (multiple examples labeled Healthy). 
ALIGN.\nModality — Metadata says: “Stimulus modalities: visual” / “Stimulus type: RSVP images” / “Primary modality: visual”. Few-shot pattern suggests: visual stimulus tasks map to Visual modality. ALIGN.\nType — Metadata says: “Paradigm: p300” with “RSVP target detection” and “Events: Target… NonTarget…”, which is an oddball-like target detection setup. Few-shot pattern suggests: oddball/target detection paradigms are typically categorized under Attention (unless the dataset is primarily clinical/intervention focused). ALIGN (no clinical recruitment focus here).","decision_summary":"Pathology top-2: (A) Healthy — supported by “Health status: healthy” and adult volunteer demographics; (B) Unknown — only if health status were missing. Winner: Healthy. Alignment: aligned.\nModality top-2: (A) Visual — supported by “Stimulus modalities: visual”, “Primary modality: visual”, and “Stimulus type: RSVP images”; (B) Other — only if stimuli were unspecified. Winner: Visual. Alignment: aligned.\nType top-2: (A) Attention — supported by “Paradigm: p300”, “RSVP target detection…”, and “Events: Target… NonTarget…”, consistent with oddball-like attentional target detection; guided by few-shot oddball convention. (B) Perception — plausible because participants visually categorize images (human vs non-human), but the P300 RSVP/target vs non-target emphasis indicates attentional selection rather than pure perceptual discrimination. Winner: Attention. Alignment: aligned.\nConfidence justification: Pathology has an explicit health-status statement. Modality has 3 explicit visual-stimulus statements. Type has explicit P300/RSVP target-detection wording plus a strong few-shot oddball analog."}},"canonical_name":null,"name_confidence":0.86,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Zheng2020"}}