{"success":true,"database":"eegdash","data":{"_id":"69d16e05897a7725c66f4cb6","dataset_id":"nm000231","associated_paper_doi":null,"authors":["Ulrich Hoffmann","Jean-Marc Vesin","Touradj Ebrahimi","Karin Diserens"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":null,"datatypes":["eeg"],"demographics":{"subjects_count":8,"ages":[56,51,47,33,30,30,30,38],"age_min":30,"age_max":56,"age_mean":39.375,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000231","osf_url":null,"github_url":null,"paper_url":null},"funding":["Swiss National Science Foundation Grant No. 200020-112313"],"ingestion_fingerprint":"d3a87e9a7fe4d54f3b25f5169383019b1884603a6c1d954b7b5740cdf2349d03","license":"Unknown","n_contributing_labs":null,"name":"P300 dataset from Hoffmann et al 2008","readme":"# P300 dataset from Hoffmann et al 2008\nP300 dataset from Hoffmann et al 2008.\n## Dataset Overview\n- **Code**: EPFLP300\n- **Paradigm**: p300\n- **DOI**: 10.1016/j.jneumeth.2007.03.005\n- **Subjects**: 8\n- **Sessions per subject**: 4\n- **Events**: Target=2, NonTarget=1\n- **Trial interval**: [0, 1] s\n- **Runs per session**: 6\n- **File format**: MATLAB\n## Acquisition\n- **Sampling rate**: 2048.0 Hz\n- **Number of channels**: 32\n- **Channel types**: eeg=32, misc=2\n- **Channel names**: AF3, AF4, C3, C4, CP1, CP2, CP5, CP6, Cz, F3, F4, F7, F8, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, MA1, MA2, O1, O2, Oz, P3, P4, P7, P8, PO3, PO4, Pz, T7, T8\n- **Montage**: standard_1020\n- **Hardware**: Biosemi ActiveTwo\n- **Sensor type**: active\n- **Line frequency**: 50.0 Hz\n## Participants\n- **Number of subjects**: 8\n- **Health status**: mixed\n- **Clinical population**: 4 disabled (cerebral palsy, multiple sclerosis, late-stage amyotrophic lateral sclerosis, traumatic brain and spinal-cord injury C4 level), 4 able-bodied\n- **Age**: 
mean=38.4, min=30, max=56\n- **Gender distribution**: male=7, female=1\n- **BCI experience**: no training required\n- **Species**: human\n## Experimental Protocol\n- **Paradigm**: p300\n- **Number of classes**: 2\n- **Class labels**: Target, NonTarget\n- **Trial duration**: 1.0 s\n- **Study design**: Subjects counted silently how often a prescribed image (one of six: television, telephone, lamp, door, window, radio) was flashed while images were flashed in random sequences\n- **Feedback type**: none\n- **Stimulus type**: image_flash\n- **Stimulus modalities**: visual\n- **Primary modality**: visual\n- **Mode**: offline\n- **Instructions**: Subjects were asked to count silently how often a prescribed image was flashed\n- **Stimulus presentation**: flash_duration=100ms, isi=400ms, display=six images (television, telephone, lamp, door, window, radio)\n## HED Event Annotations\nSchema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n```\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\n```\n## Paradigm-Specific Parameters\n- **Detected paradigm**: p300\n- **Number of targets**: 6\n- **Inter-stimulus interval**: 400.0 ms\n- **Stimulus onset asynchrony**: 400.0 ms\n## Data Structure\n- **Trials**: {'target': 135, 'non-target': 675}\n- **Trials per class**: target=135, non-target=675\n- **Trials context**: per_session\n## Preprocessing\n- **Data state**: raw\n- **Preprocessing applied**: False\n## Signal Processing\n- **Classifiers**: BLDA, FLDA\n- **Feature extraction**: temporal samples from selected electrodes\n- **Frequency bands**: analyzed=[1.0, 12.0] Hz\n## Cross-Validation\n- **Method**: leave-one-session-out\n- **Folds**: 4\n- **Evaluation type**: session-based\n## Performance (Original Study)\n- **Accuracy**: 100.0%\n- **Itr**: 28.8 bits/min\n- **Max Bitrate Disabled Avg**: 19.0\n- 
**Max Bitrate Able Bodied Avg**: 38.6\n- **Max Bitrate Overall Avg**: 28.8\n## BCI Application\n- **Applications**: environment_control\n- **Environment**: laboratory\n- **Online feedback**: False\n## Tags\n- **Pathology**: Healthy, Cerebral palsy, Multiple sclerosis, Amyotrophic lateral sclerosis, Traumatic brain injury, Post-anoxic encephalopathy\n- **Modality**: Visual\n- **Type**: Research\n## Documentation\n- **DOI**: 10.1016/j.jneumeth.2007.03.005\n- **License**: Unknown\n- **Investigators**: Ulrich Hoffmann, Jean-Marc Vesin, Touradj Ebrahimi, Karin Diserens\n- **Senior author**: Karin Diserens\n- **Contact**: ulrich.hoffmann@epfl.ch\n- **Institution**: Ecole Polytechnique Fédérale de Lausanne\n- **Department**: Signal Processing Institute\n- **Address**: Signal Processing Institute, CH-1015 Lausanne, Switzerland\n- **Country**: CH\n- **Repository**: http://bci.epfl.ch/p300\n- **Publication year**: 2008\n- **Funding**: Swiss National Science Foundation Grant No. 200020-112313\n- **Keywords**: Brain–computer interface, P300, Disabled subjects, Fisher's linear discriminant analysis, Bayesian linear discriminant analysis\n## References\nHoffmann, U., Vesin, J-M., Ebrahimi, T., Diserens, K., 2008. An efficient P300-based brain-computer interface for disabled subjects. Journal of Neuroscience Methods. https://doi.org/10.1016/j.jneumeth.2007.03.005\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0","1","2","3"],"size_bytes":2086580715,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000231","raw_key":"dataset_description.json","dep_keys":["README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-30T14:09:18.130003+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-25T16:40:38Z"},"total_files":192,"computed_title":"P300 dataset from Hoffmann et al 2008","nchans_counts":[{"val":32,"count":192}],"sfreq_counts":[{"val":2048.0,"count":192}],"stats_computed_at":"2026-05-01T13:49:34.645905+00:00","total_duration_s":10586.90625,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"1cccf22fee77bbc5","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Other"],"modality":["Visual"],"type":["Attention"],"confidence":{"pathology":0.7,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot paradigms are the oddball-style datasets: (1) “Cross-modal Oddball Task” (Parkinson’s) uses standard vs oddball cues and is labeled with an oddball convention, and (2) “EEG: Three-Stim Auditory Oddball and Rest in Acute and Chronic TBI” explicitly uses Target/Standard/Novel tones. These examples guide that Target vs NonTarget stimulus streams are typically treated as an attention/oddball ERP paradigm, with modality determined by stimulus channel. The motor-imagery BCI example also helps style-wise: BCI datasets are labeled by stimulus modality (e.g., visual targets) rather than response mechanics. 
However, few-shot pathology labels are population-specific; here the participants are a mixed disabled cohort (multiple diagnoses), so we cannot directly reuse a single-diagnosis label like Parkinson’s or TBI.","metadata_analysis":"Key facts from metadata:\n- Pathology / population: “Clinical population: 4 disabled (cerebral palsy, multiple sclerosis, late-stage amyotrophic lateral sclerosis, traumatic brain and spinal-cord injury C4 level), 4 able-bodied” and “Health status: mixed”.\n- Modality: “Stimulus modalities: visual”, “Primary modality: visual”, and “Stimulus type: image_flash”. Also “Subjects counted silently how often a prescribed image … was flashed”.\n- Task/type cues: “Paradigm: p300” with “Events: Target=2, NonTarget=1” and the protocol “Subjects were asked to count silently how often a prescribed image was flashed” (classic P300 oddball/target detection requiring selective attention).","paper_abstract_analysis":"No useful paper information. (Only a DOI/citation is provided; no abstract text included in the metadata payload.)","evidence_alignment_check":"Pathology:\n- Metadata says: “4 disabled (cerebral palsy, multiple sclerosis, late-stage amyotrophic lateral sclerosis, traumatic brain and spinal-cord injury C4 level), 4 able-bodied” and “Health status: mixed”.\n- Few-shot pattern suggests: if a single named clinical diagnosis is recruited (e.g., Parkinson’s, TBI), use that specific label; if the population is non-standard (e.g., visually deprived), use Other.\n- Alignment: PARTIAL. Metadata indicates clear clinical recruitment but with heterogeneous diagnoses not represented as specific allowed labels (except TBI as one of several). Therefore “Other” best matches the convention.\n\nModality:\n- Metadata says: “Stimulus modalities: visual”, “Primary modality: visual”, and “Stimulus type: image_flash”.\n- Few-shot pattern suggests: modality follows stimulus channel (e.g., oddball with tones -> Auditory; moving dots -> Visual).\n- Alignment: ALIGN. 
Visual is explicit.\n\nType:\n- Metadata says: “Paradigm: p300”, “Events: Target=2, NonTarget=1”, and “Subjects counted silently how often a prescribed image … was flashed” (target detection/oddball attention).\n- Few-shot pattern suggests: oddball/target-detection paradigms map to an attention-oriented ERP construct (even when a clinical cohort exists, the cognitive construct can still be attention; the PD oddball example’s Type was set to Clinical/Intervention due to PD being the primary focus).\n- Alignment: MOSTLY ALIGN. This dataset is a P300 BCI performance dataset in mixed disabled/able-bodied participants; the task construct is best captured as Attention rather than Perception or Clinical/Intervention.","decision_summary":"Top-2 candidate selection:\n\nPathology (winner: Other)\n- Candidate 1: Other\n  - Evidence: “4 disabled (cerebral palsy, multiple sclerosis, late-stage amyotrophic lateral sclerosis, traumatic brain and spinal-cord injury C4 level)” indicates a clinical/disability cohort but heterogeneous and not covered by a single allowed pathology label.\n- Candidate 2: TBI\n  - Evidence: TBI is explicitly mentioned: “traumatic brain and spinal-cord injury C4 level”.\n- Head-to-head: Other wins because the recruited clinical group is mixed (CP, MS, ALS, TBI/SCI) and not primarily a TBI-only cohort.\n- Confidence justification: explicit population description exists, but mapping must collapse multiple diagnoses into a broad label.\n\nModality (winner: Visual)\n- Candidate 1: Visual\n  - Evidence: “Stimulus modalities: visual”, “Primary modality: visual”, “Stimulus type: image_flash”, and flashing “six images”.\n- Candidate 2: Multisensory\n  - Evidence: none (no auditory/tactile stimulus described).\n- Head-to-head: Visual clearly wins.\n- Confidence justification: multiple explicit modality statements.\n\nType (winner: Attention)\n- Candidate 1: Attention\n  - Evidence: oddball-like structure “Target=2, NonTarget=1” plus instruction to “count silently 
how often a prescribed image … was flashed” (selective attention to targets / P300).\n- Candidate 2: Clinical/Intervention\n  - Evidence: dataset/paper framing: “An efficient P300-based brain-computer interface for disabled subjects” and “environment_control” application suggests assistive/clinical motivation.\n- Head-to-head: Attention wins because the experimental construct is classic P300 target detection; clinical framing exists but does not dominate as an intervention study design in the provided metadata.\n- Confidence justification: explicit P300/target-nontarget description supports Attention, but there is some plausible alternative (Clinical/Intervention) due to disabled cohort and assistive BCI aim."}},"canonical_name":null,"name_confidence":0.86,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Hoffmann2008"}}