{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c7c","dataset_id":"nm000147","associated_paper_doi":null,"authors":["Michele Romani","Devis Zanoni","Elisabetta Farella","Luca Turchet"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.48550/arXiv.2510.10169","datatypes":["eeg"],"demographics":{"subjects_count":22,"ages":[21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21],"age_min":21,"age_max":21,"age_mean":21.0,"species":null,"sex_distribution":{"m":12,"f":10},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000147","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"34df7993f77c2febd02e9d61d152ca6cfebbdddfcfbe33754ff443fff0ad2201","license":"CC-BY-4.0","n_contributing_labs":null,"name":"RomaniBF2025ERP","readme":"RomaniBF2025ERP\n===============\nMOABB class for BrainForm event-related potentials (ERP) dataset.\nDataset Overview\n----------------\n  Code: RomaniBF2025ERP\n  Paradigm: p300\n  DOI: 10.48550/arXiv.2510.10169\n  Subjects: 22\n  Sessions per subject: 2\n  Events: Target=1, NonTarget=2\n  Trial interval: [-0.1, 1.0] s\n  File format: EDF\n  Contributing labs: University of Trento, Fondazione Bruno Kessler\nAcquisition\n-----------\n  Sampling rate: 250.0 Hz\n  Number of channels: 8\n  Channel types: eeg=8\n  Channel names: Fz, C3, Cz, C4, Pz, PO7, Oz, PO8\n  Montage: standard_1020\n  Hardware: g.tec Unicorn Hybrid Black\n  Reference: right mastoid\n  Ground: left mastoid\n  Sensor type: EEG\n  Line frequency: 50.0 Hz\n  Cap manufacturer: g.tec\n  Cap model: Unicorn Hybrid Black\n  Electrode type: conductive gel\nParticipants\n------------\n  Number of subjects: 22\n  Health status: healthy\n  Age: mean=21.87, std=3.22\n  Gender distribution: female=10, male=12\n  BCI experience: naive\nExperimental 
Protocol\n---------------------\n  Paradigm: p300\n  Number of classes: 2\n  Class labels: Target, NonTarget\n  Trial duration: 0.9 s\n  Tasks: Complex Task (5 colored laser beams), Speller Task (10 color targets)\n  Study design: Within-subject study with two main sessions separated by visual texture swap (counterbalanced). Each session: calibration, tutorial, practice run with Complex Task (5 targets) and Speller Task (10 targets). Optional free-play third session for 16 participants.\n  Study domain: BCI training, serious gaming, skill acquisition\n  Feedback type: visual\n  Stimulus type: flickering\n  Stimulus modalities: visual\n  Primary modality: visual\n  Synchronicity: synchronous\n  Mode: online\n  Training/test split: True\n  Instructions: minimize movement during recording to reduce motion artifacts, focus on flickering targets for calibration and task completion\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  Target\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Target\n  NonTarget\n    ├─ Sensory-event\n    ├─ Experimental-stimulus\n    ├─ Visual-presentation\n    └─ Non-target\nParadigm-Specific Parameters\n----------------------------\n  Detected paradigm: p300\n  Number of targets: 10\n  Stimulus onset asynchrony: 100.0 ms\nData Structure\n--------------\n  Trials: 600\n  Trials context: Per calibration session: 600 total stimulus events (60 target + 540 non-target from 10 unique targets). 
~1 minute duration.\nPreprocessing\n-------------\n  Data state: raw\n  Preprocessing applied: False\nSignal Processing\n-----------------\n  Classifiers: LDA\nCross-Validation\n----------------\n  Method: cross-validation\n  Evaluation type: within-subject\nPerformance (Original Study)\n----------------------------\n  Task Accuracy Complex Median T2A: 0.833\n  Task Accuracy Speller Median T3B: 0.833\n  Itr Complex Mean T2A: 10.76\n  Itr Speller Mean T3B: 21.95\n  Calibration Attempts Session1 Mean: 2.64\n  Calibration Attempts Session2 Mean: 2.68\nBCI Application\n---------------\n  Applications: speller, gaming\n  Environment: laboratory\n  Online feedback: True\nTags\n----\n  Pathology: Healthy\n  Modality: ERP\n  Type: P300\nDocumentation\n-------------\n  Description: BrainForm: a Serious Game for BCI Training and Data Collection - gamified BCI training system designed for scalable data collection using consumer hardware\n  DOI: 10.48550/arXiv.2510.10169\n  License: CC-BY-4.0\n  Investigators: Michele Romani, Devis Zanoni, Elisabetta Farella, Luca Turchet\n  Senior author: Luca Turchet\n  Institution: University of Trento\n  Address: 38122, Trento, Italy\n  Country: IT\n  Repository: GitHub\n  Data URL: https://zenodo.org/records/17225966\n  Publication year: 2025\n  Keywords: Brain-Computer Interfaces, Event-Related Potentials, Machine Learning, Serious Games, Human factors\nAbstract\n--------\nBrainForm is a gamified Brain-Computer Interface (BCI) training system designed for scalable data collection using consumer hardware and a minimal setup. We investigated (1) how users develop BCI control skills across repeated sessions and (2) perceptual and performance effects of two visual stimulation textures. Game Experience Questionnaire (GEQ) scores for Flow, Positive Affect, Competence and Challenge were strongly positive, indicating sustained engagement. 
A within-subject study with multiple runs, two task complexities, and post-session questionnaires revealed no significant performance differences between textures but increased ocular irritation over time. Online metrics—Task Accuracy, Task Time, and Information Transfer Rate—improved across sessions, confirming learning effects for symbol spelling, even under pressure conditions. Our results highlight the potential of BrainForm as a scalable, user-friendly BCI research tool and offer guidance for sustained engagement and reduced training fatigue.\nMethodology\n-----------\nStructured protocol consisting of: (1) introductory tutorial, (2) two practice runs involving calibration and control with up to ten flickering targets, (3) final timed challenge. Two main sessions separated by short break and visual texture swap (counterbalanced). Calibration: 60 trials focusing on single flashing target (~1 minute), repeated until 80%+ accuracy. Tasks: Complex Task (5 colored laser beams, game-oriented) and Speller Task (10 color targets, BCI-oriented symbol spelling). Optional free-play run for 16 participants. Data collection: raw EEG, performance metrics, in-game metadata, and questionnaires (demographic, session questionnaire, GEQ).\nReferences\n----------\nM. Romani, D. Zanoni, E. Farella, and L. Turchet, \"BrainForm: a Serious Game for BCI Training and Data Collection,\" Oct. 14, 2025, arXiv: arXiv:2510.10169. doi: 10.48550/arXiv.2510.10169.\nM. Romani, F. Paissan, A. Fossà, and E. Farella, \"Explicit modelling of subject dependency in BCI decoding,\" Sept. 27, 2025, arXiv: arXiv:2509.23247. doi: 10.48550/arXiv.2509.23247.\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. 
Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0cb","0grain","1cb","1grain","2cbExtra","2grainExtra"],"size_bytes":140866957,"source":"nemar","storage":{"backend":"s3","base":"s3://nemar/nm000147","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["p300"],"timestamps":{"digested_at":"2026-04-06T13:13:21.703385+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-31T08:55:37Z"},"total_files":120,"computed_title":"RomaniBF2025ERP","nchans_counts":[{"val":8,"count":120}],"sfreq_counts":[{"val":250.0,"count":120}],"stats_computed_at":"2026-04-04T21:29:34.904890+00:00","total_duration_s":22601.487999999998,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"f486ac893176728f","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Learning"],"confidence":{"pathology":0.9,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot conventions are those involving oddball-like target vs non-target ERP paradigms and task-goal framing. The Parkinson’s Cross-modal Oddball example shows that oddball/target-detection paradigms are typically labeled by stimulus modality (often Multisensory/Visual/Auditory depending on cues) and the Type reflects the study’s primary purpose (there: Clinical/Intervention due to PD focus). 
The Schizophrenia visual discrimination example shows that even when responses/choices exist, modality is set by stimulus channel (Visual) and Type is set by the core construct (Perception in that case). For this dataset, the paradigm is explicitly “p300” with Target/NonTarget, which matches the ERP/oddball convention; however, the dataset’s stated research goal emphasizes “BCI training” and “skill acquisition”, guiding Type toward Learning rather than pure Attention/Perception.","metadata_analysis":"Key quoted facts from the dataset metadata/readme:\n1) Population/health: \"Health status: healthy\" and also \"Tags\\n----\\n  Pathology: Healthy\".\n2) Paradigm/task: \"Paradigm: p300\" and \"Events: Target=1, NonTarget=2\".\n3) Stimulus modality: \"Stimulus modalities: visual\" plus \"Primary modality: visual\" and \"Stimulus type: flickering\".\n4) Study purpose: \"Study domain: BCI training, serious gaming, skill acquisition\".\n5) Learning emphasis in abstract: \"how users develop BCI control skills across repeated sessions\" and \"Online metrics... improved across sessions, confirming learning effects\".","paper_abstract_analysis":"The abstract embedded in the README is useful and reinforces the primary study purpose as training/skill acquisition: \"how users develop BCI control skills across repeated sessions\" and \"improved across sessions, confirming learning effects\". 
This supports choosing Type=Learning over a purely attentional ERP characterization.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Health status: healthy\" / \"Pathology: Healthy\".\n- Few-shot pattern suggests: use explicit recruitment diagnosis when stated; otherwise Healthy.\n- Alignment: ALIGN (explicitly healthy).\n\nModality:\n- Metadata says: \"Stimulus modalities: visual\", \"Primary modality: visual\", and visual \"flickering\" targets.\n- Few-shot pattern suggests: modality is determined by stimulus channel (e.g., visual discrimination -> Visual; cross-modal oddball -> Multisensory).\n- Alignment: ALIGN (clearly Visual).\n\nType:\n- Metadata says (purpose): \"Study domain: BCI training, serious gaming, skill acquisition\"; abstract says \"develop BCI control skills\" and \"learning effects\".\n- Few-shot pattern suggests: target/non-target ERP paradigms often map to Attention/Perception unless another primary goal dominates (e.g., PD oddball mapped to Clinical/Intervention because pathology focus).\n- Alignment: PARTIAL—task mechanics resemble attention/oddball ERP, but metadata explicitly frames the study around training/skill acquisition. Per instruction priority, explicit metadata about study aim supports Learning as primary Type.","decision_summary":"Top-2 comparative selection:\n\n1) Pathology\n- Candidate A: Healthy\n  Evidence: \"Health status: healthy\"; \"Pathology: Healthy\"; participants are described as \"BCI experience: naive\" with no disorder mentioned.\n- Candidate B: Unknown\n  Evidence: would apply only if no recruitment health info were provided.\n- Decision: Healthy (metadata explicit). 
Alignment: aligned with few-shot conventions.\n\n2) Modality\n- Candidate A: Visual\n  Evidence: \"Stimulus modalities: visual\"; \"Primary modality: visual\"; \"Feedback type: visual\"; \"Stimulus type: flickering\".\n- Candidate B: Other\n  Evidence: could be considered if stimulus channel were ambiguous or mixed.\n- Decision: Visual (multiple explicit statements). Alignment: aligned with few-shot conventions.\n\n3) Type\n- Candidate A: Learning\n  Evidence: \"Study domain: BCI training, serious gaming, skill acquisition\"; abstract: \"develop BCI control skills across repeated sessions\"; \"improved across sessions, confirming learning effects\".\n- Candidate B: Attention\n  Evidence: \"Paradigm: p300\" with \"Target\" vs \"NonTarget\" is a classic attentional oddball/ERP setup.\n- Decision: Learning, because the dataset’s stated research purpose is skill acquisition/training across sessions, not primarily characterizing attention per se. Confidence reflects competition with Attention due to P300 oddball structure."}},"canonical_name":["Romani2025"],"name_confidence":0.62,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"RomaniBF2025"}}
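The record above is a single JSON document; a consumer would typically parse it and pull out the acquisition summary fields (`dataset_id`, `demographics`, `sfreq_counts`, `nchans_counts`, `total_duration_s`). A minimal sketch of that, using only the standard library and an excerpt of the response reduced to the fields touched (this is an illustration, not the eegdash client API):

```python
import json

# Hypothetical excerpt of the API response above, reduced to the
# fields this sketch reads. Field names and values are copied from
# the full record.
response_text = """
{"success": true,
 "data": {"dataset_id": "nm000147",
          "demographics": {"subjects_count": 22},
          "sfreq_counts": [{"val": 250.0, "count": 120}],
          "nchans_counts": [{"val": 8, "count": 120}],
          "total_duration_s": 22601.488}}
"""

payload = json.loads(response_text)

# Check the envelope's success flag before trusting the payload.
if not payload["success"]:
    raise RuntimeError("eegdash query failed")

record = payload["data"]

# Summarise the recording parameters carried in the response;
# sfreq_counts/nchans_counts are histograms, so take the first
# (here: only) bin's value.
summary = {
    "dataset": record["dataset_id"],
    "subjects": record["demographics"]["subjects_count"],
    "sfreq_hz": record["sfreq_counts"][0]["val"],
    "channels": record["nchans_counts"][0]["val"],
    "hours": round(record["total_duration_s"] / 3600, 2),
}
print(summary)
```

The division converts the record's `total_duration_s` (22601.488 s across 120 files) into hours, about 6.28 h of EEG in total.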