{
  "success": true,
  "database": "eegdash",
  "data": {
    "_id": "6953f4239276ef1ee07a32bf",
    "dataset_id": "ds003190",
    "associated_paper_doi": null,
    "authors": ["Omar Mendoza-Montoya", "Javier M. Antelis"],
    "bids_version": "1.2",
    "contact_info": ["Juan David Chailloux Peguero"],
    "contributing_labs": null,
    "data_processed": false,
    "dataset_doi": "10.18112/openneuro.ds003190.v1.0.1",
    "datatypes": ["eeg"],
    "demographics": {
      "subjects_count": 19,
      "ages": [],
      "age_min": null,
      "age_max": null,
      "age_mean": null,
      "species": null,
      "sex_distribution": null,
      "handedness_distribution": null
    },
    "experimental_modalities": null,
    "external_links": {
      "source_url": "https://openneuro.org/datasets/ds003190",
      "osf_url": null,
      "github_url": null,
      "paper_url": null
    },
    "funding": [],
    "ingestion_fingerprint": "5c7d7639043cd5e823e6842c296efd4b6090f938715edcc08a688adc7c1e5545",
    "license": "CC0",
    "n_contributing_labs": null,
    "name": "Assesment of the visual stimuli properties in P300 paradigm",
    "readme": "Dataset description:\nThe database consists of a total of 382 electroencephalographic files from 19 participants. All recordings were collected on channels Fz, Cz, P3, Pz, P4, PO7, PO8 and Oz, according to the 10-20 EEG electrode placement standard, grounded to AFz channel and referenced to right mastoid (M2).\n•Each participant (S1-S19) performed 3 experimental sessions (Session01-Session03) and in each session there are 7 data files.\n•The filenames for these data files are 'Training 4', 'Training 5 - SF', 'Training 5 - CF', 'Training 6', 'Training 7', 'Training 8', and 'Training 9'.\n•The number accompanying the filename indicates the number of stimuli, whereas letters SF and CF for data files with 5 stimuli indicate the type of flash, SF for Standard-Flash of the stimulus and CF for superimposing a yellow smiling Cartoon Face.\n•Note that filenames for data files with 4, 6, 7, 8, and 9 stimuli do not have a letter and were recorded with the type of flash that provided the greater classification accuracy when using 5 stimuli.\n•Each data file contains the data stream in a 2D matrix where rows correspond to channels and columns correspond to time samples with sampling frequency of 256 Hz.\n•There are 10 rows, 1 to 8 for each EEG electrode (in descending order Fz, Cz, P3, Pz, P4, PO7, PO8 and Oz), 9 for time stamps, and 10 for a marker that encodes information about the execution of the experiment.\nThe marker encodes this information as follows:\n•(i) marker numbers 101, 200, 201, 202 and 203, indicate the beginning and end of the five phases in a block\n•(ii) marker numbers 1, 2, 3, 4, 5, 6, 7, 8 and 9, indicate the symbol that is activated on the screen\n•(iii) each phase of the experiment block is identified with a marker\n•(iv) the phases of one block of the experiment are: Fixation, Target Presentation, Preparation, Stimulation and Rest\n•(v) in particular the Stimulation phase has a start marker and an end marker",
    "recording_modality": ["eeg"],
    "senior_author": "Javier M. Antelis",
    "sessions": ["01", "02", "03"],
    "size_bytes": 1079425010,
    "source": "openneuro",
    "study_design": null,
    "study_domain": null,
    "tasks": ["cnos", "ctos"],
    "timestamps": {
      "digested_at": "2026-04-22T12:25:31.226741+00:00",
      "dataset_created_at": "2020-09-25T22:01:52.624Z",
      "dataset_modified_at": "2020-10-06T01:17:50.000Z"
    },
    "total_files": 384,
    "storage": {
      "backend": "s3",
      "base": "s3://openneuro.org/ds003190",
      "raw_key": "dataset_description.json",
      "dep_keys": ["CHANGES", "LICENSE", "README", "participants.tsv"]
    },
    "nemar_citation_count": 4,
    "computed_title": "Assesment of the visual stimuli properties in P300 paradigm",
    "nchans_counts": [{"val": 9, "count": 382}, {"val": 10, "count": 2}],
    "sfreq_counts": [{"val": 256.0, "count": 384}],
    "stats_computed_at": "2026-04-22T23:16:00.221946+00:00",
    "tags": {
      "pathology": ["Healthy"],
      "modality": ["Visual"],
      "type": ["Attention"],
      "confidence": {"pathology": 0.6, "modality": 0.9, "type": 0.75},
      "reasoning": {
        "few_shot_analysis": "Closest few-shot paradigms are the oddball/P300-style datasets: (1) 'Cross-modal Oddball Task' (Parkinson's; Multisensory; Clinical/Intervention) shows the convention that oddball-like paradigms are labeled by stimulus modality (visual+auditory => Multisensory) and that when a clinical cohort is explicitly recruited, Pathology becomes that diagnosis and Type can shift to Clinical/Intervention. (2) 'EEG: Three-Stim Auditory Oddball and Rest in Acute and Chronic TBI' illustrates that an oddball paradigm can be categorized under a higher-level cognitive construct rather than task mechanics. For the current dataset, the paradigm is explicitly 'P300', which is typically tied to target detection/attentional selection; with no clinical recruitment stated, we follow the convention of labeling pathology as Healthy when the cohort is normative and unspecified.",
        "metadata_analysis": "Key quoted metadata indicating visual P300 stimulation: (1) Title: \"Assesment of the visual stimuli properties in P300 paradigm\". (2) Readme: \"letters SF and CF ... indicate the type of flash, SF for Standard-Flash of the stimulus and CF for superimposing a yellow smiling Cartoon Face.\" (3) Readme: \"marker numbers 1, 2, 3, 4, 5, 6, 7, 8 and 9, indicate the symbol that is activated on the screen\". (4) Readme: phases include \"Fixation, Target Presentation, Preparation, Stimulation and Rest\". Participant information is minimal: \"19 participants\" and \"Subjects: 19\" with no diagnosis/condition described.",
        "paper_abstract_analysis": "No useful paper information.",
        "evidence_alignment_check": "Pathology — Metadata SAYS: only \"19 participants\" / \"Subjects: 19\" with no disorder mentioned; Few-shot pattern SUGGESTS: when no clinical recruitment is described, label as Healthy (vs explicit diagnoses like Parkinson's/TBI in examples). ALIGN (no conflict).\nModality — Metadata SAYS: explicitly \"visual stimuli\" in the title and describes screen-based flashes/symbols (\"flash\", \"symbol that is activated on the screen\"); Few-shot pattern SUGGESTS: modality is based on stimulus channel (e.g., cross-modal oddball => Multisensory). ALIGN.\nType — Metadata SAYS: \"P300 paradigm\" with target/symbol activation and structured phases including \"Target Presentation\"; Few-shot pattern SUGGESTS: oddball/P300 paradigms are categorized by the main construct studied, often attentional target detection rather than motor response mechanics. Mostly ALIGN, though one few-shot oddball dataset was labeled Decision-making; here the P300/target-detection framing makes Attention the better fit.",
        "decision_summary": "Top-2 candidates per category with head-to-head selection:\n\nPathology:\n1) Healthy — Evidence: no clinical group/diagnosis stated (\"19 participants\"; \"Subjects: 19\"). Consistent with few-shot convention that explicit diagnoses drive non-Healthy labels.\n2) Unknown — Evidence: minimal participant characterization beyond count.\nDecision: Healthy wins because the dataset describes an experimental P300 paradigm without any clinical recruitment language.\n\nModality:\n1) Visual — Evidence: title explicitly says \"visual stimuli\"; readme describes \"flash\" properties and \"symbol ... activated on the screen\" and a \"Cartoon Face\" overlay.\n2) Multisensory — Weak evidence: no auditory/tactile stimuli described.\nDecision: Visual wins clearly.\n\nType:\n1) Attention — Evidence: \"P300 paradigm\" (classic attentional target detection), explicit \"Target Presentation\" phase, and symbols being activated as stimuli.\n2) Perception — Evidence: focus on \"visual stimuli properties\" and varying number/type of stimuli could be construed as perceptual discrimination.\nDecision: Attention wins because P300 paradigms primarily index attentional selection/target detection rather than low-level perception.\n\nConfidence justification:\n- Pathology confidence is lower due to lack of explicit 'healthy' wording (only participant count).\n- Modality confidence is high due to multiple explicit visual-stimulus quotes.\n- Type confidence is moderate-high: explicit 'P300' and 'Target Presentation' support Attention, but Perception remains a plausible runner-up given the stimulus-property manipulation."
      }
    },
    "total_duration_s": 143089.3125,
    "tagger_meta": {
      "config_hash": "3557b68bca409f28",
      "metadata_hash": "004770448c7e8613",
      "model": "openai/gpt-5.2",
      "tagged_at": "2026-04-07T09:32:40.872789+00:00"
    },
    "author_year": "MendozaMontoya2020",
    "canonical_name": null
  }
}
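The record's readme describes a fixed layout for each data file: a 10-row matrix in which rows 1-8 are the EEG channels (Fz, Cz, P3, Pz, P4, PO7, PO8, Oz), row 9 holds time stamps, and row 10 holds event markers (101/200-203 bound the five block phases; 1-9 identify the flashed symbol). The following is a minimal sketch of how one might consume a response like this and split a data matrix along that convention; the function names and the summary fields chosen are illustrative, not part of any eegdash client API.

```python
import json

# Channel order stated in the dataset README (rows 1-8 of each data file).
CHANNELS = ["Fz", "Cz", "P3", "Pz", "P4", "PO7", "PO8", "Oz"]

# Marker conventions from the README: these values bound the five block
# phases (Fixation, Target Presentation, Preparation, Stimulation, Rest).
PHASE_MARKERS = {101, 200, 201, 202, 203}


def summarize_record(response_text: str) -> dict:
    """Pull a few headline fields out of a JSON response shaped like the one above."""
    rec = json.loads(response_text)["data"]
    return {
        "dataset_id": rec["dataset_id"],
        "subjects": rec["demographics"]["subjects_count"],
        "sessions": rec["sessions"],
        "sampling_rates_hz": [s["val"] for s in rec["sfreq_counts"]],
        "size_gb": round(rec["size_bytes"] / 1e9, 2),
    }


def split_stream(matrix):
    """Split a 10-row data matrix into (eeg, timestamps, markers, symbol_events).

    Per the README: rows 1-8 are EEG channels in CHANNELS order, row 9 is
    time stamps, row 10 is the marker row.  Marker values 1-9 identify the
    symbol activated on screen; phase-boundary markers are in PHASE_MARKERS.
    """
    eeg = dict(zip(CHANNELS, matrix[:8]))  # channel name -> row of samples
    timestamps = matrix[8]
    markers = matrix[9]
    symbol_events = [m for m in markers if 1 <= m <= 9]
    return eeg, timestamps, markers, symbol_events
```

Keeping the channel list and marker constants in one place mirrors the README's row-numbering convention, so downstream epoching code can index by channel name rather than by bare row offsets.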