{"success":true,"database":"eegdash","data":{"_id":"6953f4239276ef1ee07a32a7","dataset_id":"ds002712","associated_paper_doi":null,"authors":["Sara Aurtenetxe","Nicola Molinaro","Doug Davidson","Manuel Carreiras"],"bids_version":"1.1.1","contact_info":["Nicola Molinaro"],"contributing_labs":null,"data_processed":false,"dataset_doi":"10.18112/openneuro.ds002712.v1.0.1","datatypes":["meg"],"demographics":{"subjects_count":25,"ages":[27,29,24,24,27,22,24,21,25,27,22,25,29,25,24,24,29,20,25,24,24,23,21,22,23,21,21,21],"age_min":20,"age_max":29,"age_mean":24.035714285714285,"species":null,"sex_distribution":{"m":14,"f":14},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds002712","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"557d0ab815f0ab674a531bffc446d76d1dbf855edf61e2888d483806c41496df","license":"CC0","n_contributing_labs":null,"name":"Numbers and Letters","readme":"OpenNeuro curator note: This dataset was previously accessible at ds001985. The dataset was reuploaded due to privacy considerations.\nThe experiment is composed by two runs\nWe here report the code triggers for each run:\nRun 1: single item\n10 = single numbers\n15 = single letters\n20 & 25 = single false fonts\nRun 2: strings\n35 = strings numbers\n40 = strings letters\n45 & 50 = strings false fonts\nraw files could be split into two files\n(e.g., run-1 + run-11)","recording_modality":["meg"],"senior_author":"Manuel Carreiras","sessions":[],"size_bytes":109278657976,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["numbersletters"],"timestamps":{"digested_at":"2026-04-22T12:25:27.764209+00:00","dataset_created_at":"2020-04-16T07:54:58.744Z","dataset_modified_at":"2020-05-11T17:41:07.000Z"},"total_files":82,"storage":{"backend":"s3","base":"s3://openneuro.org/ds002712","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"ee1ed7cf2b280515","model":"openai/gpt-5.2","tagged_at":"2026-01-20T10:10:51.408020+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.55,"modality":0.8,"type":0.6},"reasoning":{"few_shot_analysis":"Most similar few-shot conventions are the visually driven perception-style tasks (e.g., the schizophrenia-spectrum dataset using a \"visual discrimination task\" labeled as Modality=Visual and Type=Perception). Although the current dataset does not describe a discrimination judgment explicitly, it similarly involves visually presented symbol categories (numbers/letters/false fonts). This guides mapping the stimulus channel to Visual and the purpose to Perception rather than Motor/Resting-state/Sleep. No few-shot example suggests a specific clinical recruitment based only on stimulus categories, so Pathology defaults toward Healthy/Unknown depending on explicit participant info (absent here).","metadata_analysis":"Key available metadata is only the README. It states: (1) \"The experiment is composed by two runs\" and (2) \"Run 1: single item\" with triggers \"10 = single numbers\", \"15 = single letters\", \"20 & 25 = single false fonts\". It also states (3) \"Run 2: strings\" with triggers \"35 = strings numbers\", \"40 = strings letters\", \"45 & 50 = strings false fonts\". 
These lines indicate stimulus categories of characters/symbols (letters, numbers, false fonts) presented as single items vs strings, implying a visual orthographic-form paradigm. There is no participant or diagnosis information in the provided metadata.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says nothing about diagnosis/recruitment (no quoted clinical terms), while few-shot patterns only show that when diagnosis is explicit it should be used; here it is absent. ALIGNMENT: neither supports a clinical label; thus we choose Healthy as the normative default but with reduced confidence due to missing participant info.\nModality: Metadata explicitly lists stimulus types as \"numbers\", \"letters\", \"false fonts\" (single items and strings), which are typically visually presented characters. Few-shot convention maps visually presented stimuli to Modality=Visual. ALIGNMENT: aligns.\nType: Metadata indicates an orthographic/character-form stimulus set (letters/numbers/false fonts) but does not describe a higher-level construct (memory, learning, decision-making, etc.). Few-shot convention for stimulus-driven sensory/categorical processing tasks supports Type=Perception when the focus is on processing/discriminating presented stimuli. ALIGNMENT: weakly aligns (inference needed because task goal is not explicitly stated).","decision_summary":"Top-2 candidates:\n- Pathology: (1) Healthy vs (2) Unknown. Evidence for Healthy: no mention of any disorder/clinical recruitment anywhere in the provided README; many OpenNeuro cognitive EEG tasks are healthy cohorts by default. Evidence for Unknown: participant section is entirely missing, so we cannot confirm. Winner: Healthy, but close runner-up due to missing recruitment details.\n- Modality: (1) Visual vs (2) Other. Evidence for Visual: stimuli are \"numbers\", \"letters\", \"false fonts\" presented as \"single item\" and \"strings\"—canonical visual stimuli. Evidence for Other: no explicit word \"visual\" appears, but character strings strongly imply vision. Winner: Visual.\n- Type: (1) Perception vs (2) Attention. Evidence for Perception: stimulus categories (letters/numbers/false fonts) suggest sensory/orthographic form processing without explicit learning/memory/choice policy. Evidence for Attention: could be an attentional RSVP/visual search-like paradigm, but no such description is provided. Winner: Perception with moderate uncertainty.\nConfidence justification: Modality has 2+ direct stimulus quotes; Pathology and Type lack explicit task-purpose/participant statements, relying on inference."}},"nemar_citation_count":1,"computed_title":"Numbers and Letters","nchans_counts":[{"val":312,"count":79},{"val":361,"count":2},{"val":314,"count":1}],"sfreq_counts":[{"val":1000.0,"count":82}],"stats_computed_at":"2026-04-22T23:16:00.221645+00:00","total_duration_s":86630.0,"author_year":"Aurtenetxe2020","canonical_name":null}}
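The record above is a plain JSON response, so a minimal sketch of how one might consume it follows. It assumes only that the response has been saved locally as "ds002712_record.json" (the filename is an illustrative assumption); every field name used comes from the record itself, and nothing here relies on an eegdash client library.

```python
import json

# Minimal sketch: load the saved response and pull a few summary figures
# out of the "data" payload shown above. The filename is an assumption.
with open("ds002712_record.json") as f:
    record = json.load(f)["data"]

demo = record["demographics"]
ages = demo["ages"]

print(f'{record["name"]} ({record["dataset_id"]}), license {record["license"]}')
print(f'Recording modality: {", ".join(record["recording_modality"])}')

# Cross-check the reported demographics against the raw ages list.
print(f"Ages listed: {len(ages)}, mean {sum(ages) / len(ages):.3f} "
      f"(record reports subjects_count={demo['subjects_count']}, "
      f"age_mean={demo['age_mean']:.3f})")

# Convert duration and size into friendlier units.
print(f'Total duration: {record["total_duration_s"] / 3600:.1f} h over '
      f'{record["total_files"]} files')
print(f'Size: {record["size_bytes"] / 1e9:.1f} GB')

# Reconstruct the S3 location of the BIDS dataset_description.json
# from the storage block (base + raw_key).
storage = record["storage"]
print(f'{storage["base"]}/{storage["raw_key"]}')
```

Note that the cross-check is not idle: the ages list and sex_distribution in this record each account for 28 entries, while subjects_count is 25, so recomputing the mean from the raw list surfaces that discrepancy instead of hiding it behind the precomputed age_mean.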