{"success":true,"database":"eegdash","data":{"_id":"6953f4239276ef1ee07a32c6","dataset_id":"ds003420","associated_paper_doi":null,"authors":["Ahmad Mheich","Olivier Dufor","Sahar Yassine","Aya Kabbara","Arnaud Biraben","Fabrice Wendling","Mahmoud Hassan"],"bids_version":"1.2","contact_info":["EBN Lab"],"contributing_labs":null,"data_processed":false,"dataset_doi":"10.18112/openneuro.ds003420.v1.0.2","datatypes":["eeg"],"demographics":{"subjects_count":23,"ages":[23,31,19,23,24,30,19,22,34,19,26,21,20,33,20,30,40,27,24,39,33,25,23],"age_min":19,"age_max":40,"age_mean":26.304347826086957,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds003420","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"7ac2a8f0840af5a4ff1fb954f042102734a360b6f1d244af92d270a9cb1020a3","license":"CC0","n_contributing_labs":null,"name":"HD-EEGtask(Dataset 1)","readme":"#    Dataset 1\n##   Presentation\n     This dataset was collected between 2012 and 2013 in Rennes (France) during two conditions (visual naming and spelling tasks).\n     The dataset consists of naming and spelling the names of visually presented objects. The data was collected in the Rennes University Hospital. This experiment was approved by an independent ethics committee and authorized by the French institutional review board (IRB): \"Comite de Protection des Personnes dans la Recherche Biomedicale Ouest V\" (CCPPRB-Ouest V).\n     This study was registered under the name \"conneXion\" and the agreement number: 2012- A01227-36.\n###  Participants\n     Twenty-three right-handed healthy volunteers of whom 12 females, with an age range between\n     19 and 40 years (mean age 28 year),and 11 males with an age range between 19 and 33 years (mean age 23 years) participated in this study. 
(See participants.json and participants.tsv for more details)\n#### Experiment\n     * The experiment begins with the verification of inclusion/exclusion criteria.\n     * The participants read the information notice and the consent form.\n     * Then they sign two questionnaires.\n     * One subject --> two conditions (naming and spelling) --> two runs per condition.\n     * Each run contains 74 stimuli.\n     * The spelling task always follows the naming task, and its instructions were not given until the naming task was completed, to avoid any reminiscence of the words' orthographic structures.\n     * Each run contains balanced numbers of animals and objects as well as long and short words.\n     * Pictures are presented on a screen using a computer, and the experimental paradigm is delivered using E-Prime (Psychology Software Tools).\n     * The responses produced by the participants were collected via a Logitech microphone and analyzed to detect onsets of speech using Praat v5.3.13 (University of Amsterdam, 1012 VT Amsterdam, The Netherlands).\n#### EEG acquisition\n     * HD-EEG system (EGI, Electrical Geodesics Inc., 256 electrodes)\n     * Sampling frequency: 1000 Hz\n     * Impedances were kept below 5 kΩ\n## Contact\n     * If you have any questions or comments, please contact:\n     * Ahmad Mheich: mheich.ahmad@gmail.com","recording_modality":["eeg"],"senior_author":"Mahmoud Hassan","sessions":["naming","spelling"],"size_bytes":50565140405,"source":"openneuro","study_design":null,"study_domain":null,"tasks":[],"timestamps":{"digested_at":"2026-04-22T12:25:31.903370+00:00","dataset_created_at":"2020-12-04T13:06:19.829Z","dataset_modified_at":"2020-12-13T17:40:52.000Z"},"total_files":92,"storage":{"backend":"s3","base":"s3://openneuro.org/ds003420","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"0e11879f761f3deb","model":"openai/gpt-5.2","tagged_at":"2026-01-20T10:15:41.450642+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Other"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Closest few-shot by stimulus channel is the schizophrenia visual discrimination example (Meta-rdk), labeled Modality=Visual based on screen-presented dot stimuli; this guides mapping of screen pictures here to Visual modality. Also, the Motor Movement/Imagery example shows that visual cues do not imply Motor modality (modality is stimulus, not response), supporting Visual here despite spoken responses. For Type, none of the few-shots are language-production (naming/spelling); by convention when the primary construct is not well-covered by available Type labels (no explicit Language label), choose Type=Other rather than forcing Perception/Memory.","metadata_analysis":"Key facts from README: (1) population: \"Twenty-three right-handed healthy volunteers\". (2) paradigm and stimulus channel: \"two conditions (visual naming and spelling tasks)\" and \"naming and spelling the names of visually presented objects\". (3) presentation format: \"Pictures are presented on a screen\". (4) response format (not modality): \"responses ... collected via a Logitech microphone\" and analyzed for \"onsets of speech\".","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says participants were \"healthy volunteers\"; few-shot convention maps explicitly healthy samples to Pathology=Healthy (ALIGN). 
Modality: Metadata says \"visual naming\" and \"visually presented objects\" / \"Pictures ... on a screen\"; few-shot convention infers modality from stimulus channel (e.g., visual dot discrimination labeled Visual), not from response modality (ALIGN). Type: Metadata emphasizes naming/spelling with spoken responses (language production); few-shot set does not provide a direct language-production analog, and available Type labels lack a dedicated Language category—convention suggests using Type=Other when the cognitive construct does not fit Perception/Memory/Motor/etc. cleanly (ALIGN).","decision_summary":"Pathology top-2: (1) Healthy — supported by \"Twenty-three right-handed healthy volunteers\"; (2) Unknown — would apply if recruitment health status were unstated. Winner: Healthy (explicit). Modality top-2: (1) Visual — supported by \"visual naming and spelling tasks\", \"visually presented objects\", and \"Pictures are presented on a screen\"; (2) Multisensory — possible if auditory stimuli were presented, but none are described (microphone is response capture). Winner: Visual. Type top-2: (1) Other — task is language production/orthographic retrieval: \"naming and spelling the names of visually presented objects\" plus speech-onset recording via microphone; (2) Memory — could be argued due to lexical retrieval and spelling, but dataset framing is not explicitly memory-focused. Winner: Other. Confidence reflects multiple explicit quotes for Pathology/Modality and a clear but label-mismatch-driven mapping for Type."}},"nemar_citation_count":1,"computed_title":"HD-EEGtask(Dataset 1)","nchans_counts":[{"val":256,"count":80},{"val":257,"count":12}],"sfreq_counts":[{"val":1000.0,"count":92}],"stats_computed_at":"2026-04-22T23:16:00.222033+00:00","source_url":"https://openneuro.org/datasets/ds003420","total_duration_s":48729.49,"author_year":"Mheich2020_HD","canonical_name":null}}
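For readers who want to work with this record programmatically, here is a minimal sketch using only the Python standard library. It assumes the JSON response above has been saved to a local file (the filename `ds003420_record.json` is hypothetical); all field names are taken directly from the record, and no eegdash client library is assumed.

```python
import json

# Load the API response shown above (assumed saved to a local file).
with open("ds003420_record.json") as f:
    record = json.load(f)["data"]

# Basic identification and demographics.
demo = record["demographics"]
print(f"{record['dataset_id']}: {record['name']}")
print(f"Subjects: {demo['subjects_count']} "
      f"(ages {demo['age_min']}-{demo['age_max']}, mean {demo['age_mean']:.1f})")

# Channel-count and sampling-rate distributions across the recordings.
for entry in record["nchans_counts"]:
    print(f"{entry['count']} files with {entry['val']} channels")
for entry in record["sfreq_counts"]:
    print(f"{entry['count']} files sampled at {entry['val']} Hz")

# Total recorded EEG, in hours, and the S3 location of the raw data.
print(f"Total duration: {record['total_duration_s'] / 3600:.1f} h")
print(f"Raw data: {record['storage']['base']}")
```

Run against the record above, this prints the subject demographics, the 256/257-channel split across the 92 files, the 1000 Hz sampling rate, roughly 13.5 hours of total EEG, and the s3://openneuro.org/ds003420 prefix where the raw files live.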