{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a33b1","dataset_id":"ds005059","associated_paper_doi":null,"authors":["Haydn G. Herrema","Michael J. Kahana"],"bids_version":"1.7.0","contact_info":["Haydn Herrema"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds005059.v1.0.6","datatypes":["ieeg"],"demographics":{"subjects_count":69,"ages":[48,49,39,31,47,32,27,40,45,49,28,20,34,36,23,34,39,26,24,22,39,20,51,36,52,28,35,34,43,34,26,43,20,29,33,34,57,24,56,44,43,28,30,34,20,18,29,39,46,50,42,58,28,56,20,27,27,29,30,28,21,29,32,24,62,27,21,26,57,53,21,22],"age_min":18,"age_max":62,"age_mean":34.833333333333336,"species":null,"sex_distribution":{"f":32,"m":40},"handedness_distribution":{"r":55,"a":4,"l":10}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005059","osf_url":null,"github_url":null,"paper_url":null},"funding":["DARPA RAM: N66001-14-2-4032"],"ingestion_fingerprint":"5f1b42dc5c639e331d9d03db92d6b6926dffc88a98381e2e46943247fbc58c1d","license":"CC0","n_contributing_labs":null,"name":"Paired Associates Learning: Memory for Word Pairs in Cued Recall","readme":"### Paired Associates Learning of Word Pairs\n#### Description\nThis dataset contains behavioral events and intracranial electrophysiological recordings from a paired associates memory task.  The experiment consists of participants studying pairs of visually presented words, solving simple arithmetic problems that function as a distractor, and then completing a cued recall task.  The data were collected at clinical sites across the country as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania.\nEach session contains 25 lists of the structure: encoding, distractor, cued recall.  During encoding, 6 pairs of words are presented one pair at a time.  Each pair remains on screen for 4000 ms and is followed by a 1000 ms interstimulus interval.  During cued recall, one randomly chosen word from each pair is shown, and the participant is asked to vocally recall the other word from the pair.  Participants have 5000 ms for each recall, and then the next cue (i.e., a word from another pair) is shown.  All 6 pairs of words are tested on each list.\n#### To Note:\n- The iEEG recordings are labeled either \"monopolar\" or \"bipolar.\"  The monopolar recordings are referenced (typically to a mastoid reference), but should always be re-referenced before analysis.  The bipolar recordings are referenced according to a paired scheme indicated by the accompanying bipolar channels tables.\n- Each subject has a unique montage of electrode locations.  MNI and Talairach coordinates are provided when available, along with brain region annotations.\n- Recordings were made on multiple different systems, so all recordings have been rescaled to express voltage values in volts (V).\n#### Contact\nFor questions or inquiries, please contact sas-kahana-sysadmin@sas.upenn.edu.","recording_modality":["ieeg"],"senior_author":"Michael J. Kahana","sessions":["0","1","2","3","4","5","6"],"size_bytes":179631838657,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["PAL1"],"timestamps":{"digested_at":"2026-04-22T12:27:14.276308+00:00","dataset_created_at":"2024-04-03T21:09:33.244Z","dataset_modified_at":"2024-04-23T00:32:22.000Z"},"total_files":282,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005059","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"a35f394360293cbe","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Epilepsy"],"modality":["Visual"],"type":["Memory"],"confidence":{"pathology":0.6,"modality":0.85,"type":0.9},"reasoning":{"few_shot_analysis":"Closest few-shot by cognitive construct is the digit span dataset (few-shot: \"EEG, pupillometry... 
digit span task\"), which is labeled Type=Memory because it is explicitly a working memory/recall paradigm. This guides mapping a paired-associates / cued recall paradigm to Type=Memory as well. For Modality conventions, the digit span example is Modality=Auditory because digits are \"presented auditorily\"; similarly, the current dataset uses \"visually presented words\", so Modality should be Visual (stimulus channel drives modality). The motor movement/imagery example also reinforces that modality follows presented cues (visual targets) rather than the response.","metadata_analysis":"Key task facts: (1) Memory paradigm is explicit: \"paired associates memory task\" and \"participants studying pairs of visually presented words\" followed by \"completing a cued recall task.\" (2) Stimulus modality is explicit: \"pairs of visually presented words\" and \"one randomly chosen word from each pair is shown\" during recall. (3) Recording/population context suggests a clinical invasive cohort: \"intracranial electrophysiological recordings\" and \"data was collected at clinical sites across the country\"; however, no explicit diagnosis (e.g., epilepsy) is stated.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata SAYS (quotes) \"intracranial electrophysiological recordings\" and \"collected at clinical sites\" but does not name a diagnosis; few-shot pattern does not directly cover iEEG clinical cohorts. ALIGNMENT: partial/ambiguous (clinical context aligns with a patient cohort but not a specific pathology). Modality: Metadata SAYS \"visually presented words\"; few-shot convention SUGGESTS modality follows stimulus channel (e.g., digit span auditory -> Auditory). ALIGNMENT: aligns -> Visual. Type: Metadata SAYS \"paired associates memory task\" and \"cued recall\"; few-shot memory example maps explicit memory tasks to Type=Memory. 
ALIGNMENT: aligns -> Memory.","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n1) Epilepsy — Evidence: invasive \"intracranial electrophysiological recordings\" collected at \"clinical sites\" commonly indicates epilepsy-monitoring cohorts (contextual inference, not explicitly stated).\n2) Unknown — Evidence: no explicit diagnosis/condition is provided in the metadata.\nWinner: Epilepsy, but only by contextual inference from iEEG-at-clinical-sites; diagnosis is not explicitly stated.\n\nModality:\n1) Visual — Evidence: \"pairs of visually presented words\"; \"one randomly chosen word ... is shown\".\n2) Other — would apply if modality were mixed/unclear, but visual presentation is explicit.\nWinner: Visual.\n\nType:\n1) Memory — Evidence: \"paired associates memory task\"; \"cued recall task\"; study/encode word pairs.\n2) Attention — distractor arithmetic exists, but the stated purpose is associative memory and recall.\nWinner: Memory.\n\nConfidence justification (quotes/features): Pathology confidence kept moderate because evidence is indirect (iEEG + clinical sites) with no stated diagnosis; Modality and Type are high because both are directly described with multiple explicit phrases."}},"nemar_citation_count":0,"computed_title":"Paired Associates Learning: Memory for Word Pairs in Cued 
Recall","nchans_counts":[{"val":112,"count":22},{"val":126,"count":15},{"val":85,"count":11},{"val":128,"count":10},{"val":110,"count":10},{"val":100,"count":9},{"val":88,"count":9},{"val":104,"count":9},{"val":64,"count":8},{"val":72,"count":8},{"val":186,"count":8},{"val":102,"count":7},{"val":121,"count":7},{"val":116,"count":7},{"val":142,"count":6},{"val":92,"count":6},{"val":95,"count":5},{"val":119,"count":5},{"val":94,"count":5},{"val":97,"count":5},{"val":123,"count":4},{"val":106,"count":4},{"val":68,"count":4},{"val":96,"count":4},{"val":140,"count":4},{"val":124,"count":4},{"val":139,"count":4},{"val":130,"count":4},{"val":86,"count":4},{"val":117,"count":3},{"val":74,"count":3},{"val":120,"count":3},{"val":173,"count":3},{"val":80,"count":3},{"val":55,"count":3},{"val":87,"count":3},{"val":84,"count":3},{"val":114,"count":3},{"val":107,"count":3},{"val":58,"count":3},{"val":188,"count":3},{"val":83,"count":3},{"val":108,"count":3},{"val":73,"count":2},{"val":118,"count":2},{"val":115,"count":2},{"val":122,"count":2},{"val":138,"count":2},{"val":149,"count":2},{"val":111,"count":2},{"val":141,"count":2},{"val":177,"count":1},{"val":16,"count":1},{"val":67,"count":1},{"val":146,"count":1},{"val":77,"count":1},{"val":133,"count":1},{"val":52,"count":1},{"val":98,"count":1},{"val":76,"count":1},{"val":99,"count":1},{"val":46,"count":1},{"val":93,"count":1},{"val":14,"count":1},{"val":53,"count":1},{"val":60,"count":1},{"val":90,"count":1}],"sfreq_counts":[{"val":1000.0,"count":193},{"val":500.0,"count":71},{"val":1024.0,"count":8},{"val":499.7071,"count":6},{"val":1600.0,"count":4}],"stats_computed_at":"2026-04-22T23:16:00.308884+00:00","total_duration_s":940736.6235217251,"canonical_name":null,"name_confidence":0.62,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Herrema2024_Paired"}}