{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a33fa","dataset_id":"ds005522","associated_paper_doi":null,"authors":["Haydn G. Herrema","Michael J. Kahana"],"bids_version":"1.7.0","contact_info":["Haydn Herrema"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds005522.v1.0.0","datatypes":["ieeg"],"demographics":{"subjects_count":55,"ages":[48,20,55,20,30,36,47,54,47,34,38,32,36,19,24,23,19,31,29,47,34,28,58,51,39,21,52,20,24,19,23,34,44,36,21,23,23,56,34,39,45,39,26,60,24,50,47,21,49,20,36,34,26,26,29,31,40,23],"age_min":19,"age_max":60,"age_mean":34.37931034482759,"species":null,"sex_distribution":{"f":33,"m":25},"handedness_distribution":{"r":46,"l":8,"a":4}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005522","osf_url":null,"github_url":null,"paper_url":null},"funding":["DARPA RAM: N66001-14-2-4032"],"ingestion_fingerprint":"46e3d5b38265a70302871faefe07bdb76b4bc6f90d7772a3fcb6abfc5e244e7d","license":"CC0","n_contributing_labs":null,"name":"Spatial Navigation Memory of Object Locations","readme":"### Spatial Navigation Memory of Object Locations\n#### Description\nThis dataset contains behavioral events and intracranial electrophysiological recordings from a spatial navigation memory task.  The experiment consists of participants encoding object locations during a guided navigation learning phase and then recalling the object locations during a self-navigation test phase.  The data was collected at clinical sites across the country as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania.\nEach session contains 50 trials (2 practice and 48 experimental), and each overall \"trial\" contains 2 learning trials followed by 1 test trial with the same object at the same location.  For learning trial 1, participants are placed at a random location at a given radius from the object.  They are smoothly turned to face the object (1 s), automatically driven to the object location (3 s), and then paused at the object (1 s).  5 seconds later, participants are placed at a new random location and the process repeats for learning trial 2.  On test trials, participants are placed at a random location and orientation, with the object invisible.  They navigate to where they believe the object was located and press a button to record their response.  The environment for all sessions and trials is 64.8 x 36, with coordinates: x = (-32.4, 32.4), y = (-18.0, 18.0).\nThe trials are blocked by a counterbalanced scheme, so for every trial there is another trial with reflected object position, starting position, and orientation.  Each block contains 2 trials (i.e., 2 x (2 learning, 1 test)), with object (X, Y) and starting locations (x, y):\n- **(X1, Y1)**\n    - **(x1', y1')**\n    - **(x1'', y1'')**\n    - **(x1''', y1''')**\n- **(X2, Y2)**\n    - **(x2', y2')**\n    - **(x2'', y2'')**\n    - **(x2''', y2''')**\nThe paired block contains 2 trials in the opposite order with object and starting locations:\n- **(-X2, -Y2)**\n    - **(-x2', -y2')**\n    - **(-x2'', -y2'')**\n    - **(-x2''', -y2''')**\n- **(-X1, -Y1)**\n    - **(-x1', -y1')**\n    - **(-x1'', -y1'')**\n    - **(-x1''', -y1''')**\n#### To Note\n* The iEEG recordings are labeled either \"monopolar\" or \"bipolar.\"  The monopolar recordings are referenced (typically a mastoid reference), but should always be re-referenced before analysis.  
The bipolar recordings are referenced according to a paired scheme indicated by the accompanying bipolar channels tables.\n* Each subject has a unique montage of electrode locations.  MNI and Talairach coordinates are provided when available.\n* Recordings done with the Blackrock system are in units of 250 nV, while recordings done with the Medtronic system are estimated through testing to have units of 0.1 uV.  We have completed the scaling to provide values in V.\n#### Contact\nFor questions or inquiries, please contact sas-kahana-sysadmin@sas.upenn.edu.","recording_modality":["ieeg"],"senior_author":"Michael J. Kahana","sessions":["0","1","2","3","4","5"],"size_bytes":115392336164,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["YC1"],"timestamps":{"digested_at":"2026-04-22T12:28:32.540468+00:00","dataset_created_at":"2024-09-24T15:54:45.157Z","dataset_modified_at":"2024-09-24T17:18:59.000Z"},"total_files":176,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005522","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"8cd7e253b46bccf5","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Unknown"],"modality":["Visual"],"type":["Memory"],"confidence":{"pathology":0.55,"modality":0.7,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example by cognitive construct is the digit span dataset (Healthy/Auditory/Memory). It demonstrates the convention that when the task is explicitly framed around encoding and later recall, the catalog Type should be labeled as \"Memory\" (not e.g., Decision-making). For modality conventions, the Parkinson’s cross-modal oddball example shows Modality tracks stimulus channels (visual+auditory precues => Multisensory), not the motor response; this guides treating the spatial navigation task’s dominant information channel as the stimulus/environment (likely visual) rather than the button press.","metadata_analysis":"Key task facts are explicit in the README: (1) it is a memory paradigm: \"intracranial electrophysiological recordings from a spatial navigation memory task\"; (2) it includes encoding and retrieval phases: \"participants encoding object locations\" and \"then recalling the object locations\"; (3) it is a navigation-based recall: \"They navigate to where they believe the object was located and press a button to record their response.\" The recording context is clinical iEEG but diagnosis is not stated: \"intracranial electrophysiological recordings\" and \"data was collected at clinical sites across the country\" and notes about electrode montages (\"Each subject has a unique montage of electrode locations\").","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata SAYS it is \"intracranial electrophysiological recordings\" collected at \"clinical sites\" but does NOT name a recruiting diagnosis (no mention of epilepsy, tumor, Parkinson’s, etc.). Few-shot patterns SUGGEST many iEEG datasets are epilepsy/surgical cohorts, but this is an inference. 
=> CONFLICT/INSUFFICIENT FACTS; metadata lacks the required explicit diagnostic fact, so label should remain Unknown.\n\nModality: Metadata SAYS participants are oriented to \"face the object\" and the object can be \"invisible\" during test (implying a perceptual scene/environment), and provides a 2D spatial \"environment\" with coordinates—this strongly suggests a visually presented virtual environment. Few-shot conventions SUGGEST choosing stimulus channel over response channel (as in oddball examples). => ALIGN; select Visual over Motor.\n\nType: Metadata SAYS \"spatial navigation memory task,\" with \"encoding object locations\" then \"recalling the object locations.\" Few-shot digit-span example maps explicit encoding/recall paradigms to Type=Memory. => ALIGN; select Memory.","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n- Candidate 1: Unknown — Evidence: no explicit diagnosis/recruitment criterion is provided; only \"intracranial electrophysiological recordings\" and \"clinical sites\" are stated.\n- Candidate 2: Epilepsy — Evidence: contextual inference from typical iEEG at clinical sites with implanted electrodes, but not stated.\nHead-to-head: Unknown wins because the dataset metadata lacks an explicit clinical diagnosis fact (required by the override rule).\n\nModality:\n- Candidate 1: Visual — Evidence: \"face the object\"; \"object invisible\" on test; spatial \"environment\" with coordinates.\n- Candidate 2: Motor — Evidence: navigation and a button press response, but response type should not determine Modality.\nHead-to-head: Visual wins because the task is driven by spatial/object scene information; motor actions are responses.\n\nType:\n- Candidate 1: Memory — Evidence: \"spatial navigation memory task\"; \"encoding object locations\"; \"recalling the object locations.\"\n- Candidate 2: Perception — Evidence: involves seeing/navigating in an environment, but the stated goal is learning and recall of locations.\nHead-to-head: Memory wins due to explicit encoding+recall framing.\n\nConfidence justification:\n- Pathology confidence is low-moderate because only indirect clinical context is quoted, with no diagnosis label.\n- Modality confidence is moderate because visual presentation is implied by multiple phrases but not named explicitly (e.g., no direct 'visual stimulus' wording).\n- Type confidence is high because multiple explicit memory/encoding/recall quotes align with a clear few-shot convention."}},"nemar_citation_count":0,"computed_title":"Spatial Navigation Memory of Object 
Locations","nchans_counts":[{"val":110,"count":8},{"val":133,"count":8},{"val":120,"count":7},{"val":88,"count":7},{"val":173,"count":6},{"val":72,"count":6},{"val":126,"count":6},{"val":188,"count":6},{"val":108,"count":5},{"val":56,"count":5},{"val":112,"count":4},{"val":64,"count":4},{"val":68,"count":4},{"val":46,"count":4},{"val":128,"count":4},{"val":127,"count":4},{"val":104,"count":3},{"val":146,"count":3},{"val":86,"count":3},{"val":124,"count":3},{"val":144,"count":3},{"val":186,"count":3},{"val":50,"count":3},{"val":123,"count":3},{"val":92,"count":3},{"val":182,"count":3},{"val":111,"count":2},{"val":163,"count":2},{"val":158,"count":2},{"val":100,"count":2},{"val":96,"count":2},{"val":63,"count":2},{"val":59,"count":2},{"val":85,"count":2},{"val":70,"count":2},{"val":160,"count":2},{"val":170,"count":2},{"val":140,"count":2},{"val":118,"count":2},{"val":130,"count":2},{"val":75,"count":2},{"val":138,"count":2},{"val":180,"count":2},{"val":166,"count":2},{"val":174,"count":1},{"val":172,"count":1},{"val":90,"count":1},{"val":165,"count":1},{"val":151,"count":1},{"val":125,"count":1},{"val":178,"count":1},{"val":177,"count":1},{"val":169,"count":1},{"val":78,"count":1},{"val":109,"count":1},{"val":122,"count":1},{"val":54,"count":1},{"val":80,"count":1},{"val":60,"count":1},{"val":94,"count":1},{"val":105,"count":1},{"val":116,"count":1},{"val":149,"count":1},{"val":84,"count":1},{"val":136,"count":1},{"val":76,"count":1}],"sfreq_counts":[{"val":1000.0,"count":70},{"val":500.0,"count":61},{"val":1600.0,"count":26},{"val":999.0,"count":13},{"val":2000.0,"count":4},{"val":1999.0,"count":2}],"stats_computed_at":"2026-04-22T23:16:00.309856+00:00","total_duration_s":522855.6308898984,"author_year":"Herrema2024_Spatial","canonical_name":null}}