{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3347","dataset_id":"ds004395","associated_paper_doi":null,"authors":["Michael J. Kahana","Joseph H. Rudoler","Lynn J. Lohnas","Karl Healey","Ada Aka","Adam Broitman","Elizabeth Crutchley","Patrick Crutchley","Kylie H. Alm","Brandon S. Katerman","Nicole E. Miller","Joel R. Kuhn","Yuxuan Li","Nicole M. Long","Jonathan Miller","Madison D. Paron","Jesse K. Pazdera","Isaac Pedisich","Christoph T. Weidemann"],"bids_version":"1.6.0","contact_info":["Joseph Rudoler","Ryan Colyer"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds004395.v2.0.0","datatypes":["eeg"],"demographics":{"subjects_count":364,"ages":[25,27,19,19,22,20,19,29,21,19,25,23,20,18,20,21,20,18,19,21,22,20,19,20,23,19,18,18,22,21,21,19,19,19,22,19,19,20,26,21,22,23,18,19,25,21,20,19,20,18,20,21,20,20,19,18,19,26,19,24,24,24,19,20,23,20,29,27,26,20,23,24,20,26,26,28,26,22,24,25,60,25,62,20,61,18,21,69,19,62,77,81,64,22,21,20,78,65,19,85,21,68,73,75,70,62,70,64,79,68,20,18,20,19,20,18,19,20,22,24,25,24,24,24,23,22,23,23,20,68,62,61,63,68,24,26,25,27,66,21,26,66,62,68,71,79,74,63,78,86,75,66,24,25,21,18,20,21,21,28,18,19,18,21,21,24,19,20,25,25,19,29,23,24,23,27,69,20,71,27,20,30,64,20,21,21,22,25,21,27,18,19,19,20,24,26,20,27,19,21,22,26,25,21,22,22,27,19,24,21,17,28,24,20,22,19,20,21,19,18,20,19,19,23,19,21,22,26,18,20,20,23,18,18,24,22,23,18,19,18,18,18,18,22,19,18,21,18,19,21,18,19,19,20,18,23,19,18,24,20,20,19,22,19,18,19,19,18,24,18,18,19,20,26,28,25,22,20,25,23,20,20,19,27,19,21,19,19,20,21,20,19,19,20,22,25,19,23,19,21,18,18,21,29,19,27,22,19,19,19,18,20,18,19,18,18,18,19,18,18,18,19,19,25,18,19,19,19,21],"age_min":17,"age_max":86,"age_mean":27.045592705167174,"species":null,"sex_distribution":{"f":172,"m":143},"handedness_distribution":{"r":320,"l":7,"a":2}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds004395","osf_url":null,"github_url":null,
"paper_url":null},"funding":[],"ingestion_fingerprint":"457018a36855c00442166534b58ffc91abda7bac5d15f73c843ef4ee3f0fc1f4","license":"CC0","n_contributing_labs":null,"name":"Penn Electrophysiology of Encoding and Retrieval Study (PEERS)","readme":"The Penn Electrophysiology of Encoding and Retrieval Study (PEERS) aimed to characterize the behavioral and electrophysiological (EEG) correlates of memory encoding and retrieval in highly practiced individuals. Across five PEERS experiments, 300+ subjects contributed more than 7,000 90 minute memory testing sessions with recorded EEG data.\nSee the Computational Memory Lab's [wiki page](https://memory.psych.upenn.edu/PEERS) for more detailed information, and [this paper](https://psyarxiv.com/bu5x8/) for a discussion of the main findings and lessons learned from this large-scale study.\nThis dataset contains 3 experiments:\n* ltpFR (a.k.a. PEERS1-3)\n* ltpFR2 (a.k.a. PEERS4)\n* VFFR (a.k.a. PEERS5)\nElectroencephalogram (EEG) data were recorded with either a 129-channel Geodesic Sensor Net (either GSN 200 model or HydroCel GSN model) using the Netstation acquisition environment (Electrical Geodesics, Inc.; EGI) or with a 128-channel BioSemi headcap using the Biosemi ActiveTwo acquisition system.\n**Note:** subject-specific electrode layouts were NOT recorded. Despite being labeled as \"CapTrak\" space, the coordinates reflect a generic electrode layout for a given headcap and do NOT represent any individual's head shape.\nReferences\n----------\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. 
J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8","recording_modality":["eeg"],"senior_author":"Christoph T. Weidemann","sessions":["0","1","10","11","12","13","14","15","16","17","18","19","2","20","21","22","23","3","4","5","6","7","8","9"],"size_bytes":9583322399138,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["VFFR","ltpFR","ltpFR2"],"timestamps":{"digested_at":"2026-04-22T12:26:34.612754+00:00","dataset_created_at":"2023-01-10T18:29:34.235Z","dataset_modified_at":"2023-06-01T03:35:20.000Z"},"total_files":6483,"storage":{"backend":"s3","base":"s3://openneuro.org/ds004395","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"769e806395735a1b","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Memory"],"confidence":{"pathology":0.7,"modality":0.6,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot example by research purpose is the digit span dataset (Healthy / Auditory / Memory). That example shows the catalog convention that classic encoding/maintenance/recall paradigms are labeled as Type=Memory (even when there is also a resting segment). PEERS is also explicitly about encoding and retrieval, so the same Type mapping applies. 
Few-shot examples do not provide a direct analog for PEERS’ stimulus channel, so modality must be inferred primarily from metadata/context.","metadata_analysis":"Key facts from metadata:\n1) Study aim is explicitly memory encoding/retrieval: \"PEERS aimed to characterize the behavioral and electrophysiological (EEG) correlates of memory encoding and retrieval\".\n2) Memory task structure is repeated, long sessions: \"300+ subjects contributed more than 7,000 90 minute memory testing sessions\".\n3) Tasks listed are free-recall variants: \"This dataset contains 3 experiments: * ltpFR ... * ltpFR2 ... * VFFR\".\n4) No clinical recruitment is described; participants are described demographically only: \"Subjects: 364; Sex... Age range: 17-86\" and the population is described as \"highly practiced individuals\" (not patients).","paper_abstract_analysis":"No useful paper information. (Only a link is provided in the README; no abstract text is included in the supplied metadata.)","evidence_alignment_check":"Pathology:\n- Metadata says: no disorder/diagnosis is mentioned; participants are described as \"highly practiced individuals\" and only demographics are provided (e.g., \"Subjects: 364... 
Age range: 17-86\").\n- Few-shot pattern suggests: when no clinical population is recruited/mentioned, label as Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: tasks are \"ltpFR\", \"ltpFR2\", \"VFFR\" and the study concerns \"memory encoding and retrieval\", but it does not explicitly state whether items were presented visually or auditorily.\n- Few-shot pattern suggests: modality should reflect the stimulus channel; without explicit stimulus description, avoid overconfident labeling.\n- Alignment: PARTIAL/UNCERTAIN (metadata underspecified), so modality is inferred from typical free-recall EEG paradigms (often visually presented word lists) rather than directly stated.\n\nType:\n- Metadata says: explicit goal is \"memory encoding and retrieval\" and includes \"memory testing sessions\".\n- Few-shot pattern suggests: memory paradigms (e.g., digit span) map to Type=Memory.\n- Alignment: ALIGN.","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n- Candidate 1: Healthy\n  Evidence: no diagnosis language anywhere; demographic-only participant summary (\"Subjects: 364... Age range: 17-86\"); described as \"highly practiced individuals\".\n- Candidate 2: Unknown\n  Evidence: metadata does not explicitly state \"healthy\"/\"controls\".\n- Decision: Healthy (standard EEGDash convention when no clinical recruitment is stated). Alignment status: aligned with few-shot convention.\n\nModality:\n- Candidate 1: Visual\n  Evidence: task labels \"ltpFR/ltpFR2/VFFR\" are free-recall paradigms commonly implemented with visually presented words/images; no auditory stimulation is described.\n- Candidate 2: Unknown\n  Evidence: no explicit quote specifying stimulus modality (visual vs auditory) in provided metadata.\n- Decision: Visual, but with lowered confidence due to lack of explicit stimulus description. 
Alignment status: uncertain due to metadata underspecification.\n\nType:\n- Candidate 1: Memory\n  Evidence (3+ explicit quotes): \"memory encoding and retrieval\"; \"memory testing sessions\"; experiments are free-recall variants (\"ltpFR\", \"ltpFR2\", \"VFFR\"). Few-shot support: digit span example demonstrates labeling memory paradigms as Type=Memory.\n- Candidate 2: Other\n  Evidence: none stronger than explicit memory aim.\n- Decision: Memory. Alignment status: aligned with few-shot convention.\n\nConfidence justification:\n- Pathology 0.7: supported by absence of clinical terms + demographic-only description (but no explicit \"healthy\" statement).\n- Modality 0.6: inferred from task naming/typical paradigm; no direct stimulus-modality quote.\n- Type 0.9: multiple explicit memory-related quotes + strong few-shot analog for Type=Memory."}},"nemar_citation_count":6,"computed_title":"Penn Electrophysiology of Encoding and Retrieval Study (PEERS)","nchans_counts":[{"val":129,"count":4980},{"val":137,"count":1490},{"val":144,"count":11},{"val":272,"count":2}],"sfreq_counts":[{"val":500.0,"count":4946},{"val":2048.0,"count":1466},{"val":512.0,"count":28},{"val":250.0,"count":17},{"val":1000.0,"count":15},{"val":1024.0,"count":11}],"stats_computed_at":"2026-04-22T23:16:00.307569+00:00","total_duration_s":32816904.91175,"canonical_name":null,"name_confidence":0.86,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Kahana2023"}}