{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3325","dataset_id":"ds004166","associated_paper_doi":null,"authors":["Yang Li (data and curation)","Wenjin Fu (data)","Qiumei Zhang (data)","Xiongying Chen (data)","Xiaohong Li (data)","Boqi Du (data)","Xiaoxiang Deng (data)","Feng Ji (curation)","Qi Dong (curation)","Susanne M. Jaeggi (curation)","Chuansheng Chen (curation)","Jun Li (data and curation)"],"bids_version":"1.7.0","contact_info":["Jun Li"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds004166.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":71,"ages":[23,23,25,20,19,21,22,24,24,18,18,18,23,19,19,26,18,20,20,22,22,22,23,22,21,24,22,21,22,20,19,18,20,24,24,23,19,19,27,18,22,25,20,21,24,23,24,23,23,18,22,22,24,22,23,26,20,24,22,22],"age_min":18,"age_max":27,"age_mean":21.7,"species":null,"sex_distribution":{"f":45,"m":15},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds004166","osf_url":null,"github_url":null,"paper_url":null},"funding":["This work was supported by grants from the National Natural Science Foundation of China (31771242). The authors declared that they have no conflict of interest."],"ingestion_fingerprint":"c05ae98f480251c5574846a489273442cd442565b2f535e0f68ed7b1a753065a","license":"CC0","n_contributing_labs":null,"name":"Effects of Forward and Backward Span Trainings on Working Memory: Evidence from a Randomized, Controlled Trial","readme":"## Effects of Forward and Backward Span Trainings on Working Memory: Evidence from a Randomized, Controlled Trial\n### Introduction\n**Overview:** Both forward and backward working memory span tasks have been used in cognitive training, but no study has\n been conducted to test whether the two types of trainings are equally effective. 
Based on data from a larger randomized\n controlled trial, this study tested the effects of backward span training, forward span training, and no intervention.\nEvent-related potential (ERP) signals were recorded at the pre-, mid-, and post-tests while the subjects were performing\na distractor version of the change detection task, which included three conditions (2 targets and 0 distractor [2T0D];\n 4 targets and 0 distractor [4T0D]; and 2 targets and 2 distractors [2T2D]). Behavioral data were collected from two additional\n tasks: a multi-object version of the change detection task, and a suppress task. Compared to no intervention, both forward\nand backward span trainings led to significantly greater improvement in working memory maintenance, based on indices from\nboth behavioral (Kmax) and ERP data (CDA_2T0D and CDA_4T0D). Backward span training also improved interference control based\non the ERP data (CDA_filtering efficiency) to a greater extent than did forward span training and no intervention, but the three groups\ndid not differ in terms of behavioral indices of interference control. These results have potential implications for optimizing the current\ncognitive training on working memory.\n### Methods\n**Subjects:** Volunteers from university recruited through advertisements.\n**Apparatus:** At all three time points (pre-, mid-, and post-tests), we used a 64-channel Synamps RT system (Neuroscan, El Paso, USA)\nto record the electroencephalogram (EEG) signals. Subjects were required to sit in a comfortable chair inside a darkened, electrically shielded\nrecording chamber during the EEG recording. Electrode impedance was kept below 5 kΩ. The reference electrode was on the left mastoid.\nElectrodes were set both below and above the right eye to record the vertical electrooculograms (EOGs). 
Electrodes were set at the outer canthi\n of each eye to record the horizontal EOGs.\n**EEG dataset:** Backward group (sub-001~sub-020); Forward group (sub-101~sub-120); Control group (sub-201~sub-220); Sudoku group (sub-301~sub-320).\nPre-test (ses-01); Mid-test (ses-02); Post-test (ses-03).","recording_modality":["eeg"],"senior_author":"Jun Li (data and curation)","sessions":["01","02","03"],"size_bytes":83074443348,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["WM"],"timestamps":{"digested_at":"2026-04-22T12:26:14.394682+00:00","dataset_created_at":"2022-06-17T01:05:58.854Z","dataset_modified_at":"2022-06-17T07:46:14.000Z"},"total_files":213,"storage":{"backend":"s3","base":"s3://openneuro.org/ds004166","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"6f2f9fc1a03786aa","model":"openai/gpt-5.2","tagged_at":"2026-01-20T10:33:51.433552+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Learning"],"confidence":{"pathology":0.8,"modality":0.65,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot convention is the healthy working-memory dataset (Digit span task) labeled as Type=Memory and Modality=Auditory, where the label follows the primary cognitive construct (working memory) rather than the presence of responses. Here, the construct is also working memory, but the dataset is explicitly a training study across pre/mid/post tests, which aligns more with using Type=Learning when the main purpose is training-induced change rather than a single-session memory assessment. 
For Modality, the few-shot examples show that modality tracks stimulus channel (e.g., Braille=tactile; music/speech=auditory); for a change-detection task this convention typically maps to Visual.","metadata_analysis":"Key metadata indicating population and aim:\n1) Healthy/non-clinical recruitment: \"**Subjects:** Volunteers from university recruited through advertisements.\"\n2) Training/learning focus: \"this study tested the effects of backward span training, forward span training, and no intervention\" and \"ERP signals were recorded at the pre-, mid-, and post-tests\"\n3) Working-memory / change-detection paradigm: \"subjects were performing a distractor version of the change detection task\" with conditions \"2 targets and 0 distractor\" / \"4 targets and 0 distractor\" / \"2 targets and 2 distractors\".\nStimulus modality is not explicitly stated as visual, but 'change detection task' with targets/distractors is canonically a visual working-memory task; no auditory/tactile stimulation is described.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: \"Volunteers from university recruited through advertisements.\" (no diagnosis mentioned).\n- Few-shot pattern suggests: non-clinical volunteer cohorts map to Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"change detection task\" with \"targets\" and \"distractors\" (no explicit sensory channel stated).\n- Few-shot pattern suggests: modality follows the stimulus channel; change-detection tasks conventionally use visual arrays.\n- Alignment: PARTIAL (metadata implicit; few-shot convention supports Visual).\n\nType:\n- Metadata says: \"effects of ... 
span trainings on working memory\" and compares \"backward span training, forward span training, and no intervention\" across \"pre-, mid-, and post-tests\".\n- Few-shot pattern suggests: if the primary construct is working memory, Type could be Memory; if the primary purpose is training-induced change, Type can be Learning.\n- Alignment: MIXED but not conflicting; metadata emphasizes training effects, so Learning is favored.","decision_summary":"Top-2 candidates per category with head-to-head selection:\n\nPathology:\n1) Healthy — Evidence: \"Volunteers from university recruited through advertisements.\" No clinical recruitment described.\n2) Unknown — Would apply if population description were absent.\nWinner: Healthy. Evidence alignment: aligned with few-shot healthy-cohort convention.\n\nModality:\n1) Visual — Evidence: \"change detection task\" with \"targets\"/\"distractors\" and multiple set-size conditions; standard implementation is visual arrays; no other stimulus channel described.\n2) Unknown — Because the readme does not explicitly say \"visual\"/\"seen\"/\"screen\"/\"images\".\nWinner: Visual (inference from paradigm + lack of alternative modality). Evidence alignment: partial (implicit in metadata, consistent with few-shot modality convention).\n\nType:\n1) Learning — Evidence: explicit training/RCT framing: \"forward and backward ... trainings\", \"randomized, controlled trial\", measured across \"pre-, mid-, and post-tests\".\n2) Memory — Evidence: explicit construct: \"Working Memory\" and ERP indices of \"working memory maintenance\".\nWinner: Learning because the dataset’s primary purpose is to quantify effects of training/intervention on working memory rather than solely characterize memory processing. Evidence alignment: aligned (few-shot shows Memory for non-training WM tasks; here training emphasis shifts to Learning). 
Confidence notes: Type is supported by multiple explicit training-related phrases; Modality is less explicit.","confidence":"Pathology: high because recruitment is explicitly non-clinical. Modality: moderate because modality is inferred from the task name/structure rather than directly stated. Type: moderately high because training/RCT language is explicit and repeated, with Memory as a close runner-up due to strong WM emphasis."}},"nemar_citation_count":1,"computed_title":"Effects of Forward and Backward Span Trainings on Working Memory: Evidence from a Randomized, Controlled Trial","nchans_counts":[],"sfreq_counts":[],"stats_computed_at":"2026-04-22T23:16:00.307171+00:00","total_duration_s":null,"author_year":"Li2022","size_human":"77.4 GB","canonical_name":null}}