{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3418","dataset_id":"ds005795","associated_paper_doi":null,"authors":["Jörg Stadler","Torsten Stöter","Nicole Angenstein","Andreas Fügner","Marcel Lommerzheim","Artur Mathysiak","Anke Michalsky","Gabriele Schöps","Johann van der Meer","Susann Wolff","André Brechmann"],"bids_version":"1.10.0","contact_info":["André Brechmann"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds005795.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":34,"ages":[22,31,30,18,36,23,23,23,21,29,28,21,23,21,27,24,24,21,25,32,32,19,25,32,19,30,27,18,23,24,27,32,19,25],"age_min":18,"age_max":36,"age_mean":25.11764705882353,"species":null,"sex_distribution":{"m":23,"f":11},"handedness_distribution":{"r":32,"l":1}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005795","osf_url":null,"github_url":null,"paper_url":null},"funding":["German Science Foundation (BR2267/9-1)"],"ingestion_fingerprint":"729804db33b22243837f3e6c4e914cf65328f986e21f5fe70e7d15ec3285b57a","license":"CC0","n_contributing_labs":null,"name":"MULTI-CLARID (Multimodal Category Learning and Resting-state Imaging Data)","readme":"Overview\nThe study comprises data of a combined fMRI/EEG experiment. The EEG files contain 63 head channels, ECG, EOG, facial EMG and skin conductance data. A physio file contains respiration and finger-pulse data. In addition, a T1 weighted whole-brain anatomical MR scan, a PD weighted (UTE) scan for electrode localization is provided (defacing was performed using https://github.com/cbinyu/pydeface). Additional data of the participants (T2 weighted images, button press dynamics, hearing threshold, hearing abilities, and personality traits (NEO-FFI, BIS/BAS, SVF, ERQ, MMG) are available on request.\nThe study was conducted at the Combinatorial NeuroImaging (CNI) core facility of the Leibniz Institute for Neurobiology (LIN) Magdeburg and was approved by the ethics committee of the University of Magdeburg, Germany. All participants gave written informed consent.\nCurrently you will only find 5 data-sets that include the multi-dimensional category learning experiment (cf. Wolff & Brechmann, Cerebral Cortex, 2023) because of the copyright policy of OpenNeuro (i.e. CC0). If you are interested in the remaining data-sets, please contact brechmann@lin-magdeburg.de. Collaboration is highly welcome!\nDetails of the learning task\nThe auditory category learning experiment comprised 180 trials for which 160 different frequency modulated sounds were presented in pseudo-randomized order with a jittered inter-trial interval of 6, 8, or 10 s plus 19-95 ms in steps of 19 ms in order to ensure a pseudo-random jitter of the sound onset with the onset of the acquisition of an MR volume. Each sound had five different binary features, i.e. duration (short: 400 ms, long 800 ms), direction of the frequency modulation (rising, falling), intensity (soft: 76–81 dB, loud: 86–91 dB), speed of the frequency modulation (slow: 0.25 octaves/s, fast: 0.5 octaves/s), and frequency range (low: 500–831 Hz, high: 1630–2639 Hz with 5 different ranges each). Participants had to learn a target category defined by a combination of the features duration and direction (i.e. long/rising, long/falling, short/rising, or short/falling) by trial and error. 
In each trial, participants had to indicate via button press whether they thought a sound belonged to the target category (right index finger) or not (right middle finger). They received feedback about the correctness of their response from a prerecorded female voice in standard German, e.g., \"ja\" (yes) or \"richtig\" (right) following correct responses, and \"nein\" (no) or \"falsch\" (wrong) following incorrect responses. In 90% of the trials, the feedback immediately followed the button press; in 10%, it was delayed by 1500 ms. If participants failed to respond within 2 seconds after FM tone onset, a timeout feedback (\"zu spät\", too late) was presented. During the ~27 min learning experiment, participants were asked to fixate a white cross on a grey background and avoid any movements. For the 10 min resting-state fMRI (rs-fMRI), they were asked to close their eyes.\nTechnical details\nMR data were acquired with a 3 Tesla MRI scanner (Philips Achieva dStream) equipped with a 32-channel head coil. The MR scanner generates a trigger signal used to synchronize the multimodal data acquisition. The timing of stimulus events and the participants' responses were controlled by the software Presentation (Neurobehavioral Systems) running on a Windows stimulation-PC.\nAuditory stimuli were presented via a Mark II+ (MR-Confon, Magdeburg, Germany) audio control unit to MR-compatible electrodynamic headphones with integrated ear muffs that provide passive damping of ambient scanner noise by ~24 dB. Earplugs (Bilsom 303) further reduce the noise by ~29 dB (SNR).\nButton presses of the participants were recorded with the ResponseBox 2.0 by Covilex (Magdeburg, Germany), which includes a response pad with two buttons. The device delivers continuous 8-bit data at a sampling rate of 500 Hz. Its integrated Teensy microcontroller converts left and right button presses that exceed a defined threshold into USB keyboard events handled by the stimulation-PC.\nRespiration and heart rate were recorded with Invivo MRI Sensors at a sampling rate of 100 Hz and stored on the MRI acquisition PC at a 496 Hz sampling rate.\n64-channel EEG (including ECG) was recorded at 5 kHz using two 32-channel BrainAmp MRplus amplifiers (Brain Products GmbH, Gilching, Germany). The amplifiers' discriminative resolution was set to 0.5 µV/bit (range of ±16.38 mV) and the signals were hardware-filtered in the frequency band between 0.01 Hz and 250 Hz. A bipolar 16-channel BrainAmp ExG MR amplifier was used to record 2 EOG channels and 4 EMG channels (Corrugator, Zygomaticus), as well as signals from 4 carbon wire loops (CWL) for correcting pulse- and motion-related artifacts. Another BrainAmp ExG MR amplifier with an ExG AUX box was used to record the skin conductance (GSR) at the index finger of the participant's non-dominant hand. All signals were synchronized with the MR trigger via a Sync box and two USB2 adapters. All data were recorded and stored with the BrainVision Recorder software. Preprocessing (MR-artifact correction, bandpass filtering between 0.3 and 125 Hz, downsampling to 500 Hz with subsequent CWL correction) and export of the EEG data were performed in BrainVision Analyzer 2.3. 
Raw data for optimized artifact correction are available upon request.","recording_modality":["eeg"],"senior_author":"André Brechmann","sessions":[],"size_bytes":6887944978,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["learning","rest"],"timestamps":{"digested_at":"2026-04-22T12:28:44.805124+00:00","dataset_created_at":"2025-01-08T14:12:16.959Z","dataset_modified_at":"2025-01-28T11:38:42.000Z"},"total_files":39,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005795","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv","physio.json","scans.json","task-learning_bold.json","task-learning_events.json","task-rest_bold.json","task-rest_events.json"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"65d6376f19974476","model":"openai/gpt-5.2","tagged_at":"2026-01-20T18:38:41.672335+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Learning"],"confidence":{"pathology":0.6,"modality":0.9,"type":0.9},"reasoning":{"few_shot_analysis":"Most similar few-shot by paradigm is the labeled example “EEG: Probabilistic Learning with Affective Feedback: Exp #2” (Healthy / Visual / Learning). While that example uses visual cues and affective feedback, the labeling convention is that trial-and-error learning with feedback is Type=Learning. This dataset is also explicitly a category learning experiment with trial-by-trial feedback, so it should follow the same Type mapping. For pathology, multiple few-shots label datasets as Healthy when no disorder-specific recruitment is stated (e.g., “EEG: Three armed bandit gambling task”, “EEG Motor Movement/Imagery Dataset”). For modality, few-shots consistently label based on stimulus input channel (e.g., digit span task labeled Auditory due to auditory digits). 
Here, stimuli and feedback are auditory, so Modality=Auditory.","metadata_analysis":"Key task/stimulus facts from the README:\n1) Auditory stimulation: “160 different frequency modulated sounds were presented” and “Auditory stimuli were presented … to MR compatible … headphones”.\n2) Learning goal: “Participants had to learn a target category … by trial and error.”\n3) Feedback-based learning: “They received feedback about the correctness of the response by a prerecorded, female voice … following correct responses … following incorrect responses.”\n4) Population/clinical status: no diagnosis or patient recruitment is mentioned; instead it states “All participants gave written informed consent.” and describes a standard experimental setup (combined fMRI/EEG) without any disorder keywords.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: no clinical group is named; only general “All participants gave written informed consent.”\n- Few-shot pattern suggests: when no disorder recruitment is stated, label as Healthy.\n- Alignment: ALIGN (no conflicting explicit diagnosis).\n\nModality:\n- Metadata says: “frequency modulated sounds were presented” and “Auditory stimuli were presented … headphones”; feedback is also auditory (“prerecorded, female voice”).\n- Few-shot pattern suggests: modality determined by stimulus channel (e.g., auditory digits => Auditory).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: “auditory category learning experiment” and “Participants had to learn a target category … by trial and error” with correctness feedback.\n- Few-shot pattern suggests: feedback-driven learning paradigms map to Type=Learning (e.g., probabilistic/reinforcement learning example).\n- Alignment: ALIGN.","decision_summary":"Top-2 candidates per category with head-to-head comparison:\n\nPathology:\n- Candidate 1: Healthy. Evidence: absence of any diagnosis/patient recruitment in README; generic phrasing “All participants gave written informed consent.” Few-shot convention labels such datasets as Healthy when no disorder is specified.\n- Candidate 2: Unknown. Evidence: metadata does not explicitly say “healthy” or “controls”.\n- Decision: Healthy wins because the dataset description is a standard cognitive neuroscience experiment with no clinical recruitment described, matching the few-shot convention.\nConfidence basis: no explicit “healthy” quote, so confidence is capped.\n\nModality:\n- Candidate 1: Auditory. Evidence: “frequency modulated sounds were presented”; “Auditory stimuli were presented … headphones”; auditory voice feedback.\n- Candidate 2: Multisensory. Evidence: button presses and fixation cross exist, but these are not primary sensory stimuli.\n- Decision: Auditory wins because the dominant stimuli and feedback are auditory; response modality is not used for Modality.\nConfidence basis: multiple explicit auditory-stimulus quotes.\n\nType:\n- Candidate 1: Learning. Evidence: “auditory category learning experiment”; “had to learn a target category … by trial and error”; correctness feedback (“received feedback about the correctness”).\n- Candidate 2: Decision-making. 
Evidence: binary choice each trial (“indicate … belonged to the target category or not”), but framed as learning rather than value/choice policy.\n- Decision: Learning wins because the explicit stated purpose is category learning via trial-and-error with feedback.\nConfidence basis: multiple explicit learning/feedback quotes and strong few-shot analog (probabilistic learning example)."}},"computed_title":"MULTI-CLARID (Multimodal Category Learning and Resting-state Imaging Data)","nchans_counts":[{"val":72,"count":39}],"sfreq_counts":[{"val":500.0,"count":39}],"stats_computed_at":"2026-04-22T23:16:00.310902+00:00","total_duration_s":null,"author_year":"Stadler2025","canonical_name":null}}
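The record's demographics block is internally consistent (34 ages, mean ≈ 25.12); note that the handedness counts sum to 33, so one subject's handedness is apparently unrecorded. A minimal sketch cross-checking the record's own numbers:

```python
# Cross-check the demographics block of this record.
ages = [22, 31, 30, 18, 36, 23, 23, 23, 21, 29, 28, 21, 23, 21, 27, 24, 24,
        21, 25, 32, 32, 19, 25, 32, 19, 30, 27, 18, 23, 24, 27, 32, 19, 25]

assert len(ages) == 34          # subjects_count
assert min(ages) == 18          # age_min
assert max(ages) == 36          # age_max
assert 23 + 11 == len(ages)     # sex_distribution m + f covers all subjects
print(sum(ages) / len(ages))    # 25.117647..., matching age_mean
```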
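The README's trial timing (inter-trial interval of 6, 8, or 10 s plus 19–95 ms in 19 ms steps) defines 15 distinct intervals; the sub-second jitter is what decouples sound onsets from MR volume onsets. A short sketch enumerating them, purely illustrative since the scanner's TR is not stated in this record:

```python
# Enumerate the 15 possible inter-trial intervals described in the README:
# a base of 6, 8, or 10 s plus a jitter of 19-95 ms in 19 ms steps.
bases_s = (6.0, 8.0, 10.0)
jitters_s = [k * 0.019 for k in range(1, 6)]   # 0.019, 0.038, ..., 0.095 s

itis = sorted(b + j for b in bases_s for j in jitters_s)
assert len(itis) == 15
print(", ".join(f"{iti:.3f}" for iti in itis))
```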
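The quoted BrainAmp settings are mutually consistent: at 0.5 µV/bit, a signed 16-bit sample spans ±2^15 counts, i.e. ±16.384 mV, which is the "±16.38 mV" range in the README. A quick check; the 16-bit word size is an assumption, as the record does not state it:

```python
# Sanity-check the amplifier range quoted in the README.
# Assumption: samples are signed 16-bit integers (not stated in this record).
resolution_uv_per_bit = 0.5
n_bits = 16

range_mv = resolution_uv_per_bit * 2 ** (n_bits - 1) / 1000  # half-range in mV
print(f"+/-{range_mv:.3f} mV")  # -> +/-16.384 mV, i.e. the quoted +/-16.38 mV
```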
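The `storage` block points at the public OpenNeuro S3 bucket (base `s3://openneuro.org/ds005795`, 39 files). A minimal sketch for listing and fetching files with anonymous access via boto3, assuming the bucket permits unsigned reads as OpenNeuro buckets generally do:

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous client: no AWS credentials needed for a publicly readable bucket.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List every object under the dataset prefix (total_files is 39 in this record).
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="openneuro.org", Prefix="ds005795/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])

# Fetch one of the dependency files named in storage.dep_keys.
s3.download_file("openneuro.org", "ds005795/README", "README")
```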