{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a33df","dataset_id":"ds005403","associated_paper_doi":null,"authors":["Veillette, J.","Rosen, J.","Margoliash, D.","Nusbaum, H."],"bids_version":"1.6.0","contact_info":["John Veillette"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds005403.v1.0.1","datatypes":["eeg"],"demographics":{"subjects_count":32,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":{"f":17,"m":15},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005403","osf_url":null,"github_url":null,"paper_url":null},"funding":["NSF 1835181"],"ingestion_fingerprint":"734d26f88b34289c98d53eea96b9d89d3a46d0c32fd755bc958d619909422652","license":"CC0","n_contributing_labs":null,"name":"Delayed Auditory Feedback EEG/EGG","readme":"Notes\n----------\nElectroglottography (EGG) and audio are included in the EEG files themselves, rather than in sidecar files, as they were converted from analog to digital on the same hardware. The audio is the audio the subject heard, i.e. their delayed auditory feedback. If you want the speech waveform aligned to the time the subject produced it, you can shift the audio back by the timestamps recorded (for each trial) in the delay field of the events sidecar file.\nEGG has already been minimally preprocessed to correct for phase delays induced by the built-in hardware filter of the EGG amplifier by applying an equivalent software filter in the opposite temporal direction. (This is the same strategy employed by \"zero phase shift\" filters in MATLAB and scipy.)\nData was organized according the the BIDS standard for EEG data using the MNE-BIDS software (Appelhoff et al., 2019; Pernet et al., 2019).\n﻿References\n----------\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8","recording_modality":["eeg"],"senior_author":"Nusbaum, H.","sessions":[],"size_bytes":127189601509,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["daf"],"timestamps":{"digested_at":"2026-04-22T12:27:44.398024+00:00","dataset_created_at":"2024-08-06T14:59:09.909Z","dataset_modified_at":"2024-08-06T18:19:30.000Z"},"total_files":32,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005403","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"4ab4ba731588e96d","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Motor"],"confidence":{"pathology":0.6,"modality":0.8,"type":0.7},"reasoning":{"few_shot_analysis":"Most similar few-shot conventions:\n- The “Subcortical responses to music and speech…” example is labeled Modality=Auditory because stimuli are sounds presented via earphones; this guides labeling the delayed auditory feedback (DAF) manipulation as Auditory modality.\n- The “EEG Motor Movement/Imagery Dataset” example is labeled Type=Motor when the experimental focus is movement execution/imagery. Although this dataset is speech (not limb) movement, DAF paradigms are fundamentally about speech-motor control with auditory feedback perturbation; this guides choosing Type=Motor over Perception when the participant is actively producing vocal motor output.\nFew-shot examples do not provide a direct DAF/speech-production analogue, so Type relies on task interpretation from metadata.","metadata_analysis":"Key metadata facts (quoted):\n1) Dataset title: \"Delayed Auditory Feedback EEG/EGG\".\n2) README: \"The audio is the audio the subject heard, i.e. their delayed auditory feedback.\" (explicit auditory stimulation/manipulation).\n3) README: \"If you want the speech waveform aligned to the time the subject produced it...\" (implies active speech production).\n4) README: \"Electroglottography (EGG) and audio are included in the EEG files themselves\" (EGG is a vocal-fold/phonation measure, consistent with speech motor production).\n5) Participants: \"Subjects: 32; Sex: {'f': 17, 'm': 15}\" (no clinical diagnosis mentioned).","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: only \"Subjects: 32\" with sex counts; no disorder/diagnosis stated.\n- Few-shot pattern suggests: when no clinical recruitment is mentioned, label as Healthy (e.g., several few-shots describe healthy volunteers explicitly).\n- Alignment: PARTIAL (metadata is silent; few-shot convention supports Healthy as default normative cohort). No explicit conflict.\n\nModality:\n- Metadata says: \"audio ... subject heard, i.e. their delayed auditory feedback\".\n- Few-shot pattern suggests: sound presentation/manipulation -> Auditory (as in music/speech ABR dataset).\n- Alignment: ALIGNS.\n\nType:\n- Metadata says: DAF is what the subject heard, and README references aligning to \"the time the subject produced\" speech; plus inclusion of EGG.\n- Few-shot pattern suggests: active movement/execution paradigms -> Motor (motor movement/imagery example).\n- Alignment: MOSTLY ALIGNS, though an alternative interpretation (auditory/speech perception) is possible because the manipulated variable is auditory feedback. 
No explicit conflict, just ambiguity about primary construct.","decision_summary":"Top-2 comparative selection:\n\nPathology candidates:\n1) Healthy — Evidence: no clinical keywords anywhere; participants described generically (\"Subjects: 32\"); dataset framed as a general experimental manipulation (DAF) rather than patient cohort.\n2) Unknown — Evidence: metadata never explicitly states \"healthy\" or \"controls\".\nWinner: Healthy. Alignment status: metadata silent but consistent with few-shot convention for non-clinical cohorts.\nConfidence (0.6): contextual inference only (no explicit health-status quote).\n\nModality candidates:\n1) Auditory — Evidence: \"audio ... subject heard, i.e. their delayed auditory feedback\"; task name \"daf\"; DAF is an auditory feedback manipulation.\n2) Multisensory — Evidence: participant produces speech (motor act) but stimulus channel manipulated is auditory.\nWinner: Auditory. Alignment status: strong alignment with metadata and few-shot convention.\nConfidence (0.8): at least one clear explicit quote + strong few-shot analogue.\n\nType candidates:\n1) Motor — Evidence: speech production implied (\"time the subject produced it\"); EGG included (phonation/vocal production measurement); DAF typically probes speech motor control/feedback-based control.\n2) Perception — Evidence: central experimental manipulation is auditory feedback timing (could be framed as auditory timing/perception).\nWinner: Motor, because the task necessarily involves active vocal motor output and physiological vocal measure (EGG), suggesting speech-motor control as primary construct.\nConfidence (0.7): one explicit production-related quote + reasonable inference from inclusion of EGG/DAF; still some ambiguity vs Perception."}},"nemar_citation_count":1,"computed_title":"Delayed Auditory Feedback EEG/EGG","nchans_counts":[{"val":66,"count":32}],"sfreq_counts":[{"val":10000.0,"count":32}],"stats_computed_at":"2026-04-22T23:16:00.309494+00:00","total_duration_s":48177.5467,"canonical_name":null,"name_confidence":0.55,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Veillette2024"}}
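The readme field above says the stored audio is the delayed feedback the subject heard, and that shifting it back by each trial's delay recovers the production-aligned speech waveform. A minimal sketch of that shift follows, since the record names MNE-BIDS as the organizing tool. The subject label "01", the root path "ds005403", the audio channel name "audio", and the assumption that the events.tsv delay column is in seconds are all placeholders not confirmed by this record; check channels.tsv and events.json before relying on them.

```python
import pandas as pd
from mne_bids import BIDSPath, read_raw_bids

# Hypothetical subject/root; only the task name "daf" comes from this record.
bids_path = BIDSPath(subject="01", task="daf", datatype="eeg",
                     suffix="eeg", root="ds005403")
raw = read_raw_bids(bids_path).load_data()
sfreq = raw.info["sfreq"]  # 10000.0 per sfreq_counts above

# Per-trial delays live in the events sidecar (readme: "delay field of the
# events sidecar file"). Units assumed to be seconds, as is usual in BIDS.
events_path = bids_path.copy().update(suffix="events", extension=".tsv")
events = pd.read_csv(events_path.fpath, sep="\t")

audio = raw.get_data(picks=["audio"])[0]  # channel name is an assumption
aligned = audio.copy()
for _, trial in events.iterrows():        # assumes every row is one trial
    start = int(trial["onset"] * sfreq)
    stop = int((trial["onset"] + trial["duration"]) * sfreq)
    shift = int(trial["delay"] * sfreq)
    # The audio heard at time t was produced at t - delay, so pulling each
    # segment earlier by `shift` samples aligns it with production time.
    segment = audio[start + shift : stop + shift]
    aligned[start : start + len(segment)] = segment
```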
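The EGG phase correction the readme describes, running a software copy of the amplifier's hardware filter in the opposite temporal direction, can be illustrated with scipy. The first-order 20 Hz high-pass below is a stand-in for the unspecified EGG hardware filter and the signal is synthetic; only the reverse-time trick is the point.

```python
import numpy as np
from scipy import signal

fs = 10_000.0  # sampling rate reported in sfreq_counts above
# Placeholder for the EGG amplifier's built-in filter; the real cutoff and
# order are not given in this record.
b, a = signal.butter(1, 20.0, btype="highpass", fs=fs)

rng = np.random.default_rng(0)
produced = rng.standard_normal(int(fs))    # 1 s of toy vocal-fold signal
recorded = signal.lfilter(b, a, produced)  # what the hardware would output

# Running the same filter over the time-reversed signal and flipping the
# result back applies an identical magnitude response with opposite phase,
# cancelling the hardware filter's phase delay (amplitude is shaped twice).
corrected = signal.lfilter(b, a, recorded[::-1])[::-1]

# scipy.signal.filtfilt(b, a, x) bundles the forward and backward passes in
# one call; that is the "zero phase shift" routine the readme refers to.
```

Because the forward pass already happened in hardware, the dataset applies only the backward pass in software, which is why the readme calls the EGG "minimally preprocessed" rather than re-filtered.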