{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a346a","dataset_id":"ds006839","associated_paper_doi":null,"authors":["C. Brigitte Aguilar Gonzales","Collaborators from the Experimental and Computational Neuroscience Group"],"bids_version":"1.9.0","contact_info":["Carmen Brigitte Aguilar Gonzales"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds006839.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":36,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds006839","osf_url":null,"github_url":null,"paper_url":null},"funding":["This work was supported by CONICET (Argentina) and Universidad Nacional de Entre Ríos."],"ingestion_fingerprint":"99ade42e0ac3808bfff61ae365456f82d2a935572ec5c3ec0de646fe34c82709","license":"CC0","n_contributing_labs":null,"name":"EEG recordings during sham neurofeedback in virtual reality","readme":"﻿EEG recordings during sham neurofeedback in virtual reality\nDescription\nThis dataset contains EEG recordings acquired during a sham neurofeedback experiment conducted in a virtual reality (VR) environment. The study aimed to investigate how feedback valence (positive, negative, or control) modulates alpha-band activity and during an attentional task. EEG signals were recorded using a 32-channel SynAmps RT amplifier (Compumedics NeuroScan Inc., Charlotte, NC, USA) and Ag/AgCl passive electrodes mounted on an elastic cap (Wuhan Greentek Pty. Ltd., China) following the extended 10–20 international system.\nEach participant completed four conditions:\nPositive feedback (S##_p.cnt) - sham feedback with a reinforcement valence.\nNegative feedback (S##_n.cnt) - sham feedback with a punishment valence.\nControl (S##_c.cnt) — participants observed the VR environment without any feedback.\nResting-state (S##_resting.cnt) — participants alternated between eyes open and eyes closed conditions.\nExperimental design\nFeedback blocks: Each feedback condition consisted of four blocks of approximately 2 minutes each.\nEvents:\n238 — marks the beginning of each 2-minute feedback block.\n222 — indicates an increase in brightness or volume of VR objects.\n190 — indicates a decrease in brightness or volume.\n126 — marks the beginning and end of eyes open/closed periods during the resting condition.\nResting-state order: Eyes open first, followed by eyes closed.\nData format\nOriginal EEG recordings were collected in .cnt format (NeuroScan).\nData were converted to the Brain Imaging Data Structure (BIDS) format using the MNE-BIDS toolbox (Appelhoff et al., 2019).\nEach subject folder (e.g., sub-01/) contains EEG data files (.eeg), event markers, and corresponding JSON sidecar files with acquisition parameters.\nData availability\nThe BIDS-formatted dataset is publicly available on the OpenNeuro repository and linked through the OSF Wiki project.\nReferences\nAppelhoff, S., Sanderson, M., Brooks, T. L., van Vliet, M., Quentin, R., Holdgraf, C., … Gramfort, A. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. 
Scientific Data, 6, 103.","recording_modality":["eeg"],"senior_author":"Collaborators from the Experimental and Computational Neuroscience Group","sessions":[],"size_bytes":11119141416,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["control","negative","positive","resting"],"timestamps":{"digested_at":"2026-04-22T12:29:37.658547+00:00","dataset_created_at":"2025-10-24T21:53:47.954Z","dataset_modified_at":"2025-10-29T12:42:28.000Z"},"total_files":144,"storage":{"backend":"s3","base":"s3://openneuro.org/ds006839","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv","task-control_events.json","task-negative_events.json","task-positive_events.json","task-resting_events.json"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"72152fa075b1f946","model":"openai/gpt-5.2","tagged_at":"2026-01-20T19:09:47.968019+00:00"},"tags":{"pathology":["Healthy"],"modality":["Multisensory"],"type":["Attention"],"confidence":{"pathology":0.6,"modality":0.7,"type":0.7},"reasoning":{"few_shot_analysis":"Closest few-shot convention matches:\n- The TBI DPX dataset is labeled Type=Attention because it is a cognitive control/attention task with event markers and performance/feedback structure. This guides mapping an explicit \"attentional task\" study aim to Type=Attention (even when feedback is present).\n- The Cross-modal Oddball dataset is labeled Modality=Multisensory because both visual and auditory cues are presented. This guides treating VR manipulations that include both brightness (visual) and volume (auditory) as Multisensory.\n- The sleep-deprivation resting-state dataset is labeled Modality=Resting State and Type=Resting-state for eyes-open/eyes-closed recordings, which is relevant here as an additional condition, but not the main experiment focus.","metadata_analysis":"Key quoted metadata facts:\n1) Task/aim: \"sham neurofeedback experiment conducted in a virtual reality (VR) environment\" and \"investigate how feedback valence (positive, negative, or control) modulates alpha-band activity and during an attentional task.\"\n2) Stimulus/interaction channel: events include \"222 — indicates an increase in brightness or volume of VR objects\" and \"190 — indicates a decrease in brightness or volume.\"\n3) Additional condition: \"Resting-state (S##_resting.cnt) — participants alternated between eyes open and eyes closed conditions.\"","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: no diagnosis/clinical recruitment is mentioned; only \"Each participant completed four conditions\" (no patient groups stated).\n- Few-shot pattern suggests: when no disorder is specified, label as Healthy.\n- Alignment: ALIGN (absence of clinical population supports Healthy by convention).\n\nModality:\n- Metadata says: VR objects change \"brightness\" (visual) and \"volume\" (auditory): \"increase in brightness or volume\".\n- Few-shot pattern suggests: simultaneous visual+auditory cueing maps to Multisensory (as in cross-modal oddball).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: primary aim is modulation of alpha during \"an attentional task\" and manipulation of \"feedback valence\".\n- Few-shot pattern suggests: studies framed around attentional control tasks map to Type=Attention (e.g., DPX Cog Ctl labeled Attention).\n- Alignment: ALIGN (feedback is present but the stated construct is attention/alpha modulation).","decision_summary":"Top-2 candidates and 
selection:\n\nPathology:\n- Candidate 1: Healthy\n  Evidence: no clinical terms/diagnoses; generic \"Each participant\" with no patient/control grouping described.\n- Candidate 2: Unknown\n  Evidence: metadata does not explicitly say \"healthy\".\nDecision: Healthy wins because the dataset provides no indication of clinical recruitment (few-shot convention: default to Healthy when no disorder is specified).\nConfidence notes: inference-only (no explicit \"healthy\" quote).\n\nModality:\n- Candidate 1: Multisensory\n  Evidence: \"brightness or volume of VR objects\" implies visual + auditory stimulus manipulation.\n- Candidate 2: Visual\n  Evidence: VR environment strongly suggests predominantly visual stimulation.\nDecision: Multisensory wins because auditory is explicitly implied by \"volume\" alongside visual brightness.\nConfidence notes: supported by 1 explicit quote describing both channels.\n\nType:\n- Candidate 1: Attention\n  Evidence: explicit: \"during an attentional task\"; alpha-band modulation is often analyzed in attention contexts.\n- Candidate 2: Learning\n  Evidence: neurofeedback/valenced reinforcement could be framed as reinforcement learning, but described as sham neurofeedback and the stated target construct is attention.\nDecision: Attention wins because the stated research purpose is attention (not learning as primary construct).\nConfidence notes: supported by explicit aim statement and task description."}},"computed_title":"EEG recordings during sham neurofeedback in virtual reality","nchans_counts":[{"val":29,"count":144}],"sfreq_counts":[{"val":1000.0,"count":144}],"stats_computed_at":"2026-04-22T23:16:00.312021+00:00","total_duration_s":null,"author_year":"Gonzales2025","canonical_name":null}}
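For readers who want to work with this record programmatically: it describes a standard BIDS EEG dataset hosted on OpenNeuro (ds006839, mirrored at s3://openneuro.org/ds006839), and the embedded readme states the conversion was done with MNE-BIDS. The sketch below shows one plausible way to fetch a subject and load a run, assuming the openneuro-py and mne-bids Python packages. The subject label "01" is taken from the readme's sub-01/ example and the task names from the record's "tasks" field; these entity values are assumptions, not a verified file listing.

```python
# Minimal sketch: download one subject of ds006839 and load a run with MNE-BIDS.
# Assumptions (not verified against the actual file tree): subject label "01"
# from the readme's sub-01/ example; task names from the record's "tasks" field.
import mne
import openneuro
from mne_bids import BIDSPath, read_raw_bids

dataset_id = "ds006839"
bids_root = "./ds006839"

# A partial download keeps this well below the record's ~11 GB size_bytes;
# top-level metadata (dataset_description.json, *_events.json, ...) is included.
openneuro.download(dataset=dataset_id, target_dir=bids_root, include=["sub-01"])

# The record lists tasks "control", "negative", "positive", and "resting",
# and an empty "sessions" list, so no session entity is needed.
bids_path = BIDSPath(subject="01", task="resting", suffix="eeg",
                     datatype="eeg", root=bids_root)
raw = read_raw_bids(bids_path)

# Cross-check against the record's aggregate stats: 29 channels at 1000.0 Hz.
print(raw.info["nchan"], raw.info["sfreq"])

# Event codes per the readme: 238 = feedback-block onset, 222 = brightness/volume
# increase, 190 = decrease, 126 = eyes-open/closed boundary (resting condition).
events, event_id = mne.events_from_annotations(raw)
print(event_id)
```

The eegdash service that produced this record presumably exposes its own query API, but since the record does not document one, the sketch sticks to the public OpenNeuro/BIDS path it points at.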