{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a347a","dataset_id":"ds007006","associated_paper_doi":null,"authors":["Ying Wu","Enrique Carrillosulub","Leon Lange","Chloe Tanega","Nicole Wells","Erik Virre","Cassandra Vieten"],"bids_version":"v1.10.0","contact_info":["Enrique Carrillosulub"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds007006.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":10,"ages":[59,29,38,47,34,37,23,36,26,57],"age_min":23,"age_max":59,"age_mean":38.6,"species":null,"sex_distribution":{"m":5,"f":5},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds007006","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"9e707c06711692f5235938a775bfe854a6cc6ef11fe20069ee58d70daa5ea6c5","license":"CC0","n_contributing_labs":null,"name":"VR-Compassion Cultivation Training","readme":"VR-CCT Dataset\nCompassion Island was a social world implemented in\nAltspaceVR by technology collaborators Origami Air.\nIt was specifically created for the study of VR-based\naugmentation of compassion cultivation training (CCT).\nIt featured three main settings – a meditation hall,\na garden courtyard with a large willow tree, and a\nclinic. During experimental sessions, participants\ninteracted with two characters in these spaces,\nrepresented as avatars – namely, a guide, who helped\nthe volunteer navigate from setting to setting and\noffered other assistance as needed, and Ivan, who was\nan agitated patient in the clinic. Both characters\nwere animated by live actors in separate locations.\nParticipants were able to converse freely with these\ncharacters whenever they were co-present with either\ncharacter in the same space. All sessions began in the\nmeditation hall, which featured a pulsating orb\ndesigned to help participants regulate their breathing\nduring an audio-recorded guided meditation. Next,\nparticipants were ushered outside to the garden, where\nthey were invited to contemplate a tree with a glowing\ncore while listening to an audio-recorded compassion\nmeditation and performing visualization exercises that\ncentered on universal compassion for all beings.\nLastly, participants were directed into a virtual\nclinic to converse with Ivan, an agitated patient\nwaiting inside the clinic, where participants would\nhave the opportunity to practice exercising the\nfeeling of universal compassion from the garden\nmeditation.","recording_modality":["eeg"],"senior_author":"Cassandra Vieten","sessions":[],"size_bytes":963375182,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["compTree","ivan","medBall","postbaseline","prebaseline"],"timestamps":{"digested_at":"2026-04-22T12:29:56.740616+00:00","dataset_created_at":"2025-12-01T20:39:05.211Z","dataset_modified_at":"2025-12-01T20:47:02.000Z"},"total_files":50,"storage":{"backend":"s3","base":"s3://openneuro.org/ds007006","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv","task-compTree_events.json","task-ivan_events.json","task-medBall_events.json","task-postbaseline_events.json","task-prebaseline_events.json"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"22bbd0ee4800f5d8","model":"openai/gpt-5.2","tagged_at":"2026-01-20T19:16:29.895467+00:00"},"tags":{"pathology":["Healthy"],"modality":["Multisensory"],"type":["Affect"],"confidence":{"pathology":0.6,"modality":0.8,"type":0.7},"reasoning":{"few_shot_analysis":"Most similar few-shot convention is the dataset labeled Type=Affect (\"EEG: Three armed bandit gambling task\"): even though it uses a complex task with feedback and choices, it is labeled by the primary construct (reward/affect) rather than mechanics. This guides labeling the present dataset as Affect because the metadata frames the purpose as compassion cultivation/meditation rather than, e.g., attention or perception. For Modality, the few-shot \"Cross-modal Oddball Task\" shows that when both auditory and visual stimuli are core to the task, Modality=Multisensory is used (not just Visual or Auditory). The current dataset similarly combines VR visuals with audio-recorded meditations and spoken conversation.","metadata_analysis":"Key facts from the dataset README:\n1) VR/social-interaction + meditation/compassion training context: \"Compassion Island was a social world implemented in AltspaceVR\" and it was \"created for the study of VR-based augmentation of compassion cultivation training (CCT).\"\n2) Multisensory stimulation: visual VR scenes (\"meditation hall\", \"a garden courtyard\", \"a virtual clinic\"; \"a pulsating orb\"; \"a tree with a glowing core\") paired with audio (\"audio-recorded guided meditation\"; \"listening to an audio-recorded compassion meditation\").\n3) Affective/compassion target: exercises \"centered on universal compassion for all beings\" and participants \"practice exercising the feeling of universal compassion\".\n4) Population language is non-clinical/unspecified: participants are referred to as \"participants\" and \"the volunteer\" with no diagnosis terms.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: participants are \"the volunteer\" / \"participants\" with no stated disorder or diagnosis.\n- Few-shot pattern suggests: when no clinical group is stated and the experiment is a general behavioral/EEG study, Pathology is typically labeled Healthy (e.g., multiple few-shot datasets explicitly note healthy participants; none suggest labeling a non-clinical VR study as a disorder).\n- Alignment: PARTIAL (metadata does not explicitly say \"healthy\", but it does not indicate any pathology).\n- Resolution: choose Healthy as the best-fitting catalog label for an unspecified, non-clinical volunteer cohort.\n\nModality:\n- Metadata says: VR visual environment plus audio and conversation, e.g., \"AltspaceVR\", \"pulsating orb\", \"glowing core\" and \"audio-recorded guided meditation\", \"converse freely\".\n- Few-shot pattern suggests: when both visual and auditory stimuli are integral, label Multisensory (as in the cross-modal oddball example).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: focus is compassion/meditation: \"compassion cultivation training (CCT)\", \"compassion meditation\", \"universal compassion\", \"practice exercising the feeling\".\n- Few-shot pattern suggests: label by primary construct; affective constructs (reward/liking/affective feedback) map to Type=Affect even if embedded in larger tasks.\n- Alignment: ALIGN (primary aim is affect/compassion cultivation rather than perception/memory/motor).","decision_summary":"Pathology (top-2):\n1) Healthy (selected): supported by non-clinical recruitment language (\"participants\", \"the volunteer\") and absence of any diagnosis terms; consistent with catalog convention for general volunteer EEG/VR experiments.\n2) Unknown (runner-up): plausible because README never explicitly states \"healthy\".\nAlignment: partial; confidence limited because inference is contextual rather than explicit.\n\nModality (top-2):\n1) Multisensory (selected): explicit combined visual VR environment (\"AltspaceVR\", \"pulsating orb\", \"glowing core\") and auditory guidance (\"audio-recorded guided meditation\", \"audio-recorded compassion meditation\"), plus spoken interaction (\"converse freely\").\n2) Visual (runner-up): VR is strongly visual, but audio is clearly central.\nAlignment: strong.\n\nType (top-2):\n1) Affect (selected): explicit compassion/meditation goal (\"compassion cultivation training\", \"universal compassion\", \"practice exercising the feeling of universal compassion\").\n2) Clinical/Intervention (runner-up): plausible because it is a training/augmentation study (\"augmentation of... training\"), but there is no clinical cohort focus stated.\nAlignment: strong; confidence moderate because no additional metadata fields (tasks/events/participants) are provided."}},"computed_title":"VR-Compassion Cultivation Training","nchans_counts":[{"val":64,"count":50}],"sfreq_counts":[{"val":256.0,"count":50}],"stats_computed_at":"2026-04-22T23:16:00.312228+00:00","total_duration_s":12985.0,"author_year":"Wu2025","canonical_name":null}}