{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3467","dataset_id":"ds006802","associated_paper_doi":null,"authors":["Moerel, Denise","Grootswagers, Tijl","Quek, Genevieve L.","Smit, Sophie","Varlet, Manuel"],"bids_version":"1.0.2","contact_info":["Denise Moerel"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds006802.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":24,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds006802","osf_url":null,"github_url":null,"paper_url":null},"funding":["Australian Research Council (ARC) Discovery Project awarded to M.V. (DP220103047)","ARC Discovery Early Career Researcher Award awarded to T.G. (DE230100380)"],"ingestion_fingerprint":"8025182e623afcd8a05965cb69ae24a641bcfc468458ac845e707af2a4c9ea25","license":"CC0","n_contributing_labs":null,"name":"Collaborative rule learning promotes interbrain information alignment","readme":"Experiment Details\nWe recorded EEG from 24 pairs of participants while they performed a 4-way categorisation task based on rules they first agreed upon together. In addition, participants did a pre- and post-test on the same stimuli.\nExperiment length: 1 hour\nMore information:\nPre-print: Moerel, D., Grootswagers, T., Quek, G.L., Smit, S., & Varlet, M. (2025). Information alignment between interacting brains. bioRxiv. doi: https://doi.org/10.1101/2025.01.07.631802\nCode: https://doi.org/10.17605/OSF.IO/HE4TU","recording_modality":["eeg"],"senior_author":"Varlet, Manuel","sessions":[],"size_bytes":66817006188,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["categorisation"],"timestamps":{"digested_at":"2026-04-22T12:29:36.539517+00:00","dataset_created_at":"2025-10-16T22:18:13.502Z","dataset_modified_at":"2025-10-17T02:06:39.000Z"},"total_files":24,"storage":{"backend":"s3","base":"s3://openneuro.org/ds006802","raw_key":"dataset_description.json","dep_keys":["CHANGES","README.txt","participants.json","participants.tsv","task-categorisation_eeg.json","task-categorisation_events.json"]},"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"e7dff7f56705b1f7","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Learning"],"confidence":{"pathology":0.6,"modality":0.6,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot example by task purpose is the visual reinforcement/probabilistic learning dataset (\"EEG: Probabilistic Learning with Affective Feedback\"), labeled Type=Learning and Modality=Visual. While the current dataset uses collaborative rule learning rather than reinforcement learning, the convention in the examples is to label datasets whose primary goal is acquiring/updating rules/categories across experience (often with pre/post tests) as Type=Learning. For modality, several few-shot examples treat screen-based categorization/choice tasks as Visual when stimuli are on-screen, but unlike those examples, this dataset’s metadata does not explicitly state stimulus format, so the few-shot examples only weakly guide modality inference.","metadata_analysis":"Key metadata facts:\n- Population/task context: \"We recorded EEG from 24 pairs of participants\" and they \"performed a 4-way categorisation task\".\n- Learning emphasis: the task was \"based on rules they first agreed upon together\" and \"participants did a pre- and post-test on the same stimuli\".\nThese indicate a healthy (non-clinical) dyadic sample doing a rule-learning/categorization paradigm with pre/post testing.","paper_abstract_analysis":"No useful paper information (only a pre-print link is provided; no abstract text included in the metadata here).","evidence_alignment_check":"Pathology:\n- Metadata says: \"24 pairs of participants\" with no mention of any diagnosis or patient recruitment.\n- Few-shot pattern suggests: when no clinical population is stated and the experiment is a cognitive task, label as Healthy.\n- Alignment: ALIGN (no clinical facts given; Healthy is consistent).\n\nModality:\n- Metadata says: \"4-way categorisation task\" and \"pre- and post-test on the same stimuli\" but does not specify whether stimuli are visual/auditory/tactile.\n- Few-shot pattern suggests: categorization tasks are often Visual in OpenNeuro examples when on-screen stimuli are used; however those examples usually explicitly describe visual stimuli.\n- Alignment: PARTIAL/UNCERTAIN (metadata under-specifies modality; few-shot convention weakly suggests Visual but is not confirmed).\n\nType:\n- Metadata says: \"Collaborative rule learning\" and rules were \"first agreed upon together\" with \"pre- and post-test\".\n- Few-shot pattern suggests: paradigms emphasizing learning (e.g., probabilistic/reinforcement learning) map to Type=Learning.\n- Alignment: ALIGN (explicit learning/rule acquisition framing matches Learning).","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n1) Healthy (selected): supported by lack of any clinical recruitment language and generic wording \"24 pairs of participants\".\n2) Unknown: possible because health screening is not stated.\nDecision: Healthy is stronger given typical non-clinical cognitive EEG studies and no contrary clinical facts.\nConfidence basis: 1 explicit non-clinical participant description (\"24 pairs of participants\") + no pathology indicators.\n\nModality:\n1) Visual (selected): categorisation tasks with \"stimuli\" and pre/post tests are most commonly visual in EEG experiments; few-shot conventions label similar screen-based learning tasks as Visual.\n2) Unknown: metadata never explicitly states visual/auditory/tactile.\nDecision: Visual narrowly wins, but with low confidence due to missing explicit stimulus description.\nConfidence basis: contextual inference from \"categorisation\" + \"stimuli\" only; no direct modality quote.\n\nType:\n1) Learning (selected): explicit in title and readme: \"Collaborative rule learning\" and \"pre- and post-test\" imply rule acquisition/adjustment.\n2) Decision-making: categorisation could be construed as choice/decision, but the framing is learning/alignment rather than value-based policy.\nDecision: Learning is clearly primary.\nConfidence basis: multiple explicit learning-focused phrases (\"rule learning\", \"rules they first agreed upon\", \"pre- and post-test\")."}},"computed_title":"Collaborative rule learning promotes interbrain information alignment","nchans_counts":[{"val":64,"count":24}],"sfreq_counts":[{"val":2048.0,"count":24}],"stats_computed_at":"2026-04-22T23:16:00.311972+00:00","total_duration_s":null,"canonical_name":null,"name_confidence":0.55,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Moerel2025_Collaborative"}}