{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a32ff","dataset_id":"ds003922","associated_paper_doi":null,"authors":["Pesnot Lerousseau, J.","Parise, C.","Ernst, MO.","van Wassenhove, V."],"bids_version":"1.6.0","contact_info":["Jacques Pesnot Lerousseau"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds003922.v1.0.1","datatypes":["meg"],"demographics":{"subjects_count":14,"ages":[23,25,23,33,21,23,27,22,25,27,22,25,27,88],"age_min":21,"age_max":88,"age_mean":29.357142857142858,"species":null,"sex_distribution":{"m":5,"f":9},"handedness_distribution":{"r":14}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds003922","osf_url":null,"github_url":null,"paper_url":null},"funding":["ERC-YStG-263584","ANR-16-CE37-0004-04"],"ingestion_fingerprint":"6d5febfcdc2390bb1007cb9f8b9028ec4477a6203b86b9106785153746e1b11c","license":"CC0","n_contributing_labs":null,"name":"Multisensory Correlation Detector","readme":"### DESCRIPTION\nMagnetoencephalography (MEG) dataset recorded during the presentation of audiovisual sequences with a causality judgment task and temporal order judgment task. This MEG dataset was prepared in the Brain Imaging Data Structure (MEG-BIDS, Niso et al. 2018) format using MNE-BIDS (Appelhoff et al. 2019).\n### PUBLISHED IN\nPesnot Lerousseau, J., Parise, C., Ernst, MO., van Wassenhove, V. (2022). Multisensory correlation computations in the human brain identified by a time-resolved encoding model. *Nature Communications*. http://doi.org/10.1038/s41467-022-29687-6\n### PARTICIPANTS\nThe dataset contains 13 participants (Ab140232, Jl150443, Mm150194, Al150424, Mp110340, Rt160359, Cb140229, Cc160310, Lb160367, Mb160304, Mk150295, Sl160372, Mp150285).\n### EXPERIMENT\nThe experiment consisted of 10 consecutive recording blocks of 8 minutes each, whose order was counterbalanced across participants. 
Three blocks tested participants on a Causality judgment, and three blocks tested participants with a Temporal order judgment. Importantly, the same audiovisual sequences were used in both tasks in order to maintain a constant flow of feedforward multisensory inputs while manipulating the endogenous task requirements. Each block was composed of 25 repetitions of the 6 possible audiovisual sequences. A total of 75 presentations of each stimulus sequence were thus tested in each task. Four additional recording blocks consisted of participants passively hearing (auditory localizer, 2 blocks) or viewing (visual localizer, 2 blocks) one constitutive modality of the audiovisual sequence. Each localizer block was composed of 25 repetitions of the 6 possible stimuli (the auditory or visual part of each stimulus), yielding a total of 50 presentations of each auditory and visual stimulus (2 tasks x 3 blocks x 25 repetitions x 6 sequences + 2 modalities x 25 repetitions x 2 blocks x 6 sequences = 1500 trials in total).\n### STIMULI\nSix audiovisual sequences were presented (DD, DC, CC, AA, AV, VV).\n### BLOCKS\nTen blocks were presented (3 Causality, 3 Temporal, 2 Auditory, 2 Visual).\n### EVENTS\n- 'Causality/DD':11\n- 'Causality/DC':12\n- 'Causality/CC':13\n- 'Causality/AA':14\n- 'Causality/AV':15\n- 'Causality/VV':16\n- 'Temporal/DD':21\n- 'Temporal/DC':22\n- 'Temporal/CC':23\n- 'Temporal/AA':24\n- 'Temporal/AV':25\n- 'Temporal/VV':26\n- 'Auditory/DD':41\n- 'Auditory/DC':42\n- 'Auditory/CC':43\n- 'Auditory/AA':44\n- 'Auditory/AV':45\n- 'Auditory/VV':46\n- 'Visual/DD':51\n- 'Visual/DC':52\n- 'Visual/CC':53\n- 'Visual/AA':54\n- 'Visual/AV':55\n- 'Visual/VV':56\n### MEG\nBrain magnetic fields were recorded in a magnetically shielded room (MSR) using a 306-channel MEG system (Neuromag Elekta LTD, Helsinki). 
MEG recordings were sampled at 1 kHz and band-pass filtered between 0.03 Hz and 330 Hz.\nFour head position indicator (HPI) coils measured the head position of participants before each block; three fiducial markers (nasion and pre-auricular points) were used for digitization and anatomical MRI (aMRI) immediately following MEG acquisition.\nElectrooculograms (EOG, horizontal and vertical eye movements) and electrocardiogram (ECG) were simultaneously recorded. Prior to the session, 2 min of empty-room recording was acquired for the computation of the noise covariance matrix.\nBad MEG channels were marked manually.\n### MRI\nThe T1-weighted aMRI was recorded using a 3-T Siemens Trio MRI scanner. Parameters of the sequence were: voxel size: 1.0 × 1.0 × 1.1 mm; acquisition time: 466 s; repetition time TR = 2300 ms; and echo time TE = 2.98 ms.\n### BEHAVIOR\nFile sourcedata/behavioral_data.txt\n### REFERENCES\nPesnot Lerousseau, J., Parise, C., Ernst, MO., van Wassenhove, V. (2022). Multisensory correlation computations in the human brain identified by a time-resolved encoding model. *Nature Communications*. http://doi.org/10.1038/s41467-022-29687-6\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. *Journal of Open Source Software*, 4(44), 1896. https://doi.org/10.21105/joss.01896\nNiso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. *Scientific Data*, 5, 180110. 
http://doi.org/10.1038/sdata.2018.110","recording_modality":["meg"],"senior_author":"van Wassenhove, V.","sessions":["01","20161129","20161130","20161201","20161206","20161207","20161212","20161213","20161214"],"size_bytes":81285134094,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["mcd","noise","rest"],"timestamps":{"digested_at":"2026-04-22T12:25:54.036409+00:00","dataset_created_at":"2021-11-19T13:29:49.244Z","dataset_modified_at":"2022-05-02T14:19:35.000Z"},"total_files":164,"storage":{"backend":"s3","base":"s3://openneuro.org/ds003922","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"7debc1c1cfdc15be","model":"openai/gpt-5.2","tagged_at":"2026-01-20T10:22:13.981685+00:00"},"tags":{"pathology":["Healthy"],"modality":["Multisensory"],"type":["Perception"],"confidence":{"pathology":0.65,"modality":0.9,"type":0.85},"reasoning":{"few_shot_analysis":"Most similar few-shot conventions:\n- The \"Cross-modal Oddball Task\" example maps simultaneous visual+auditory cueing to Modality=\"Multisensory\". This dataset also uses audiovisual stimuli, so the same modality convention applies.\n- The \"Meta-rdk: Preprocessed EEG data\" example uses an explicit perceptual discrimination/judgment task and is labeled Type=\"Perception\" even though there is a choice/decision component. 
Here, causality judgment and temporal order judgment are likewise perceptual judgments about sensory timing/causality, so Perception is favored over Decision-making.\nNo few-shot example suggests a pathology label unless a diagnosis/condition is explicitly stated; here none is stated.","metadata_analysis":"Key metadata facts (quotes):\n- Paradigm/stimuli: \"MEG dataset recorded during the presentation of audiovisual sequences\".\n- Tasks: \"with a causality judgment task and temporal order judgment task\" and \"Three blocks tested participants on a Causality judgement, and three blocks tested participants with a Temporal order judgement\".\n- Additional unimodal blocks: \"passively hearing (auditory localizer, 2 blocks) or viewing (visual localizer, 2 blocks)\".\n- Participants: \"The dataset contains 13 participants (Ab140232, Jl150443, ... )\" (no diagnoses/clinical recruitment described).","paper_abstract_analysis":"No useful paper information. (Only a citation is provided; no abstract text included.)","evidence_alignment_check":"Pathology:\n1) Metadata says: only \"The dataset contains 13 participants ...\" with no disorder/clinical recruitment stated.\n2) Few-shot pattern suggests: when no diagnosis is mentioned, label as Healthy (normative cohort convention).\n3) Alignment: ALIGN (both indicate no clinical population specified).\n\nModality:\n1) Metadata says: \"audiovisual sequences\" and lists \"Six audiovisual sequences\" plus auditory-only and visual-only localizers.\n2) Few-shot pattern suggests: audiovisual/cross-modal stimulation -> \"Multisensory\" (as in Cross-modal Oddball Task).\n3) Alignment: ALIGN.\n\nType:\n1) Metadata says: \"causality judgment task and temporal order judgment task\" using the same audiovisual sequences.\n2) Few-shot pattern suggests: perceptual discrimination/judgment tasks are labeled \"Perception\" even if responses involve choice (as in the visual discrimination example).\n3) Alignment: ALIGN.","decision_summary":"Top-2 
candidate labels with head-to-head selection:\n\nPathology candidates:\n- Healthy: Supported by absence of any diagnosis/clinical recruitment language (\"The dataset contains 13 participants...\") and typical basic-science MEG psychophysics description.\n- Unknown: Also plausible because metadata never explicitly says \"healthy\".\nDecision: Healthy wins because the study is described purely by task/stimulus methods with no clinical framing, consistent with the catalog convention for non-clinical cohorts.\nConfidence evidence: No explicit \"healthy\" quote; inference from lack of pathology mention -> moderate (0.65).\n\nModality candidates:\n- Multisensory: Strongly supported by \"audiovisual sequences\" and \"Six audiovisual sequences were presented\".\n- Visual: Possible because there are \"visual localizer\" blocks.\nDecision: Multisensory wins because the primary experiment is explicitly audiovisual, with unimodal blocks described as additional localizers.\nConfidence evidence: Multiple explicit quotes about audiovisual stimuli/tasks + strong few-shot analog -> high (0.9).\n\nType candidates:\n- Perception: Supported by \"causality judgment\" and \"temporal order judgment\"—classic multisensory perception constructs.\n- Decision-making: Possible because participants make explicit judgments/choices.\nDecision: Perception wins because the judgments target sensory causality and temporal order (perceptual inference), aligning with the few-shot convention that discrimination/judgment paradigms map to Perception.\nConfidence evidence: 2+ explicit task quotes, clear construct -> high (0.85)."}},"nemar_citation_count":1,"computed_title":"Multisensory Correlation Detector","nchans_counts":[{"val":342,"count":128},{"val":323,"count":23}],"sfreq_counts":[{"val":1000.0,"count":151}],"stats_computed_at":"2026-04-22T23:16:00.306630+00:00","total_duration_s":59354.849,"author_year":"Lerousseau2021","canonical_name":null}}