{"success":true,"database":"eegdash","data":{"_id":"6953f4239276ef1ee07a32c5","dataset_id":"ds003392","associated_paper_doi":null,"authors":["Nicolas Zilber","Philippe Ciuciu","Alexandre Gramfort","Leila Azizi","Virginie van Wassenhove"],"bids_version":"?","contact_info":["Alexandre Gramfort"],"contributing_labs":null,"data_processed":false,"dataset_doi":"10.18112/openneuro.ds003392.v1.0.4","datatypes":["meg"],"demographics":{"subjects_count":12,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds003392","osf_url":null,"github_url":null,"paper_url":null},"funding":["This work was supported by a Marie Curie IRG-249222 and an ERC-StG-263584 to V.vW and an ANR Schubert ANR-0909-JCJC-071 to P.C."],"ingestion_fingerprint":"ba17ff5779892139599bacfea56cd46320373cb56e9be19990103cb02642b550","license":"CC0","n_contributing_labs":null,"name":"NeuroSpin hMT+ Localizer DATA (MEG & aMRI)","readme":"﻿Dataset description: Magnetoencephalography (MEG) dataset recorded during a hMT+ (human visual motion area) localizer task\nPublished in:\nZilber, N., Ciuciu, P., Gramfort, A., Azizi, L., & Van Wassenhove, V. (2014). Supramodal processing optimizes visual perceptual learning and plasticity. Neuroimage, 93, 32-46.\nData curation: Sophie Herbst, Alexandre Gramfort\nThis MEG dataset was prepared in the Brain Imaging Data Structure (MEG-BIDS, Niso et al. 2018) format using MNE-BIDS (Appelhoff et al. 2019).\nThe dataset contains 10 of the 12 participants from the vision-only training group.\nTwo participants were removed, one due to problems with the trigger channel, and one due to different settings in the acquisition preventing us from processing the dataset without prior adjustment.\n## EXPERIMENT\nParticipants were presented with a cloud of moving dots, always starting with incoherent movement (up or down result in equal display, due to the incoherence).\nAfter 500 ms, the movement became coherent in 50% of the trials (95% coherence, up or down) and remained incoherent in the other 50%, lasting for 1000 ms. Participants were instructed to passively view the stimuli for a total of 120 trials.\nEvents:\n1: coherent / down\n2: coherent / up\n3: incoherent / down\n4: incoherent / up\n## MEG\nBrain magnetic fields were recorded in a MSR using a 306 MEG system (Neuromag Elekta LTD, Helsinki). MEG recordings were sampled at\n2 kHz and band-pass filtered between 0.03 and 600 Hz.\nFour head position coils (HPI) measured the head position of participants before each\nblock; three fiducial markers (nasion and pre-auricular points) were\nused for digitization and anatomicalMRI (aMRI) immediately following\nMEG acquisition.\nElectrooculograms (EOG, horizontal and vertical eye\nmovements) and electrocardiogram (ECG) were simultaneously recorded.\nPrior to the session, 5 min of empty room recordings was acquired\nfor the computation of the noise covariance matrix.\nBad MEG channels were marked manually.\n## MRI\nThe T1 weighted aMRI was recorded using a 3-T Siemens Trio MRI\nscanner. Parameters of the sequence were: voxel size: 1.0 × 1.0 ×\n1.1 mm; acquisition time: 466 s; repetition time TR = 2300 ms; and\necho time TE = 2.98 ms\n## References\nZilber, N., Ciuciu, P., Gramfort, A., Azizi, L., & Van Wassenhove, V. (2014). Supramodal processing optimizes visual perceptual learning and plasticity. 
Neuroimage, 93, 32-46.\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nNiso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. http://doi.org/10.1038/sdata.2018.110","recording_modality":["meg"],"senior_author":"Virginie van Wassenhove","sessions":["19111207","19111208","19111209","19111211","19111212","19111213","19111214","19111215","19111216","19111217","19111218"],"size_bytes":10818491467,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["localizer","noise"],"timestamps":{"digested_at":"2026-04-22T12:25:31.724985+00:00","dataset_created_at":"2020-11-20T19:39:16.349Z","dataset_modified_at":"2021-05-18T15:31:01.000Z"},"total_files":33,"storage":{"backend":"s3","base":"s3://openneuro.org/ds003392","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"9a19bf0eb17df6bf","model":"openai/gpt-5.2","tagged_at":"2026-01-20T10:15:15.434256+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.65,"modality":0.9,"type":0.75},"reasoning":{"few_shot_analysis":"Closest match is the few-shot example “Meta-rdk: Preprocessed EEG data” (Schizophrenia/Psychosis; Visual; Perception), which uses a visual random-dot motion discrimination paradigm (“Stimuli consisted of ... moving dots ... indicated whether the motion direction ... left or right”). While the pathology differs, the task/stimulus structure (moving dot kinematogram / motion coherence manipulation) strongly supports mapping this dataset’s stimulus modality to Visual and research Type to Perception. Other few-shots reinforce that stimulus channel (e.g., auditory tones -> Auditory; resting eyes-closed -> Resting State) drives the Modality label, and sensory discrimination/localizers map to Perception rather than Motor or Resting-state.","metadata_analysis":"Key dataset facts from README: (1) It is a motion-localizer: \"MEG dataset recorded during a hMT+ (human visual motion area) localizer task\". (2) Visual motion dot stimuli: \"Participants were presented with a cloud of moving dots\". (3) Motion coherence manipulation: \"After 500 ms, the movement became coherent in 50% of the trials (95% coherence, up or down) and remained incoherent in the other 50%\". (4) No explicit clinical recruitment; participants described only as: \"The dataset contains 10 of the 12 participants from the vision-only training group.\" (5) Task is passive viewing: \"Participants were instructed to passively view the stimuli\".","paper_abstract_analysis":"No useful paper information (only a citation/title is provided, no abstract text). 
The paper title mentions perceptual learning/plasticity, but the README specifies this shared dataset is an hMT+ localizer with passive viewing.","evidence_alignment_check":"Pathology: Metadata says participants are from a \"vision-only training group\" with no diagnosis stated; few-shot convention would label such non-clinical cohorts as Healthy. ALIGN (no conflict).\nModality: Metadata explicitly describes a \"cloud of moving dots\" and coherent/incoherent visual motion; few-shot mapping for dot-motion tasks is Visual. ALIGN.\nType: Metadata describes passive viewing of coherent vs incoherent motion for an \"hMT+ ... localizer task\"; few-shot dot-motion task is labeled Perception. The cited paper title references perceptual learning/plasticity, which could suggest Learning, but the shared paradigm in README is a sensory localizer/passive perception task. Mostly ALIGN; slight ambiguity due to paper title, resolved in favor of README task description (Perception).","decision_summary":"Pathology top-2: (1) Healthy—supported by absence of any diagnosis and generic participant description: \"participants ... from the vision-only training group\"; (2) Unknown—possible because README never explicitly says \"healthy\". Winner: Healthy. Alignment: aligns with few-shot convention for non-clinical cohorts. Confidence 0.65 due to lack of explicit 'healthy' wording.\nModality top-2: (1) Visual—\"hMT+ (human visual motion area) localizer\" and \"cloud of moving dots\"; (2) Multisensory—unlikely (no auditory/tactile stimuli described). Winner: Visual. Confidence 0.9 (multiple explicit stimulus quotes + strong few-shot analog).\nType top-2: (1) Perception—passive viewing of coherent/incoherent motion: \"movement became coherent...\" and \"passively view\"; localizer for motion processing; (2) Learning—paper title includes \"perceptual learning and plasticity\" and cohort is \"training group\". Winner: Perception because the dataset itself is a motion localizer/perceptual manipulation rather than an explicit learning/training protocol. Confidence 0.75 (strong task evidence, minor competing cue from citation/title)."}},"nemar_citation_count":0,"computed_title":"NeuroSpin hMT+ Localizer DATA (MEG & aMRI)","nchans_counts":[{"val":320,"count":22}],"sfreq_counts":[{"val":2000.0,"count":22}],"stats_computed_at":"2026-04-22T23:16:00.222022+00:00","total_duration_s":4436.9890000000005,"author_year":"Zilber2020","canonical_name":null}}
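For illustration only (not part of the record above): a minimal Python sketch of how one might load a localizer run from a local copy of ds003392 with MNE-BIDS and apply the event codes listed in the README field. The session label, task name, and event mapping come from the record; the subject label, local dataset path, and epoching window are assumptions to be adjusted against the downloaded files.

# Illustrative sketch; assumes ds003392 has been downloaded locally and
# that mne / mne-bids are installed. Placeholder values are marked below.
import mne
from mne_bids import BIDSPath, read_raw_bids

# Event codes as documented in the dataset README.
event_id = {
    "coherent/down": 1,
    "coherent/up": 2,
    "incoherent/down": 3,
    "incoherent/up": 4,
}

bids_path = BIDSPath(
    subject="01",           # placeholder: take actual labels from participants.tsv
    session="19111207",     # one of the session labels listed in the record
    task="localizer",       # task name from the record's "tasks" field
    datatype="meg",
    root="ds003392",        # assumed local download location of the OpenNeuro dataset
)

raw = read_raw_bids(bids_path, verbose=False)      # 2 kHz MEG, 0.03-600 Hz band-pass
events = mne.find_events(raw, min_duration=0.002)  # events from the trigger channel
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=1.5, preload=True)
print(epochs)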