{
  "success": true,
  "database": "eegdash",
  "data": {
    "_id": "6953f4249276ef1ee07a3306",
    "dataset_id": "ds004011",
    "associated_paper_doi": null,
    "authors": ["Lina Teichmann", "Denise Moerel", "Anina Rich", "Chris Baker"],
    "bids_version": "1.6.0",
    "contact_info": ["Lina Teichmann", "Denise Moerel"],
    "contributing_labs": null,
    "data_processed": true,
    "dataset_doi": "doi:10.18112/openneuro.ds004011.v1.0.3",
    "datatypes": ["meg"],
    "demographics": {
      "subjects_count": 22,
      "ages": [25, 23, 22, 23, 33, 27, 23, 24, 31, 23, 23, 31, 22, 22, 22, 41, 22, 22, 24, 22, 25, 22, 22, 23, 23, 23, 23],
      "age_min": 22,
      "age_max": 41,
      "age_mean": 24.666666666666668,
      "species": null,
      "sex_distribution": {"f": 19, "m": 8},
      "handedness_distribution": {"l": 3, "r": 24}
    },
    "experimental_modalities": null,
    "external_links": {
      "source_url": "https://openneuro.org/datasets/ds004011",
      "osf_url": null,
      "github_url": null,
      "paper_url": null
    },
    "funding": [],
    "ingestion_fingerprint": "18a0fc8b5266defd5cbaf738fcd8f0c31a9f37d211a749ee5902db0b56f23f41",
    "license": "CC0",
    "n_contributing_labs": null,
    "name": "The nature of neural object representations during dynamic occlusion",
    "readme": "The main folder contains the raw MEG data for all participants in standard BIDS format. See references.\nThe ‘sourcedata’ folder contains the behavioural data collected during the MEG session as well as the eyetracking data. The data in this folder has the following structure:\n- sourcedata\n\t- beh\n\t\t- sub-[participant number]\n\t\t\t- sub-[participant number]_task-occlusion_run-[run number]_events.csv: contains all the events for each trial in the MEG session, detailing what was shown on the screen.\n\t\t\t- sub-[participant number]_task-occlusion_run-[run number]_occframes.csv: contains all the stimulus positions for each occlusion trial in the MEG session.\n\t\t\t- sub-[participant number]_task-occlusion_run-[run number]_disframes.csv: contains all the stimulus positions for each disappearance trial in the MEG session.\n\t- eyetracking\n\t\t- sub-[participant number]_Occ.edf: edf file containing the eye positions during the MEG session.\nThe ‘derivatives’ folder contains the pre-processed MEG data for each participant. The data in this folder has the following structure:\n- derivatives\n\t- preprocessed\n\t\t- cosmo_p[participant number].mat: cosmomvpa formatted file with the pre-processed data, epoched for each trial, containing the following variables:\n\t\t\t- ds_diss: cosmo data struct containing the disappearance trials epoched relative to stimulus onset (MEG channels)\n\t\t\t- ds_occ: cosmo data struct containing the occlusion trials epoched relative to stimulus onset (MEG channels)\n\t\t\t- ds_loc: cosmo data struct containing the unpredictable position stream trials epoched relative to stimulus onset (MEG channels)\n\t\t\t- ds_eyes_diss: cosmo data struct containing the disappearance trials epoched relative to stimulus onset (eye-x, eye-y, pupil size)\n\t\t\t- ds_eyes_occ: cosmo data struct containing the occlusion trials epoched relative to stimulus onset (eye-x, eye-y, pupil size)\n\t\t\t- ds_eyes_loc: cosmo data struct containing the unpredictable position stream trials epoched relative to stimulus onset (eye-x, eye-y, pupil size)\n\t\t- cosmo_p[participant number]_position_epochs.mat: cosmomvpa formatted file with the pre-processed data, epoched relative to each position change, containing the following variables:\n\t\t\t- ds_tiny: cell with two entries. First entry contains the disappearance trials epoched relative to position change. Second entry contains the occlusion trials epoched relative to position change. (MEG channels)\n\t\t\t- ds_tiny_eyes: cell with two entries. First entry contains the disappearance trials epoched relative to position change. Second entry contains the occlusion trials epoched relative to position change. (eye-x, eye-y, pupil size)\n------------\nReferences:\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nNiso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. https://doi.org/10.1038/sdata.2018.110",
    "recording_modality": ["meg"],
    "senior_author": "Chris Baker",
    "sessions": [],
    "size_bytes": 212729784057,
    "source": "openneuro",
    "study_design": null,
    "study_domain": null,
    "tasks": ["occlusion"],
    "timestamps": {
      "digested_at": "2026-04-22T12:25:56.527743+00:00",
      "dataset_created_at": "2022-01-26T23:26:00.152Z",
      "dataset_modified_at": "2022-04-08T01:32:29.000Z"
    },
    "total_files": 132,
    "storage": {
      "backend": "s3",
      "base": "s3://openneuro.org/ds004011",
      "raw_key": "dataset_description.json",
      "dep_keys": ["CHANGES", "README", "participants.json", "participants.tsv"]
    },
    "tagger_meta": {
      "config_hash": "4a051be509a0e3d0",
      "metadata_hash": "93ba6735715b5a54",
      "model": "openai/gpt-5.2",
      "tagged_at": "2026-01-20T10:22:39.209627+00:00"
    },
    "tags": {
      "pathology": ["Healthy"],
      "modality": ["Visual"],
      "type": ["Perception"],
      "confidence": {"pathology": 0.6, "modality": 0.8, "type": 0.7},
      "reasoning": {
        "few_shot_analysis": "Closest few-shot convention match is the visual psychophysics/discrimination example (Meta-rdk), which uses an explicitly visual task (“We used a visual discrimination task. Stimuli consisted of ... moving dots”) and is labeled Modality=Visual and Type=Perception. This dataset similarly describes screen-presented stimuli and trialwise stimulus position streams (occlusion/disappearance), which by the same convention maps to Visual + Perception rather than Motor (responses are not the modality) or Resting-state.",
        "metadata_analysis": "Key task/stimulus facts from the dataset README indicate a screen-based visual paradigm: (1) events files are described as “detailing what was shown on the screen.” (2) stimulus content is described as position sequences: “contains all the stimulus positions for each occlusion trial” and “contains all the stimulus positions for each disappearance trial.” Additional context: eye tracking was recorded during the task (“edf file containing the eye positions during the MEG session”), consistent with a visually guided perceptual tracking/prediction paradigm. No recruitment/diagnosis information is provided in the supplied metadata snippet.",
        "paper_abstract_analysis": "No useful paper information.",
        "evidence_alignment_check": "Pathology: Metadata SAYS nothing about a disorder/diagnosis or clinical recruitment (no patient groups mentioned). Few-shot pattern SUGGESTS that when no clinical population is described, label as Healthy (e.g., multiple examples with non-clinical cognitive tasks use Healthy). ALIGN (no conflict).\nModality: Metadata SAYS stimuli were presented visually (“what was shown on the screen”; “stimulus positions for each occlusion trial”). Few-shot pattern SUGGESTS screen-presented stimuli map to Visual (e.g., visual discrimination task labeled Visual). ALIGN.\nType: Metadata SAYS the task is structured around stimulus visibility manipulations and tracking positions (“occlusion trial”, “disappearance trial”, “stimulus positions”), which is primarily about processing/predicting visual stimuli. Few-shot pattern SUGGESTS sensory stimulus processing tasks map to Perception rather than Attention unless attentional control is explicitly the target. ALIGN (no explicit attention/control framing present).",
        "decision_summary": "Pathology top-2: (1) Healthy — supported by absence of any clinical recruitment language in provided metadata (no mention of patients/diagnoses), consistent with catalog convention to label non-clinical cohorts as Healthy. (2) Unknown — plausible because participant details are not shown in the snippet. Winner: Healthy. Confidence=0.6 because it is based on lack of clinical mentions rather than an explicit ‘healthy participants’ statement.\nModality top-2: (1) Visual — supported by “shown on the screen” and trialwise “stimulus positions” for occlusion/disappearance. (2) Multisensory — weakly possible because eye tracking exists, but eye tracking is measurement, not stimulus. Winner: Visual. Confidence=0.8 due to multiple explicit visual/screen quotes.\nType top-2: (1) Perception — supported by the occlusion/disappearance manipulation and tracking of stimulus positions (a visual perceptual/predictive processing paradigm). (2) Attention — possible because eye tracking and tracking tasks can involve attentional demands, but not explicitly framed as attention/control in the metadata. Winner: Perception. Confidence=0.7 with one strong contextual inference grounded in multiple stimulus-structure quotes."
      }
    },
    "nemar_citation_count": 1,
    "computed_title": "The nature of neural object representations during dynamic occlusion",
    "nchans_counts": [{"val": 309, "count": 132}],
    "sfreq_counts": [{"val": 1200.0, "count": 132}],
    "stats_computed_at": "2026-04-22T23:16:00.306735+00:00",
    "total_duration_s": 142333.88,
    "author_year": "Teichmann2022",
    "canonical_name": null
  }
}
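A minimal sketch of how a client might consume a record like the one above, using only Python's standard `json` module. The `summarize` helper and its field selection are illustrative (not part of any official eegdash client); the embedded string below copies a small subset of the fields from this record.

```python
import json

# Subset of the eegdash record above (values copied verbatim from the response).
record_json = """
{
  "dataset_id": "ds004011",
  "demographics": {
    "subjects_count": 22,
    "ages": [25, 23, 22, 23, 33, 27, 23, 24, 31, 23, 23, 31, 22, 22,
             22, 41, 22, 22, 24, 22, 25, 22, 22, 23, 23, 23, 23]
  },
  "size_bytes": 212729784057,
  "total_duration_s": 142333.88,
  "sfreq_counts": [{"val": 1200.0, "count": 132}]
}
"""

def summarize(rec: dict) -> dict:
    """Derive human-readable figures from a raw record (hypothetical helper)."""
    ages = rec["demographics"]["ages"]
    return {
        "dataset": rec["dataset_id"],
        "mean_age": sum(ages) / len(ages),        # matches the record's age_mean
        "size_gb": rec["size_bytes"] / 1e9,       # decimal gigabytes
        "hours": rec["total_duration_s"] / 3600,  # total recording time
        "sfreq_hz": rec["sfreq_counts"][0]["val"],
    }

summary = summarize(json.loads(record_json))
print(summary)
```

Note that `ages` holds 27 entries while `subjects_count` is 22 in the source record, so per-subject statistics should be computed from `len(ages)` (as `age_mean` in the record is) rather than from `subjects_count`.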