{
  "success": true,
  "database": "eegdash",
  "data": {
    "_id": "6953f4239276ef1ee07a3291",
    "dataset_id": "ds000117",
    "associated_paper_doi": null,
    "authors": ["Wakeman, DG", "Henson, RN"],
    "bids_version": "1.0.2",
    "contact_info": ["Richard Henson"],
    "contributing_labs": null,
    "data_processed": true,
    "dataset_doi": "doi:10.18112/openneuro.ds000117.v1.1.0",
    "datatypes": ["meg"],
    "demographics": {
      "subjects_count": 17,
      "ages": [31, 25, 30, 26, 23, 26, 31, 26, 29, 23, 24, 24, 25, 24, 30, 25],
      "age_min": 23,
      "age_max": 31,
      "age_mean": 26.375,
      "species": null,
      "sex_distribution": {"m": 9, "f": 7},
      "handedness_distribution": null
    },
    "experimental_modalities": null,
    "external_links": {
      "source_url": "https://openneuro.org/datasets/ds000117",
      "osf_url": null,
      "github_url": null,
      "paper_url": null
    },
    "funding": ["UK Medical Research Council (SUAG/010 RG91365), Elekta Ltd."],
    "ingestion_fingerprint": "fb186bcb8d58514b8323fe81f4d61cdb53804d69e3257353d3cffed8381e567a",
    "license": "CC0",
    "n_contributing_labs": null,
    "name": "Multisubject, multimodal face processing",
    "readme": "This dataset was obtained from the OpenNeuro project (https://www.openneuro.org). Accession #: ds000117\nThe same dataset is also available here: ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/, but in a non-BIDS format (which may be easier to download by subject rather than by modality)\nNote that it is a subset of the data available on OpenfMRI (http://www.openfmri.org; Accession #: ds000117).\nDescription:  Multi-subject, multi-modal (sMRI+fMRI+MEG+EEG) neuroimaging dataset on face processing\nPlease cite the following reference if you use these data:\n     Wakeman, D.G. & Henson, R.N. (2015). A multi-subject, multi-modal human neuroimaging dataset. Sci. Data 2:150001 doi: 10.1038/sdata.2015.1\nThe data have been used in several publications including, for example:\n   Henson, R.N., Abdulrahman, H., Flandin, G. & Litvak, V. (2019). Multimodal integration of M/EEG and f/MRI data in SPM12. Frontiers in Neuroscience, Methods, 13, 300.\n    Henson, R.N., Wakeman, D.G., Litvak, V. & Friston, K.J. (2011). A Parametric Empirical Bayesian framework for the EEG/MEG inverse problem: generative models for multisubject and multimodal integration. Frontiers in Human Neuroscience, 5, 76, 1-16.\n    Chapter 42 of the SPM12 manual (http://www.fil.ion.ucl.ac.uk/spm/doc/manual.pdf)\n(see ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/Publications for full list), as well as the BioMag2010 data competition and the Kaggle competition: https://www.kaggle.com/c/decoding-the-human-brain)\n==================================================================================\nfunc/\n-----\nUnlike in v1-v3 of this dataset, the first two (dummy) volumes have now been removed (as stated in *.json), so event onset times correctly refer to t=0 at start of third volume\nNote that, owing to scanner error, Subject 10 only has 170 volumes in last run (Run 9)\nmeg/\n----\nThree anatomical fiducials were digitized for aligning the MEG with the MRI: the nasion\n(lowest depression between the eyes) and the left and right ears (lowest depression\nbetween the tragus and the helix, above the tragus). This procedure is illustrated here:\nhttp://neuroimage.usc.edu/brainstorm/CoordinateSystems#Subject_Coordinate_System_.28SCS_.2F_CTF.29\nand in task-facerecognition_fidinfo.pdf\nThe following triggers are included in the .fif files and are also used in the “trigger” column of the meg and bold events files:\nTrigger            Label               Simplified Label\n5         Initial Famous Face               IniFF\n6         Immediate Repeat Famous Face      ImmFF\n7         Delayed Repeat Famous Face        DelFF\n13        Initial Unfamiliar Face           IniUF\n14        Immediate Repeat Unfamiliar Face  ImmUF\n15        Delayed Repeat Unfamiliar Face    DelUF\n17        Initial Scrambled Face            IniSF\n18        Immediate Repeat Scrambled Face   ImmSF\n19        Delayed Repeat Scrambled Face     DelSF\nstimuli/meg/\n------------\nThe .bmp files correspond to those described in the text. There are 6 additional images in this directory, which were used in the practice experiment to familiarize participants with the task (hence some more BIDS validator warnings)\nstimuli/mri/\n------------\nThe .bmp files correspond to those described in the text.\nDefacing\n--------\nDefacing of MPRAGE T1 images was performed by the submitter. A subset of subjects have given consent for non-defaced versions to be shared - in which case, please contact rik.henson@mrc-cbu.cam.ac.uk.\nQuality Control\n---------------\nMriqc was run on the dataset. Results are located in derivatives/mriqc. Learn more about it here: https://mriqc.readthedocs.io/en/latest/\nKnown Issues\n------------\nN/A\nRelationship of Subject Numbering relative to other versions of Dataset\n------------\nThere are multiple versions of the dataset available on the web (see notes above), and these entailed a renumbering of the subjects for various reasons. Here are all the versions and how to match subjects between them (plus some rationale and history for different versions):\n1. Original Paper (N=19): Wakeman & Henson (2015): doi:10.1038/sdata.2015.1\n    Number refers to order that tested (and some, eg 4, 7, 13 etc were excluded for not completing both MRI and MEG sessions)\n2. openfMRI, renumbered from paper: http://openfmri.org/s3-browser/?prefix=ds000117/ds000117_R0.1.1/uncompressed/\n    Numbers 1-19 just made contiguous\n3. FTP subset of N=16: ftp: ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/\n    This set was used for SPM Courses\n    Designed to illustrate multimodal integration, so wanted good MRI+MEG+EEG data for all subjects\n    Removed original subject_01 and subject_06 because bad EEG data; subject_19 because poor EEG and fMRI data\n    (And renumbered subject_14 for some reason).\n4. Current OpenNeuro subset N=16 used for (BIDS): https://openneuro.org/datasets/ds000117\n    OpenNeuro was rebranding of openfMRI, and enforced BIDS format\n    Since this version designed to illustrate multi-modal BIDS, kept same numbering as FTP\nW&H2015       openfMRI    FTP      openNeuro\n========       ======        ===     =======\nsubject_01      sub001\nsubject_02      sub002      Sub01   sub-01\nsubject_03      sub003      Sub02   sub-02\nsubject_05      sub004      Sub03   sub-03\nsubject_06      sub005\nsubject_08      sub006      Sub05   sub-05\nsubject_09      sub007      Sub06   sub-06\nsubject_10      sub008      Sub07   sub-07\nsubject_11      sub009      Sub08   sub-08\nsubject_12      sub010      Sub09   sub-09\nsubject_14      sub011      Sub04   sub-04\nsubject_15      sub012      Sub10   sub-10\nsubject_16      sub013      Sub11   sub-11\nsubject_17      sub014      Sub12   sub-12\nsubject_18      sub015      Sub13   sub-13\nsubject_19      sub016\nsubject_23      sub017      Sub14   sub-14\nsubject_24      sub018      Sub15   sub-15\nsubject_25      sub019      Sub16   sub-16",
    "recording_modality": ["meg"],
    "senior_author": "Henson, RN",
    "sessions": ["20090409", "20090506", "20090511", "20090515", "20090518", "20090601", "20091126", "20091208", "meg"],
    "size_bytes": 94108833435,
    "source": "openneuro",
    "study_design": null,
    "study_domain": null,
    "tasks": ["facerecognition", "noise"],
    "timestamps": {
      "digested_at": "2026-04-06T13:03:47.375319+00:00",
      "dataset_created_at": "2018-03-30T13:14:28.253Z",
      "dataset_modified_at": "2025-01-06T15:37:05.000Z"
    },
    "total_files": 104,
    "storage": {
      "backend": "s3",
      "base": "s3://openneuro.org/ds000117",
      "raw_key": "dataset_description.json",
      "dep_keys": [
        "CHANGES",
        "README",
        "acq-mprage_T1w.json",
        "participants.json",
        "participants.tsv",
        "run-1_echo-1_FLASH.json",
        "run-1_echo-2_FLASH.json",
        "run-1_echo-3_FLASH.json",
        "run-1_echo-4_FLASH.json",
        "run-1_echo-5_FLASH.json",
        "run-1_echo-6_FLASH.json",
        "run-1_echo-7_FLASH.json",
        "run-2_echo-1_FLASH.json",
        "run-2_echo-2_FLASH.json",
        "run-2_echo-3_FLASH.json",
        "run-2_echo-4_FLASH.json",
        "run-2_echo-5_FLASH.json",
        "run-2_echo-6_FLASH.json",
        "run-2_echo-7_FLASH.json",
        "task-facerecognition_bold.json",
        "task-facerecognition_events.json"
      ]
    },
    "tagger_meta": {
      "config_hash": "4a051be509a0e3d0",
      "metadata_hash": "0ebef851eeca36e8",
      "model": "openai/gpt-5.2",
      "tagged_at": "2026-01-20T10:08:02.227880+00:00"
    },
    "tags": {
      "pathology": ["Healthy"],
      "modality": ["Visual"],
      "type": ["Perception"],
      "confidence": {"pathology": 0.6, "modality": 0.9, "type": 0.8},
      "reasoning": {
        "few_shot_analysis": "Most similar few-shot example by task+stimulus modality is the schizophrenia visual discrimination dataset (Meta-rdk), which uses a clearly visual stimulus paradigm and is labeled Modality=Visual and Type=Perception. Although the clinical population differs, it establishes the convention that a visual stimulus discrimination/processing paradigm should map to Visual + Perception (not Motor, despite responses). The other few-shot examples reinforce that Modality is based on stimulus channel (e.g., Braille letters -> Tactile; ABR/music/speech -> Auditory).",
        "metadata_analysis": "Key dataset facts indicating a visual face-processing paradigm:\n1) Dataset description line: \"multi-modal (sMRI+fMRI+MEG+EEG) neuroimaging dataset on face processing\".\n2) Trigger table explicitly enumerates face-image categories: \"Initial Famous Face\", \"Initial Unfamiliar Face\", and \"Initial Scrambled Face\" (plus immediate/delayed repeats).\n3) Stimulus materials are image files: \"stimuli/meg/ ... The .bmp files correspond to those described in the text\" (also \"stimuli/mri/ ... The .bmp files correspond to those described in the text\").\nPopulation/pathology is not explicitly described in the provided metadata excerpt (no diagnosis terms; discussion is about data formats, fiducials, defacing, QC, and subject numbering).",
        "paper_abstract_analysis": "No useful paper information.",
        "evidence_alignment_check": "Pathology:\n- What metadata says: No explicit clinical recruitment is stated; only \"multi-subject\" and administrative notes; no diagnoses mentioned.\n- What few-shot pattern suggests: Many cognitive task datasets without clinical wording are labeled Healthy.\n- ALIGN/CONFLICT: Align (both point to non-clinical / typical participants), but metadata is implicit rather than explicit.\n\nModality:\n- What metadata says: \"dataset on face processing\"; triggers label famous/unfamiliar/scrambled faces; stimuli are \".bmp\" image files.\n- What few-shot pattern suggests: Visual stimuli paradigms map to Modality=Visual.\n- ALIGN/CONFLICT: Align.\n\nType:\n- What metadata says: Study focus is \"face processing\" with manipulations of face category and repetition (initial/immediate/delayed), which is primarily about processing/recognition of visual stimuli.\n- What few-shot pattern suggests: Sensory stimulus processing/discrimination tasks map to Type=Perception.\n- ALIGN/CONFLICT: Align.",
        "decision_summary": "Top-2 candidates and decision per category:\n\nPathology:\n- Candidate 1: Healthy. Evidence: no clinical terms/diagnoses in metadata; described generically as \"multi-subject\" research dataset.\n- Candidate 2: Unknown. Evidence: participant health status is not explicitly stated in the provided excerpt.\nHead-to-head: Healthy is more plausible given the absence of any clinical recruitment language, but the evidence is indirect. Final=Healthy. Confidence=0.6 (contextual inference without explicit participant-health quote).\n\nModality:\n- Candidate 1: Visual. Evidence: \"dataset ... on face processing\"; triggers include \"Initial Famous Face\" / \"Initial Unfamiliar Face\" / \"Initial Scrambled Face\"; stimuli are \".bmp\" image files.\n- Candidate 2: Multisensory. Evidence: dataset is \"multi-modal\" in terms of recording modalities (sMRI+fMRI+MEG+EEG), but that does not imply multisensory stimulation.\nHead-to-head: Visual is strongly supported by explicit face-image stimuli; multisensory is a confusion with recording modality. Final=Visual. Confidence=0.9 (3+ explicit visual-stimulus indicators + few-shot convention match).\n\nType:\n- Candidate 1: Perception. Evidence: explicit \"face processing\" and visual face/scrambled-face stimulus categories.\n- Candidate 2: Memory. Evidence: presence of \"Immediate Repeat\" and \"Delayed Repeat\" conditions could be used to study repetition/recognition memory.\nHead-to-head: The core described paradigm is face processing with stimulus-category manipulations; memory may be involved but is not stated as the primary construct in the provided metadata. Final=Perception. Confidence=0.8 (multiple explicit stimulus/face-processing quotes + strong few-shot analog)."
      }
    },
    "nemar_citation_count": 77,
    "computed_title": "Multisubject, multimodal face processing",
    "nchans_counts": [{"val": 394, "count": 96}],
    "sfreq_counts": [{"val": 1100.0, "count": 96}],
    "stats_computed_at": "2026-04-04T21:29:34.872865+00:00",
    "total_duration_s": null,
    "author_year": "Wakeman2018",
    "canonical_name": ["Wakeman2015", "WakemanHenson"],
    "name_source": "canonical"
  }
}