{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a33b9","dataset_id":"ds005107","associated_paper_doi":null,"authors":["Wei Xu","et al."],"bids_version":"1.7.0","contact_info":["Wei Xu"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds005107.v2.0.0","datatypes":["meg"],"demographics":{"subjects_count":21,"ages":[23,24,22,22,21,20,21,20,20,23,24,23,22,22,28,20,27,20,23,22,21],"age_min":20,"age_max":28,"age_mean":22.285714285714285,"species":null,"sex_distribution":{"m":11,"f":10},"handedness_distribution":{"r":21}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005107","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"fb673732f5885f3c4d5e62d1b36b7468f8eab8f2384e02c7f8f0aec4f5dc688e","license":"CC0","n_contributing_labs":null,"name":"FACE-DEC","readme":"Main entrance: face_0_main\nPreprocessing: face_1_prep\nDecoding: face_2_dec\nRSA: face_3_rsa\nStatistical: face_4_stat\nBMS: face_6_bayes.m\nDuring original OPM-MEG data acquisition, individual facial point clouds and structural MRIs were not collected due to the unavailability of optical scanning equipment. Hence, all analyses were conducted at the whole-brain & sensor level. The raw data were originally stored using in-house LabVIEW format and were converted into the FIF format later. It should be noted that the sensor coordinates used were approximated by selecting corresponding locations from the Elekta layout and do not reflect the actual sensor positions (only for visualizing topographic maps). We are currently checking all the data to ensure that everything has been uploaded correctly. \n:)\nCorrespondence: weixu@mail.bnu.edu.cn","recording_modality":["meg"],"senior_author":"et al.","sessions":["01","02"],"size_bytes":29613455000,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["face"],"timestamps":{"digested_at":"2026-04-22T12:27:21.550923+00:00","dataset_created_at":"2024-04-24T06:49:36.871Z","dataset_modified_at":"2025-06-30T23:44:09.000Z"},"total_files":350,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005107","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"db7d2babe8573477","model":"openai/gpt-5.2","tagged_at":"2026-01-20T17:41:22.593966+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Perception"],"confidence":{"pathology":0.6,"modality":0.6,"type":0.55},"reasoning":{"few_shot_analysis":"Most similar few-shot by stimulus/input channel is the schizophrenia visual discrimination dataset (Meta-rdk), which maps an explicitly visual task to Modality=Visual and Type=Perception. While our dataset lacks explicit task description, the repeated 'face_*' pipeline naming convention strongly suggests face-stimulus processing, which (by the few-shot convention) is treated as Visual modality and typically Perception-type research (stimulus/category processing) rather than Motor or Resting-state.","metadata_analysis":"Key available metadata is only the README. It indicates a face-related analysis pipeline: \"Main entrance: face_0_main\" and subsequent steps \"Decoding: face_2_dec\" and \"RSA: face_3_rsa\". It also specifies the recording context as MEG: \"During original OPM-MEG data acquisition...\". There is no mention of any clinical diagnosis or patient recruitment group, nor explicit task/stimulus description beyond the repeated 'face' labeling.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says nothing about a disorder/clinical recruitment (no diagnosis terms; only methods notes like \"OPM-MEG data acquisition\"). Few-shot pattern suggests that when no clinical group is stated, label as Healthy. ALIGN (no conflict), but weak evidence.\nModality: Metadata implies 'face' paradigm via filenames (\"face_0_main\", \"face_2_dec\", \"face_3_rsa\"). Few-shot convention maps face/visual stimuli tasks to Visual modality. ALIGN (inferred, not explicitly stated).\nType: Metadata suggests representational analyses/decoding (\"Decoding: face_2_dec\", \"RSA: face_3_rsa\"), which typically study perceptual/category representations of stimuli. Few-shot convention for stimulus discrimination/processing is Perception. ALIGN (but still inferred due to missing explicit task description).","decision_summary":"Pathology top-2: (1) Healthy—supported by absence of any clinical recruitment statements in README (only technical notes: \"OPM-MEG data acquisition\"); (2) Unknown—because participants/population are never described. Chosen: Healthy (weakly favored by catalog convention when no pathology is indicated). Confidence 0.6.\nModality top-2: (1) Visual—implied by repeated 'face' pipeline naming (\"face_0_main\", \"Decoding: face_2_dec\", \"RSA: face_3_rsa\"); (2) Unknown—because no explicit stimulus description is provided. Chosen: Visual. Confidence 0.6.\nType top-2: (1) Perception—face decoding/RSA typically targets perceptual/categorical representations (\"Decoding: face_2_dec\", \"RSA: face_3_rsa\"); (2) Other—could be methodological/decoding-focused without a clearly stated cognitive construct. Chosen: Perception. Confidence 0.55."}},"nemar_citation_count":1,"computed_title":"FACE-DEC","nchans_counts":[{"val":65,"count":350}],"sfreq_counts":[{"val":1000.0,"count":350}],"stats_computed_at":"2026-04-22T23:16:00.308997+00:00","total_duration_s":113850.44,"author_year":"Xu2024_DEC","canonical_name":null}}