{"success":true,"database":"eegdash","data":{"_id":"69a33a3b897a7725c66f3ee1","dataset_id":"ds006386","associated_paper_doi":null,"authors":["Jiangwei Yu","Aonan He"],"bids_version":"1.7.0","contact_info":["Jiangwei Yu"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds006386.v1.0.1","datatypes":["eeg"],"demographics":{"subjects_count":30,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds006386","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"d64ad18595aa9fb443ad8cb6ca330e40b85a647613b790484e9ed9fdda22a2ca","license":"CC0","n_contributing_labs":null,"name":"PhysioMotion_Artifact","readme":"References\n----------\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8","recording_modality":["eeg"],"senior_author":"Aonan He","sessions":[],"size_bytes":24666877908,"source":"openneuro","storage":{"backend":"s3","base":"s3://openneuro.org/ds006386","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["artifact"],"timestamps":{"digested_at":"2026-04-22T12:29:25.547857+00:00","dataset_created_at":"2025-06-25T13:54:38.919Z","dataset_modified_at":"2026-02-11T03:22:41.000Z"},"total_files":180,"computed_title":"PhysioMotion_Artifact","nchans_counts":[{"val":59,"count":180}],"sfreq_counts":[{"val":1000.0,"count":180}],"stats_computed_at":"2026-04-22T23:16:00.312479+00:00","total_duration_s":208797.82,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"5f76f815185e4430","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Motor"],"type":["Other"],"confidence":{"pathology":0.5,"modality":0.6,"type":0.6},"reasoning":{"few_shot_analysis":"Most similar few-shot by task theme is the “EEG Motor Movement/Imagery Dataset” example (motor actions/imagery), which maps overt movement paradigms to Motor-related labeling (it uses motor execution/imagery with visual cues and is labeled Modality=Visual, Type=Motor). Here, the dataset title/task emphasize motion-related artifacts rather than studying motor control/imagery as a cognitive construct, so the few-shot mainly guides the convention that movement-focused paradigms often imply Motor as the dominant modality, but the research Type may shift to Other when the goal is methodological/artifact characterization rather than motor cognition.","metadata_analysis":"Key available metadata is sparse and largely non-task-specific. The dataset provides: (1) title: \"PhysioMotion_Artifact\" (suggesting motion/physiological motion artifact focus), (2) tasks field: \"artifact\", as in \"tasks\": [\"artifact\"], and (3) participant count only: \"participants_overview\": \"Subjects: 30\". The README contains only general BIDS/MNE-BIDS references and no population/task description beyond that.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says nothing about diagnosis or recruitment criteria (only \"Subjects: 30\"); few-shot patterns do not justify inferring a clinical group; they generally label unspecified volunteer cohorts as Healthy. ALIGN (no conflict), but evidence is weak.\nModality: Metadata suggests motion/artifact (\"PhysioMotion_Artifact\", task \"artifact\"), which weakly implies body movement/manipulations rather than sensory stimulation; few-shot convention for movement paradigms supports choosing Motor modality when movement is the dominant manipulation. ALIGN, but remains inferential.\nType: Metadata indicates artifact-focused task labeling (\"artifact\") rather than a cognitive domain (memory/attention/etc.); few-shot motor example would suggest Type=Motor when studying movement/imagery, but here the likely aim is methodological (artifact characterization). PARTIAL CONFLICT: few-shot motor convention suggests Motor type for movement paradigms, but metadata wording prioritizes artifact/method focus; metadata wins because it more directly indicates the dataset purpose.","decision_summary":"Pathology top-2: (1) Healthy — supported only indirectly by lack of any clinical descriptors and generic \"Subjects: 30\"; (2) Unknown — equally plausible given no explicit recruitment info. Winner: Healthy (more consistent with typical artifact/benchmark datasets, but not explicitly stated). Evidence alignment: aligns; very limited evidence. \nModality top-2: (1) Motor — implied by title \"PhysioMotion_Artifact\" and task \"artifact\" suggesting motion-related artifact induction; (2) Other — possible if artifacts are non-motor (e.g., electrode/cable manipulations) and no stimulus channel is defined. Winner: Motor (best match to “Motion” wording). Evidence alignment: aligns; inference-only. \nType top-2: (1) Other — task explicitly labeled \"artifact\" indicating methodological/artifact characterization; (2) Motor — possible if the dataset is intended to study movement signals. Winner: Other because the only explicit task label is \"artifact\" and no cognitive/motor construct is described. Confidence is limited across categories due to minimal quoted task/population detail (only title/task/subject count)."}},"canonical_name":null,"name_confidence":0.55,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Yu2025"}}