{
  "success": true,
  "database": "eegdash",
  "data": {
    "_id": "6953f4249276ef1ee07a342d",
    "dataset_id": "ds005963",
    "associated_paper_doi": null,
    "authors": ["Rickson C. Mesquita"],
    "bids_version": "1.7.0",
    "contact_info": [],
    "contributing_labs": null,
    "data_processed": false,
    "dataset_doi": "doi:10.18112/openneuro.ds005963.v1.0.0",
    "datatypes": ["fnirs"],
    "demographics": {
      "subjects_count": 10,
      "ages": [],
      "age_min": null,
      "age_max": null,
      "age_mean": null,
      "species": null,
      "sex_distribution": null,
      "handedness_distribution": null
    },
    "experimental_modalities": null,
    "external_links": {
      "source_url": "https://openneuro.org/datasets/ds005963",
      "osf_url": null,
      "github_url": null,
      "paper_url": null
    },
    "funding": [],
    "ingestion_fingerprint": "8bd01461c92ae951733ae7ebb5f2f5a0fd534e608d709ef3b48d1657849ba35b",
    "license": "CC0",
    "n_contributing_labs": null,
    "name": "FRESH Motor Dataset",
    "readme": "References\n----------\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nIn preperation",
    "recording_modality": ["fnirs"],
    "senior_author": "Rickson C. Mesquita",
    "sessions": ["left2s", "left3s", "right2s", "right3s"],
    "size_bytes": 244698664,
    "source": "openneuro",
    "study_design": null,
    "study_domain": null,
    "tasks": ["FRESHMOTOR"],
    "timestamps": {
      "digested_at": "2026-04-22T12:29:01.189706+00:00",
      "dataset_created_at": "2025-03-06T19:23:10.085Z",
      "dataset_modified_at": "2025-03-06T19:27:45.000Z"
    },
    "total_files": 40,
    "storage": {
      "backend": "s3",
      "base": "s3://openneuro.org/ds005963",
      "raw_key": "dataset_description.json",
      "dep_keys": ["CHANGES", "README", "participants.json", "participants.tsv"]
    },
    "tagger_meta": {
      "config_hash": "3557b68bca409f28",
      "metadata_hash": "dd67691fe4eebdca",
      "model": "openai/gpt-5.2",
      "tagged_at": "2026-04-07T09:32:40.872789+00:00"
    },
    "tags": {
      "pathology": ["Unknown"],
      "modality": ["Unknown"],
      "type": ["Motor"],
      "confidence": {
        "pathology": 0.4,
        "modality": 0.5,
        "type": 0.7
      },
      "reasoning": {
        "few_shot_analysis": "Most similar few-shot example by paradigm is the 'EEG Motor Movement/Imagery Dataset' (Schalk et al.), which is clearly a motor execution/imagery study and is labeled Type=Motor. That example also shows a convention that many motor paradigms have Visual modality when the movement is cued by on-screen targets. However, in the current dataset, the task/stimulus details needed to apply that same Visual-modality convention are not present, so the few-shot can only guide the likely Type (Motor), not provide missing facts (population, stimulus channel).",
        "metadata_analysis": "Key available metadata is extremely sparse. Relevant quotes:\n1) Title: \"FRESH Motor Dataset\".\n2) Tasks: \"FRESHMOTOR\".\n3) Participants overview: \"Subjects: 10\".\n4) README: \"In preperation\" (no paradigm/population/stimulus description).",
        "paper_abstract_analysis": "No useful paper information.",
        "evidence_alignment_check": "Pathology: Metadata SAYS only \"Subjects: 10\" with no diagnosis/clinical recruitment described. Few-shot pattern SUGGESTS that when no disorder focus is stated, datasets are often labeled Healthy, but this is not an explicit fact here. ALIGN/CONFLICT: neither; metadata is insufficient, so pathology cannot be determined.\n\nModality: Metadata SAYS only \"Motor\" in the title and task name, without stating the stimulus channel (e.g., visual cues, auditory pacing, etc.). Few-shot pattern SUGGESTS motor paradigms are often visually cued (thus Visual modality in the motor-imagery example), but that requires task description not present here. ALIGN/CONFLICT: no direct alignment possible; insufficient stimulus information.\n\nType: Metadata SAYS \"Motor\" (title: \"FRESH Motor Dataset\"), which directly indicates the research domain. Few-shot pattern SUGGESTS Motor type for motor execution/imagery datasets (e.g., Schalk motor/imagery example). ALIGN: yes (both indicate Motor).",
        "decision_summary": "Top-2 candidates (with head-to-head comparison):\n\nPathology:\n- Candidate 1: Unknown. Evidence: only \"Subjects: 10\"; no mention of patients/controls/diagnosis anywhere.\n- Candidate 2: Healthy. Evidence: absence of clinical terms could imply normative sampling, but this is an inference with no supporting statement.\nDecision: Unknown wins because there is no explicit recruitment/diagnosis information.\nConfidence basis: lack of any pathology-related quotes (only a subject count).\n\nModality:\n- Candidate 1: Unknown. Evidence: no stimulus description beyond motor-themed title/task name.\n- Candidate 2: Visual. Evidence: few-shot motor example uses Visual modality due to on-screen targets, but current metadata does not confirm any visual stimulus.\nDecision: Unknown wins because stimulus channel cannot be determined from provided metadata.\nConfidence basis: only indirect inference possible.\n\nType:\n- Candidate 1: Motor. Evidence quotes: \"FRESH Motor Dataset\" and task \"FRESHMOTOR\"; supported by few-shot motor dataset convention (Type=Motor).\n- Candidate 2: Unknown. Evidence: minimal metadata could, in principle, hide a different construct, but 'Motor' naming strongly suggests the intent.\nDecision: Motor wins due to explicit motor labeling in title/task name and few-shot alignment.\nConfidence basis: one clear metadata cue (motor) plus strong few-shot analog."
      }
    },
    "computed_title": "FRESH Motor Dataset",
    "nchans_counts": [{"val": 136, "count": 40}],
    "sfreq_counts": [{"val": 8.928571428571429, "count": 40}],
    "stats_computed_at": "2026-04-22T23:16:00.311185+00:00",
    "total_duration_s": 23362.976,
    "canonical_name": null,
    "name_confidence": 0.55,
    "name_meta": {
      "suggested_at": "2026-04-14T10:18:35.343Z",
      "model": "openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"
    },
    "name_source": "author_year",
    "author_year": "Mesquita2025"
  }
}