{
  "success": true,
  "database": "eegdash",
  "data": {
    "_id": "69d16e05897a7725c66f4ccd",
    "dataset_id": "nm000270",
    "associated_paper_doi": null,
    "authors": [
      "Yuan Liu",
      "Zhuolan Gui",
      "De Yan",
      "Zhuang Wang",
      "Ruisi Gao",
      "Ningxin Han",
      "Junying Chen",
      "Jialing Wu",
      "Dong Ming"
    ],
    "bids_version": "1.9.0",
    "contact_info": null,
    "contributing_labs": null,
    "data_processed": false,
    "dataset_doi": "doi:10.1038/s41597-025-04618-4",
    "datatypes": ["eeg"],
    "demographics": {
      "subjects_count": 27,
      "ages": [],
      "age_min": null,
      "age_max": null,
      "age_mean": null,
      "species": null,
      "sex_distribution": null,
      "handedness_distribution": null
    },
    "experimental_modalities": null,
    "external_links": {
      "source_url": "https://openneuro.org/datasets/nm000270",
      "osf_url": null,
      "github_url": null,
      "paper_url": null
    },
    "funding": [],
    "ingestion_fingerprint": "247376a147d76f0e0fb8a11fef12d8573849dd8eea7060571a32d9191fc6081c",
    "license": "CC-BY-NC-ND-4.0",
    "n_contributing_labs": null,
    "name": "Liu et al. 2025 — Lower limb motor imagery EEG dataset based on the multi-paradigm and longitudinal-training of stroke patients (Tianjin University)",
    "readme": null,
    "recording_modality": ["eeg"],
    "senior_author": null,
    "sessions": ["0pre"],
    "size_bytes": 6223321296,
    "source": "nemar",
    "storage": {
      "backend": "s3",
      "base": "s3://openneuro.org/nm000270",
      "raw_key": "dataset_description.json",
      "dep_keys": []
    },
    "study_design": null,
    "study_domain": null,
    "tasks": ["imagery"],
    "timestamps": {
      "digested_at": "2026-04-22T12:52:19.331298+00:00",
      "dataset_created_at": null,
      "dataset_modified_at": null
    },
    "total_files": 110,
    "computed_title": "Liu et al. 2025 — Lower limb motor imagery EEG dataset based on the multi-paradigm and longitudinal-training of stroke patients (Tianjin University)",
    "nchans_counts": [{"val": 64, "count": 110}],
    "sfreq_counts": [{"val": 1000.0, "count": 110}],
    "stats_computed_at": "2026-04-22T23:16:00.314400+00:00",
    "total_duration_s": 32391.890000000003,
    "tagger_meta": {
      "config_hash": "3557b68bca409f28",
      "metadata_hash": "11b8c36365c954ec",
      "model": "openai/gpt-5.2",
      "tagged_at": "2026-04-07T09:32:40.872789+00:00"
    },
    "tags": {
      "pathology": ["Unknown"],
      "modality": ["Unknown"],
      "type": ["Motor"],
      "confidence": {"pathology": 0.5, "modality": 0.5, "type": 0.7},
      "reasoning": {
        "few_shot_analysis": "Most similar few-shot example by paradigm is the \"EEG Motor Movement/Imagery Dataset\" (Schalk et al.), which contains motor imagery runs and is labeled Type=Motor (with Modality=Visual due to on-screen cues). This guides mapping the task name \"imagery\" to Type=Motor when imagery is the explicit paradigm name. However, unlike the example, this dataset’s metadata does not describe the cue/stimulus channel (visual/auditory/etc.), so Modality cannot be confidently inferred from few-shot conventions alone.",
        "metadata_analysis": "Key available metadata is extremely sparse. Quotes:\n1) Title: \"liu2025 - NEMAR Dataset\".\n2) Participants: \"Subjects: 27\".\n3) Task list includes: \"imagery\" (alongside non-task entries like \".gitignore\" and \"Liu2025.metadata\").\nNo metadata text specifies a clinical population, diagnosis, stimulus type, or concrete task description beyond the name \"imagery\".",
        "paper_abstract_analysis": "No useful paper information.",
        "evidence_alignment_check": "Pathology:\n- Metadata says: only \"Subjects: 27\"; no diagnosis/condition stated.\n- Few-shot pattern suggests: many imagery datasets are Healthy, but that is not a stated fact here.\n- Alignment: cannot assess; no explicit pathology facts. Therefore label should not be forced to Healthy.\n\nModality:\n- Metadata says: task name \"imagery\" only; no stimulus/cue modality described.\n- Few-shot pattern suggests: motor imagery tasks are often visually cued (thus sometimes labeled Visual), but this is not explicitly stated for this dataset.\n- Alignment: conflicts in certainty (few-shot suggests likely Visual; metadata provides no confirming facts). Metadata insufficiency means we avoid over-committing.\n\nType:\n- Metadata says: task includes \"imagery\".\n- Few-shot pattern suggests: imagery paradigms map to Type=Motor (as in the motor movement/imagery example).\n- Alignment: aligns reasonably (task name directly indicates imagery; Motor is the closest construct label).",
        "decision_summary": "Top-2 candidates and selection:\n\n1) Pathology:\n- Candidate A: Unknown — supported by absence of any clinical recruitment info (only \"Subjects: 27\").\n- Candidate B: Healthy — plausible by convention for many non-clinical imagery datasets, but not stated.\nDecision: Unknown (metadata does not state Healthy or any disorder). Evidence alignment: insufficient metadata; do not infer pathology.\n\n2) Modality (stimulus/input channel):\n- Candidate A: Unknown — supported by no mention of visual/auditory/tactile cues; only \"imagery\" is provided.\n- Candidate B: Visual — plausible because motor imagery tasks are commonly visually cued (few-shot motor imagery example labeled Visual), but not confirmed here.\nDecision: Unknown (cannot verify cue modality from metadata). Evidence alignment: few-shot suggests Visual but conflicts with lack of explicit metadata.\n\n3) Type (construct/purpose):\n- Candidate A: Motor — supported by task name \"imagery\" and few-shot convention mapping imagery paradigms to Motor.\n- Candidate B: Other — if \"imagery\" referred to non-motor mental imagery (e.g., visual imagery), but no supporting description.\nDecision: Motor. Evidence alignment: metadata term + few-shot convention are consistent.\n\nConfidence justification:\n- Pathology confidence limited (no quotes indicating any pathology or health status).\n- Modality confidence low (no quotes about stimulus channel).\n- Type confidence moderate (one explicit cue: task name \"imagery\"; plus strong few-shot analog)."
      }
    },
    "canonical_name": null,
    "name_confidence": 0.56,
    "name_meta": {
      "suggested_at": "2026-04-14T10:18:35.344Z",
      "model": "openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"
    },
    "name_source": "author_year",
    "author_year": "Liu2025"
  }
}
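A consumer of this response needs nothing beyond the standard library to unpack it. The sketch below is a minimal, hypothetical example (not part of the record itself): it parses a trimmed copy of the fields above — only keys that actually appear in the response, such as `data.demographics.subjects_count`, `nchans_counts`, `sfreq_counts`, `total_files`, and `total_duration_s` — and cross-checks that channel count and sampling rate are uniform across all files.

```python
import json

# Trimmed copy of the eegdash-style response above; every key and value
# is taken directly from the record, but most fields are omitted here.
response = json.loads("""{"success": true, "database": "eegdash", "data": {
  "dataset_id": "nm000270",
  "demographics": {"subjects_count": 27},
  "nchans_counts": [{"val": 64, "count": 110}],
  "sfreq_counts": [{"val": 1000.0, "count": 110}],
  "total_files": 110,
  "total_duration_s": 32391.890000000003
}}""")

assert response["success"]
data = response["data"]

# nchans_counts/sfreq_counts are histograms of per-file values; a single
# entry whose count equals total_files means every recording matches it.
(nchans,) = data["nchans_counts"]
(sfreq,) = data["sfreq_counts"]
uniform = nchans["count"] == sfreq["count"] == data["total_files"]

hours = data["total_duration_s"] / 3600.0
print(f"{data['dataset_id']}: {data['demographics']['subjects_count']} subjects, "
      f"{nchans['val']} ch @ {sfreq['val']:g} Hz, uniform={uniform}, {hours:.1f} h")
```

Under these assumptions the check confirms a homogeneous dataset: 110 files, all 64-channel at 1000 Hz, roughly 9 hours of EEG in total.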