{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a3442","dataset_id":"ds006317","associated_paper_doi":null,"authors":["Zihan Zhang","Yu Bao","Tianyi Jiang","Xiao Ding","Xia Liang","Juntong Du","Yi Zhao","Kai Xiong","Bing Qin","Ting Liu"],"bids_version":"1.10.0","contact_info":["Yu Bao"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds006317.v1.1.1","datatypes":["eeg"],"demographics":{"subjects_count":2,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds006317","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"1b8eb9c1df4bea82b9d403dd05e75431ddc5be3826e738fad1a6047d99f36af3","license":"CC0","n_contributing_labs":null,"name":"Chisco-2.0","readme":"This dataset is an imagined speech dataset with two participants, identified as sub-01 to sub-02. The dataset includes raw data in EDF format. Information can also be found at https://github.com/baoyudu/COFETT .\nReferences\n----------\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8","recording_modality":["eeg"],"senior_author":"Ting Liu","sessions":["01","02","03","04"],"size_bytes":56822054294,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["para1","para2"],"timestamps":{"digested_at":"2026-04-22T12:29:17.227428+00:00","dataset_created_at":"2025-06-06T05:49:27.180Z","dataset_modified_at":"2026-04-08T02:48:48.000Z"},"total_files":64,"storage":{"backend":"s3","base":"s3://openneuro.org/ds006317","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"5bf4bde3cd7816cf","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Unknown"],"type":["Motor"],"confidence":{"pathology":0.7,"modality":0.5,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot by construct is the \"EEG Motor Movement/Imagery Dataset\" (Healthy / Visual / Motor), where the core paradigm is motor imagery and the Type label is set to \"Motor\". Chisco-2.0 is explicitly an \"imagined speech\" dataset, which is also an imagery-based (speech-articulation) motor construct, so this example guides mapping the study purpose to Type=\"Motor\". Few-shot examples that map Modality based on explicitly described stimuli (e.g., auditory digit presentation -> Auditory; resting eyes-closed -> Resting State) highlight that Modality should not be guessed without stimulus description; for Chisco-2.0 the cue modality is not specified, so Modality should remain Unknown rather than inferred.","metadata_analysis":"Key metadata facts:\n- Population/task: \"This dataset is a imagined speech dataset with two participants\".\n- Participants: \"two participants, identified as sub-01 to sub-02\" and \"Subjects: 2\".\n- Task labels are non-descriptive: tasks are [\"para1\", \"para2\"], with no description of what stimuli/cues were presented.\nThere is no mention of any diagnosis, patient group, or recruitment based on a disorder in the provided metadata.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: only \"two participants\" / \"Subjects: 2\" with no clinical terms.\n- Few-shot pattern suggests: when no disorder recruitment is stated, label as Healthy.\n- Alignment: ALIGN (absence of pathology in metadata matches Healthy-by-default convention).\n\nModality:\n- Metadata says: \"imagined speech\" but does NOT state whether prompts were visual, auditory, or other (tasks only \"para1\", \"para2\").\n- Few-shot pattern suggests: Modality is assigned from explicitly described stimulus channel (e.g., digits presented auditorily => Auditory; dot motion => Visual). When not described, avoid guessing.\n- Alignment: ALIGN toward choosing Unknown due to insufficient stimulus details.\n\nType:\n- Metadata says: \"imagined speech dataset\" (imagery-based speech production).\n- Few-shot pattern suggests: imagery/execution paradigms map to Type=\"Motor\" (e.g., motor imagery dataset labeled Motor).\n- Alignment: ALIGN (imagery-based speech most closely fits Motor among allowed Type labels).","decision_summary":"Top-2 candidates per category and selection:\n\nPathology:\n1) Healthy (selected): supported by lack of any diagnosis/patient recruitment statement (\"two participants\", \"Subjects: 2\" only).\n2) Unknown: possible because participants are not explicitly described as healthy.\nHead-to-head: Healthy is stronger given the catalog convention (normative cohort when no disorder focus is stated).\nConfidence basis: one explicit absence-of-pathology context + standard convention.\n\nModality:\n1) Unknown (selected): no explicit stimulus/cue modality described (only \"imagined speech\"; tasks \"para1\", \"para2\").\n2) Visual: plausible if words/phonemes were shown on a screen, but this is not stated.\nHead-to-head: Unknown wins because the metadata does not specify stimuli.\nConfidence basis: explicit lack of modality-defining quotes.\n\nType:\n1) Motor (selected): \"imagined speech\" is most consistent with motor imagery of speech/articulation.\n2) Other: could be framed as BCI/communication rather than motor per se.\nHead-to-head: Motor wins by closest match to imagery-based paradigms in few-shot conventions.\nConfidence basis: one direct quote (imagined speech) + strong few-shot analog (motor imagery -> Motor)."}},"computed_title":"Chisco-2.0","nchans_counts":[{"val":127,"count":64}],"sfreq_counts":[{"val":1000.0,"count":64}],"stats_computed_at":"2026-04-22T23:16:00.311471+00:00","total_duration_s":223638.136,"canonical_name":null,"name_confidence":0.72,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Zhang2025_Chisco_2_0"}}