{"success":true,"database":"eegdash","data":{"_id":"6953f4239276ef1ee07a3294","dataset_id":"ds000248","associated_paper_doi":null,"authors":["Alexandre Gramfort","Matti S Hämäläinen"],"bids_version":"1.4.0","contact_info":["Mainak Jas","Alexandre Gramfort"],"contributing_labs":null,"data_processed":true,"dataset_doi":"10.18112/openneuro.ds000248.v1.2.4","datatypes":["meg"],"demographics":{"subjects_count":2,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds000248","osf_url":null,"github_url":null,"paper_url":null},"funding":["NIH 5R01EB009048","NIH 1R01EB009048","NIH R01EB006385","NIH 1R01HD40712","NIH 1R01NS44319","NIH 2R01NS37462","NIH P41EB015896","ANR-11-IDEX-0003-02","ERC-StG-263584","ERC-StG-676943","ANR-14-NEUC-0002-01"],"ingestion_fingerprint":"0cce9d2add415a018951a866680de4bfb2e12c7aaedbced03d4146ab5d6bf77d","license":"CC0","n_contributing_labs":null,"name":" ","readme":"﻿MNE-Sample-Data\n---------------\nThe MNE software is accompanied by a sample data set. These data were acquired with the Neuromag Vectorview system at MGH/HMS/MIT Athinoula A. Martinos Center Biomedical Imaging. EEG data from a 60-channel electrode cap was acquired simultaneously with the MEG. The original MRI data set was acquired with a Siemens 1.5 T Sonata scanner using an MPRAGE sequence.\nIn the MEG/EEG experiment, checkerboard patterns were presented into the left and right visual field, interspersed by tones to the left or right ear. The interval between the stimuli was 750 ms. Occasionally a smiley face was presented at the center of the visual field. The subject was asked to press a key with the right index finger as soon as possible after the appearance of the face.\nFreesurfer derivatives\n----------------------\n- Calls from the command line:\n  - `recon-all -i sub-01/anat/sub-01_T1w.nii.gz -s sub-01 -all`\n  - `mne make_scalp_surfaces -s sub-01 --overwrite --force`\n  - `mne flash_bem -s sub-01 --overwrite`\n  - `mne watershed_bem -s sub-01 --overwrite`\nReferences\n----------\nA. Gramfort, M. Luessi, E. Larson, D. Engemann, D. Strohmeier, C. Brodbeck, L. Parkkonen, M. Hämäläinen, MNE software for processing MEG and EEG data, NeuroImage, Volume 86, 1 February 2014, Pages 446-460, ISSN 1053-8119\nA. Gramfort, M. Luessi, E. Larson, D. Engemann, D. Strohmeier, C. Brodbeck, R. Goj, M. Jas, T. Brooks, L. Parkkonen, M. Hämäläinen, MEG and EEG data analysis with MNE-Python, Frontiers in Neuroscience, Volume 7, 2013, ISSN 1662-453X\"\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896\nNiso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. http://doi.org/10.1038/sdata.2018.110\nReferences\n----------\nNiso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. 
N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. https://doi.org/10.1038/sdata.2018.110","recording_modality":["meg"],"senior_author":"Matti S Hämäläinen","sessions":["19210819"],"size_bytes":186216741,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["audiovisual","noise"],"timestamps":{"digested_at":"2026-04-22T12:25:25.479097+00:00","dataset_created_at":"2018-03-30T14:58:53.917Z","dataset_modified_at":"2020-12-11T09:14:57.000Z"},"total_files":3,"storage":{"backend":"s3","base":"s3://openneuro.org/ds000248","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","acq-epi_T1w.json","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"cb302df59b54975b","model":"openai/gpt-5.2","tagged_at":"2026-01-20T10:09:13.052437+00:00"},"tags":{"pathology":["Healthy"],"modality":["Multisensory"],"type":["Attention"],"confidence":{"pathology":0.7,"modality":0.85,"type":0.7},"reasoning":{"few_shot_analysis":"Most similar few-shot by paradigm and stimulus structure is the “Cross-modal Oddball Task” example (Parkinson’s) which includes simultaneous visual+auditory cues and oddballing. That example maps cross-modal stimulus presentation to Modality=“Multisensory”, and the oddball/target-detection framing supports choosing a cognitive-control/target-detection Type rather than pure sensory Perception. Unlike that example, the current dataset does not recruit a clinical population, so Pathology follows metadata facts (non-clinical sample subject) → “Healthy”.","metadata_analysis":"Key task/stimulus facts from the README:\n1) Multisensory stimulation: “checkerboard patterns were presented into the left and right visual field, interspersed by tones to the left or right ear.”\n2) Target detection with a speeded response: “Occasionally a smiley face was presented at the center of the visual field. 
The subject was asked to press a key with the right index finger as soon as possible after the appearance of the face.”\n3) No clinical recruitment described: it is introduced as “MNE-Sample-Data… sample data set” with acquisition/site details but no diagnosis/clinical inclusion criteria.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: sample dataset with a single “subject” and no mention of any diagnosis (“MNE software… sample data set”; no disorder terms).\n- Few-shot pattern suggests: when no disorder recruitment is stated, label as Healthy (seen across multiple healthy-task examples).\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: both visual and auditory stimuli are presented (“checkerboard patterns… visual field” and “tones… ear”).\n- Few-shot pattern suggests: cross-modal/combined auditory+visual stimulation maps to “Multisensory” (Cross-modal Oddball Task example).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: rare target with speeded response (“Occasionally a smiley face… press a key… as soon as possible”), consistent with an oddball/target-detection attention task; also includes basic sensory evoked stimulation.\n- Few-shot pattern suggests: oddball/target-detection paradigms are typically categorized under Attention (e.g., cognitive-control/oddball style datasets), while pure stimulus-response characterization would be Perception.\n- Alignment: PARTIAL (both Attention and Perception plausible); choose Attention because the explicit instruction emphasizes detecting the infrequent smiley and responding quickly.","decision_summary":"Pathology (top-2):\n1) Healthy — Evidence: no diagnosis/recruitment stated; described as “sample data set” and “The subject was asked…” with no patient group. Alignment: aligns with few-shot convention for non-clinical datasets.\n2) Unknown — Would apply if participant status were unclear; however absence of any clinical terms makes Healthy more consistent.\nFinal: Healthy. Confidence=0.7 (implicit rather than explicit “healthy” wording).\n\nModality (top-2):\n1) Multisensory — Evidence: “checkerboard patterns… visual field” + “tones… ear” in the same experiment.\n2) Visual — Would be plausible if focusing only on checkerboards/smiley, but explicit auditory tones are integral.\nFinal: Multisensory. Confidence=0.85 (clear dual-modality description, plus strong few-shot analog).\n\nType (top-2):\n1) Attention — Evidence: “Occasionally a smiley face… press a key… as soon as possible” (target detection/oddball-like).\n2) Perception — Evidence: sensory stimulation with checkerboards and tones could be used for evoked responses.\nFinal: Attention. Confidence=0.7 (one strong quote supports target detection, but sensory-evoked Perception remains a close runner-up)."}},"nemar_citation_count":3,"computed_title":"MNE-Sample-Data","nchans_counts":[{"val":315,"count":1},{"val":376,"count":1}],"sfreq_counts":[{"val":600.614990234375,"count":2}],"stats_computed_at":"2026-04-22T23:16:00.221418+00:00","total_duration_s":387.7126008945096,"canonical_name":null,"name_confidence":0.95,"name_meta":{"suggested_at":"2026-04-14T10:18:35.342Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Gramfort2018"}}
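For reference, a minimal Python sketch of how a client might consume this response, assuming it has been saved verbatim to record.json (the filename and the derived figures are illustrative; all field names are taken from the record above):

import json

# Load the API response (assumed saved locally as record.json).
with open("record.json") as f:
    payload = json.load(f)

assert payload["success"]
data = payload["data"]

# Derive human-readable figures from the raw fields.
size_mb = data["size_bytes"] / 1e6            # 186216741 B -> ~186.2 MB
duration_min = data["total_duration_s"] / 60  # 387.71 s -> ~6.5 min
nchans = {c["val"]: c["count"] for c in data["nchans_counts"]}

print(f'{data["computed_title"]} ({data["dataset_id"]}), license {data["license"]}')
print(f"size: {size_mb:.1f} MB, total recording time: {duration_min:.1f} min")
print(f"channel-count histogram: {nchans}")   # {315: 1, 376: 1}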
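The storage block points at OpenNeuro's public S3 bucket (backend "s3", base s3://openneuro.org/ds000248, with raw_key plus the dep_keys list). Below is a sketch of fetching those keys with boto3, assuming the bucket permits anonymous reads, as OpenNeuro's public bucket does; the local filenames are illustrative:

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) client, since no credentials are needed for public reads.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# storage.base "s3://openneuro.org/ds000248" splits into bucket + key prefix.
bucket, prefix = "openneuro.org", "ds000248"

# Fetch the raw descriptor (raw_key) plus the dependency files (dep_keys).
for key in ["dataset_description.json", "CHANGES", "README",
            "acq-epi_T1w.json", "participants.json", "participants.tsv"]:
    s3.download_file(bucket, f"{prefix}/{key}", key)
    print("downloaded", key)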