{"success":true,"database":"eegdash","data":{"_id":"6953f4239276ef1ee07a32bd","dataset_id":"ds003082","associated_paper_doi":null,"authors":["Jonathan Cote","Etienne de Villers-Sidani"],"bids_version":"1.2.0","contact_info":["Jonathan Cote"],"contributing_labs":null,"data_processed":true,"dataset_doi":"10.18112/openneuro.ds003082.v1.0.0","datatypes":["meg"],"demographics":{"subjects_count":2,"ages":[23],"age_min":23,"age_max":23,"age_mean":23.0,"species":null,"sex_distribution":{"f":1},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds003082","osf_url":null,"github_url":null,"paper_url":null},"funding":["This work was funded in part by grants from the Natural Sciences and Engineering Research Council of Canada (NSERC), the Centre for Research on Brain, Language and Music (CRBLM), and the Reseau quebecois de recherche sur le vieillissement (RQRV)."],"ingestion_fingerprint":"a8aa4aa056c10d8149061f1918dbaf01a649960bbfe3342758d1b0624245ad5c","license":"CC0","n_contributing_labs":null,"name":"Auditory Cortex Mapping Dataset","readme":"# Brainstorm - Auditory Cortex Mapping Dataset\n## License\nThis dataset (MEG and MRI data) was collected by Jonathan Cote of the Neuroplasticity and Sensory Biomarking Lab, Montreal Neurological Institute, McGill University, Canada. Its purpose is to serve as a data example to be used with our MEG-based auditory cortex mapping technique. 
It is presently released in the Public Domain, and is not subject to copyright in any jurisdiction.\nWe would, however, appreciate it if you referenced this dataset in your publications: please acknowledge its authors (Jonathan Cote and Etienne de Villers-Sidani) and cite the mapping technique publication (under review).\nThis dataset will first contain a single subject, but might be expanded to up to 10 participants in the future.\n## Presentation of the experiment\n#### Experiment\n* One subject, one acquisition run of around 12 minutes\n* Subject stimulated binaurally with intra-aural earphones (air tubes+transducers)\n* The run contains:\n    * 1795 iso-intensity pure tones (IIPT)\n    * Their frequencies range between 100 Hz and 21527 Hz, spaced by 1/4 octave.\n* Inter-stimulus interval: randomized, averaging a presentation rate of 3Hz\n* The subject passively listened while looking at a fixation cross\n* Auditory stimuli generated with the Matlab Psychophysics toolbox\n#### MEG acquisition\n* Acquisition at **12000Hz**, with a **CTF 275** system, subject in seated position\n* Recorded at the Montreal Neurological Institute in January 2015\n* Anti-aliasing low-pass filter at 3000Hz, files saved with the 3rd order gradient\n* Recorded channels (306):\n    * 1 Trigger channel indicating the presentation times of the audio stimuli: UADC001 (#306)\n    * 26 MEG reference sensors (#4-#29)\n    * 273 MEG axial gradiometers (#30-#302)\n    * 1 ECG bipolar (#303)\n    * 2 EOG bipolar (vertical #304, horizontal #305)\n    * 3 Unused channels (#1-#3)\n* 3 datasets:\n    * **sub-0001_ses-0001_task-mapping_run-01_meg.ds**: Run #1, 653s, 1795 IIPT, sampled at 12000 Hz\n    * **sub-emptyroom_ses-0001_emptyroom_run-01_meg.ds**: Empty room recording, 120s long, sampled at 12000 Hz\n    * **sub-emptyroom_ses-0001_emptyroom_run-02_meg.ds**: Empty room recording, 120s long, sampled at 2400 Hz\n* Use of the .ds files, not the AUX (standard at the MNI), because they are easier 
to manipulate in FieldTrip\n#### Stimulation delays\n* **Delay #1**: Transmission of the sound.\nBetween when the sound card plays the sound and when the subject receives the sound in the ears. This is the time it takes for the transducer to convert the analog audio signal into a sound, plus the time it takes for the sound to travel through the air tubes from the transducer to the subject's ears. This delay cannot be estimated from the recorded signals: before the acquisition, we placed a sound meter at the extremity of the tubes to record when the sound is delivered. Delay **between 4.8ms and 5.0ms** (std = 0.08ms). At a sampling rate of 2400Hz, this delay can be considered **constant**; we will not compensate for it.\n* **Delay #2**: Recording of the signals.\nThe CTF MEG systems have a constant delay of **4 samples** between the MEG/EEG channels and the analog channels (such as the audio signal UADC001), because of an anti-aliasing filter that is applied to the former but not the latter. This translates here to a **constant delay** of **1.7ms**.\n* **Uncorrected delays**: We will keep the delays. 
We chose not to compensate for these delays because they introduce no jitter in the responses and do not change the interpretation of the data.\n#### Head shape and fiducial points\n* 3D digitization using a Polhemus Fastrak device driven by Brainstorm (S01_20131218_*.pos)\n* More information: [Digitize EEG electrodes and head shape][1]\n* The output file is copied to each .ds folder and contains the following entries:\n    * The position of the center of CTF coils\n    * The position of the anatomical references we use in Brainstorm: Nasion and connections tragus/helix, as illustrated [here][2].\n* Around 150 head points distributed on the hard parts of the head (no soft tissues)\n#### Subject anatomy\n* Subject with 1.5T MRI\n* Processed with FreeSurfer 5.3\n[1]: http://neuroimage.usc.edu/brainstorm/Tutorials/TutDigitize\n[2]: http://neuroimage.usc.edu/brainstorm/CoordinateSystems#Pre-auricular_points_.28LPA.2C_RPA.29","recording_modality":["meg"],"senior_author":"Etienne de Villers-Sidani","sessions":["0001","20150112"],"size_bytes":14215257908,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["mapping","noise"],"timestamps":{"digested_at":"2026-04-22T12:25:31.062200+00:00","dataset_created_at":"2020-08-17T10:13:10.611Z","dataset_modified_at":"2021-11-17T20:54:33.000Z"},"total_files":3,"storage":{"backend":"s3","base":"s3://openneuro.org/ds003082","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.tsv"]},"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"013ca91ba3da3f0c","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Perception"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot example by task+stimulus is: \"Subcortical responses to music and speech are alike while cortical responses diverge\" (Healthy / 
Auditory / Perception). That example is labeled Auditory modality because stimuli are sounds (clicks, music, speech) presented to listeners, and Perception type because the primary aim is sensory/auditory response characterization rather than memory/decision/motor. The present dataset likewise presents auditory tones and is intended for auditory cortex response mapping, matching the same modality/type convention.","metadata_analysis":"Key population/study-purpose and stimulus facts from metadata:\n- Purpose/construct: \"its purpose is to serve as a data example to be used with our MEG-based auditory cortex mapping technique\".\n- Auditory stimulation: \"Subject stimulated binaurally with intra-aural earphones\".\n- Stimulus type: \"1795 iso-intensity pure tones (IIPT)\" and \"frequency ... between 100 Hz and 21527 Hz\".\n- Task context: \"The subject passively listened while looking at a fixation cross\".\n- Participants overview suggests no clinical recruitment: \"Subjects: 2; Sex: {'f': 1}; Age range: 23-23\" (no diagnosis/control/patient grouping mentioned).","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: no disorder is mentioned; only \"One subject\" and \"Subjects: 2 ... Age range: 23-23\".\n- Few-shot pattern suggests: absent any clinical recruitment language, label as Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"stimulated binaurally with intra-aural earphones\" and \"pure tones\".\n- Few-shot pattern suggests: sound stimuli => Auditory.\n- Alignment: ALIGN.\n\nType:\n- Metadata says: \"auditory cortex mapping\" using many tones, and participant \"passively listened\".\n- Few-shot pattern suggests: sensory stimulation/response characterization without learning/decision/motor => Perception.\n- Alignment: ALIGN.","decision_summary":"Top-2 comparative selections:\n\n1) Pathology\n- Candidate A: Healthy\n  - Evidence: no clinical terms; \"One subject\"; \"Subjects: 2; ... 
Age range: 23-23\"; dataset described as a technique example rather than patient study (\"data example\").\n- Candidate B: Unknown\n  - Evidence: metadata does not explicitly say \"healthy\".\n- Decision: Healthy (lack of any recruitment by diagnosis aligns with catalog convention).\n- Alignment status: few-shot convention and metadata align.\n\n2) Modality\n- Candidate A: Auditory\n  - Evidence: \"stimulated binaurally with intra-aural earphones\"; \"pure tones (IIPT)\"; frequencies \"100 Hz to 21527 Hz\".\n- Candidate B: Multisensory\n  - Evidence: participant \"looking at a fixation cross\" (visual element), but it is not the primary stimulus manipulated.\n- Decision: Auditory (dominant manipulated stimulus is sound).\n- Alignment status: align with auditory-perception few-shot example.\n\n3) Type\n- Candidate A: Perception\n  - Evidence: \"auditory cortex mapping\"; many tone frequencies; \"passively listened\" implies sensory evoked responses/sensory mapping.\n- Candidate B: Attention\n  - Evidence: fixation cross could imply sustained attention, but no attention manipulation/objective stated.\n- Decision: Perception (primary goal is auditory sensory mapping, not attentional control).\n- Confidence justification quotes/features: purpose statement (mapping), explicit pure-tone stimulation, passive listening instruction."}},"nemar_citation_count":5,"computed_title":"Auditory Cortex Mapping Dataset","nchans_counts":[{"val":300,"count":2},{"val":306,"count":1}],"sfreq_counts":[{"val":12000.0,"count":2},{"val":2400.0,"count":1}],"stats_computed_at":"2026-04-22T23:16:00.221902+00:00","total_duration_s":1064.0,"canonical_name":null,"name_confidence":0.82,"name_meta":{"suggested_at":"2026-04-14T10:18:35.342Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Cote2020"}}