{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c4f","dataset_id":"ds007523","associated_paper_doi":null,"authors":["Corentin Bel","Julie Bonnaire","Christophe Pallier","Jean-Rémi King"],"bids_version":"1.7.0","contact_info":["Christophe Pallier"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds007523.v1.0.0","datatypes":["meg"],"demographics":{"subjects_count":58,"ages":[23,41,34,23,22,30,28,23,22,20,28,30,22,21,19,33,28,18,20,22,25,23,32,25,27,29,31,32,23,23,21,32,32,32,32,22,32,32,32,32,32,32,23,21,32,32,25,32,32,25,29,33,33,31,43,24,34,23],"age_min":18,"age_max":43,"age_mean":27.79310344827586,"species":null,"sex_distribution":{"m":39,"f":19},"handedness_distribution":{"r":58}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds007523","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"f939e597055fe01aed62f2ad0c2da456d3e8cf61c05458a6f697c12dce5964b5","license":"CC0","n_contributing_labs":null,"name":"LPP MEG Listen","readme":"## Summary\nThis dataset contains magnetoencephalography (MEG) recordings collected\nwhile participants listened to the French audiobook of *Le Petit Prince*\nby Antoine de Saint-Exupéry.\nA complementary MEG dataset from the same project, using a reading (RSVP) paradigm, is available on OpenNeuro (accession number: ds007524).\nThis data is analyzed in:\nd’Ascoli, S., Bel, C., Rapin, J. et al. Towards decoding individual words from non-invasive brain recordings. Nature Communications 16, 10521 (2025). https://doi.org/10.1038/s41467-025-65499-0\n------------------------------------------------------------------------\n## Participants\nFifty-eight healthy adults participated in the listening experiment (17\nfemales; mean age = 27.8 years, SD = 5.5 years).\nAll participants were native French speakers, right-handed, and reported\nno history of neurological disorders. 
Written informed consent was\nobtained prior to participation. The study was approved by the relevant\nlocal ethics committee.\n------------------------------------------------------------------------\n## Stimuli\nThe auditory stimulus consisted of the French audiobook version of *Le\nPetit Prince*.\n- Language: French\n- Format: Continuous audiobook\n- Segmentation: 9 parts\n- Mean duration per part: 10min50s\n- Standard deviation: 55s\n- Minimum duration: 9min40s\n- Maximum duration: 12min30s\nThe same audiobook version was previously used in a publicly available\nfMRI dataset (Li et al., 2022).\n------------------------------------------------------------------------\n## Experimental Procedure\nParticipants were seated in the MEG system after informed consent and\nfamiliarization with the recording environment.\nAuditory stimuli were delivered through MEG-compatible earphones. Sound\nintensity was individually adjusted to a comfortable listening level\nbefore the experiment. Participants were instructed to listen\nattentively and remain as still as possible.\nThe experiment consisted of 9 runs, corresponding to the 9 audiobook\nsegments. Between runs, participants completed 4 multiple-choice\ncomprehension questions presented visually on a screen (not reported here).\nShort breaks were provided between runs. Alertness and movement were monitored\nvia camera during recording.\n------------------------------------------------------------------------\n## Acquisition\n### MEG\nMEG data were recorded inside a magnetically shielded room using a whole-head Elekta Neuromag TRIUX MEG system (Elekta Oy, Helsinki, Finland), equipped with 102 magnetometers and 204 planar gradiometers. 
Data were recorded continuously with a sampling rate of 1000 Hz and an online low-pass filter at 330 Hz and high-pass filter at 0.1 Hz.\nVertical and horizontal electrooculograms (EOG) and an electrocardiogram (ECG) were recorded simultaneously using bipolar electrodes to monitor eye movements and heartbeats.\n### Anatomical MRI\nFor each participant, a high-resolution T1-weighted anatomical MRI scan was acquired using a 3T Siemens Magnetom Prisma MRI scanner (Siemens Healthcare, Erlangen, Germany).\nA standard MPRAGE sequence was used. MRI scans were typically acquired right after the MEG recording. Scans were used for coregistration and cortical surface reconstruction for source analysis.\n------------------------------------------------------------------------\n## Data Organization\n### Raw Data\nThe root directory includes:\n-   `dataset_description.json`\n-   `participants.tsv` and `participants.json`\n-   `task-listen_events.json`\n-   `sub-01` to `sub-58`\n-   `sourcedata/`\nEach subject directory (`sub-XX`) contains one session (`ses-01`) with:\n-   `anat/`: T1-weighted MRI (`sub-XX_ses-01_T1w.nii.gz`) and\n    corresponding JSON sidecar\n-   `meg/`: 9 MEG runs (`task-listen_run-01` to `run-09`), each\n    including:\n    -   continuous MEG data (`*_meg.fif`)\n    -   sidecar JSON files\n    -   `events.tsv` and `channels.tsv` files\n    -   coordinate system file (`*_coordsystem.json`)\n    -   calibration and crosstalk files\n-   `sub-XX_ses-01_scans.tsv`: scan-level metadata\nEach run corresponds to one audiobook segment.\nAcquisition parameters are provided in the corresponding sidecar JSON\nfiles.\n------------------------------------------------------------------------\n## References\nNiso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G.,\nGramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J.,\nOostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., & Baillet, S.\n(2018). 
MEG-BIDS, the brain imaging data structure extended to\nmagnetoencephalography. *Scientific Data*, 5, 180110.\nhttps://doi.org/10.1038/sdata.2018.110\nLi, J., et al. (2022). Le Petit Prince Multilingual Naturalistic fMRI Corpus. *Scientific Data*, 9, 530. https://doi.org/10.1038/s41597-022-01625-7","recording_modality":["meg"],"senior_author":"Jean-Rémi King","sessions":["01"],"size_bytes":477619624392,"source":"openneuro","storage":{"backend":"s3","base":"s3://openneuro.org/ds007523","raw_key":"dataset_description.json","dep_keys":["CHANGES","LICENSE","README","datacite.yml","participants.json","participants.tsv","task-listen_events.json"]},"study_design":null,"study_domain":null,"tasks":["listen"],"timestamps":{"digested_at":"2026-04-22T12:30:29.849000+00:00","dataset_created_at":"2026-03-17T11:47:46.636Z","dataset_modified_at":"2026-03-30T11:59:59.000Z"},"total_files":579,"computed_title":"LPP MEG Listen","nchans_counts":[{"val":346,"count":484},{"val":404,"count":9},{"val":400,"count":9},{"val":329,"count":9},{"val":343,"count":9},{"val":321,"count":1}],"sfreq_counts":[{"val":1000.0,"count":521}],"stats_computed_at":"2026-04-22T23:16:00.312861+00:00","total_duration_s":341307.479,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"42e43bb2e219ce9c","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Perception"],"confidence":{"pathology":0.8,"modality":0.8,"type":0.7},"reasoning":{"few_shot_analysis":"Most similar few-shot example by stimulus modality is the dataset titled “Subcortical responses to music and speech are alike while cortical responses diverge,” which is labeled Modality=Auditory and Type=Perception; it also uses continuous auditory stimuli (music/speech) and targets neural responses to auditory/speech input. 
This convention supports mapping an audiobook listening paradigm to Auditory modality and Perception type (speech/language perception). For pathology conventions, many few-shot examples explicitly assign Healthy when participants are described as healthy volunteers; this matches the current dataset’s explicit recruitment description.","metadata_analysis":"Pathology evidence: the README states “Fifty-eight healthy adults participated in the listening experiment” and also “reported no history of neurological disorders.”\n\nModality evidence: the README states “participants listened to the French audiobook of *Le Petit Prince*” and “Auditory stimuli were delivered through MEG-compatible earphones.”\n\nType/task-purpose evidence: participants were “instructed to listen attentively,” and the linked analysis paper is described as “Towards decoding individual words from non-invasive brain recordings,” indicating the experiment targets neural processing/decoding of speech/words during listening (i.e., auditory language perception under naturalistic stimulation).","paper_abstract_analysis":"No useful paper information (only a citation is provided in the metadata; no abstract text included).","evidence_alignment_check":"Pathology: Metadata says “Fifty-eight healthy adults” and “no history of neurological disorders.” Few-shot pattern suggests labeling such cohorts as Healthy. ALIGN.\n\nModality: Metadata says “participants listened to the French audiobook” and “Auditory stimuli were delivered through ... earphones.” Few-shot pattern maps speech/music listening to Auditory. ALIGN.\n\nType: Metadata emphasizes listening to continuous speech and word decoding (“decoding individual words”) and instructs participants to “listen attentively.” Few-shot pattern for auditory speech/music neural response studies is Type=Perception, while “attentively” could suggest Attention. 
Mostly ALIGN with Perception being the dominant construct (speech/word processing), with minor ambiguity versus Attention.","decision_summary":"Top-2 candidates per category:\n\nPathology:\n1) Healthy (selected) — supported by: “Fifty-eight healthy adults” and “no history of neurological disorders.”\n2) Unknown — would apply if population health were not specified, but it is explicitly specified.\nAlignment: aligns with few-shot conventions for healthy volunteer cohorts.\nConfidence basis: 2 explicit quotes support Healthy.\n\nModality:\n1) Auditory (selected) — supported by: “participants listened to the French audiobook” and “Auditory stimuli were delivered through ... earphones.”\n2) Multisensory — could be argued because “comprehension questions presented visually,” but these were between runs and “not reported here,” whereas the primary stimulus is continuous auditory audiobook.\nAlignment: aligns with few-shot auditory listening examples.\nConfidence basis: 2 explicit quotes clearly indicate auditory stimulation.\n\nType:\n1) Perception (selected) — supported by naturalistic speech listening plus analysis goal “decoding individual words,” consistent with speech/word perception/processing.\n2) Attention — supported by “listen attentively,” but attention appears instructional rather than the primary construct.\nAlignment: mostly aligns with few-shot convention mapping auditory speech/music stimulus-response studies to Perception.\nConfidence basis: 1 strong quote about word decoding + clear auditory listening paradigm; some residual ambiguity vs Attention."}},"canonical_name":null,"name_confidence":0.65,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Bel2026"}}