{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a33d7","dataset_id":"ds005345","associated_paper_doi":null,"authors":["Zhengwu Ma","Nan Wang","Jixing Li"],"bids_version":"1.8.0","contact_info":["Zhengwu Ma","Wang Nan","Jixing Li","Zhengwu Ma"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds005345.v1.0.1","datatypes":["eeg"],"demographics":{"subjects_count":26,"ages":[26,23,24,21,21,24,27,20,25,23,28,26,24,23,26,21,26,26,23,21,27,25,21,26,24,22],"age_min":20,"age_max":28,"age_mean":23.96153846153846,"species":null,"sex_distribution":{"f":15,"m":11},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005345","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"6b902f73d593c4dfb14e8ac7bf0d97fa07dd8e5e1e45e051eefa0e6027c25492","license":"CC0","n_contributing_labs":null,"name":"Le Petit Prince (LPP) Multi-talker: Naturalistic 7T fMRI and EEG Dataset","readme":"## Participants\nThis dataset includes 25 native Mandarin Chinese speakers (14 females, mean age = 24.04 ± 2.28 years) who participated in both EEG and fMRI experiments. The participants were all right-handed, with no reported history of neurological disorders. They were enrolled in undergraduate or graduate programs in Shanghai. All participants gave informed consent, and the experiments were approved by the Ethics Committee of the Ninth People's Hospital, affiliated with Shanghai Jiao Tong University School of Medicine (SH9H-2019-T33-2 and SH9H-2022-T379-2).\nIn the case of French participants, due to legal constraints, additional session considerations were taken into account, such as shorter session durations.\n## Experiment Procedure\nMRI Scanning Sessions\nParticipants underwent both EEG and fMRI experiments while listening to the Chinese version of *Le Petit Prince*. 
During the MRI session, participants were instructed to maintain fixation on a crosshair on the screen and minimize eye movements and head motion. The task involved attending to different talkers in the multitalker condition (single male, single female, mixed male, and mixed female talkers).\n### Session Breakdown\n- The entire session lasted approximately 70 minutes for fMRI participants, including four conditions (among them the single-talker, mixed-attended, and mixed-unattended conditions).\n- Quiz questions were administered after each run to assess participants' comprehension of the narrative.\nIn the French cohort, experiment durations were adjusted to meet legal time constraints.\n## Stimuli\nThe stimuli were selected excerpts from the Chinese version of *Le Petit Prince* (available at [xiaowangzi.org](http://www.xiaowangzi.org/)). These audio clips were previously used in both EEG (Li et al., 2024) and fMRI (Li et al., 2022) studies.\nThe English and Chinese versions were enhanced with visual stimuli (e.g., images of scenes from the book) to align with the storyline. However, visual stimuli were not presented in the French version to comply with legal restrictions.\n## Acquisition\n### Hardware & Scanning Parameters\n- EEG: Data were collected using a 64-channel actiCAP system, sampled at 500 Hz, and filtered between 0.016 and 80 Hz.\n- fMRI: Scanning was performed on a 7.0 T Siemens Terra MRI scanner at the Zhangjiang International Brain Imaging Centre. The scanning parameters differed slightly between the English/Chinese and French studies due to equipment availability.\n  - Functional MRI: 85 interleaved axial slices (1.6×1.6×1.6 mm voxel size, TR = 1000 ms, TE = 22.2 ms)\n  - Anatomical MRI: MP-RAGE sequence, T1-weighted images (voxel size = 0.7×0.7×0.7 mm).\n## Preprocessing\n### MRI Data Processing\n1. 
DICOM to NIfTI Conversion: All raw MRI data were converted to NIfTI format using `dcm2niix` (version 1.0.20220505) and processed using the `fMRIPrep` pipeline (version 20.2.0).\n2. Anatomical Preprocessing:\n   - Skull stripping\n   - Segmentation into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)\n   - Registration to the Montreal Neurological Institute (MNI) space using the MNI152NLin2009cAsym:res-2 template.\n3. Functional Preprocessing:\n   - Motion correction\n   - Slice-timing correction\n   - Multi-echo ICA for denoising\n   - Voxel resampling to native and MNI spaces.\nNote: Visual stimuli processing for the English and Chinese conditions was handled separately to avoid potential biases in the analysis.","recording_modality":["eeg"],"senior_author":"Jixing Li","sessions":[],"size_bytes":174527101942,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["multitalker"],"timestamps":{"digested_at":"2026-04-22T12:27:40.939218+00:00","dataset_created_at":"2024-07-16T14:14:02.035Z","dataset_modified_at":"2025-04-17T07:54:50.000Z"},"total_files":26,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005345","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"c420b0a2c3222ef7","model":"openai/gpt-5.2","tagged_at":"2026-01-20T17:48:32.195593+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Attention"],"confidence":{"pathology":0.8,"modality":0.7,"type":0.8},"reasoning":{"few_shot_analysis":"Closest few-shot matches by task/stimulus are: (1) “Subcortical responses to music and speech…” (Healthy, Auditory, Perception) because it is primarily continuous auditory stimulation (speech/music) during EEG; this guides choosing an Auditory modality when participants mainly listen. 
(2) “Cross-modal Oddball Task.” (Multisensory) shows the convention for labeling Multisensory only when auditory and visual stimuli are both explicitly presented as stimuli. In the current dataset, the core paradigm is listening to a story with selective attention to talkers; that maps more naturally to Type=Attention than Perception/Memory by the catalog conventions demonstrated in the few-shots.","metadata_analysis":"Key population and task facts from the README: (1) Healthy cohort: “25 native Mandarin Chinese speakers… with no reported history of neurological disorders.” (2) Auditory narrative + selective attention: “Participants underwent both EEG and fMRI experiments while listening to the Chinese version of Le Petit Prince.” and “The task involved attending to different talkers in the multitalker condition (single male, single female, mixed male, and mixed female talkers).” Additional potentially conflicting stimulus note: “The English and Chinese versions were enhanced with visual stimuli (e.g., images of scenes from the book) to align with the storyline. However, visual stimuli were not presented in the French version…”.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says “no reported history of neurological disorders” (Healthy). Few-shot pattern suggests Healthy for typical non-clinical adult cohorts. ALIGN.\nModality: Metadata clearly includes auditory stimulation (“listening to… Le Petit Prince”). Metadata also mentions possible added visual images for Chinese/English versions, which would suggest Multisensory, but the task instructions emphasize fixation on a crosshair and the paradigm focus is auditory multitalker attention. Few-shot convention (e.g., cross-modal oddball) uses Multisensory when both channels are clearly part of the task stimuli; here this is ambiguous. 
PARTIAL CONFLICT/AMBIGUITY resolved by selecting Auditory as dominant.\nType: Metadata says the goal is to “attend to different talkers in the multitalker condition,” which matches an Attention construct. Few-shot conventions: auditory continuous stimulation alone often maps to Perception, but explicit selective attending maps better to Attention. ALIGN (with Perception as runner-up).","decision_summary":"Top-2 candidates per category:\n- Pathology: (1) Healthy vs (2) Unknown. Evidence: “no reported history of neurological disorders”; participants are university students. Final: Healthy. Alignment: aligns with few-shot Healthy conventions.\n- Modality: (1) Auditory vs (2) Multisensory. Evidence for Auditory: “listening to the Chinese version of Le Petit Prince”; “audio clips”. Evidence for Multisensory: “enhanced with visual stimuli (e.g., images…)” (but presentation is not fully specified and fixation-crosshair instruction suggests visuals may be minimal/absent). Final: Auditory as dominant channel. Alignment: minor ambiguity with the visual-stimulus note.\n- Type: (1) Attention vs (2) Perception. Evidence for Attention: “attending to different talkers in the multitalker condition”. Evidence for Perception: narrative listening could be passive auditory processing, but selective attention is explicit. Final: Attention. Alignment: consistent with few-shot mapping where attention manipulation is primary."}},"computed_title":"Le Petit Prince (LPP) Multi-talker: Naturalistic 7T fMRI and EEG Dataset","nchans_counts":[{"val":64,"count":26}],"sfreq_counts":[{"val":500.0,"count":26}],"stats_computed_at":"2026-04-22T23:16:00.309367+00:00","total_duration_s":null,"canonical_name":null,"name_confidence":0.92,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Ma2024"}}