{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c50","dataset_id":"ds007524","associated_paper_doi":null,"authors":["Corentin Bel","Julie Bonnaire","Jean-Rémi King","Christophe Pallier"],"bids_version":"1.6.0","contact_info":["Christophe Pallier"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds007524.v1.0.1","datatypes":["meg"],"demographics":{"subjects_count":50,"ages":[43,33,33,33,33,33,34,24,33,34,20,33,43,24,24,33,24,24,33,33,33,23,24,30,23,33,24,36,28,33,24,23,24,24,24,24,32,23,33,24,23,22,32,24,28,21,30,22,22,45],"age_min":20,"age_max":45,"age_mean":28.7,"species":null,"sex_distribution":{"m":40,"f":10},"handedness_distribution":{"r":50}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds007524","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"da1c6117637e759c856c63bb64756bd34d46c9d2350b32200776bbbf0e7e4353","license":"CC0","n_contributing_labs":null,"name":"LittlePrince_MEG_French_Read_Pallier2025","readme":"## Summary\nThis dataset contains magnetoencephalography (MEG) recordings collected while participants read the French text of *Le Petit Prince* presented using a rapid serial visual presentation (RSVP) paradigm.\nA separate dataset containing MEG recordings from the auditory listening paradigm is available on OpenNeuro (accession number: ds007523).\nThis data is analyzed in:\nd’Ascoli, S., Bel, C., Rapin, J. et al. Towards decoding individual words from non-invasive brain recordings. Nature Communications 16, 10521 (2025). https://doi.org/10.1038/s41467-025-65499-0\n------------------------------------------------------------------------\n## Participants\nFifty healthy adults participated in the reading experiment (10 females; mean age = 28.4 years, SD = 5.7 years).\nAll participants were native French speakers, right-handed, and reported no history of neurological disorders. Written informed consent was obtained prior to participation. The study was approved by the relevant local ethics committee.\n------------------------------------------------------------------------\n## Stimuli\nThe stimulus consisted of the French text of *Le Petit Prince*.\nThe text was presented using a rapid serial visual presentation (RSVP) paradigm:\n- Words were displayed individually in white font on a black background\n- Word duration: 225 ms\n- Inter-word interval: 50 ms (black screen)\n- Sentence-final pause: 500 ms\nTiming parameters were selected based on pilot testing to maintain attention and reading fluency.\nThe text was segmented into 9 parts corresponding to the 9 experimental runs.\n- Mean run duration: 8min10s\n- SD: 40s\n- Range: 7min10s to 9min\n------------------------------------------------------------------------\n## Experimental Procedure\nAfter informed consent and familiarization with the MEG environment, participants were seated in the MEG chair inside a magnetically shielded room facing a projection screen.\nViewing distance was fixed at 100 cm. Words appeared sequentially at the center of the screen. Participants were instructed to maintain fixation and read attentively while minimizing movement.\nThe experiment consisted of 9 runs. Short breaks were provided between runs. 
After each run, participants completed 4 multiple-choice comprehension questions to assess engagement (behavioral responses are not included in this release).\n------------------------------------------------------------------------\n## Acquisition\n### MEG\nMEG data for all three tasks were recorded inside the same magnetically shielded room using a whole-head Elekta Neuromag TRIUX MEG system (Elekta Oy, Helsinki, Finland), equipped with 102 magnetometers and 204 planar gradiometers. Data were recorded continuously with a sampling rate of 1000 Hz and an online low-pass filter at 330 Hz and high-pass filter at 0.1 Hz.\nVertical and horizontal electrooculograms (EOG) and an electrocardiogram (ECG) were recorded simultaneously using bipolar electrodes to monitor eye movements and heartbeats.\n### Anatomical MRI\nFor each participant, a high-resolution T1-weighted anatomical MRI scan was acquired using a 3T Siemens Magnetom Prisma MRI scanner (Siemens Healthcare, Erlangen, Germany).\nA standard MPRAGE sequence was used. MRI scans were typically acquired right after the MEG recording. Scans were used for coregistration and cortical surface reconstruction for source analysis.\n------------------------------------------------------------------------\n## Data Organization\n### Raw Data\nThe root directory includes:\n- `dataset_description.json`\n- `participants.tsv` and `participants.json`\n- `task-read_events.json`\n- `sub-01` to `sub-50`\n- `sourcedata/`\n- `derivatives/`\nEach subject directory (`sub-XX`) contains one session (`ses-01`) with:\n- `anat/`: T1-weighted MRI (`sub-XX_ses-01_T1w.nii.gz`) and corresponding JSON sidecar\n- `meg/`: 9 MEG runs (`task-read_run-01` to `run-09`), each including:\n  - continuous MEG data (`*_meg.fif`)\n  - sidecar JSON files\n  - `events.tsv` and `channels.tsv` files\n  - coordinate system file (`*_coordsystem.json`)\n  - calibration and crosstalk files\n- `sub-XX_ses-01_scans.tsv`: scan-level metadata\nEach run corresponds to one text segment.\nAcquisition parameters are provided in the corresponding sidecar JSON files.\n### Derivatives\nThe `derivatives/` directory contains:\n- `freesurfer/`: subject-specific FreeSurfer reconstructions and morph maps\n- `preprocessed_data/`: preprocessed MEG data (including SSS-processed files), forward and inverse solutions, noise covariance matrices, source spaces, transformation files, evoked data, and source time courses.\n------------------------------------------------------------------------\n## Reference\nNiso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., & Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. *Scientific Data*, 5, 180110. 
https://doi.org/10.1038/sdata.2018.110","recording_modality":["meg"],"senior_author":"Christophe Pallier","sessions":["01"],"size_bytes":320573795930,"source":"openneuro","storage":{"backend":"s3","base":"s3://openneuro.org/ds007524","raw_key":"dataset_description.json","dep_keys":["CHANGES","LICENSE","README","participants.json","participants.tsv","task-read_events.json"]},"study_design":null,"study_domain":null,"tasks":["read"],"timestamps":{"digested_at":"2026-04-22T12:30:30.361258+00:00","dataset_created_at":"2026-03-17T11:48:20.788Z","dataset_modified_at":"2026-04-03T12:56:45.000Z"},"total_files":500,"computed_title":"LittlePrince_MEG_French_Read_Pallier2025","nchans_counts":[{"val":346,"count":414},{"val":339,"count":27},{"val":338,"count":9}],"sfreq_counts":[{"val":1000.0,"count":450}],"stats_computed_at":"2026-04-22T23:16:00.312875+00:00","total_duration_s":230369.55000000002,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"ad17027995bf3bb3","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Other"],"confidence":{"pathology":0.8,"modality":0.9,"type":0.65},"reasoning":{"few_shot_analysis":"Closest convention matches in the few-shot set are the visual-stimulus cognitive experiments labeled as Visual modality (e.g., the schizophrenia visual discrimination example labeled Modality=Visual and Type=Perception). Those examples indicate that when stimuli are visually presented on a screen, Modality should be Visual regardless of response format. However, unlike the few-shot visual discrimination (motion left/right) which cleanly maps to Perception, this dataset’s primary aim is word/language decoding during continuous reading, a construct not explicitly represented among the provided Type labels; this pushes Type toward Other rather than Perception by convention (use Perception when the study is primarily sensory discrimination/detection).","metadata_analysis":"Key metadata facts:\n- Population: \"Fifty healthy adults participated\" and \"reported no history of neurological disorders.\" \n- Stimulus/task: \"participants read the French text of Le Petit Prince presented using a rapid serial visual presentation (RSVP) paradigm\" and \"Words were displayed individually in white font on a black background\".\n- Research aim context: \"Towards decoding individual words from non-invasive brain recordings\" (paper citation in README), indicating a language/decoding focus rather than simple visual detection.","paper_abstract_analysis":"No useful paper abstract text was provided in the metadata (only a citation).","evidence_alignment_check":"Pathology:\n- Metadata says: \"Fifty healthy adults\" and \"no history of neurological disorders\".\n- Few-shot pattern suggests: datasets describing healthy participants map to Pathology=Healthy.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"read ... presented using a rapid serial visual presentation (RSVP)\" and \"Words were displayed individually ... on a ... 
screen\".\n- Few-shot pattern suggests: screen-based visual paradigms map to Modality=Visual (e.g., visual discrimination example).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: participants \"read attentively\" and the referenced analysis is \"Towards decoding individual words\".\n- Few-shot pattern suggests: many stimulus-driven tasks map to Perception when the goal is sensory discrimination; but here the construct is language/word decoding during reading (not an allowed Type label).\n- Alignment: PARTIAL CONFLICT/AMBIGUITY. Metadata indicates higher-level language decoding; since no 'Language' Type exists in allowed labels, the closest allowed label is Other rather than Perception. Metadata facts are used; few-shot conventions guide that Perception is for sensory discrimination, which is not the primary aim here.","decision_summary":"Top-2 candidates per category with head-to-head comparison:\n\nPathology:\n1) Healthy — Supported by explicit participant description: \"Fifty healthy adults\" and \"no history of neurological disorders\".\n2) Unknown — Would apply only if population were not described.\nWinner: Healthy (explicit recruitment description). Evidence alignment: aligned with few-shot convention.\n\nModality:\n1) Visual — Strong support: \"rapid serial visual presentation (RSVP)\"; \"Words were displayed individually ... on a ... screen\".\n2) Multisensory — Only if combined auditory/visual stimulation; but this dataset is the reading (visual) paradigm and explicitly contrasts a separate auditory dataset.\nWinner: Visual (direct stimulus description; separate auditory dataset mentioned). Evidence alignment: aligned with few-shot convention.\n\nType:\n1) Other — Best match because the study centers on reading/language and \"decoding individual words\", and 'Language' is not an available Type label.\n2) Perception — Plausible because RSVP word presentation is visually driven; however Perception is better reserved (per conventions) for sensory discrimination/detection as the primary aim.\nWinner: Other (primary construct is language/word decoding rather than basic perceptual discrimination). Evidence alignment: few-shot suggests Perception for visual discrimination tasks, but metadata implies a different construct; choose Other due to label-set limitation.\n\nConfidence justification (quotes/features):\n- Pathology high because of 2 explicit health-related statements: \"Fifty healthy adults\"; \"no history of neurological disorders\".\n- Modality high because of multiple explicit visual-stimulus statements: \"rapid serial visual presentation\"; \"Words were displayed individually\"; \"projection screen\".\n- Type moderate because it requires mapping language/decoding into the closest allowed bucket: \"decoding individual words\" + reading task description, but no direct allowed Type for language."}},"canonical_name":null,"name_confidence":0.62,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"canonical","author_year":"Pallier2025"}}