{"success":true,"database":"eegdash","data":{"_id":"69d16e06897a7725c66f4cdf","dataset_id":"nm000341","associated_paper_doi":null,"authors":["G. Cattan","P. L. C. Rodrigues","M. Congedo"],"bids_version":"1.9.0","contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.5281/zenodo.2617084","datatypes":["eeg"],"demographics":{"subjects_count":12,"ages":[26,26,26,26,26,26,26,26,26,26,26,26],"age_min":26,"age_max":26,"age_mean":26.0,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/nm000341","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"e6abacdb9d10d4463afae7253384974703ec3b23a2ae43c3346bbf5021ba0531","license":"CC-BY-4.0","n_contributing_labs":null,"name":"Cattan, Rodrigues & Congedo 2019 — Passive Head-Mounted Display Music-Listening EEG dataset (PHMD)","readme":"Cattan2019-PHMD\n===============\nPassive Head Mounted Display with Music Listening dataset [1]_.\nDataset Overview\n----------------\n  Code: Cattan2019-PHMD\n  Paradigm: rstate\n  DOI: 10.5281/zenodo.2617084\n  Subjects: 12\n  Sessions per subject: 1\n  Events: off=1, on=2\n  Trial interval: [0, 1] s\n  File format: mat and csv\nAcquisition\n-----------\n  Sampling rate: 512.0 Hz\n  Number of channels: 16\n  Channel types: eeg=16\n  Channel names: Cz, Fc5, Fc6, Fp1, Fp2, Fz, O1, O2, Oz, P3, P4, P7, P8, Pz, T7, T8\n  Montage: standard_1020\n  Hardware: g.USBamp\n  Software: OpenViBE\n  Reference: right earlobe\n  Ground: AFz\n  Sensor type: wet\n  Line frequency: 50.0 Hz\n  Online filters: no digital filter\n  Cap manufacturer: EasyCap\n  Cap model: EC20\n  Electrode type: wet\nParticipants\n------------\n  Number of subjects: 12\n  Health status: healthy\n  Age: mean=26.25, std=2.63\n  Gender distribution: male=9, female=3\n  Species: human\nExperimental Protocol\n---------------------\n  Paradigm: rstate\n  Number of classes: 2\n  Class labels: off, on\n  Trial duration: 60.0 s\n  Study design: focus on the marker and to listen to the music that was diffused during the experiment (Bach Invention from one to ten on harpsichord).\n  Feedback type: none\n  Stimulus type: visual fixation marker\n  Stimulus modalities: visual, auditory\n  Primary modality: auditory\n  Training/test split: False\n  Instructions: Subjects were asked to focus on the marker and to listen to the music that was diffused during the experiment\nHED Event Annotations\n---------------------\n  Schema: HED 8.4.0 | Browse: https://www.hedtags.org/hed-schema-browser\n  off\n    ├─ Experiment-structure\n    └─ Rest\n  on\n    ├─ Experiment-structure\n    └─ Rest\nData Structure\n--------------\n  Blocks per session: 10\n  Block duration: 60.0 s\n  Trials context: 5 blocks with smartphone switched-off and 5 blocks with smartphone switched-on, randomized sequence\nPreprocessing\n-------------\n  Data state: raw, unfiltered\n  Preprocessing applied: False\n  Notes: Data were acquired with no digital filter. 
No Faraday cage was used, in order to mimic real-world usage.\nBCI Application\n---------------\n  Applications: vr_ar\n  Environment: laboratory\n  Online feedback: False\nTags\n----\n  Pathology: Healthy\n  Modality: EEG\n  Type: Resting State\nDocumentation\n-------------\n  Description: This dataset contains electroencephalographic recordings of 12 subjects listening to music with and without a passive head-mounted display\n  DOI: 10.5281/zenodo.2617084\n  Associated paper DOI: 10.2312/vriphys.20181064\n  License: CC-BY-4.0\n  Investigators: G. Cattan, P. L. C. Rodrigues, M. Congedo\n  Senior author: M. Congedo\n  Institution: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP\n  Address: GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, France\n  Country: FR\n  Repository: Zenodo\n  Data URL: https://doi.org/10.5281/zenodo.2617084\n  Publication year: 2019\n  How to acknowledge: Python code for manipulating the data is available at https://github.com/plcrodrigues/py.PHMDML.EEG.2017-GIPSA\n  Keywords: Electroencephalography (EEG), Virtual Reality (VR), Passive Head-Mounted Display (PHMD), experiment\nAbstract\n--------\nWe describe the experimental procedures for a dataset that we have made publicly available at https://doi.org/10.5281/zenodo.2617084 in mat (Mathworks, Natick, USA) and csv formats. This dataset contains electroencephalographic recordings of 12 subjects listening to music with and without a passive head-mounted display, that is, a head-mounted display that does not include any electronics except for a smartphone. The electroencephalographic headset consisted of 16 electrodes. Data were recorded during a pilot experiment taking place in the GIPSA-lab, Grenoble, France, in 2017. Python code for manipulating the data is available at https://github.com/plcrodrigues/py.PHMDML.EEG.2017-GIPSA. The ID of this dataset is PHMDML.EEG.2017-GIPSA.\nMethodology\n-----------\nSubjects sat in front of a screen at a distance of ~50 cm, without instrumental noise-reduction devices. The EEG cap and the Samsung Gear were placed on the subject. Smartphones were swapped between the switched-on and switched-off conditions throughout the experiment. Each block consisted of 1 minute of EEG recording with eyes open. The sequence of 10 blocks was randomized prior to the experiment using a random number generator with no autocorrelation. Triggers marked the beginning of each block (1=switched-off, 2=switched-on).\nReferences\n----------\nG. Cattan, P. L. Coelho Rodrigues, and M. Congedo, ‘Passive Head-Mounted Display Music-Listening EEG dataset’, Gipsa-Lab; IHMTEK, Research Report 2, Mar. 2019. doi: 10.5281/zenodo.2617084.\nAppelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
https://doi.org/10.1038/s41597-019-0104-8\n---\nGenerated by MOABB 1.5.0 (Mother of All BCI Benchmarks)\nhttps://github.com/NeuroTechX/moabb","recording_modality":["eeg"],"senior_author":null,"sessions":["0"],"size_bytes":242518067,"source":"nemar","storage":{"backend":"s3","base":"s3://openneuro.org/nm000341","raw_key":"dataset_description.json","dep_keys":["README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["rstate"],"timestamps":{"digested_at":"2026-04-22T12:52:29.450105+00:00","dataset_created_at":null,"dataset_modified_at":null},"total_files":12,"computed_title":"Cattan, Rodrigues & Congedo 2019 — Passive Head-Mounted Display Music-Listening EEG dataset (PHMD)","nchans_counts":[{"val":16,"count":12}],"sfreq_counts":[{"val":512.0,"count":12}],"stats_computed_at":"2026-04-22T23:16:00.314643+00:00","total_duration_s":9849.9765625,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"c4c27a6856271f9b","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Resting-state"],"confidence":{"pathology":0.9,"modality":0.85,"type":0.9},"reasoning":{"few_shot_analysis":"Closest few-shot conventions are the resting-state examples: (1) “A Resting-state EEG Dataset for Sleep Deprivation” labeled Healthy / Resting State / Resting-state—shows that passive eyes-open/closed recordings with no cognitive task are labeled Type=Resting-state and Modality=Resting State when no explicit stimulus is central. (2) Dementia resting-state dataset labeled Modality=Resting State and Type=Clinical/Intervention illustrates how resting-state paradigms are treated as Resting-state unless the primary purpose is clinical classification. For modality with auditory content, the “Subcortical responses to music and speech…” example shows auditory listening paradigms map to Modality=Auditory, but that dataset is not resting-state (it is stimulus-driven auditory physiology). Here, because the metadata explicitly states a resting-state paradigm with music listening and explicitly names the primary modality as auditory, we follow the convention: Type=Resting-state, Modality determined by dominant stimulus channel (auditory).","metadata_analysis":"Key participant/pathology facts: (1) “Health status: healthy” (Participants section). (2) “Tags\\n----\\n  Pathology: Healthy”.\n\nKey modality/task facts: (1) “Passive Head Mounted Display with Music Listening dataset”. (2) “Stimulus type: visual fixation marker” and “Stimulus modalities: visual, auditory” plus “Primary modality: auditory”. (3) “Instructions: Subjects were asked to focus on the marker and to listen to the music”.\n\nKey type (resting) facts: (1) “Paradigm: rstate” and “Tasks: [ 'rstate' ]”. (2) HED annotations: “off … Rest” and “on … Rest”. (3) “Tags… Type: Resting State”.","paper_abstract_analysis":"No useful paper information beyond the included dataset abstract; it reiterates “12 subjects listening to music with and without a passive head-mounted display” and does not suggest an active cognitive task.","evidence_alignment_check":"Pathology — Metadata says: “Health status: healthy” and “Pathology: Healthy”. Few-shot pattern suggests: if no disorder-recruited population, label Healthy. ALIGN.\n\nModality — Metadata says: “Stimulus modalities: visual, auditory” and explicitly “Primary modality: auditory”, with “listen to the music”. 
Few-shot pattern suggests: auditory listening studies → Auditory; multimodal cueing (e.g., cross-modal oddball) → Multisensory. PARTIAL TENSION (Auditory vs Multisensory), resolved by explicit metadata fact “Primary modality: auditory” (metadata wins).\n\nType — Metadata says: “Paradigm: rstate”, HED tags “Rest”, and “Type: Resting State”. Few-shot pattern suggests: passive resting paradigms → Type=Resting-state (as in sleep-deprivation resting dataset). ALIGN.","decision_summary":"Top-2 candidates per category with head-to-head comparison:\n\nPathology:\n1) Healthy — Supported by “Health status: healthy”, “Pathology: Healthy”, and the absence of any diagnosis-focused recruitment.\n2) Unknown — Would apply only if participant health were not specified.\nWinner: Healthy (explicitly stated). Alignment: aligns with few-shot resting-state healthy conventions.\nConfidence basis: 2+ explicit quotes (“Health status: healthy”; “Pathology: Healthy”) plus consistent context.\n\nModality:\n1) Auditory — Supported by “Primary modality: auditory”, “listen to the music”, and music-listening description.\n2) Multisensory — Supported by “Stimulus modalities: visual, auditory” and fixation marker + music.\nWinner: Auditory because the dataset explicitly designates the dominant channel (“Primary modality: auditory”). Alignment: mostly aligns with few-shot auditory-listening conventions, with acknowledged mixed-modality presentation.\nConfidence basis: 3 explicit features, but runner-up is plausible due to explicit multimodal note.\n\nType:\n1) Resting-state — Supported by “Paradigm: rstate”, HED “Rest” for both conditions, and “Type: Resting State”.\n2) Perception — Could be argued due to music listening, but no discrimination/detection task or behavioral goal is described.\nWinner: Resting-state (explicitly labeled and structured as rest blocks). Alignment: aligns with few-shot resting-state conventions.\nConfidence basis: 3+ explicit rest labels/annotations."}},"canonical_name":null,"name_confidence":0.72,"name_meta":{"suggested_at":"2026-04-14T10:18:35.344Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Cattan2019_PHMD"}}
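
The readme above is MOABB-generated ("Generated by MOABB 1.5.0"), so the recordings should be reachable through MOABB's dataset API as well as through the raw files. Below is a minimal loading sketch in Python, assuming the installed MOABB release exposes a Cattan2019_PHMD class matching the "Code: Cattan2019-PHMD" entry (older releases used a different class name, so check moabb.datasets; the dict traversal avoids hard-coding session/run keys):

# Sketch only: the Cattan2019_PHMD class name is inferred from the README's
# dataset code; verify it against your installed MOABB version.
from moabb.datasets import Cattan2019_PHMD

dataset = Cattan2019_PHMD()

# get_data returns a nested dict: {subject: {session: {run: mne.io.Raw}}}
data = dataset.get_data(subjects=[1])

sessions = data[1]                    # this record lists a single session, "0"
runs = next(iter(sessions.values()))  # first (only) session
raw = next(iter(runs.values()))       # first run, as an mne.io.Raw

print(raw.info)         # expect 16 EEG channels sampled at 512.0 Hz
print(raw.annotations)  # block markers: off=1 (smartphone off), on=2 (on)

From there, standard MNE-Python tooling applies (filtering, epoching on the off/on block markers, and so on).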
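
The "storage" block points at raw BIDS sidecars on S3 (backend "s3", base "s3://openneuro.org/nm000341", with dataset_description.json plus the README, participants.json, and participants.tsv dependency keys). A sketch of fetching those files directly, assuming the listed keys are relative to the base prefix and that the bucket permits anonymous reads, as OpenNeuro's public bucket does:

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) S3 client; public buckets need no AWS credentials.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Keys taken from the record's storage.raw_key and storage.dep_keys, joined
# to the "nm000341" prefix from storage.base (the joining is an assumption).
for key in ["dataset_description.json", "README", "participants.json", "participants.tsv"]:
    s3.download_file("openneuro.org", f"nm000341/{key}", key)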