{"success":true,"database":"eegdash","data":{"_id":"69d16e04897a7725c66f4c57","dataset_id":"ds007615","associated_paper_doi":null,"authors":["Henrik Normannseth","Stein Andersson","Christoffer Hatlestad-Hall"],"bids_version":"1.9.0","contact_info":["Christoffer Hatlestad-Hall"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds007615.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":69,"ages":[20,22,21,21,25,20,24,25,26,29,25,29,24,29,27,27,29,23,26,25,27,30,22,21,23,20,20,28,24,29,19,26,28,21,23,22,23,28,25,22,33,28,25,21,25,40,25,26,28,26,28,23,26,36,23,40,23,28,29,24,23,23,26,21,21,23,24,26,21],"age_min":19,"age_max":40,"age_mean":25.26086956521739,"species":null,"sex_distribution":{"f":69},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds007615","osf_url":null,"github_url":null,"paper_url":null},"funding":["University of Oslo"],"ingestion_fingerprint":"6c7682b4ebde7df964b67e3e02ebaa411aab397ddbe7152b720fa896e9a775e5","license":"CC0","n_contributing_labs":null,"name":"LDAEP and resting-state EEG in healthy women","readme":"# LDAEP and resting-state EEG in healthy women\n## The dataset at a glance\n- 69 participants, all female.\n- Age range: 19-40 years, mean age 25.3 years (SD 4.2).\n- A single recording session comprising two paradigms: LDAEP (54 participants) and resting-state (eyes-open and eyes-closed; 69 participants).\n- 64 EEG channels and 4 auxiliary oculogram channels.\n- Signal sampling rate: 2048 Hz.\n- Additional data include hormonal contraceptive use, menstrual cycle phase, depressive symptoms (BDI-II), impulsivity (UPPS-P), and lifestyle factors.\n## Introduction\nThis dataset contains EEG recordings from 69 healthy women, acquired at the Department of Psychology, University of Oslo, Norway. 
The data were collected as part of a study investigating the relationship between hormonal contraceptive use and central serotonergic activity indexed by the loudness dependence of auditory evoked potentials (LDAEP).\nTwo paradigms were recorded per participant: a resting-state recording (four minutes eyes closed followed by four minutes eyes open) and an LDAEP paradigm (1000 Hz tones at five intensity levels: 55, 65, 75, 85, and 95 dB SPL; 80 trials per level). The resting-state recording was always conducted first to avoid auditory stimulus contamination. Resting-state data are available for all 69 participants. Due to a technical issue with the auditory stimulation equipment, LDAEP data are available for 54 of the 69 participants.\nThe data were recorded with a BioSemi ActiveTwo system, using 64 Ag-AgCl electrodes positioned according to the extended 10-20 system (10-10), at a sampling rate of 2048 Hz. Raw data are stored in BrainVision format (triplet of `*.eeg`, `*.vhdr`, `*.vmrk`).\nAlongside the EEG data, the dataset includes questionnaire data on hormonal contraceptive use, menstrual cycle, depressive symptoms (BDI-II), impulsivity (UPPS-P), and lifestyle factors.\n## Disclaimer\nThe dataset is provided \"as is\". The authors take no responsibility with regard to data quality. The user is solely responsible for ascertaining that the data used for publications or in other contexts fulfil the required quality criteria.\n## The data\n### Raw data files\nEach participant's EEG data are stored in the `sub-##/eeg/` directory in BrainVision format. Up to two tasks are included per participant:\n- `task-rest`: Eyes-closed (`acq-ec`) and eyes-open (`acq-eo`) resting-state (4 minutes each). Available for all 69 participants.\n- `task-ldaep`: LDAEP auditory stimulation paradigm. 
Available for 54 participants only.\nEach `task`/`acq` combination comprises three data files (`*.eeg`, `*.vhdr`, `*.vmrk`) and is accompanied by a sidecar metadata file (`*.json`), a channels information file (`*_channels.tsv`), and an events file (`*_events.tsv` with `*_events.json`). The data signals are unfiltered, except for a standard software anti-aliasing filter (recorded in Norway; the line noise frequency is 50 Hz).\nThe horizontal EOG channels (HOG1, HOG2) are positioned near the outer canthi (1 = left, 2 = right); the vertical EOG channels (VOG1, VOG2) are positioned above (1) and below (2) the right eye.\nPlease note that the data do not come with any pre-defined quality assessment. Whilst most data files are of high quality, individual files may vary. It is the user's responsibility to verify the quality of each data file. The dataset does not include quality assessment or preprocessing code. For an example LDAEP pipeline, please refer to the paper referenced below.\n### Participant and phenotype data\nCore demographic variables are provided in `participants.tsv` at the root level (age, sex, and hormonal contraceptive user group).\nAdditional participant-level data are organised in the `phenotype/` directory:\n- `hc_usage.tsv`: Hormonal contraceptive usage details and menstrual cycle data (26 variables). Includes contraceptive type, progestin type, duration of use, self-reported mood changes and side effects (coded yes/no), prior HC use history, pregnancy history, and menstrual cycle phase.\n- `lifestyle.tsv`: Medication use, nicotine use and type, alcohol consumption, and recreational drug use and frequency (7 variables).\n- `bdi.tsv`: Beck Depression Inventory-II total score, cognitive-affective subscale score, and somatic subscale score (4 variables). Subscales follow the female-specific factor structure of Dozois et al. 
(1998).\n- `upps.tsv`: UPPS-P Impulsive Behavior Scale subscale scores: lack of perseverance, urgency, lack of premeditation, and sensation seeking (5 variables).\nEach phenotype file is accompanied by a JSON sidecar with variable descriptions and coding schemes. To protect participant privacy, free-text questionnaire responses were excluded from the dataset; only coded categorical and numeric variables are provided.\n## How to cite\nAny use of this dataset in a publication requires the following paper to be cited:\nNormannseth, H., Hatlestad-Hall, C., Rygvold, T. W., Hadzic, A., & Andersson, S. (2025). Hormonal contraceptive use is associated with reduced central serotonergic activity indexed by the loudness dependence of auditory evoked potentials. Frontiers in Human Neuroscience, 19, 1647425. https://doi.org/10.3389/fnhum.2025.1647425\nA dataset descriptor article is currently in the works.\n## Contact\nQuestions regarding the dataset may be addressed to the corresponding author, Christoffer Hatlestad-Hall, Department of Neurology, Oslo University Hospital, Norway (chrihat (at) ous-research.no).\n## References\nThe dataset was standardised and organised in accordance with BIDS using [MNE-BIDS](https://mne.tools/mne-bids/):\nAppelhoff, S., Sanderson, M., Brooks, T., Van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896\nThe relevant [BIDS specification](https://bids-specification.readthedocs.io/en/stable/) publications:\nGorgolewski, K. J., Auer, T., Calhoun, V. D., Craddock, R. C., Das, S., Duff, E. P., Flandin, G., Ghosh, S. S., Glatard, T., Halchenko, Y. O., Handwerker, D. A., Hanke, M., Keator, D., Li, X., Michael, Z., Maumet, C., Nichols, B. 
N., Nichols, T. E., Pellman, J., … Poldrack, R. A. (2016). The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Scientific Data, 3(1), 160044. https://doi.org/10.1038/sdata.2016.44\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6(1), 103–108. https://doi.org/10.1038/s41597-019-0104-8\nOther articles:\nDozois, D. J., Dobson, K. S., & Ahnberg, J. L. (1998). A psychometric evaluation of the Beck Depression Inventory-II. Psychological Assessment, 10, 83–89. https://doi.org/10.1037/1040-3590.10.2.83","recording_modality":["eeg"],"senior_author":"Christoffer Hatlestad-Hall","sessions":[],"size_bytes":37200493446,"source":"openneuro","storage":{"backend":"s3","base":"s3://openneuro.org/ds007615","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["ldaep","rest"],"timestamps":{"digested_at":"2026-04-22T12:30:34.396308+00:00","dataset_created_at":"2026-04-01T12:36:31.690Z","dataset_modified_at":"2026-04-02T11:41:17.000Z"},"total_files":192,"computed_title":"LDAEP and resting-state EEG in healthy women","nchans_counts":[{"val":68,"count":192}],"sfreq_counts":[{"val":2048.0,"count":192}],"stats_computed_at":"2026-04-22T23:16:00.313010+00:00","total_duration_s":66774.71630859375,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"d18d9bbe64fe5515","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Auditory"],"type":["Perception"],"confidence":{"pathology":0.9,"modality":0.8,"type":0.7},"reasoning":{"few_shot_analysis":"Most similar few-shot conventions:\n- The example “Subcortical responses to music and speech...” is labeled Healthy / Auditory / Perception, and is based 
on explicit auditory stimulus presentation (clicks, music/speech). This guides mapping an auditory evoked-potential paradigm (like LDAEP with tones) to Modality=Auditory and Type=Perception.\n- The example “A Resting-state EEG Dataset for Sleep Deprivation” is labeled Healthy / Resting State / Resting-state when the paradigm is explicitly resting with eyes open/closed and no active task. This guides considering Resting State labels when the dataset is primarily resting.\nThis dataset contains both an auditory evoked-potential paradigm (LDAEP) and eyes-open/eyes-closed rest, so the few-shot examples suggest the main ambiguity is Perception/Auditory vs Resting-state/Resting State; selection should follow what the metadata frames as the primary study purpose.","metadata_analysis":"Key quoted metadata facts:\n- Population: “This dataset contains EEG recordings from 69 healthy women” and “69 participants, all female.”\n- Paradigms present: “a single recording session comprising two paradigms: LDAEP ... and resting-state (eyes-open and eyes-closed; 69 participants).”\n- Auditory stimulus (LDAEP): “LDAEP paradigm (1000 Hz tones at five intensity levels: 55, 65, 75, 85, and 95 dB SPL; 80 trials per level).”\n- Resting-state: “a resting-state recording (four minutes eyes closed followed by four minutes eyes open)” and file naming “task-rest: Eyes-closed ... and eyes-open ... resting-state (4 minutes each).”\n- Study aim emphasizes LDAEP: “investigating the relationship between hormonal contraceptive use and central serotonergic activity indexed by the loudness dependence of auditory evoked potentials (LDAEP).”","paper_abstract_analysis":"No useful paper information. 
(Only a citation is provided; no abstract text included in the metadata supplied.)","evidence_alignment_check":"Pathology:\n- Metadata says: “69 healthy women” (explicit healthy cohort).\n- Few-shot pattern suggests: similar healthy non-clinical cohorts map to Pathology=Healthy.\n- ALIGN.\n\nModality:\n- Metadata says: auditory stimulation is present in LDAEP: “1000 Hz tones at five intensity levels...”; also rest is present: “resting-state (eyes-open and eyes-closed).”\n- Few-shot pattern suggests: auditory stimulus paradigms map to Modality=Auditory (e.g., ABR/music-speech example), while pure eyes-open/closed rest maps to Modality=Resting State (sleep deprivation resting example).\n- PARTIAL CONFLICT due to mixed paradigms. Metadata framing emphasizes LDAEP as the key index (“serotonergic activity indexed by... LDAEP”), so select Auditory as the dominant stimulus modality.\n\nType:\n- Metadata says: goal is to study “loudness dependence of auditory evoked potentials (LDAEP)” using tones, which is fundamentally sensory-evoked response measurement; also includes a “resting-state recording”.\n- Few-shot pattern suggests: sensory-evoked discrimination/encoding paradigms map to Type=Perception (auditory ABR example), whereas no-task eyes-open/closed maps to Type=Resting-state.\n- PARTIAL CONFLICT because dataset includes both. Given the stated scientific target is LDAEP (an auditory evoked potential measure), Type=Perception is slightly better aligned with the primary construct described, while acknowledging resting-state is also included.","decision_summary":"Top-2 candidates and selection:\n\nPathology:\n1) Healthy — Evidence: “EEG recordings from 69 healthy women”; “69 participants, all female.”\n2) Unknown — (runner-up only if health status were unclear; not the case)\nDecision: Healthy. 
Alignment: aligns with few-shot healthy conventions.\nConfidence basis: multiple explicit healthy-cohort statements.\n\nModality:\n1) Auditory — Evidence: “LDAEP paradigm (1000 Hz tones at five intensity levels...)”; aim is “indexed by... auditory evoked potentials (LDAEP).” Few-shot analog: Healthy/Auditory/Perception auditory-response dataset.\n2) Resting State — Evidence: “resting-state (eyes-open and eyes-closed; 69 participants)”; “four minutes eyes closed... four minutes eyes open.” Few-shot analog: Healthy/Resting State/Resting-state resting datasets.\nDecision: Auditory (because metadata frames LDAEP as the central indexed construct, with rest included as an additional paradigm).\nConfidence basis: clear auditory-stimulus quotes, but mixed-paradigm design reduces certainty.\n\nType:\n1) Perception — Evidence: “loudness dependence of auditory evoked potentials (LDAEP)” and graded tone intensities; few-shot convention maps stimulus-evoked sensory paradigms to Perception.\n2) Resting-state — Evidence: explicit eyes-open/eyes-closed rest paradigm for all participants.\nDecision: Perception (LDAEP-focused study purpose), noting resting-state is also present.\nConfidence basis: explicit LDAEP/evoked-potential aim supports Perception, but coexistence with resting makes the runner-up plausible."}},"canonical_name":null,"name_confidence":0.55,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Normannseth2026"}}