{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a33d6","dataset_id":"ds005343","associated_paper_doi":null,"authors":["Armen Bagdasarov","Michael S. Gaffrey"],"bids_version":"1.8.0","contact_info":["Gaffrey Lab"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds005343.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":43,"ages":[9,5,6,6,9,9,9,9,9,6,9,9,9,9,9,10,6,10,6,6,9,9,6,9,9,6,9,9,6,9,9,9,6,9,6,6,9,9,6,9,7,6,7],"age_min":5,"age_max":10,"age_mean":7.883720930232558,"species":null,"sex_distribution":{"f":20,"m":23},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds005343","osf_url":null,"github_url":null,"paper_url":null},"funding":["Charles Lafitte Foundation"],"ingestion_fingerprint":"35c86b8a99cbd1f165444138e29daa555caf0666eae32f2cd2ccd89df1847153","license":"CC0","n_contributing_labs":null,"name":"Gaffrey Lab Infant Microstates and Attention","readme":"Participants were 43, 5-10-month-old infants. Their caregivers provided informed consent, and compensation was provided for their participation. Infant-caregiver dyads were part of a larger study investigating the impact of bias and discrimination on prenatal and postnatal maternal health and infant development. All research was approved by the Duke University Health System Institutional Review Board and carried out in accordance with the Declaration of Helsinki. Infants sat on their caregiver’s lap and watched up to 15 minutes of relaxing videos with sound (i.e., ten 90-second videos separated by breaks during which caregivers could play with their infant). Before each video started, an attention grabber (i.e., a three-second video of a noisy rattle) directed the infant’s attention to the screen. Videos were presented with E-Prime software (Psychology Software Tools, Pittsburgh, PA). Caregivers were instructed to silently sit still during videos. 
If infants shifted their attention away from the screen, caregivers were permitted to redirect the infant’s attention only by pointing to the screen. EEG was recorded at 1000 hertz (Hz) and referenced to the vertex (channel Cz) using a 128-channel HydroCel Geodesic Sensor Net (Electrical Geodesics, Eugene, OR). Impedances were maintained below 50 kilohms throughout the EEG session. For more information, visit https://github.com/gaffreylab/","recording_modality":["eeg"],"senior_author":"Michael S. Gaffrey","sessions":[],"size_bytes":24373540791,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["resting"],"timestamps":{"digested_at":"2026-04-22T12:27:40.639388+00:00","dataset_created_at":"2024-07-16T02:28:09.557Z","dataset_modified_at":"2024-07-16T12:00:23.000Z"},"total_files":43,"storage":{"backend":"s3","base":"s3://openneuro.org/ds005343","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv","task-resting_events.json"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"87a22cf2b2f12c82","model":"openai/gpt-5.2","tagged_at":"2026-01-20T17:48:06.814986+00:00"},"tags":{"pathology":["Development"],"modality":["Multisensory"],"type":["Perception"],"confidence":{"pathology":0.7,"modality":0.8,"type":0.7},"reasoning":{"few_shot_analysis":"Closest few-shot conventions:\n- The \"Cross-modal Oddball Task\" example is labeled Modality=Multisensory when both visual and auditory cues are presented. This guides assigning Multisensory here because the infant stimuli are videos with sound.\n- The \"Subcortical responses to music and speech\" example shows that passive stimulus exposure (no demanding task) still maps to a stimulus-driven Type (there labeled Perception). 
This supports choosing Perception over Resting-state when clear external stimuli are presented throughout.\n- The resting-state sleep-deprivation example clarifies that Resting-state is used when the metadata explicitly indicates resting-state/eyes open/closed with no stimulus stream; that convention is weaker here because continuous audiovisual stimuli are present.","metadata_analysis":"Key metadata facts:\n- Population/age: \"Participants were 43, 5-10-month-old infants.\" (developmental age group)\n- Stimuli: \"watched up to 15 minutes of relaxing videos with sound\" (simultaneous visual + auditory input)\n- Additional attention cue: \"an attention grabber (i.e., three-second video of a noisy rattle) directed the infant’s attention to the screen.\" (also audiovisual)\n- Paradigm nature: \"Infants sat on their caregiver’s lap\" and \"Caregivers were instructed to silently sit still during videos.\" (passive viewing/listening; no explicit behavioral task described).","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: \"5-10-month-old infants\" (developmental population).\n- Few-shot pattern suggests: not directly covered, but pediatric/child cohorts are treated as Development when age/development is the defining recruitment feature.\n- Alignment: ALIGN.\n\nModality:\n- Metadata says: \"videos with sound\" and \"video of a noisy rattle\".\n- Few-shot pattern suggests: multisensory when concurrent visual+auditory stimuli are used (as in the Cross-modal Oddball example).\n- Alignment: ALIGN.\n\nType:\n- Metadata says: passive exposure to \"relaxing videos\" with an \"attention grabber\"; no explicit cognitive construct named (no memory/decision/motor demand).\n- Few-shot pattern suggests: when there is sustained sensory stimulation without a complex cognitive objective, Perception is often used; Resting-state is reserved for explicit rest/eyes-open/eyes-closed with no stimulus stream.\n- Alignment: 
PARTIAL (metadata does not explicitly state 'perception' or 'resting-state'); decision follows the convention that continuous audiovisual stimulation is better captured by Perception than Resting-state.","decision_summary":"Top-2 candidates with head-to-head comparison:\n\n1) Pathology:\n- Candidate A: Development\n  Evidence: \"5-10-month-old infants\" (developmental age group).\n- Candidate B: Healthy\n  Evidence: no diagnosis mentioned; could be a normative cohort.\n- Decision: Development wins because recruitment is explicitly an infant developmental cohort rather than an adult healthy cohort.\n\n2) Modality:\n- Candidate A: Multisensory\n  Evidence: \"videos with sound\"; \"video of a noisy rattle\" (visual + auditory concurrently).\n- Candidate B: Visual\n  Evidence: primary engagement is looking at the screen (videos).\n- Decision: Multisensory wins due to explicit auditory accompaniment to the videos.\n\n3) Type:\n- Candidate A: Perception\n  Evidence: continuous external sensory stimulation: \"watched... videos with sound\"; attention orienting to stimulus: \"attention grabber... 
directed the infant’s attention\".\n- Candidate B: Resting-state\n  Evidence: \"relaxing videos\" could be used as a low-demand baseline, but not explicitly called resting-state.\n- Decision: Perception wins because the session is stimulus-driven rather than a classic no-stimulus resting paradigm.\n\nConfidence justification:\n- Pathology: supported by 1 strong explicit quote indicating developmental age.\n- Modality: supported by 2 explicit quotes indicating audiovisual stimuli.\n- Type: supported by stimulus/task description but no explicit construct term; Perception vs Resting-state remains somewhat ambiguous."}},"computed_title":"Gaffrey Lab Infant Microstates and Attention","nchans_counts":[{"val":129,"count":43}],"sfreq_counts":[{"val":1000.0,"count":43}],"stats_computed_at":"2026-04-22T23:16:00.309355+00:00","total_duration_s":53738.841,"author_year":"Bagdasarov2024","canonical_name":null}}