{"success":true,"database":"eegdash","data":{"_id":"69a33a3b897a7725c66f3eed","dataset_id":"ds007406","associated_paper_doi":null,"authors":["Allison Edit","Attila Pohlmann"],"bids_version":"1.8.0","contact_info":["Attila Pohlmann"],"contributing_labs":null,"data_processed":false,"dataset_doi":"doi:10.18112/openneuro.ds007406.v1.0.0","datatypes":["eeg"],"demographics":{"subjects_count":10,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds007406","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"5a1760212f02177dfb08ec39a277bb7ed46a8d72a6b6a3a834b9f8f0d727ed98","license":"CC0","n_contributing_labs":null,"name":"EEG dataset on consumer responses to extreme versus traditional marketing videos","readme":"This dataset comprises EEG recordings from ten participants exposed to six marketing video stimuli from three companies (Red Bull, GoPro, Columbia Sportswear), categorized as traditional product-focused advertisements versus \"extreme\" authentic documentary-style videos. 
Data were collected using a 14-channel EMOTIV EPOC X headset.","recording_modality":["eeg"],"senior_author":"Attila Pohlmann","sessions":[],"size_bytes":27012236,"source":"openneuro","storage":{"backend":"s3","base":"s3://openneuro.org/ds007406","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.tsv","task-extremeversustraditionalvideos_events.json"]},"study_design":null,"study_domain":null,"tasks":["extremeversustraditionalvideos"],"timestamps":{"digested_at":"2026-04-22T12:30:16.329681+00:00","dataset_created_at":"2026-02-12T22:29:37.137Z","dataset_modified_at":"2026-02-12T22:32:19.000Z"},"total_files":10,"computed_title":"EEG dataset on consumer responses to extreme versus traditional marketing videos","nchans_counts":[{"val":14,"count":10}],"sfreq_counts":[{"val":256.0,"count":10}],"stats_computed_at":"2026-04-22T23:16:00.312631+00:00","total_duration_s":1800.234375,"tagger_meta":{"config_hash":"3557b68bca409f28","metadata_hash":"3008b3a0ff66f421","model":"openai/gpt-5.2","tagged_at":"2026-04-07T09:32:40.872789+00:00"},"tags":{"pathology":["Healthy"],"modality":["Multisensory"],"type":["Affect"],"confidence":{"pathology":0.6,"modality":0.7,"type":0.6},"reasoning":{"few_shot_analysis":"Few-shot conventions used:\n- For Modality, the Cross-modal Oddball example (Parkinson's; Modality=Multisensory) shows that when both auditory and visual stimuli are presented together, the label should be Multisensory.\n- For Type, the “Three armed bandit gambling task” example (Healthy; Type=Affect) illustrates that when the study purpose is about affective/reward-related responses to stimuli (rather than pure perception), Type can be Affect. 
This guides mapping “consumer responses to marketing videos” toward an affective/valuation framing rather than Perception.\nThese examples guide conventions, but no few-shot explicitly covers marketing videos; final labels rely primarily on metadata facts/inference.","metadata_analysis":"Key metadata facts (quoted):\n1) Stimuli: participants were “exposed to six marketing video stimuli” and videos are “categorized as traditional product-focused advertisements versus 'extreme' authentic documentary-style videos.”\n2) Recording setup: “EEG recordings from ten participants” using “a 14-channel EMOTIV EPOC X headset.”\n3) Task name: “extremeversustraditionalvideos.”\nPopulation/pathology: no diagnosis or clinical recruitment is mentioned anywhere; only “ten participants.”","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology:\n- Metadata says: only “ten participants” (no disorder/diagnosis/recruitment criteria stated).\n- Few-shot pattern suggests: when no clinical population is described, label as Healthy (e.g., multiple few-shots with non-clinical volunteers labeled Healthy).\n- Alignment: ALIGN (metadata is non-clinical/unspecified; convention maps this to Healthy rather than a disorder).\n\nModality:\n- Metadata says: “marketing video stimuli” (videos typically include visual stream and accompanying audio).\n- Few-shot pattern suggests: combined visual+auditory cues should be Multisensory (Cross-modal Oddball).\n- Alignment: ALIGN (video implies audiovisual; few-shot supports Multisensory when both channels are involved).\n\nType:\n- Metadata says: “consumer responses” to different styles of marketing/advertising videos.\n- Few-shot pattern suggests: stimulus-evoked preference/valuation/emotional engagement studies often map to Affect (e.g., gambling/reward task labeled Affect when the construct is affect/reward processing).\n- Alignment: PARTIAL. 
Marketing “consumer responses” could reflect Affect (emotional engagement/attitudes) or Decision-making (purchase intention/choice), but the metadata does not mention explicit choices, value-based decisions, or learning; thus Affect is the best match by convention, but with moderate uncertainty.","decision_summary":"Top-2 comparative selection:\n\n1) Pathology\n- Candidate A: Healthy\n  Evidence: no clinical terms; dataset is “EEG recordings from ten participants” with no diagnosis mentioned.\n- Candidate B: Unknown\n  Evidence: participants are not explicitly called “healthy,” only “ten participants.”\nHead-to-head: Healthy is stronger because the dataset context is a typical non-clinical neuromarketing-style exposure study and lacks any recruitment-by-diagnosis language; per catalog convention, such datasets are labeled Healthy rather than Unknown when nothing clinical is indicated.\n\n2) Modality\n- Candidate A: Multisensory\n  Evidence: “marketing video stimuli” strongly implies audiovisual input.\n- Candidate B: Visual\n  Evidence: explicit mention of “video stimuli” guarantees visual content; audio is not explicitly stated.\nHead-to-head: Multisensory is slightly stronger because videos are ordinarily audiovisual and the study contrasts “traditional advertisements” vs “extreme documentary-style videos,” both typically presented with sound; however, because audio is not explicitly mentioned, Visual remains plausible.\n\n3) Type\n- Candidate A: Affect\n  Evidence: “consumer responses” to marketing/advertising content commonly targets affective engagement/attitudes; no explicit perceptual discrimination or memory demand is described.\n- Candidate B: Decision-making\n  Evidence: consumer research can involve preference/valuation/purchase intention, but none is explicitly stated in the metadata.\nHead-to-head: Affect is stronger because the metadata emphasizes exposure and response to marketing content, without describing explicit choices, value-based learning, or 
decision policies.\n\nConfidence justification (evidence count):\n- Pathology confidence is limited (no explicit “healthy” quote; inference from absence of clinical recruitment).\n- Modality confidence is moderate (explicit “video stimuli” + strong convention that videos are audiovisual, but audio not explicitly stated).\n- Type confidence is moderate-low (construct inferred from “consumer responses” without details on ratings/choices)."}},"canonical_name":null,"name_confidence":0.43,"name_meta":{"suggested_at":"2026-04-14T10:18:35.343Z","model":"openai/gpt-5.2 + openai/gpt-5.4-mini + deterministic_fallback"},"name_source":"author_year","author_year":"Edit2026"}}
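A minimal sketch of consuming a response with this shape in Python. The field names are taken directly from the record above; nothing beyond that shape is guaranteed by any particular API, only a small subset of fields is inlined here, and the `description_uri` construction from `storage.base` plus `storage.raw_key` is an assumption about how the S3 keys are laid out:

```python
import json

# Subset of the eegdash-style response above (field names copied verbatim).
raw = """
{"success": true,
 "database": "eegdash",
 "data": {"dataset_id": "ds007406",
          "demographics": {"subjects_count": 10},
          "nchans_counts": [{"val": 14, "count": 10}],
          "sfreq_counts": [{"val": 256.0, "count": 10}],
          "total_duration_s": 1800.234375,
          "storage": {"base": "s3://openneuro.org/ds007406",
                      "raw_key": "dataset_description.json"}}}
"""

resp = json.loads(raw)
assert resp["success"]           # bail out early on an error response
rec = resp["data"]

# nchans_counts / sfreq_counts are histograms of {"val": ..., "count": ...}
# pairs; in this record every recording agrees, so index 0 suffices.
summary = {
    "dataset": rec["dataset_id"],
    "subjects": rec["demographics"]["subjects_count"],
    "channels": rec["nchans_counts"][0]["val"],
    "sfreq_hz": rec["sfreq_counts"][0]["val"],
    "minutes": round(rec["total_duration_s"] / 60, 2),
    # Assumption: raw files live directly under storage.base in the bucket.
    "description_uri": f'{rec["storage"]["base"]}/{rec["storage"]["raw_key"]}',
}
print(summary)
```

With the record above, this yields roughly 30 minutes of EEG across 10 subjects at 14 channels / 256 Hz, matching the `total_duration_s`, `nchans_counts`, and `sfreq_counts` fields.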