{"success":true,"database":"eegdash","data":{"_id":"69de6d29897a7725c670234e","dataset_id":"nm000112","associated_paper_doi":null,"authors":["Yisi Liu","Olga Sourina","Minh Khoa Nguyen"],"bids_version":"1.7.0","canonical_name":null,"contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"10.82901/nemar.nm000112","datatypes":["eeg"],"demographics":{"subjects_count":123,"ages":[21,20,22,21,22,22,18,22,20,20,21,22,20,18,21,21,21,20,21,20,21,21,22,18,20,20,20,20,20,18,22,21,21,21,20,21,20,20,20,20,21,18,20,20,20,20,20,20,20,20,20,18,18,18,18,18,18,18,18,20,20,22,28,20,20,20,24,20,26,20,22,29,18,17,29,29,33,22,20,29,22,30,22,28,21,20,18,26,18,30,22,25,25,30,30,30,22,24,28,22,26,22,26,30,30,38,34,26,34,26,26,30,36,20,21,26,28,30,30,34,34,22,30],"age_min":17,"age_max":38,"age_mean":22.943089430894307,"species":null,"sex_distribution":{"f":75,"m":48},"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000112","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"b260f3e5bbf2828b1c0e5d1e166f6e4cfb914a6b63025d5e76575f88425582e4","license":"CC-BY-4.0","n_contributing_labs":null,"name":"FACED - Finer-grained Affective Computing EEG Dataset","readme":"[![DOI](https://img.shields.io/badge/DOI-10.82901%2Fnemar.nm000112-blue)](https://doi.org/10.82901/nemar.nm000112)\n# FACED - Finer-grained Affective Computing EEG Dataset\n## Introduction\nThe Finer-grained Affective Computing EEG Dataset (FACED) contains scalp EEG recordings from 123 healthy participants who watched 28 emotion-eliciting video clips designed to evoke nine different emotion categories. The dataset includes four negative emotions (anger, fear, disgust, sadness) from Ekman's basic emotions and four positive emotions (amusement, inspiration, joy, tenderness) selected based on recent psychological and neuroscience progress and application needs. 
Participants provided detailed self-reported emotion ratings on 12 dimensions: the eight target emotions plus arousal, valence, liking, and familiarity. The dataset is designed to facilitate cross-subject affective computing research and the development of EEG-based emotion recognition algorithms for real-world applications.\n## Overview of the experiment\nParticipants (123 subjects, 75 female, ages 17-38, mean=23.2 years) were seated 60 cm from a 22-inch LCD monitor in a regular office environment. Each trial consisted of: (1) a 5-second fixation cross, (2) a video clip of varying length (typically 30-60 seconds), and (3) subjective emotion ratings on 12 items (anger, fear, disgust, sadness, amusement, inspiration, joy, tenderness, valence, arousal, liking, familiarity) on a continuous 0-7 scale, followed by at least 30 seconds of rest. Video clips were presented in blocks: three positive blocks, three negative blocks, and one neutral block, with 20 arithmetic problems between blocks to minimize carryover effects. The 28 video clips were designed to target nine emotion categories, with randomized presentation order across participants. EEG was recorded using a 32-channel biosignal recording system sampled at either 1000 Hz (92 subjects) or 250 Hz (31 subjects), with channels positioned according to the International 10-20 system.
Signals were stored in either volts or microvolts, depending on the hardware configuration used.\n**Video stimulus information:**\nThe dataset includes 28 video clips designed to elicit nine emotion categories (Trigger values 1–28):\n- Anger (Videos 1-3): Durations 73-81 seconds, negative valence\n- Disgust (Videos 4-6): Durations 69-91 seconds, negative valence\n- Fear (Videos 7-9): Durations 56-106 seconds, negative valence\n- Sadness (Videos 10-12): Durations 45-82 seconds, negative valence\n- Neutral (Videos 13-16): Durations 35-43 seconds, neutral valence\n- Amusement (Videos 17-19): Durations 56-73 seconds, positive valence\n- Inspiration (Videos 20-22): Durations 76-129 seconds, positive valence\n- Joy (Videos 23-25): Durations 34-68 seconds, positive valence\n- Tenderness (Videos 26-28): Durations 54-77 seconds, positive valence\nMetadata for each video (duration, source film, source database, valence, targeted emotion) is read from Stimuli_info.xlsx.\n**Event markers (from evt.bdf annotations):**\n- 100: Task/block start\n- 101: Video onset\n- 102: Video offset\n- 1–28: Video index (appears just before 101, used to link to stimulus metadata)\n- 201/202: Block boundary markers\n- \"Start Impedance\" / \"Stop Impedance\": Technical markers (ignored)\nThe conversion script reads the evt.bdf annotations for each subject, parses video presentation spans (from the video-index marker and its following 101 to the matching 102), and creates MNE Annotations with the source film title (video_title) as description.
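A minimal sketch of this span-parsing step (illustrative only; this is not the actual conversion script, and the helper names are hypothetical, but the marker codes and video-index ranges are the ones listed above):

```python
# Hypothetical re-implementation of the span parsing described above.
# Video-index ranges follow the stimulus list (Anger 1-3, ..., Tenderness 26-28).
EMOTION_BY_VIDEO = {
    range(1, 4): "Anger", range(4, 7): "Disgust", range(7, 10): "Fear",
    range(10, 13): "Sadness", range(13, 17): "Neutral",
    range(17, 20): "Amusement", range(20, 23): "Inspiration",
    range(23, 26): "Joy", range(26, 29): "Tenderness",
}

def emotion_label(video_index):
    for rng, label in EMOTION_BY_VIDEO.items():
        if video_index in rng:
            return label
    raise ValueError(f"unknown video index {video_index}")

def parse_video_spans(events):
    """events: list of (onset_seconds, marker_code) tuples.
    A video-index marker (1-28) is followed by 101 (onset) and 102 (offset);
    returns one (video_index, emotion, onset, offset) tuple per presentation."""
    spans, current, onset = [], None, None
    for t, code in events:
        if 1 <= code <= 28:            # video index precedes the onset marker
            current = code
        elif code == 101 and current is not None:
            onset = t
        elif code == 102 and onset is not None:
            spans.append((current, emotion_label(current), onset, t))
            current = onset = None     # reset for the next presentation
    return spans

# Example: block start, then video 17 (Amusement) shown from t=5.1 s to t=65.1 s
demo = [(0.0, 100), (5.0, 17), (5.1, 101), (65.1, 102)]
print(parse_video_spans(demo))  # → [(17, 'Amusement', 5.1, 65.1)]
```

In the actual conversion the resulting spans carry the film title as the annotation description; the emotion label and ratings are attached as extra events.tsv columns, as described next.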
These annotations are exported to BIDS events.tsv with extra columns:\n- emotion_label: targeted emotion category (Anger, Disgust, Fear, Sadness, Neutral, Amusement, Inspiration, Joy, Tenderness)\n- binary_label: positive/negative/neutral classification\n- video_index: 1–28\n- Self-reported ratings (Joy, Tenderness, Inspiration, Amusement, Anger, Disgust, Fear, Sadness, Arousal, Valence, Familiarity, Liking)\n## Description of the preprocessing\nRaw BDF files from the biosignal recording system have been converted to BIDS format. Channel names have been standardized to match the International 10-20 nomenclature. Subjects have been assigned numeric IDs (sub-000 through sub-122) corresponding to their original subject designations in the dataset. Recording dates have been set to a default value (2023-01-01) due to privacy considerations, while time relationships between files are preserved. Subject demographic information (age, sex) has been extracted from the Recording_info.csv file and properly formatted for BIDS.\nStimulus timing information from the evt.bdf event files has been parsed and enriched with metadata from Stimuli_info.xlsx. Each video presentation is annotated with the targeted emotion category (Anger, Disgust, Fear, Sadness, Neutral, Amusement, Inspiration, Joy, Tenderness) and includes self-reported ratings from After_remarks.mat when available.\n## Citation\nWhen using this dataset, please cite:\n1. Liu, Y., Sourina, O., & Nguyen, M. K. (2023). Finer-grained Affective Computing EEG Dataset. Scientific Data, 10(1), 809. https://doi.org/10.1038/s41597-023-02650-w\n2. Synapse Platform: https://www.synapse.org/#!Synapse:syn50614194\n
The dataset is available at the Synapse platform repository.\n**Data curators:**\nPierre Guetschel (BIDS conversion)\nOriginal data collection team:\n- Yisi Liu (Nanyang Technological University)\n- Olga Sourina (Nanyang Technological University)\n- Minh Khoa Nguyen (Nanyang Technological University)\n---\n## Automatic report\n*Report automatically generated by `mne_bids.make_report()`.*\n> The FACED - Finer-grained Affective Computing EEG Dataset was created\nby Yisi Liu, Olga Sourina, and Minh Khoa Nguyen and conforms to BIDS version\n1.7.0. This report was generated with MNE-BIDS\n(https://doi.org/10.21105/joss.01896). The dataset consists of 123 participants\n(comprising 48 male and 75 female participants; handedness was unknown for all;\nages ranged from 17.0 to 38.0 (mean = 22.94, std = 4.66)). Data was recorded\nusing an EEG system (Biosemi) sampled at 1000.0 and 250.0 Hz with line noise at\nn/a Hz. There were 123 scans in total. Recording durations ranged from 3468.0 to\n6743.0 seconds (mean = 4544.83, std = 647.24), for a total of 559013.71 seconds\nof data recorded over all scans. 
For each dataset, there were on average 32.0\n(std = 0.0) recording channels per scan, out of which 32.0 (std = 0.0) were used\nin analysis (0.0 +/- 0.0 were removed from analysis).","recording_modality":["eeg"],"senior_author":null,"sessions":[],"size_bytes":33722277172,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000112","raw_key":"dataset_description.json","dep_keys":["README.md","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["watchingVideoClips"],"timestamps":{"digested_at":"2026-04-30T14:08:32.349523+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-23T03:26:21Z"},"total_files":123,"author_year":"Liu2024_112","name_source":"canonical","nchans_counts":[{"val":32,"count":123}],"sfreq_counts":[{"val":1000.0,"count":68},{"val":250.0,"count":55}],"computed_title":"FACED - Finer-grained Affective Computing EEG Dataset","stats_computed_at":"2026-05-01T13:49:34.660227+00:00","total_duration_s":559013.7119999999}}