{"success":true,"database":"eegdash","data":{"_id":"6953f4249276ef1ee07a343a","dataset_id":"ds006142","associated_paper_doi":null,"authors":["Ana Matran-Fernandez","Sebastian Halder"],"bids_version":"1.7.0","contact_info":["Ana Matran-Fernandez","Sebastian Halder"],"contributing_labs":null,"data_processed":true,"dataset_doi":"doi:10.18112/openneuro.ds006142.v1.0.2","datatypes":["eeg"],"demographics":{"subjects_count":27,"ages":[25,24,47,26,29,21,29,31,31,30,26,32,28,29,23,29,33,25,29,30,23,27,23,25,34,22,22],"age_min":21,"age_max":47,"age_mean":27.88888888888889,"species":null,"sex_distribution":{"m":19,"f":8},"handedness_distribution":{"r":25,"l":2}},"experimental_modalities":null,"external_links":{"source_url":"https://openneuro.org/datasets/ds006142","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"038df33d006fd1630b4e572bd4f5ea232d44faaf3c6ed43a5f72b14e7b86e4b8","license":"CC0","n_contributing_labs":null,"name":"Essex EEG Movie Memory dataset","readme":"﻿# Essex EEG Movie Memory Dataset\nAuthors: Ana Matran-Fernandez and Sebastian Halder\n### Description\nThis dataset contains raw electroencephalography (EEG) signals recorded from 27 participants while watching 10-second long clips\nextracted from movies that they had previously watched. 
For each clip, participants were asked whether they recognised the movie it belonged to,\nand if so, whether they remembered having watched it previously or not.\nIf a participant reported recognising or remembering a clip, it was shown a second time to capture (via a mouse click) time annotations\nof the instants that prompted this recognition.\n### EEG\nEEG data were acquired with a BioSemi ActiveTwo system with 64 electrodes positioned according to the international 10-20 system.\nThe sampling rate was 2048 Hz.\n### Stimuli\nThe clips used in the study were originally annotated in terms of their memorability by Cohendet et al. (see References).\nThis dataset can be requested from the authors.\n### Example code\nWe have prepared an example script to demonstrate how to load the EEG data into Python using the MNE and MNE-BIDS packages.\nThis script is located in the 'code' directory.\n### References\nRomain Cohendet, Karthik Yadati, Ngoc Q. K. Duong, and Claire-Hélène Demarty. 2018. Annotating, Understanding, and Predicting Long-term Video Memorability. In Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval (ICMR '18). Association for Computing Machinery, New York, NY, USA, 178–186. https://doi.org/10.1145/3206025.3206056\nAppelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896\nPernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. 
Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8","recording_modality":["eeg"],"senior_author":"Sebastian Halder","sessions":[],"size_bytes":26073650133,"source":"openneuro","study_design":null,"study_domain":null,"tasks":["MovieMemory"],"timestamps":{"digested_at":"2026-04-22T12:29:10.025451+00:00","dataset_created_at":"2025-04-17T12:18:55.249Z","dataset_modified_at":"2025-09-03T07:40:31.000Z"},"total_files":27,"storage":{"backend":"s3","base":"s3://openneuro.org/ds006142","raw_key":"dataset_description.json","dep_keys":["CHANGES","README","participants.json","participants.tsv"]},"tagger_meta":{"config_hash":"4a051be509a0e3d0","metadata_hash":"ca29e8806dcbe4c5","model":"openai/gpt-5.2","tagged_at":"2026-01-20T18:51:25.427479+00:00"},"tags":{"pathology":["Healthy"],"modality":["Visual"],"type":["Memory"],"confidence":{"pathology":0.6,"modality":0.8,"type":0.8},"reasoning":{"few_shot_analysis":"Most similar few-shot convention for cognitive construct is the digit span dataset (Healthy / Auditory / Memory): it labels tasks whose primary aim is remembering/recognition as Type=Memory (even though there are responses and multiple phases). In the present dataset, the central manipulation is whether participants recognize/remember movie clips, which maps to the same Type=Memory convention. For Modality, the few-shot examples consistently label by stimulus channel (e.g., visual discrimination -> Visual; digit span auditory digits -> Auditory), guiding us to label movie clips as Visual (not Motor, despite mouse clicks). No few-shot example indicates a clinical cohort; thus Pathology defaults to Healthy when no disorder recruitment is stated.","metadata_analysis":"Key task/stimulus facts from README: (1) \"EEG signals recorded from 27 participants while watching 10-second long clips extracted from movies\" (visual movie stimuli). 
(2) \"For each clip, participants were asked whether they recognised the movie it belonged to, and if so, whether they remembered having watched it previously or not\" (explicit recognition/episodic memory report). (3) \"If a participant reported recognising or remembering a clip, it was shown a second time to capture (via a mouse click) time annotations of the instants that prompted this recognition\" (response is mouse click, but stimulus remains visual clips). No phrases indicate any diagnosis/patient recruitment.","paper_abstract_analysis":"No useful paper information.","evidence_alignment_check":"Pathology: Metadata says: no clinical population mentioned; only \"27 participants\" with no diagnosis stated. Few-shot pattern suggests: when no disorder recruitment is stated, label as Healthy rather than a clinical label. ALIGN (no conflict), but evidence is indirect.\nModality: Metadata says: \"watching 10-second long clips extracted from movies\" (visual stimuli). Few-shot pattern suggests: label by stimulus channel (e.g., visual discrimination -> Visual), not by response device. ALIGN.\nType: Metadata says: \"asked whether they recognised...\" and \"whether they remembered having watched it previously\" (memory recognition). Few-shot pattern suggests: tasks centered on remembering/recognition are Type=Memory (e.g., digit span labeled Memory). ALIGN.","decision_summary":"Pathology candidates: (1) Healthy — supported by absence of any diagnosis/recruitment language (\"27 participants\" only) and few-shot convention to default to Healthy for normative cohorts; (2) Unknown — plausible because README never explicitly says \"healthy\". Winner: Healthy (conventionally assumed in absence of clinical recruitment). Confidence=0.6 due to lack of explicit health statement.\nModality candidates: (1) Visual — supported by \"watching 10-second long clips\" and \"clips extracted from movies\"; (2) Other — only if clips were not clearly visual, but they are movies. 
Winner: Visual. Confidence=0.8 (2 explicit stimulus quotes + strong few-shot convention).\nType candidates: (1) Memory — supported by \"recognised\" and \"remembered having watched it previously\" and recognition-annotation repeat viewing; (2) Decision-making — could be argued due to yes/no judgments, but the construct is memory recognition. Winner: Memory. Confidence=0.8 (2 explicit memory quotes + few-shot analog to memory-task labeling)."}},"computed_title":"Essex EEG Movie Memory dataset","nchans_counts":[{"val":65,"count":27}],"sfreq_counts":[{"val":2048.0,"count":27}],"stats_computed_at":"2026-04-22T23:16:00.311363+00:00","total_duration_s":94665.22998046875,"author_year":"MatranFernandez2025","canonical_name":null}}