{
  "success": true,
  "database": "eegdash",
  "data": {
    "_id": "69de3cac897a7725c66ff168",
    "dataset_id": "nm000232",
    "associated_paper_doi": null,
    "authors": [
      "Alessandro T. Gifford",
      "Kshitij Dwivedi",
      "Gemma Roig",
      "Radoslaw M. Cichy"
    ],
    "bids_version": "1.9.0",
    "canonical_name": null,
    "contact_info": null,
    "contributing_labs": null,
    "data_processed": true,
    "dataset_doi": "doi:10.17605/OSF.IO/3JK45",
    "datatypes": ["eeg"],
    "demographics": {
      "subjects_count": 10,
      "ages": [33, 25, 31, 26, 31, 24, 34, 25, 32, 24],
      "age_min": 24,
      "age_max": 34,
      "age_mean": 28.5,
      "species": null,
      "sex_distribution": {"f": 8, "m": 2},
      "handedness_distribution": null
    },
    "experimental_modalities": null,
    "external_links": {
      "source_url": "https://nemar.org/dataexplorer/detail/nm000232",
      "osf_url": null,
      "github_url": null,
      "paper_url": null
    },
    "funding": [
      "German Research Council (DFG) grants CI 241/1-1, CI 241/3-1, CI 241/1-7",
      "European Research Council (ERC) Starting Grant ERC-2018-StG 803370",
      "Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK)"
    ],
    "ingestion_fingerprint": "e92082e4a74e0a08f6338efdf986940ef8e88696e59bca81789c92020fa3e948",
    "license": "CC-BY 4.0",
    "n_contributing_labs": null,
    "name": "THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition",
    "readme": "THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition\n========================================================================================\nOverview\n--------\nEEG dataset of 10 subjects who viewed 16,540 distinct training images and 200\ntest images (each repeated ~80 times) using rapid serial visual presentation\n(RSVP) at 5 Hz, recorded on a BrainVision actiCHamp system at 1000 Hz.\nThe source files store 63 EEG channels (the online reference electrode is\nnot stored). Stimuli are drawn from the THINGS database (Hebart et al. 2019).\nEach subject completed 4 separate sessions; each session contained:\n  - 5 training runs (~3,360 trials each) covering ~16,540 unique images\n  - 1 test run (~4,080 trials) of 200 images repeated 20× per session\n  - 2 resting-state runs (one before, one after the main experiment)\nTotal: ~66,160 training trials + ~16,000 test trials per subject across 4 sessions.\nRecording setup\n---------------\n- Manufacturer: Brain Products (actiCHamp)\n- 63 EEG channels (one electrode served as online reference and is not\n  stored in the source files)\n- 10-10 cap layout\n- Sampling rate: 1000 Hz\n- Online band-pass: 0.01-100 Hz\n- Triggers recorded as BrainVision stimulus annotations (not as a\n  dedicated stim channel)\nTasks (BIDS labels)\n-------------------\n- task-train: training run (RSVP of unique images)\n- task-test:  test run (RSVP of repeated test images)\n- task-rest:  resting state (eyes open, fixation cross)\nRun numbering\n-------------\n- task-train: run-01..run-05 per session (5 training parts)\n- task-test:  single run per session\n- task-rest:  run-01 (before main task) and run-02 (after main task)\nEvents\n------\nevents.tsv columns:\n  onset, duration, sample, value, trial_type\n  tot_img_number     - global image ID (1-16540 for train; 1-200 for test;\n                       'n/a' for target catch trials)\n  img_category       - integer category index\n  category_name      - human-readable category, e.g. \"01175_roller_coaster\"\n  block, sequence    - hierarchical position within the run\n  img_in_sequence    - image position within its 20-image sequence\n  soa                - actual stimulus onset asynchrony (~200 ms)\ntrial_type values:\n  image  - normal training/test image presentation\n  target - random catch trial (subject must press a button)\n  rest_marker - resting-state start/end marker\nSubject information\n-------------------\nparticipants.tsv contains age and sex (both extracted from the\nbehavioural .mat files in the source data).\nFolder layout\n-------------\n/sub-XX/ses-YY/eeg/        - main BIDS data (BDF + sidecars)\n/sourcedata/               - original BrainVision .eeg/.vhdr/.vmrk and\n                             behavioural .mat files\n/derivatives/preprocessed_eeg/   - authors' preprocessed train/test epochs\n/derivatives/resting_state/      - authors' preprocessed resting state\n/stimuli/                  - image set (training_images.zip, test_images.zip)\n                             plus image_metadata.npy\n/code/                     - this conversion script\nReference\n---------\nGifford, A.T., Dwivedi, K., Roig, G., & Cichy, R.M. (2022). A large and rich\nEEG dataset for modeling human visual object recognition. NeuroImage, 264,\n119754. https://doi.org/10.1016/j.neuroimage.2022.119754\nCode: https://github.com/gifale95/eeg_encoding\nOSF:  https://osf.io/3jk45/",
    "recording_modality": ["eeg"],
    "senior_author": null,
    "sessions": ["01", "02", "03", "04"],
    "size_bytes": 218909663166,
    "source": "nemar",
    "storage": {
      "backend": "nemar",
      "base": "s3://nemar/nm000232",
      "raw_key": "dataset_description.json",
      "dep_keys": ["README", "participants.json", "participants.tsv"]
    },
    "study_design": null,
    "study_domain": null,
    "tasks": ["rest", "rest1", "rest2", "test", "train"],
    "timestamps": {
      "digested_at": "2026-04-30T14:09:18.975341+00:00",
      "dataset_created_at": null,
      "dataset_modified_at": "2026-04-11T16:09:00Z"
    },
    "total_files": 638,
    "computed_title": "THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition",
    "nchans_counts": [{"val": 63, "count": 319}],
    "sfreq_counts": [{"val": 1000.0, "count": 319}],
    "stats_computed_at": "2026-05-01T13:49:34.660043+00:00",
    "total_duration_s": 314203.775,
    "author_year": "Gifford2022"
  }
}
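For consumers of this response, a minimal sketch (plain stdlib Python, no eegdash client assumed) of pulling the fields a downstream script typically needs from the `data` object: subject demographics, per-recording channel count and sampling rate, and total duration. The JSON literal below is an excerpt of the record above, inlined for self-containment; all field names come from the response itself.

```python
import json

# Excerpt of the response above (demographic and recording-stats fields only).
response = json.loads("""
{"success": true,
 "data": {"dataset_id": "nm000232",
          "demographics": {"subjects_count": 10,
                           "ages": [33, 25, 31, 26, 31, 24, 34, 25, 32, 24],
                           "age_mean": 28.5,
                           "sex_distribution": {"f": 8, "m": 2}},
          "nchans_counts": [{"val": 63, "count": 319}],
          "sfreq_counts": [{"val": 1000.0, "count": 319}],
          "total_duration_s": 314203.775}}
""")

data = response["data"]
demo = data["demographics"]

# Sanity-check the precomputed mean against the raw ages (285 / 10 = 28.5).
assert sum(demo["ages"]) / len(demo["ages"]) == demo["age_mean"]

# nchans_counts / sfreq_counts are histograms over recordings; here each has a
# single bin, i.e. all 319 recordings share one channel count and sampling rate.
(nchans,) = data["nchans_counts"]
(sfreq,) = data["sfreq_counts"]

print(f"{data['dataset_id']}: {demo['subjects_count']} subjects, "
      f"{nchans['val']} channels @ {sfreq['val']:.0f} Hz, "
      f"{data['total_duration_s'] / 3600:.1f} h of EEG")
# → nm000232: 10 subjects, 63 channels @ 1000 Hz, 87.3 h of EEG
```

Datasets with mixed montages or sampling rates would return multiple `{"val": …, "count": …}` bins, so the single-element unpacking doubles as a guard that this dataset is homogeneous.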