{"success":true,"database":"eegdash","data":{"_id":"69de6d29897a7725c6702354","dataset_id":"nm000134","associated_paper_doi":null,"authors":["Jonathan Xu","Ugo Bruzadin Nunes","Wangshu Jiang","Samuel Ryther","Jordan Pringle","Paul S. Scotti","Arnaud Delorme","Reese Kneeland"],"bids_version":"1.9.0","canonical_name":null,"contact_info":null,"contributing_labs":null,"data_processed":false,"dataset_doi":"10.82901/nemar.nm000134","datatypes":["eeg"],"demographics":{"subjects_count":20,"ages":[],"age_min":null,"age_max":null,"age_mean":null,"species":null,"sex_distribution":null,"handedness_distribution":null},"experimental_modalities":null,"external_links":{"source_url":"https://nemar.org/dataexplorer/detail/nm000134","osf_url":null,"github_url":null,"paper_url":null},"funding":[],"ingestion_fingerprint":"cd051149d86cdfcc7c1dfdae70b1e78128461fd6cfa77f993b5883b4b6c711cb","license":"CC-BY-NC-ND-4.0","n_contributing_labs":null,"name":"Alljoined-1.6M","readme":"[![DOI](https://img.shields.io/badge/DOI-10.82901%2Fnemar.nm000134-blue)](https://doi.org/10.82901/nemar.nm000134)\n# Alljoined-1.6M: Million-Trial EEG Dataset with Consumer-Grade Hardware\n## Overview\nAlljoined-1.6M is a large-scale EEG dataset of neural responses to rapid serial visual presentation (RSVP) of natural images, recorded using a consumer-grade 32-channel EMOTIV FLEX2 system. Twenty healthy adult participants (ages 23-63; 15 male, 5 female) each completed four recording sessions, generating over 1.6 million visual stimulus trials in total.\nThe dataset was designed to evaluate whether deep neural network-based brain-computer interface (BCI) research and semantic decoding methods can be effectively conducted with affordable consumer-grade EEG systems (approximately $2.2k versus $35-60k for research-grade systems).\n**Reference:** Xu, J., Bruzadin Nunes, U., Jiang, W., Ryther, S., Pringle, J., Scotti, P. S., Delorme, A., & Kneeland, R. (2025). 
Alljoined-1.6M: A Million-Trial EEG-Image Dataset for Evaluating Affordable Brain-Computer Interfaces. <https://doi.org/10.48550/arXiv.2508.18571>\n## Recording Setup\n- **Equipment:** EMOTIV FLEX2, 32-channel sintered Ag/AgCl gel-based electrodes\n- **Connectivity:** wireless Bluetooth 5.2\n- **Sampling rate:** 256 Hz (resampled to 250 Hz in published analyses)\n- **Montage:** extended 10-20 system, focused on occipital/visual regions\n- **Channels:** Cz, Fp1, F7, F3, CP5, CP1, P1, P3, P5, P7, PO9, PO7, PO3, O1, O9, Pz, POz, Oz, O10, O2, PO4, PO8, PO10, P8, P6, P4, P2, CP2, CP6, F4, F8, Fp2\n- **Firmware filters:** dual 50/60 Hz notch filter (built into EMOTIV firmware)\n- **Cost:** approximately $2.2k (approximately 27x cheaper than research-grade systems)\n## Task Paradigm\nRapid Serial Visual Presentation (RSVP) with orthogonal oddball detection. Each trial consisted of an image presented for 100 ms, followed by 100 ms of blank screen (200 ms total cycle). A small semi-transparent red fixation dot (0.2 x 0.2 degrees, 50% opacity) was present throughout.\nOddball detection: participants pressed a button when they detected catch trials featuring a Woody (Toy Story) character, which appeared in approximately 6% of sequences. Detection window was up to 2 seconds post-sequence. This task maintained engagement without biasing perception toward specific image categories.\nViewing distance: 60 cm; viewing angle: 7 degrees.\n## Stimulus Set\n16,740 unique images from the THINGS database (26,000 total images across 1,854 object categories), identical to the THINGS-EEG2 stimulus set for direct comparison.\n- **Test images:** shown 80 times per participant (4 sessions x 4 test blocks x 5 presentations)\n- **Training images:** shown 4-5 times per participant\n- **Randomization:** constrained so no image repeats within 2 intervening items\n## Subjects, Sessions, and Runs\n20 subjects, 4 sessions each (sub-08 has an additional session `ses-02old`, a retake of session 2). 
Each session contains 19 RSVP blocks (runs), approximately 5 minutes each. The first 4 runs per session present test images; the remaining 15 runs present training images.\nTotal: 83,520 image trials per subject; approximately 1.6 million trials across all 20 participants.\n| Subject | Sessions | Runs | Notes |\n|---------|----------|------|-------|\n| sub-01 | 4 | 76 | |\n| sub-02 | 4 | 76 | |\n| sub-03 | 4 | 76 | |\n| sub-04 | 4 | 76 | |\n| sub-05 | 4 | 76 | |\n| sub-06 | 4 | 76 | |\n| sub-07 | 4 | 76 | |\n| sub-08 | 5 | 81 | Includes ses-02old (session 2 retake) |\n| sub-09 | 4 | 76 | |\n| sub-10 | 4 | 76 | |\n| sub-11 | 4 | 76 | |\n| sub-12 | 4 | 76 | |\n| sub-13 | 4 | 76 | |\n| sub-14 | 4 | 76 | |\n| sub-15 | 4 | 76 | |\n| sub-16 | 4 | 76 | |\n| sub-17 | 4 | 76 | |\n| sub-18 | 4 | 76 | |\n| sub-19 | 4 | 76 | |\n| sub-20 | 4 | 76 | |\nParticipants were recruited from San Francisco via local platforms (Craigslist 55%, Instawork 35%) and filtered from an initial pool of 48 for high behavioral engagement. Mean oddball detection performance: 88% AUC (+/- 1% SE).\n## Data Format\nRaw continuous EEG recordings are stored as European Data Format (EDF) files, the native export format of the EMOTIV FLEX2 system (16-bit resolution). Only the 32 EEG channels are retained; EMOTIV metadata channels (timestamps, counters, contact quality, motion sensors, etc.) 
were excluded during conversion.\n**Per-run files:**\n| Path | Description |\n|------|-------------|\n| `sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_run-ZZ_eeg.edf` | Raw EEG |\n| `sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_run-ZZ_events.tsv` | Events |\n| `sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_run-ZZ_events.json` | Event metadata |\n| `sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_run-ZZ_channels.tsv` | Channels |\n| `sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_run-ZZ_eeg.json` | Recording parameters |\n| `sub-XX/ses-YY/eeg/sub-XX_ses-YY_space-CapTrak_coordsystem.json` | Coordinate system |\n| `sub-XX/ses-YY/eeg/sub-XX_ses-YY_space-CapTrak_electrodes.tsv` | Electrode positions |\nEvent annotations in the events.tsv files use the following `trial_type` format from the EMOTIV recording system:\n- `stim_test,{image_id},-1,{trial}` -- test image presentation\n- `oddball,...` -- oddball (catch) trial\n- `behav,...` -- behavioral response (button press)\n## Source Data\nThe `sourcedata/` directory contains the original EMOTIV JSON metadata files from each recording block. These files include the raw EMOTIV marker data with precise timestamps, UUIDs, and port information as recorded by the EMOTIV software. 
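The comma-separated `trial_type` markers listed under Data Format can be split into fields; a minimal parsing sketch (the field names `kind`, `image_id`, `flag`, and `trial` are illustrative assumptions, not names defined by the dataset):

```python
def parse_trial_type(trial_type):
    """Split an events.tsv trial_type string such as
    'stim_test,{image_id},-1,{trial}' into labelled fields.
    Field names are assumptions for illustration only."""
    parts = trial_type.split(",")
    kind = parts[0]  # 'stim_test', 'oddball', or 'behav'
    if kind == "stim_test" and len(parts) == 4:
        return {"kind": kind, "image_id": parts[1],
                "flag": int(parts[2]), "trial": int(parts[3])}
    return {"kind": kind, "fields": parts[1:]}  # oddball/behav markers vary
```

The raw, timestamped versions of these markers live in the `sourcedata/` JSON files described above.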
They are the original, unprocessed recording artifacts from the EMOTIV system, not derived products, and are stored in `sourcedata/` per BIDS conventions.\n```\nsourcedata/sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_run-ZZ_recording.json\n```\n## Code\nThe `code/` directory contains the original Alljoined-1.6M analysis code, cloned from <https://github.com/Alljoined/Alljoined-1.6M>.\n## BIDS Conversion\nConverted to BIDS by Yahya Shirazi (Swartz Center for Computational Neuroscience, UC San Diego) using MNE-BIDS and custom scripts.\n- **Source data:** HuggingFace <https://huggingface.co/datasets/Alljoined/Alljoined-1.6M>\n- EMOTIV channel `Afz` renamed to `AFz` (standard 10-20 capitalization)\n- Session label `session_02 old` sanitized to `ses-02old` for BIDS compliance\n- 95 EMOTIV metadata channels excluded (only 32 EEG channels retained)\n- Conversion validated with round-trip integrity checks (data amplitude, per-channel correlation, sampling frequency, event count, and event timing)\n## License and Terms of Use\nThis dataset is distributed under CC-BY-NC-ND-4.0 (Creative Commons Attribution-NonCommercial-NoDerivatives 4.0) with the following additional terms imposed by the Alljoined team. By using this dataset you agree to all conditions below.\n1. Researcher shall use the Dataset only for non-commercial research and educational purposes, in accordance with Alljoined's [Terms of Use](https://www.alljoined.com/terms-of-use).\n2. **No Warranties:** Alljoined makes no representations or warranties regarding the Dataset, including but not limited to warranties of non-infringement or fitness for a particular purpose.\n3. **Full Responsibility:** Researcher accepts full responsibility for his or her use of the Dataset and shall defend and indemnify Alljoined, including their employees, officers and agents, against any and all claims arising from Researcher's use of the Dataset.\n4. 
**Privacy Compliance:** Researcher shall comply with Alljoined's [Privacy Policy](https://www.alljoined.com/privacy-policy) and ensure that any use of the Dataset respects the privacy rights of individuals whose data may be included.\n5. **Sharing Rights:** Researcher may provide research associates and colleagues with access to the Dataset provided that they first agree to be bound by these terms and conditions.\n6. **Termination Rights:** Alljoined reserves the right to terminate Researcher's access to the Dataset at any time.\n7. **Commercial Entity Binding:** If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.\n8. **Governing Law:** The law of the State of California shall apply to all disputes under this agreement.\n- Full terms: <https://www.alljoined.com/terms-of-use>\n- Privacy policy: <https://www.alljoined.com/privacy-policy>\n## References\nXu, J., Bruzadin Nunes, U., Jiang, W., Ryther, S., Pringle, J., Scotti, P. S., Delorme, A., & Kneeland, R. (2025). Alljoined-1.6M: A Million-Trial EEG-Image Dataset for Evaluating Affordable Brain-Computer Interfaces. https://doi.org/10.48550/arXiv.2508.18571\nXu, J., Lee, S. K., & Jiang, W. (2024). Alljoined -- A dataset for EEG-to-Image decoding. 
https://doi.org/10.48550/arXiv.2404.05553","recording_modality":["eeg"],"senior_author":null,"sessions":["01","02","02old","03","04"],"size_bytes":8802166106,"source":"nemar","storage":{"backend":"nemar","base":"s3://nemar/nm000134","raw_key":"dataset_description.json","dep_keys":["LICENSE","README.md","participants.json","participants.tsv"]},"study_design":null,"study_domain":null,"tasks":["images"],"timestamps":{"digested_at":"2026-04-30T14:08:38.082373+00:00","dataset_created_at":null,"dataset_modified_at":"2026-03-18T04:36:36Z"},"total_files":1525,"author_year":"Xu2025","name_source":"canonical","nchans_counts":[{"val":32,"count":1525}],"sfreq_counts":[{"val":256.0,"count":1525}],"computed_title":"Alljoined-1.6M","stats_computed_at":"2026-05-01T13:49:34.660280+00:00","total_duration_s":466182.04296875}}