Crowd Sourced Emotional Multimodal Actors Dataset (CREMA-D)

Resource type
Software
Authors/contributors
Cao, H.; Cooper, D. G.; Keutmann, M. K.; Gur, R. C.; Nenkova, A.; Verma, R.
Title
Crowd Sourced Emotional Multimodal Actors Dataset (CREMA-D)
Abstract
CREMA-D is a data set of 7,442 original clips from 91 actors. These clips were from 48 male and 43 female actors between the ages of 20 and 74, coming from a variety of races and ethnicities (African American, Asian, Caucasian, Hispanic, and Unspecified). Actors spoke from a selection of 12 sentences. The sentences were presented using one of six different emotions (Anger, Disgust, Fear, Happy, Neutral, and Sad) and four different emotion levels (Low, Medium, High, and Unspecified). Participants rated the emotion and emotion level based on the combined audiovisual presentation, the video alone, and the audio alone. Due to the large number of ratings needed, this effort was crowd-sourced: a total of 2,443 participants each rated 90 unique clips (30 audio, 30 visual, and 30 audio-visual), and 95% of the clips have more than 7 ratings.
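The label scheme above (actor, sentence, emotion, emotion level) can be read directly from clip filenames. The following is a minimal Python sketch, assuming the four-part naming convention ActorID_Sentence_Emotion_Level (e.g. 1001_DFA_ANG_XX.wav) documented in the repository README; the AudioWAV directory and the DFA sentence code are illustrative assumptions, not part of this record.

import os

# Emotion and level codes as named in the abstract; the three-letter
# abbreviations are an assumption based on the repository README.
EMOTIONS = {"ANG": "Anger", "DIS": "Disgust", "FEA": "Fear",
            "HAP": "Happy", "NEU": "Neutral", "SAD": "Sad"}
LEVELS = {"LO": "Low", "MD": "Medium", "HI": "High", "XX": "Unspecified"}

def parse_clip(filename):
    # Split e.g. '1001_DFA_ANG_XX.wav' into its four label fields.
    stem, _ = os.path.splitext(os.path.basename(filename))
    actor, sentence, emotion, level = stem.split("_")
    return {"actor": actor,
            "sentence": sentence,
            "emotion": EMOTIONS[emotion],
            "level": LEVELS[level]}

print(parse_clip("AudioWAV/1001_DFA_ANG_XX.wav"))
# -> {'actor': '1001', 'sentence': 'DFA', 'emotion': 'Anger', 'level': 'Unspecified'}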
Date
22/11/2024, 10:58
Repository
Cheyney Computer Science
Accessed
22/11/2024, 14:07
Library Catalog
GitHub
Extra
original-date: 2017-09-29T17:28:11Z
Citation
Cao, H., Cooper, D. G., Keutmann, M. K., Gur, R. C., Nenkova, A., & Verma, R. (2024). Crowd Sourced Emotional Multimodal Actors Dataset (CREMA-D). Cheyney Computer Science. https://github.com/CheyneyComputerScience/CREMA-D (Original work published 2017)