Multimodal Signal Processing (MSP) Conversation corpus
Resource type
Dataset
Authors/contributors
- Martinez-Lucas, Luz (Author)
- Abdelwahab, Mohammed (Author)
- Busso, Carlos (Author)
Title
Multimodal Signal Processing (MSP) Conversation corpus
Abstract
The MSP-Conversation corpus contains interactions annotated with time-continuous emotional traces for arousal (calm to active), valence (negative to positive), and dominance (weak to strong). Time-continuous annotations offer the flexibility to explore emotional displays at different temporal resolutions while leveraging contextual information. Release 1.0 contains 74 conversations with durations between 10 and 20 minutes (more than 15 hours in total). Each conversation has been annotated by at least five workers. This is an ongoing effort, and we plan to increase the size of the corpus: we have already identified 52 new conversations, which we have started annotating for the second release (28 hrs 15 min in total).
A key feature of the corpus is that its recordings overlap with those included in the MSP-Podcast database, which contains sentence-level annotations of short segments retrieved from podcasts. The MSP-Podcast corpus is not suitable for studying contextual information, as its isolated turns are evaluated separately, losing the temporal relationship between consecutive speaking turns. The MSP-Conversation corpus complements the MSP-Podcast corpus, providing a platform for exploring temporal information.
Citation
Martinez-Lucas, L., Abdelwahab, M., & Busso, C. (n.d.). Multimodal Signal Processing (MSP) Conversation corpus. https://ecs.utdallas.edu/research/researchlabs/msp-lab/MSP-Conversation.html