Conference papers

On the use of Self-supervised Pre-trained Acoustic and Linguistic Features for Continuous Speech Emotion Recognition

Abstract: Pre-training for feature extraction is an increasingly studied approach to obtain better continuous representations of audio and text content. In the present work, we use wav2vec and camemBERT as self-supervised learned models to represent our data, in order to perform continuous speech emotion recognition (SER) on AlloSat, a large French emotional database annotated along the satisfaction dimension, and on the state-of-the-art SEWA corpus, which focuses on the valence, arousal and liking dimensions. To the authors' knowledge, this paper presents the first study showing that the joint use of wav2vec and BERT-like pre-trained features is highly relevant for the continuous SER task, which is usually characterized by a small amount of labeled training data. Evaluated with the well-known concordance correlation coefficient (CCC), our experiments show that we reach a CCC value of 0.825, compared with 0.592 when using MFCCs in conjunction with word2vec word embeddings, on the AlloSat dataset.
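The concordance correlation coefficient used for evaluation in the abstract is standard (Lin's CCC); as a minimal sketch, not the authors' own evaluation code, it can be computed from a predicted and a reference sequence as follows:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient between two sequences.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    It equals 1 for perfect agreement and penalizes both scale and
    location shifts, unlike Pearson correlation.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mean_true, mean_pred = y_true.mean(), y_pred.mean()
    var_true, var_pred = y_true.var(), y_pred.var()
    # Biased (population) covariance, matching np.var's default normalization.
    cov = ((y_true - mean_true) * (y_pred - mean_pred)).mean()
    return 2 * cov / (var_true + var_pred + (mean_true - mean_pred) ** 2)
```

For identical sequences the function returns 1.0, while a constant offset or scaling of the predictions lowers the score, which is why CCC is preferred over plain correlation for continuous emotion annotation.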

https://hal.archives-ouvertes.fr/hal-03003469
Contributor: Manon Macary
Submitted on: Monday, December 7, 2020 - 10:03:56 AM
Last modification on: Wednesday, December 16, 2020 - 2:21:36 PM
Long-term archiving on: Monday, March 8, 2021 - 6:24:47 PM

File

2011.09212.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-03003469, version 1
  • ARXIV : 2011.09212

Citation

Manon Macary, Marie Tahon, Yannick Estève, Anthony Rousseau. On the use of Self-supervised Pre-trained Acoustic and Linguistic Features for Continuous Speech Emotion Recognition. IEEE Spoken Language Technology Workshop, Jan 2021, Virtual, China. ⟨hal-03003469⟩
