Recognizing emotion from Turkish speech using acoustic features

dc.authorid: OFLAZOGLU, CAGLAR/0000-0001-5074-2617
dc.authorid: YILDIRIM, Serdar/0000-0003-3151-9916
dc.contributor.author: Oflazoglu, Caglar
dc.contributor.author: Yildirim, Serdar
dc.date.accessioned: 2024-09-18T21:00:26Z
dc.date.available: 2024-09-18T21:00:26Z
dc.date.issued: 2013
dc.department: Hatay Mustafa Kemal Üniversitesi [en_US]
dc.description.abstract: Affective computing, especially from speech, is one of the key steps toward building more natural and effective human-machine interaction. In recent years, several emotional speech corpora have been collected in different languages; however, Turkish is not among the languages that have been investigated in the context of emotion recognition. To address this gap, a new Turkish emotional speech database was constructed, comprising 5,100 utterances extracted from 55 Turkish movies. Each utterance in the database is labeled with an emotion category (happy, surprised, sad, angry, fearful, neutral, and others) and with ratings in a three-dimensional emotional space (valence, activation, and dominance). We performed classification of four basic emotion classes (neutral, sad, happy, and angry) and estimation of the emotion primitives using acoustic features, and also investigated the importance of the acoustic features for both tasks. An unweighted average recall of 45.5% was obtained for classification. For emotion dimension estimation, we obtained promising results for the activation and dominance dimensions; for valence, however, the correlation between the averaged evaluator ratings and the estimates was low. Cross-corpus training and testing also showed good results for the activation and dominance dimensions (a minimal sketch of these evaluation metrics follows the metadata fields). [en_US]
dc.description.sponsorship: Turkish Scientific and Technical Research Council (TUBITAK) [109E243] [en_US]
dc.description.sponsorship: This work was supported by the Turkish Scientific and Technical Research Council (TUBITAK) under project no. 109E243. [en_US]
dc.identifier.doi: 10.1186/1687-4722-2013-26
dc.identifier.issn: 1687-4722
dc.identifier.scopus: 2-s2.0-84893671436 [en_US]
dc.identifier.scopusquality: Q2 [en_US]
dc.identifier.uri: https://doi.org/10.1186/1687-4722-2013-26
dc.identifier.uri: https://hdl.handle.net/20.500.12483/12677
dc.identifier.wos: WOS:000328878500001 [en_US]
dc.identifier.wosquality: Q4 [en_US]
dc.indekslendigikaynak: Web of Science [en_US]
dc.indekslendigikaynak: Scopus [en_US]
dc.language.iso: en [en_US]
dc.publisher: Springer [en_US]
dc.relation.ispartof: EURASIP Journal on Audio, Speech, and Music Processing [en_US]
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member [en_US]
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.subject: Turkish emotional speech database [en_US]
dc.subject: Emotion recognition [en_US]
dc.subject: Emotion primitives estimation [en_US]
dc.subject: Cross-corpus evaluation [en_US]
dc.title: Recognizing emotion from Turkish speech using acoustic features [en_US]
dc.type: Article [en_US]
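
The abstract reports classification performance as unweighted average recall (UAR) and evaluates dimension estimation via the correlation between averaged evaluator ratings and model estimates. The following is a minimal sketch of how these two figures are commonly computed; it is not the authors' code, and the toy label arrays, the scikit-learn call, and the NumPy usage are assumptions for illustration only.

# Minimal sketch of the two reported metrics (assumed tooling: scikit-learn, NumPy).
import numpy as np
from sklearn.metrics import recall_score

# Hypothetical categorical predictions over the four basic classes used in the paper.
classes = ["neutral", "sad", "happy", "angry"]
y_true = ["neutral", "sad", "happy", "angry", "angry", "sad"]
y_pred = ["neutral", "sad", "angry", "angry", "sad", "sad"]

# Unweighted average recall (UAR) = mean of per-class recalls (macro-averaged recall),
# so every class contributes equally regardless of how many samples it has.
uar = recall_score(y_true, y_pred, labels=classes, average="macro")
print(f"UAR = {uar:.3f}")

# Hypothetical dimensional ratings for one emotion primitive (e.g. activation):
# averaged evaluator ratings vs. model estimates, compared with Pearson correlation.
rated = np.array([0.2, 0.5, 0.8, 0.4, 0.9])
estimated = np.array([0.25, 0.45, 0.7, 0.5, 0.85])
r = np.corrcoef(rated, estimated)[0, 1]
print(f"Pearson correlation = {r:.3f}")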

Files

Original bundle
Name: Tam Metin / Full Text
Size: 391.22 KB
Format: Adobe Portable Document Format