Detecting emotional state of a child in a conversational computer game
dc.authorid | YILDIRIM, Serdar/0000-0003-3151-9916 | |
dc.contributor.author | Yildirim, Serdar | |
dc.contributor.author | Narayanan, Shrikanth | |
dc.contributor.author | Potamianos, Alexandros | |
dc.date.accessioned | 2024-09-18T20:16:49Z | |
dc.date.available | 2024-09-18T20:16:49Z | |
dc.date.issued | 2011 | |
dc.department | Hatay Mustafa Kemal Üniversitesi | en_US |
dc.description.abstract | The automatic recognition of a user's communicative style within a spoken dialog system framework, including its affective aspects, has received increased attention in the past few years. For dialog systems, it is important to know not only what was said but also how it was communicated, so that the system can engage the user in a richer and more natural interaction. This paper addresses the problem of automatically detecting frustration, politeness, and neutral attitudes from a child's speech communication cues, elicited in spontaneous dialog interactions with computer characters. Several information sources, such as acoustic, lexical, and contextual features, as well as their combinations, are used for this purpose. The study is based on a Wizard-of-Oz dialog corpus of 103 children, 7-14 years of age, playing a voice-activated computer game. Three-way classification experiments, as well as pairwise classifications of polite vs. others and frustrated vs. others, were performed. Experimental results show that lexical information has more discriminative power than acoustic and contextual cues for the detection of politeness, whereas contextual and acoustic features perform best for frustration detection. Furthermore, the fusion of acoustic, lexical, and contextual information provided significantly better classification results. Results also showed that classification performance varies with age and gender. Specifically, for the politeness detection task, higher classification accuracy was achieved for females and 10-11 year-olds than for males and other age groups, respectively. (C) 2010 Elsevier Ltd. All rights reserved. | en_US |
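The fusion of acoustic, lexical, and contextual information sources described in the abstract can be illustrated with a minimal late-fusion sketch, in which one classifier is trained per information source and their posterior probabilities are averaged. This is not the paper's actual pipeline: the synthetic features, their dimensions, the logistic-regression classifiers, and all variable names below are assumptions made only for illustration.

    # Minimal late-fusion sketch (illustrative only, not the paper's method).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 300
    labels = rng.integers(0, 3, size=n)  # 0 = neutral, 1 = polite, 2 = frustrated

    # Synthetic stand-ins for per-utterance feature vectors; dimensions are arbitrary.
    acoustic = rng.normal(size=(n, 12)) + labels[:, None] * 0.5  # e.g., pitch/energy statistics
    lexical = rng.normal(size=(n, 20)) + labels[:, None] * 0.3   # e.g., word-based scores
    context = rng.normal(size=(n, 4)) + labels[:, None] * 0.4    # e.g., dialog-state cues

    idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

    # Train one classifier per information source (early fusion would instead
    # concatenate the three streams into a single feature vector).
    streams = [acoustic, lexical, context]
    models = []
    for X in streams:
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[idx_train], labels[idx_train])
        models.append(clf)

    # Late fusion: average the per-stream posterior probabilities, then decide.
    fused = np.mean(
        [m.predict_proba(X[idx_test]) for m, X in zip(models, streams)], axis=0
    )
    pred = fused.argmax(axis=1)
    print("fused accuracy:", (pred == labels[idx_test]).mean())

On such synthetic data the fused classifier typically outperforms any single stream, which mirrors the abstract's finding that combining acoustic, lexical, and contextual information yields significantly better classification results.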
dc.description.sponsorship | Division of Computing and Communication Foundations; Directorate for Computer & Information Science & Engineering [1029373] Funding Source: National Science Foundation | en_US |
dc.identifier.doi | 10.1016/j.csl.2009.12.004 | |
dc.identifier.endpage | 44 | en_US |
dc.identifier.issn | 0885-2308 | |
dc.identifier.issn | 1095-8363 | |
dc.identifier.issue | 1 | en_US |
dc.identifier.scopus | 2-s2.0-77955414344 | en_US |
dc.identifier.scopusquality | Q1 | en_US |
dc.identifier.startpage | 29 | en_US |
dc.identifier.uri | https://doi.org/10.1016/j.csl.2009.12.004 | |
dc.identifier.uri | https://hdl.handle.net/20.500.12483/9764 | |
dc.identifier.volume | 25 | en_US |
dc.identifier.wos | WOS:000282563500003 | en_US |
dc.identifier.wosquality | Q2 | en_US |
dc.indekslendigikaynak | Web of Science | en_US |
dc.indekslendigikaynak | Scopus | en_US |
dc.language.iso | en | en_US |
dc.publisher | Academic Press Ltd - Elsevier Science Ltd | en_US |
dc.relation.ispartof | Computer Speech and Language | en_US |
dc.relation.publicationcategory | Article - International Peer-Reviewed Journal - Institutional Faculty Member | en_US |
dc.rights | info:eu-repo/semantics/closedAccess | en_US |
dc.subject | Emotion recognition | en_US |
dc.subject | Spoken dialog systems | en_US |
dc.subject | Children's speech | en_US |
dc.subject | Spontaneous speech | en_US |
dc.subject | Natural emotions | en_US |
dc.subject | Child-computer interaction | en_US |
dc.subject | Feature extraction | en_US |
dc.title | Detecting emotional state of a child in a conversational computer game | en_US |
dc.type | Article | en_US |