Detecting emotional state of a child in a conversational computer game

dc.authorid: YILDIRIM, Serdar/0000-0003-3151-9916
dc.contributor.author: Yildirim, Serdar
dc.contributor.author: Narayanan, Shrikanth
dc.contributor.author: Potamianos, Alexandros
dc.date.accessioned: 2024-09-18T20:16:49Z
dc.date.available: 2024-09-18T20:16:49Z
dc.date.issued: 2011
dc.department: Hatay Mustafa Kemal Üniversitesi
dc.description.abstract: The automatic recognition of a user's communicative style within a spoken dialog system framework, including its affective aspects, has received increased attention in the past few years. For dialog systems, it is important to know not only what was said but also how it was communicated, so that the system can engage the user in a richer and more natural interaction. This paper addresses the problem of automatically detecting frustration, politeness, and neutral attitudes from a child's speech communication cues, elicited in spontaneous dialog interactions with computer characters. Several information sources, such as acoustic, lexical, and contextual features, as well as their combinations, are used for this purpose. The study is based on a Wizard-of-Oz dialog corpus of 103 children, 7-14 years of age, playing a voice-activated computer game. Three-way classification experiments, as well as pairwise classification between polite vs. others and frustrated vs. others, were performed. Experimental results show that lexical information has more discriminative power than acoustic and contextual cues for the detection of politeness, whereas context and acoustic features perform best for frustration detection. Furthermore, the fusion of acoustic, lexical, and contextual information provided significantly better classification results. Results also showed that classification performance varies with age and gender. Specifically, for the politeness detection task, higher classification accuracy was achieved for females and 10-11 year-olds, compared to males and other age groups, respectively. (C) 2010 Elsevier Ltd. All rights reserved.
dc.description.sponsorship: Division of Computing and Communication Foundations; Directorate for Computer & Information Science & Engineering [1029373] Funding Source: National Science Foundation
dc.identifier.doi: 10.1016/j.csl.2009.12.004
dc.identifier.endpage: 44
dc.identifier.issn: 0885-2308
dc.identifier.issn: 1095-8363
dc.identifier.issue: 1
dc.identifier.scopus: 2-s2.0-77955414344
dc.identifier.scopusquality: Q1
dc.identifier.startpage: 29
dc.identifier.uri: https://doi.org/10.1016/j.csl.2009.12.004
dc.identifier.uri: https://hdl.handle.net/20.500.12483/9764
dc.identifier.volume: 25
dc.identifier.wos: WOS:000282563500003
dc.identifier.wosquality: Q2
dc.indekslendigikaynak: Web of Science
dc.indekslendigikaynak: Scopus
dc.language.iso: en
dc.publisher: Academic Press Ltd - Elsevier Science Ltd
dc.relation.ispartof: Computer Speech and Language
dc.relation.publicationcategory: Article - International Refereed Journal - Institutional Faculty Member
dc.rights: info:eu-repo/semantics/closedAccess
dc.subject: Emotion recognition
dc.subject: Spoken dialog systems
dc.subject: Children speech
dc.subject: Spontaneous speech
dc.subject: Natural emotions
dc.subject: Child-computer interaction
dc.subject: Feature extraction
dc.title: Detecting emotional state of a child in a conversational computer game
dc.type: Article
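The abstract reports that fusing acoustic, lexical, and contextual information streams improves three-way attitude classification (frustrated / polite / neutral). As an illustration only — this is not the paper's actual classifier, features, or data — the sketch below shows the general idea of feature-level fusion: each hypothetical utterance carries three separate feature vectors, fusion is plain concatenation, and a trivial nearest-centroid rule assigns the class. All feature values are invented for the example.

```python
import math

# Hypothetical training utterances: (acoustic, lexical, contextual) feature
# vectors per attitude class. Values are illustrative, not from the paper.
TRAIN = {
    "polite":     [([0.2, 0.1], [3, 0], [0.0]), ([0.3, 0.2], [2, 0], [0.1])],
    "frustrated": [([0.9, 0.8], [0, 2], [0.7]), ([0.8, 0.9], [0, 3], [0.9])],
    "neutral":    [([0.5, 0.4], [0, 0], [0.3]), ([0.4, 0.5], [1, 1], [0.4])],
}

def fuse(acoustic, lexical, contextual):
    """Feature-level fusion: concatenate the three information streams."""
    return list(acoustic) + list(lexical) + list(contextual)

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

# One centroid per attitude class, computed over the fused training vectors.
CENTROIDS = {
    label: centroid([fuse(*utt) for utt in utts])
    for label, utts in TRAIN.items()
}

def classify(acoustic, lexical, contextual):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    v = fuse(acoustic, lexical, contextual)
    return min(CENTROIDS, key=lambda label: math.dist(v, CENTROIDS[label]))

print(classify([0.85, 0.9], [0, 2], [0.8]))  # -> frustrated
```

In practice the paper's pipeline would use real acoustic statistics, lexical cues, and dialog context with a trained classifier; the concatenation step is the only part this sketch shares with the fusion idea described in the abstract.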

Files

Original bundle
Showing 1 - 1 of 1
Name: Tam Metin / Full Text
Size: 186.91 KB
Format: Adobe Portable Document Format