dc.contributor.author | Martirosian, O |
dc.contributor.author | Barnard, E |
dc.date.accessioned | 2008-01-24T13:56:13Z |
dc.date.available | 2008-01-24T13:56:13Z |
dc.date.issued | 2007-11 |
dc.identifier.citation | Martirosian, O and Barnard, E. 2007. Speech-based emotion detection in a resource-scarce environment. 18th Annual Symposium of the Pattern Recognition Association of South Africa (PRASA), Pietermaritzburg, KwaZulu-Natal, South Africa, 28-30 November 2007, pp 5 | en
dc.identifier.isbn | 978-1-86840-656-2 |
dc.identifier.uri | http://hdl.handle.net/10204/1975 |
dc.identifier.uri | http://search.sabinet.co.za/WebZ/images/ejour/comp/comp_v40_a5.pdf |
dc.description | 2007: PRASA | en
dc.description | This paper is published in the South African Computer Journal, Vol 40, pp 18-22 |
dc.description.abstract | The authors explore the construction of a system to classify the dominant emotion in spoken utterances, in an environment where resources such as labelled utterances are scarce. The research addresses two issues relevant to detecting emotion in speech: (a) compensating for the lack of resources and (b) finding features of speech which best characterise emotional expression in the cultural environment being studied (South African telephone speech). Emotional speech was divided into three classes: active, neutral and passive emotion. An emotional speech corpus was created by naive annotators using recordings of telephone speech from a customer service call centre. Features were extracted from the emotional speech samples and the most suitable features were selected by sequential forward selection (SFS). A consistency check was performed to compensate for the lack of experienced annotators and emotional speech samples. The classification accuracy achieved is 76.9%, with 95% accuracy for active emotion. | en
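Note: the feature selection step described in the abstract is standard greedy sequential forward selection (SFS). The sketch below is illustrative only and is not taken from the paper; it assumes a caller-supplied evaluate() scorer (for example, cross-validated classification accuracy over the three emotion classes) and a list of candidate acoustic features.

# Illustrative SFS sketch (assumption, not the authors' implementation).
def sequential_forward_selection(candidates, evaluate, max_features=None):
    selected = []                 # features chosen so far
    remaining = list(candidates)  # features still available
    best_score = float("-inf")
    while remaining and (max_features is None or len(selected) < max_features):
        # Score every one-feature extension of the current selection.
        feature, score = max(
            ((f, evaluate(selected + [f])) for f in remaining),
            key=lambda fs: fs[1],
        )
        if score <= best_score:
            break                 # no remaining feature improves the score
        selected.append(feature)
        remaining.remove(feature)
        best_score = score
    return selected, best_score

Here candidates would be the extracted speech features and evaluate() would train and score the emotion classifier on the proposed subset, so the loop keeps adding the single feature that most improves accuracy and stops once no addition helps.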
dc.language.iso | en | en
dc.publisher | 18th Annual Symposium of the Pattern Recognition Association of South Africa (PRASA) | en
dc.subject | Emotion recognition | en
dc.subject | Resource creation | en
dc.subject | Cultural factors | en
dc.subject | SFS | en
dc.subject | Sequential forward selection | en
dc.title | Speech-based emotion detection in a resource-scarce environment | en
dc.type | Conference Presentation | en
dc.identifier.apacitation | Martirosian, O., & Barnard, E. (2007). Speech-based emotion detection in a resource-scarce environment. 18th Annual Symposium of the Pattern Recognition Association of South Africa (PRASA). http://hdl.handle.net/10204/1975 | en_ZA
dc.identifier.chicagocitation | Martirosian, O, and E Barnard. "Speech-based emotion detection in a resource-scarce environment." (2007): http://hdl.handle.net/10204/1975 | en_ZA
dc.identifier.vancouvercitation | Martirosian O, Barnard E. Speech-based emotion detection in a resource-scarce environment; 18th Annual Symposium of the Pattern Recognition Association of South Africa (PRASA); 2007. http://hdl.handle.net/10204/1975 | en_ZA
dc.identifier.ris |
TY - CONF
AU - Martirosian, O
AU - Barnard, E
AB - The authors explore the construction of a system to classify the dominant emotion in spoken utterances, in an environment where resources such as labelled utterances are scarce. The research addresses two issues relevant to detecting emotion in speech: (a) compensating for the lack of resources and (b) finding features of speech which best characterise emotional expression in the cultural environment being studied (South African telephone speech). Emotional speech was divided into three classes: active, neutral and passive emotion. An emotional speech corpus was created by naive annotators using recordings of telephone speech from a customer service call centre. Features were extracted from the emotional speech samples and the most suitable features were selected by sequential forward selection (SFS). A consistency check was performed to compensate for the lack of experienced annotators and emotional speech samples. The classification accuracy achieved is 76.9%, with 95% accuracy for active emotion.
DA - 2007-11
DB - ResearchSpace
DP - CSIR
KW - Emotion recognition
KW - Resource creation
KW - Cultural factors
KW - SFS
KW - Sequential forward selection
LK - https://researchspace.csir.co.za
PY - 2007
SM - 978-1-86840-656-2
T1 - Speech-based emotion detection in a resource-scarce environment
TI - Speech-based emotion detection in a resource-scarce environment
UR - http://hdl.handle.net/10204/1975
ER -
| en_ZA