Estimating the Effects of Voice Quality and Speech Intelligibility of Audio Compression in Automatic Emotion Recognition

Authors: A. Pramod Reddy, Dileep kumar Ravikanti, Rakesh Betala, K. Venkatesh Sharma, K. Shirisha Reddy

Journal: International Journal of Image, Graphics and Signal Processing (IJIGSP)

Issue: vol. 15, no. 3, 2023.

Free access

This paper examines the impact of speech compression on the accuracy of automatic emotion recognition (AER) systems. The effects of several codecs, namely MP3, Speex, and Adaptive Multi-Rate (narrowband and wideband), are compared against uncompressed speech. Loudness recruitment, a steeper-than-normal growth of perceived loudness with presentation level, is associated with sensorineural hearing loss, and amplitude compression, as applied in hearing aids, is frequently used to compensate for it. Conversely, because speech intelligibility has been linked to the perception of rapid energy changes, enlarging those changes by temporal envelope expansion may make speech more understandable. However, even when such signal-processing methods improve speech understanding, their design and deployment can be limited by insufficient sound quality. Syllabic compression and temporal envelope expansion were therefore assessed for both speech intelligibility and sound quality. Intelligibility was measured with an adaptive procedure based on short everyday sentences presented either in steady-state noise or against a single competing speaker; sound quality was rated on a scale for four artistic excerpts and speech in quiet. The experiments, evaluated against the state of the art with spectral error, compression error ratio (CER), and human labeling, were carried out on a Telugu dataset and the well-known EMO-DB corpus. The results show that every compression technique reduced emotion recognition accuracy, while human labeling achieved better recognition accuracy. For high compression, based on the overall mean of the unweighted average recall, the AMR-WB and Speex codecs at 6.6 kbit/s are recommended as providing the best quality for data storage.
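As context for the pipeline the abstract describes (round-trip speech through lossy codecs, measure the spectral degradation, and score recognition with unweighted average recall), the following is a minimal Python sketch, not the authors' code. It assumes ffmpeg with the relevant codec libraries is on the PATH; the file names, bit rates (e.g. the 6.6 kbit/s AMR-WB mode), and the toy labels are illustrative assumptions.

```python
# A minimal sketch of the compression-then-evaluate loop described above.
# Assumes ffmpeg (with libmp3lame, libspeex, libopencore_amrnb and
# libvo_amrwbenc) is on the PATH; file names and settings are illustrative.
import subprocess

import numpy as np
from scipy.io import wavfile
from scipy.signal import stft
from sklearn.metrics import recall_score

# Codec settings as ffmpeg encoder arguments. AMR-NB accepts only 8 kHz
# mono input and AMR-WB only 16 kHz mono, hence the resampling flags.
CODECS = {
    "mp3":    (".mp3", ["-c:a", "libmp3lame", "-b:a", "64k"]),
    "speex":  (".spx", ["-c:a", "libspeex", "-ar", "16000", "-ac", "1"]),
    "amr_nb": (".amr", ["-c:a", "libopencore_amrnb", "-ar", "8000", "-ac", "1", "-b:a", "12.2k"]),
    "amr_wb": (".amr", ["-c:a", "libvo_amrwbenc", "-ar", "16000", "-ac", "1", "-b:a", "6.6k"]),
}

def roundtrip(wav_in, codec, wav_out):
    """Encode wav_in with the chosen codec, then decode back to PCM WAV."""
    ext, args = CODECS[codec]
    tmp = wav_in + ext
    subprocess.run(["ffmpeg", "-y", "-i", wav_in, *args, tmp], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", tmp, wav_out], check=True)

def spectral_error(ref, deg, fs):
    """Mean squared difference between magnitude spectrograms.

    Assumes both signals share sample rate fs; a real pipeline would
    resample the decoded signal back to fs and time-align it first.
    """
    n = min(len(ref), len(deg))
    _, _, R = stft(ref[:n], fs=fs, nperseg=512)
    _, _, D = stft(deg[:n], fs=fs, nperseg=512)
    return float(np.mean((np.abs(R) - np.abs(D)) ** 2))

def uar(y_true, y_pred):
    """Unweighted average recall: per-class recall averaged with equal
    weight per emotion class (sklearn's macro-averaged recall)."""
    return recall_score(y_true, y_pred, average="macro")

if __name__ == "__main__":
    roundtrip("clip.wav", "amr_wb", "clip_amrwb.wav")  # hypothetical 16 kHz clip
    fs, ref = wavfile.read("clip.wav")
    _, deg = wavfile.read("clip_amrwb.wav")
    print("spectral error:", spectral_error(ref.astype(float), deg.astype(float), fs))
    # With predictions from any emotion classifier run on the compressed audio:
    print("UAR:", uar(["anger", "joy", "anger"], ["anger", "joy", "joy"]))
```

UAR is the natural score here because emotion classes in corpora such as EMO-DB are imbalanced; macro-averaged recall weights each emotion equally rather than favoring the majority class.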


Keywords: speech compression, speech intelligibility, emotion recognition, CER

Short address: https://sciup.org/15018763

IDR: 15018763   |   DOI: 10.5815/ijigsp.2023.03.06
