  Towards capturing fine phonetic variation in speech using articulatory features

Scharenborg, O., Wan, V., & Moore, R. K. (2007). Towards capturing fine phonetic variation in speech using articulatory features. Speech Communication, 49, 811-826. doi:10.1016/j.specom.2007.01.005.

Files

17EFE803d01.pdf (Publisher version), 5MB
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]

Creators

Scharenborg, Odette 1, Author
Wan, V., Author
Moore, R. K., Author
Affiliations:
1 Speech and Hearing Research Group, Department of Computer Science, University of Sheffield

Content

Free keywords: Human speech recognition; Automatic speech recognition; Articulatory feature classification; Fine phonetic variation
Abstract: The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we analysed the classification results from support vector machines (SVMs) and multilayer perceptrons (MLPs). MLPs have been widely and successfully used for the task of multi-value articulatory feature classification, while (to the best of our knowledge) SVMs have not. This paper compares the performance of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the SVMs outperformed the MLPs for five out of the seven articulatory feature classes we investigated, while using only 8.8–44.2% of the training material used for training the MLPs. The structure in the misclassifications of the SVMs and MLPs suggested that there might be a mismatch between the characteristics of the classification systems and the characteristics of the description of the articulatory feature (AF) values themselves. The analyses showed that some of the misclassified features are inherently confusable given the acoustic space. We concluded that, in order to arrive at a feature set that can be used for a reliable and accurate automatic description of the speech signal, it could be beneficial to move away from quantised representations.
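The comparison described in the abstract can be sketched in a few lines. This is a minimal illustrative example, not the paper's actual setup: it uses synthetic data in place of acoustic feature vectors, scikit-learn's `SVC` and `MLPClassifier` in place of the authors' systems, and an arbitrary 30% subsample for the SVM (within the 8.8–44.2% range the paper reports).

```python
# Illustrative sketch only: comparing an SVM and an MLP on a toy
# multi-class classification task standing in for articulatory
# feature (AF) value classification. All data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for acoustic feature vectors, each labelled with
# one of five quantised AF values (e.g. a "manner" class).
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, n_classes=5,
                           n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# MLP trained on the full training set.
mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)

# SVM trained on only a 30% subsample of the training data.
n_sub = int(0.3 * len(X_train))
svm = SVC(kernel="rbf")
svm.fit(X_train[:n_sub], y_train[:n_sub])

print(f"MLP accuracy: {accuracy_score(y_test, mlp.predict(X_test)):.3f}")
print(f"SVM accuracy: {accuracy_score(y_test, svm.predict(X_test)):.3f}")
```

On real AF classification the paper further analyses the *structure* of the misclassifications (via confusion matrices), which a raw accuracy comparison like this one does not capture.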

Details

Language(s): eng - English
Dates: 2007
Publication Status: Issued
Identifiers: DOI: 10.1016/j.specom.2007.01.005

Source 1

Title: Speech Communication
Other: Speech Commun.
Source Genre: Journal
Publ. Info: Amsterdam, Netherlands : North-Holland
Volume / Issue: 49
Start / End Page: 811 - 826
Identifier: ISSN: 0167-6393
CoNE: https://pure.mpg.de/cone/journals/resource/954925483662