A customized residual neural network and bi-directional gated recurrent unit-based automatic speech recognition model
Title: A customized residual neural network and bi-directional gated recurrent unit-based automatic speech recognition model
Author: Selim Reza; Marta Campos Ferreira; J.J.M. Machado; João Manuel R. S. Tavares
Subjects: Engineering and technology; Technological sciences
Description: Speech recognition aims to convert human speech into text and has applications in security, healthcare, commerce, automobiles, and technology, to name a few. Inserting residual neural networks before recurrent neural network cells improves accuracy and substantially reduces training time. Furthermore, layer normalization is more effective than batch normalization for model training and performance, and the size of the training dataset strongly influences the achievable performance. Leveraging these findings, this article proposes an automatic speech recognition model that stacks five layers of a customized residual convolutional neural network and seven layers of bi-directional gated recurrent units, with a logarithmic softmax at the model output. Each layer incorporates a layer normalization technique with learnable per-element affine parameters. The new model was trained and tested on the LibriSpeech corpus and the LJ Speech dataset, and the experimental results demonstrate character error rates (CER) of 4.7% and 3.61% on the two datasets, respectively, with only 33 million parameters and without requiring any external language model.
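The architecture named in the abstract — stacked residual CNN blocks feeding a deep bi-directional GRU with per-element affine layer normalization and a log-softmax output — can be sketched roughly in PyTorch. This is a minimal illustrative sketch only: the feature size, channel count, kernel size, RNN width, and character-vocabulary size below are assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn


class ResidualCNNBlock(nn.Module):
    """One residual CNN block with per-element affine LayerNorm
    over the feature axis (all dimensions are illustrative)."""

    def __init__(self, channels: int, n_feats: int, kernel: int = 3, dropout: float = 0.1):
        super().__init__()
        self.ln1 = nn.LayerNorm(n_feats, elementwise_affine=True)
        self.conv1 = nn.Conv2d(channels, channels, kernel, padding=kernel // 2)
        self.ln2 = nn.LayerNorm(n_feats, elementwise_affine=True)
        self.conv2 = nn.Conv2d(channels, channels, kernel, padding=kernel // 2)
        self.drop = nn.Dropout(dropout)

    def forward(self, x):  # x: (batch, channels, n_feats, time)
        residual = x
        # Normalize over the feature axis: move it last, apply LayerNorm, move back.
        y = self.ln1(x.transpose(2, 3)).transpose(2, 3)
        y = self.drop(torch.relu(self.conv1(y)))
        y = self.ln2(y.transpose(2, 3)).transpose(2, 3)
        y = self.drop(torch.relu(self.conv2(y)))
        return y + residual  # residual (skip) connection


class SpeechModel(nn.Module):
    """Five residual CNN blocks -> seven bi-directional GRU layers -> log-softmax."""

    def __init__(self, n_feats: int = 128, channels: int = 32,
                 rnn_dim: int = 512, n_class: int = 29):
        super().__init__()
        self.entry = nn.Conv2d(1, channels, 3, padding=1)
        self.rescnn = nn.Sequential(
            *[ResidualCNNBlock(channels, n_feats) for _ in range(5)])
        self.proj = nn.Linear(channels * n_feats, rnn_dim)
        self.birnn = nn.GRU(rnn_dim, rnn_dim, num_layers=7,
                            bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * rnn_dim, n_class)

    def forward(self, spec):  # spec: (batch, 1, n_feats, time), e.g. a mel spectrogram
        x = self.rescnn(self.entry(spec))
        b, c, f, t = x.shape
        x = x.permute(0, 3, 1, 2).reshape(b, t, c * f)  # time-major for the RNN
        x, _ = self.birnn(self.proj(x))
        # Logarithmic softmax over the character classes, per time step.
        return torch.log_softmax(self.classifier(x), dim=-1)
```

A model like this would typically be trained with a CTC loss against character targets, which matches the character-error-rate evaluation reported in the abstract.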
Creation Date: 2022-04
Language: English
Identifier: ISSN: 0957-4174; EISSN: 1873-6793; DOI: 10.1016/j.eswa.2022.119293
Source: Universidade do Porto Institutional Repository Open Access