ACCURATE AND FAST RECURRENT NEURAL NETWORK SOLUTION FOR THE AUTOMATIC DIACRITIZATION OF ARABIC TEXT

Jordanian Journal of Computers and Information Technology (Online), 2020-06, Vol. 6 (2), p. 103-121 [Peer Reviewed Journal]

© 2020. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the associated terms available at https://www.jjcit.org/page/Open-Access-Policy.

  • Author: Gheith Abandah; Asma Abdel-Karim
  • Subjects: Arabic natural language processing; Arabic text; automatic diacritization; bidirectional neural network; datasets; long short-term memory; neural networks; recurrent neural networks; sequence transcription
  • Is Part Of: Jordanian Journal of Computers and Information Technology (Online), 2020-06, Vol. 6 (2), p. 103-121
  • Description: Arabic is now mostly written without its diacritics (short vowels). Restoring these diacritics reduces reading ambiguity, among other benefits. This work develops a fast and accurate machine-learning solution that diacritizes Arabic text automatically using long short-term memory (LSTM) recurrent neural networks. Intensive experiments evaluate alternative design and data-encoding options: investigating and handling problems with sequence lengths, proposing and evaluating alternative encodings of the diacritized output sequences, and tuning and evaluating neural-network options including architecture, network size and hyper-parameters. The paper recommends a solution that can be trained quickly on a large dataset and uses four bidirectional LSTM layers to predict the diacritics of the input sequence of Arabic letters (an illustrative sketch of such an architecture follows this record). The solution achieves a diacritization error rate of 2.46% on the LDC ATB3 benchmark and 1.97% on the larger new Tashkeela dataset; the latter is a 47% improvement over the best previously published result.
  • Publisher: Amman: Scientific Research Support Fund of Jordan and Princess Sumaya University for Technology
  • Language: Arabic; English
  • Identifier: ISSN: 2413-9351
    EISSN: 2415-1076
    DOI: 10.5455/jjcit.71-1567402817
  • Source: ProQuest Central
    DOAJ Directory of Open Access Journals
