
T5 for Hate Speech, Augmented Data, and Ensemble

Sci, 2023-09, Vol.5 (4), p.37 [Peer Reviewed Journal]

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Full text available

  • Title:
    T5 for Hate Speech, Augmented Data, and Ensemble
  • Author: Adewumi, Tosin ; Sabry, Sana Sabah ; Abid, Nosheen ; Liwicki, Foteini ; Liwicki, Marcus
  • Subjects: Aggressiveness ; Classification ; Datasets ; Hate speech ; Language ; LSTM ; Machine Learning ; Maskininlärning ; Neural networks ; NLP ; RoBERTa
  • Is Part Of: Sci, 2023-09, Vol.5 (4), p.37
  • Description: We conduct relatively extensive investigations of automatic hate speech (HS) detection using different State-of-The-Art (SoTA) baselines across 11 subtasks spanning six different datasets. Our motivation is to determine which of the recent SoTA models is best for automatic hate speech detection and what advantage, if any, methods such as data augmentation and ensembling offer the best model. We carry out six cross-task investigations. We achieve new SoTA results on two subtasks—macro F1 scores of 91.73% and 53.21% for subtasks A and B of the HASOC 2020 dataset, surpassing previous SoTA scores of 51.52% and 26.52%, respectively. We achieve near-SoTA results on two others—macro F1 scores of 81.66% for subtask A of OLID 2019 and 82.54% for subtask A of HASOC 2021, compared to SoTA results of 82.9% and 83.05%, respectively. We perform error analysis and use two eXplainable Artificial Intelligence (XAI) algorithms (Integrated Gradient (IG) and SHapley Additive exPlanations (SHAP)) to reveal, through examples, how two of the models (Bi-Directional Long Short-Term Memory Network (Bi-LSTM) and Text-to-Text-Transfer Transformer (T5)) arrive at the predictions they make. Other contributions of this work are: (1) the introduction of a simple, novel mechanism for correcting Out-of-Class (OoC) predictions in T5, (2) a detailed description of the data augmentation methods, and (3) the revelation of the poor data annotations in the HASOC 2021 dataset by using several examples and XAI (buttressing the need for better quality control). We publicly release our model checkpoints and code to foster transparency.
  • Publisher: Basel: MDPI AG
  • Language: English
  • Identifier: ISSN: 2413-4155
    EISSN: 2413-4155
    DOI: 10.3390/sci5040037
  • Source: DOAJ Directory of Open Access Journals
    AUTh Library subscriptions: ProQuest Central
    SWEPUB Freely available online
    Coronavirus Research Database
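The macro F1 scores reported in the abstract are the unweighted mean of per-class F1 scores, so minority classes weigh as much as majority ones. A minimal stdlib sketch of that metric (the label encoding here is illustrative, not taken from the paper):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 over all labels seen in either list."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        # harmonic mean of precision and recall, 0 when both are 0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

In practice one would use `sklearn.metrics.f1_score(..., average="macro")`; the sketch just makes the averaging explicit.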
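Because T5 is a text-to-text model, its generated output for a classification subtask may not match any valid class label, which is why an Out-of-Class (OoC) correction step is needed at all. The paper introduces its own mechanism; the sketch below is only one plausible approach (snapping stray outputs to the closest valid label by string similarity), not the authors' published method, and the label set is assumed for illustration:

```python
# Hypothetical OoC correction for a text-to-text classifier: map free-form
# generated text back onto a fixed label set. NOT the authors' mechanism.
import difflib

VALID_LABELS = ["hateful", "not hateful"]  # assumed label set

def correct_ooc(generated, labels=VALID_LABELS, default="not hateful"):
    text = generated.strip().lower()
    if text in labels:  # already a valid label: keep as-is
        return text
    # otherwise snap to the most similar valid label
    close = difflib.get_close_matches(text, labels, n=1, cutoff=0.0)
    return close[0] if close else default
```

Usage: `correct_ooc("hatefull")` returns `"hateful"`, while an in-class output such as `"Not Hateful"` passes through unchanged after normalization.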
