RT info:eu-repo/semantics/article
T1 Depression Classification From Tweets Using Small Deep Transfer Learning Language Models
A1 Rizwan, Muhammad
A1 Mushtaq, Muhammad Faheem
A1 Akram, Urooj
A1 Mehmood, Arif
A1 Ashraf, Imran
A1 Sahelices, Benjamín
K1 Depression
K1 Bit error rate
K1 Social networking (online)
K1 Transformers
K1 Public healthcare
K1 Transfer learning
K1 Blogs
AB Depression detection from social media texts such as tweets or Facebook comments could be highly beneficial, as early detection of depression may prevent the extreme consequences of long-term depression, such as suicide. In this study, depression intensity classification is performed using a labeled Twitter dataset. Further, this study makes a detailed performance evaluation of four transformer-based pre-trained small language models, each with fewer than 15 million tunable parameters: Electra Small Generator (ESG), Electra Small Discriminator (ESD), XtremeDistil-L6 (XDL), and Albert Base V2 (ABV), for the classification of depression intensity from tweets. The models are fine-tuned with different hyperparameters to obtain the best performance. They are tested by classifying the depression intensity of labeled tweets into three classes, 'severe', 'moderate', and 'mild', through downstream fine-tuning of the parameters. Evaluation metrics such as accuracy, F1 score, precision, recall, and specificity are calculated to evaluate the performance of the models. A comparative analysis of these models is also carried out against a moderately larger model, DistilBert, which has 67 million tunable parameters, on the same task under the same experimental settings. Results indicate that ESG outperforms all other models, including DistilBert, owing to its better deep contextualized text representation, achieving the best F1 score of 89% with comparatively less training time. Further optimization of ESG is also proposed to make it suitable for low-powered devices. This study helps to achieve better classification performance for depression detection, as well as to choose the best language model in terms of performance and training time for Twitter-related downstream NLP tasks.
PB Institute of Electrical and Electronics Engineers
SN 2169-3536
YR 2022
FD 2022
LK https://uvadoc.uva.es/handle/10324/65895
UL https://uvadoc.uva.es/handle/10324/65895
LA spa
NO Vol. 10, pp. 129176-129189
DS UVaDOC
RD 16-Aug-2024