Show simple item record

dc.contributor.author: Rizwan, Muhammad
dc.contributor.author: Mushtaq, Muhammad Faheem
dc.contributor.author: Akram, Urooj
dc.contributor.author: Mehmood, Arif
dc.contributor.author: Ashraf, Imran
dc.contributor.author: Sahelices, Benjamín
dc.date.accessioned: 2024-02-07T12:11:29Z
dc.date.available: 2024-02-07T12:11:29Z
dc.date.issued: 2022
dc.identifier.citation: Vol. 10, pp. 129176-129189
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://uvadoc.uva.es/handle/10324/65895
dc.description.abstract: Depression detection from social media texts such as tweets or Facebook comments can be very beneficial, as early detection may avert the extreme consequences of long-term depression, i.e., suicide. In this study, depression intensity classification is performed using a labeled Twitter dataset. The study presents a detailed performance evaluation of four transformer-based pre-trained small language models, each with fewer than 15 million tunable parameters: Electra Small Generator (ESG), Electra Small Discriminator (ESD), XtremeDistil-L6 (XDL), and Albert Base V2 (ABV). The models are fine-tuned downstream with different hyperparameters to obtain the best performance and are tested on classifying the depression intensity of labeled tweets into three classes: 'severe', 'moderate', and 'mild'. Accuracy, F1, precision, recall, and specificity are calculated to evaluate the models. A comparative analysis is also made against a moderately larger model, DistilBert, which has 67 million tunable parameters, on the same task with the same experimental settings. Results indicate that ESG outperforms all other models, including DistilBert, owing to its better deep contextualized text representation: it achieves the best F1 score of 89% with comparatively less training time. Further optimization of ESG is also proposed to make it suitable for low-powered devices. This study helps to achieve better classification performance for depression detection and to choose the best language model, in terms of performance and training time, for Twitter-related downstream NLP tasks. (A minimal fine-tuning sketch follows this record.)
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.subject.classification: Depression
dc.subject.classification: Bit error rate
dc.subject.classification: Social networking (online)
dc.subject.classification: Transformers
dc.subject.classification: Public healthcare
dc.subject.classification: Transfer learning
dc.subject.classification: Blogs
dc.title: Depression Classification From Tweets Using Small Deep Transfer Learning Language Models
dc.type: info:eu-repo/semantics/article
dc.identifier.doi: 10.1109/ACCESS.2022.3223049
dc.relation.publisherversion: https://ieeexplore.ieee.org/document/9954391/keywords#keywords
dc.identifier.publicationfirstpage: 129176
dc.identifier.publicationlastpage: 129189
dc.identifier.publicationtitle: IEEE Access
dc.identifier.publicationvolume: 10
dc.peerreviewed: Yes
dc.description.project: This work was supported in part by the Department of Informatics, University of Valladolid, Spain; in part by the Spanish Ministry of Economy and Competitiveness through FEDER funds under Grant TEC2017-84321-C4-2-R; in part by MINECO/AEI/ERDF (EU) under Grant PID2019-105660RB-C21 / AEI / 10.13039/501100011033; in part by the Aragón Government under research group Grant T58_20R; and in part by Construyendo Europa desde Aragón under Grant ERDF 2014-2020.
dc.identifier.essn: 2169-3536
dc.rights: Attribution-NonCommercial-ShareAlike 4.0 International
dc.type.hasVersion: info:eu-repo/semantics/publishedVersion
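
As a rough illustration of the downstream fine-tuning described in the abstract, the sketch below fine-tunes an ELECTRA-small checkpoint for three-class depression-intensity classification using the Hugging Face transformers library. This is a minimal sketch under stated assumptions, not the authors' pipeline: the checkpoint name ("google/electra-small-generator", the public hub counterpart of the paper's ESG), the hyperparameters, and the placeholder tweets are assumptions, whereas the real study tuned hyperparameters across four small models.

# Minimal sketch (not the paper's code): fine-tune an ELECTRA-small
# checkpoint for three-class depression-intensity classification.
# Checkpoint name, hyperparameters, and the toy tweets are assumptions.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

LABELS = {"mild": 0, "moderate": 1, "severe": 2}

class TweetDataset(Dataset):
    """Tokenized tweets paired with integer intensity labels."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=128)
        self.labels = [LABELS[lab] for lab in labels]

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

# A fresh 3-way classification head is attached to the pre-trained
# encoder and trained downstream, as in transfer learning generally.
name = "google/electra-small-generator"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=3)

# Placeholder data; the study used a labeled Twitter depression dataset.
train_ds = TweetDataset(
    ["i can't get out of bed anymore", "feeling a bit low today"],
    ["severe", "mild"],
    tokenizer,
)

args = TrainingArguments(output_dir="esg-depression", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds).train()

The metrics reported in the abstract (accuracy, F1, precision, recall, specificity) would then be computed on a held-out split, e.g. from the logits returned by Trainer.predict.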

