Use this identifier to cite or link to this item: https://repositorio.ufpe.br/handle/123456789/58277


Full metadata record
dc.contributor.advisor: MELO, Silvio de Barros Melo
dc.contributor.author: EMMANUEL, Yves Emmanuel Francisco do Ó
dc.date.accessioned: 2024-10-25T12:49:49Z
dc.date.available: 2024-10-25T12:49:49Z
dc.date.issued: 2024-10-11
dc.date.submitted: 2024-10-24
dc.identifier.citation: EMMANUEL, Yves. Low-Rank Transformations of the Syntactic-Semantic Space in Multiple Natural Language Tasks. 2024. 22 f. TCC (Graduação) - Curso de Ciência da Computação, Centro de Informática, Universidade Federal de Pernambuco, Recife, 2024.
dc.identifier.uri: https://repositorio.ufpe.br/handle/123456789/58277
dc.description.abstract: This project evaluates the LoRA-multi approach for parameter-efficient fine-tuning of Transformer models across various natural language processing tasks. By applying Low-Rank Adaptation (LoRA) to BERT and DistilBERT models and combining them with a Mixture-of-Experts strategy within the X-LoRA framework, we assess their performance on established benchmarks, including SWAG, SQuAD, and WNUT17. Our experimental results show that while full fine-tuning yields higher performance on specific tasks, LoRA significantly reduces the number of trainable parameters, providing a practical balance between model performance and computational efficiency. This is particularly beneficial for multi-task learning scenarios, where minimizing resource consumption is crucial. Additionally, we explore the implications of these findings for deploying specialized models in real-world applications, highlighting the X-LoRA framework's capability to maintain versatility while adapting to various downstream tasks. Ultimately, our work aims to advance the understanding of how low-rank adaptations can enhance the efficiency of Transformer models, paving the way to understanding how these models encode syntactic-semantic information.
dc.format.extent: 21p.
dc.language.iso: eng
dc.rights: openAccess
dc.rights.uri: http://creativecommons.org/licenses/by-nd/3.0/br/
dc.subject: Natural Language Processing
dc.subject: Machine Learning
dc.subject: Neural Networks
dc.subject: Fine-tuning
dc.title: Low-Rank Transformations of the Syntactic-Semantic Space in Multiple Natural Language Tasks
dc.type: bachelorThesis
dc.contributor.authorLattes: http://lattes.cnpq.br/9707441769466871
dc.degree.level: Graduacao
dc.contributor.advisorLattes: http://lattes.cnpq.br/3847692220708299
dc.subject.cnpq: Áreas::Ciências Exatas e da Terra::Ciência da Computação
dc.degree.departament: (CIN-DCC) - Departamento de Ciência da Computação
dc.degree.graduation: CIn-Curso de Ciência da Computação
dc.degree.grantor: Universidade Federal de Pernambuco
dc.degree.local: Recife
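The Low-Rank Adaptation technique summarized in the abstract can be sketched in a few lines. This is an illustrative toy example only: the dimensions, scaling factor, and variable names are assumptions for exposition and are not taken from the thesis. Instead of updating a frozen pretrained weight matrix W (d_out × d_in), LoRA trains two small matrices B (d_out × r) and A (r × d_in) with rank r much smaller than the layer width; the adapted layer computes W·x + (alpha/r)·B·A·x.

```python
def matmul(X, Y):
    """Plain-Python matrix product of two lists-of-lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def matadd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

# Tiny illustrative sizes so the sketch stays readable (not the thesis setup).
d_in, d_out, r, alpha = 4, 4, 1, 1

W = [[1.0 if i == j else 0.0 for j in range(d_in)] for i in range(d_out)]  # frozen
A = [[0.5] * d_in for _ in range(r)]   # trainable, rank-1 factor
B = [[0.0] for _ in range(d_out)]      # initialized to zero: adapter starts as a no-op

def lora_forward(x):
    """x is a column vector given as a d_in x 1 matrix."""
    base = matmul(W, x)                       # frozen pretrained path
    update = matmul(B, matmul(A, x))          # low-rank adapter path
    scaled = [[(alpha / r) * u for u in row] for row in update]
    return matadd(base, scaled)

x = [[1.0], [2.0], [3.0], [4.0]]
print(lora_forward(x))  # B == 0, so this equals the frozen W @ x: [[1.0], [2.0], [3.0], [4.0]]

# Parameter accounting behind the abstract's efficiency claim: full fine-tuning
# of one 768x768 BERT-base projection updates 589824 entries; LoRA with r = 8
# updates only 768*8 + 8*768 = 12288, roughly 2% of the layer.
```

Because B starts at zero, the adapted model initially reproduces the pretrained model exactly; only A and B receive gradient updates, which is the source of the parameter reduction the abstract reports.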
Appears in collections: (TCC) - Ciência da Computação

Files in this item:
File: TCC - Yves Emmanuel.pdf (488,11 kB, Adobe PDF)


This file is protected by copyright



This item is licensed under a Creative Commons License