Use this identifier to cite or link to this item:
https://repositorio.ufpe.br/handle/123456789/58277
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | MELO, Silvio de Barros | - |
dc.contributor.author | EMMANUEL, Yves Emmanuel Francisco do Ó | - |
dc.date.accessioned | 2024-10-25T12:49:49Z | - |
dc.date.available | 2024-10-25T12:49:49Z | - |
dc.date.issued | 2024-10-11 | - |
dc.date.submitted | 2024-10-24 | - |
dc.identifier.citation | EMMANUEL, Yves. Low-Rank Transformations of the Syntactic-Semantic Space in Multiple Natural Language Tasks. 2024. 22 f. TCC (Graduação) - Curso de Ciência da Computação, Centro de Informática, Universidade Federal de Pernambuco, Recife, 2024. | pt_BR |
dc.identifier.uri | https://repositorio.ufpe.br/handle/123456789/58277 | - |
dc.description.abstract | This project evaluates the LoRA-multi approach for parameter-efficient fine-tuning of Transformer models across various natural language processing tasks. By applying Low-Rank Adaptation (LoRA) to BERT and DistilBERT models and combining them with a Mixture-of-Experts strategy within the X-LoRA framework, we assess their performance on established benchmarks, including SWAG, SQuAD, and WNUT17. Our experimental results validate that while full fine-tuning yields higher performance on specific tasks, the use of LoRA significantly reduces the number of trainable parameters, providing a practical balance between model performance and computational efficiency. This is particularly beneficial for multi-task learning scenarios, where minimizing resource consumption is crucial. Additionally, we explore the implications of these findings for deploying specialized models in real-world applications, highlighting the X-LoRA framework’s capability to maintain versatility while adapting to various downstream tasks. Ultimately, our work aims to advance the understanding of how low-rank adaptations can enhance the efficiency of Transformer models, paving the way toward understanding how these models encode syntactic-semantic information. | pt_BR |
dc.format.extent | 21p. | pt_BR |
dc.language.iso | eng | pt_BR |
dc.rights | openAccess | pt_BR |
dc.rights.uri | http://creativecommons.org/licenses/by-nd/3.0/br/ | * |
dc.subject | Natural Language Processing | pt_BR |
dc.subject | Machine Learning | pt_BR |
dc.subject | Neural Networks | pt_BR |
dc.subject | Fine-tuning | pt_BR |
dc.title | Low-Rank Transformations of the Syntactic-Semantic Space in Multiple Natural Language Tasks | pt_BR |
dc.type | bachelorThesis | pt_BR |
dc.contributor.authorLattes | http://lattes.cnpq.br/9707441769466871 | pt_BR |
dc.degree.level | Graduacao | pt_BR |
dc.contributor.advisorLattes | http://lattes.cnpq.br/3847692220708299 | pt_BR |
dc.subject.cnpq | Áreas::Ciências Exatas e da Terra::Ciência da Computação | pt_BR |
dc.degree.departament | ::(CIN-DCC) - Departamento de Ciência da Computação | pt_BR |
dc.degree.graduation | ::CIn-Curso de Ciência da Computação | pt_BR |
dc.degree.grantor | Universidade Federal de Pernambuco | pt_BR |
dc.degree.local | Recife | pt_BR |
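The abstract above notes that LoRA sharply reduces the number of trainable parameters relative to full fine-tuning. A minimal NumPy sketch of the idea, for one frozen weight matrix of BERT-base's hidden size (768×768) and an assumed LoRA rank r = 8; the dimensions and rank are illustrative choices, not values taken from the thesis:

```python
import numpy as np

# Illustrative dimensions: 768 matches BERT-base's hidden size; r = 8 is an
# assumed LoRA rank, not a value from the thesis.
d, k, r = 768, 768, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weight (not trained)
A = rng.standard_normal((r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # zero-initialized, so B @ A = 0 at start

def lora_forward(x):
    # LoRA forward pass: h = x W^T + x (B A)^T; only A and B receive gradients.
    return x @ W.T + x @ (B @ A).T

full_params = W.size            # parameters updated by full fine-tuning
lora_params = A.size + B.size   # parameters updated by LoRA
print(full_params, lora_params)  # 589824 vs 12288 (~2% of the full count)
```

At initialization the adapted model is exactly the pretrained one (since B is zero), and only about 2% of the layer's parameters are trained, which is the efficiency trade-off the abstract describes.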
Appears in collections: | (TCC) - Ciência da Computação |
Files associated with this item:
File | Description | Size | Format | |
---|---|---|---|---|
TCC - Yves Emmanuel.pdf | | 488,11 kB | Adobe PDF | |
This file is protected by copyright |
This item is licensed under a Creative Commons License