Please use this identifier to cite or link to this item:
https://repositorio.ufpe.br/handle/123456789/65398
Title: Exploring the latent space: compact representations with autoencoders
Author: PAZ, Anthonny Dayvson Lino
Keywords: Autoencoders; Classification; Data Reconstruction; Dimensionality Reduction; Latent Space
Publication date: 14-Aug-2025
Citation: PAZ, Anthonny Dayvson Lino; REN, Tsang Ing. Exploring the latent space: compact representations with autoencoders. 2025. 17 f. Undergraduate thesis (TCC), Computer Science program, CIn, Universidade Federal de Pernambuco, Recife, 2025.
Abstract: This work analyzes how latent-space size and bottleneck structure affect reconstruction quality and downstream utility in convolutional autoencoders. We implement and evaluate four variants—standard (conv_ae), sparse (conv_sparse), denoising (conv_denoising) and variational (conv_vae)—on CIFAR-10 across five latent dimensions (16, 32, 64, 128, 256). Reconstructions are assessed with MSE, SSIM, PSNR, ERGAS and UQI; latent embeddings are evaluated with supervised classifiers (Logistic Regression, MLP, Random Forest, KNN) and unsupervised clustering (KMeans, GMM, HDBSCAN) using ARI and NMI. Results indicate that conv_sparse attains the best perceptual reconstruction scores (e.g., MSE ≈ 0.0021, SSIM ≈ 0.89 at d = 256), conv_denoising yields the most discriminative embeddings for classification (best MLP accuracy ≈ 0.5471 at d = 256), and conv_vae underperforms at small d. Unsupervised clustering recovery is weak (ARI/NMI typically < 0.2), motivating future work on contrastive and clustering-aware objectives and modified VAE losses.
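For readers unfamiliar with the setup described in the abstract, the following is a minimal PyTorch sketch of a convolutional autoencoder for CIFAR-10 with a configurable latent dimension. It is an illustrative assumption only: the class name `ConvAE`, the layer sizes, and the training loss shown here are not taken from the thesis, which should be consulted for the actual conv_ae, conv_sparse, conv_denoising, and conv_vae architectures.

```python
# Hypothetical sketch (not the thesis code): a convolutional autoencoder for
# CIFAR-10 (3x32x32 images) with a configurable latent (bottleneck) dimension.
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        # Encoder: 3x32x32 -> 64x4x4, then a linear bottleneck of size latent_dim.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1),   # -> 16x16x16
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # -> 32x8x8
            nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),  # -> 64x4x4
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, latent_dim),
        )
        # Decoder: mirror of the encoder, back to 3x32x32.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 4 * 4),
            nn.ReLU(),
            nn.Unflatten(1, (64, 4, 4)),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),  # -> 32x8x8
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),  # -> 16x16x16
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1),   # -> 3x32x32
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)        # latent embedding used for downstream tasks
        return self.decoder(z)

# Reconstruction trained with MSE, one of the metrics reported in the abstract.
model = ConvAE(latent_dim=64)
x = torch.rand(8, 3, 32, 32)       # stand-in for a CIFAR-10 batch
loss = nn.functional.mse_loss(model(x), x)
```

In the evaluation protocol the abstract describes, the encoder output `z` would then be fed to the supervised classifiers (e.g., Logistic Regression, MLP) and the clustering algorithms scored with ARI and NMI.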
URI: https://repositorio.ufpe.br/handle/123456789/65398
Appears in collections: (TCC) - Ciência da Computação
Files in this item:

| File | Description | Size | Format |
|---|---|---|---|
| TCC - Anthonny Dayvson Lino Paz.pdf | | 5,68 MB | Adobe PDF |
This item is protected by original copyright.
This item is licensed under a Creative Commons License.