Please use this identifier to cite or link to this item: https://repositorio.ufpe.br/handle/123456789/40461

Title: Out-of-the-box parameter control for evolutionary and swarm-based algorithms with distributed reinforcement learning
Author: LACERDA, Marcelo Gomes Pereira de
Keywords: Computational intelligence; Swarm intelligence; Evolutionary computation; Reinforcement learning
Publication date: 19-Mar-2021
Publisher: Universidade Federal de Pernambuco
Citation: LACERDA, Marcelo Gomes Pereira de. Out-of-the-box parameter control for evolutionary and swarm-based algorithms with distributed reinforcement learning. 2021. Tese (Doutorado em Ciência da Computação) – Universidade Federal de Pernambuco, Recife, 2021.
Abstract: Despite the success of evolutionary and swarm-based algorithms in many different application areas, such algorithms are very sensitive to the values of their parameters. According to the No Free Lunch Theorem, no parameter setting for a given algorithm works best for every possible problem. Thus, finding a quasi-optimal parameter setting that maximizes the performance of a given metaheuristic on a specific problem is necessary. Since manual parameter adjustment for evolutionary and swarm-based algorithms can be very hard and time-consuming, automating this task has been one of the greatest and most important challenges in the field. Out-of-the-box parameter control methods are techniques that dynamically adjust the parameters of a metaheuristic during its execution and can be applied to any parameter, metaheuristic, and optimization problem. Very few studies on out-of-the-box parameter control methods can be found in the literature, and most of them apply reinforcement learning algorithms to train effective parameter control policies. Even though these studies have presented very interesting and promising results, the problem of parameter control for metaheuristics is far from solved. A few important gaps were identified in the literature of this field: (1) training parameter control policies with reinforcement learning can be very computationally demanding; (2) reinforcement learning algorithms usually require the adjustment of many hyperparameters, which makes their successful use difficult, and the search for an optimal policy can be very unstable; and (3) very limited benchmarks have been used to assess the generality of the out-of-the-box methods proposed so far in the literature. To address these gaps, the primary objective of this work is to propose an out-of-the-box policy training method for parameter control of mono-objective evolutionary and swarm-based algorithms with distributed reinforcement learning. The proposed method had its generality tested on a comprehensive experimental benchmark with 133 scenarios and 5 different metaheuristics, solving several numerical (continuous), binary, and combinatorial optimization problems. The scalability of the proposed architecture was also duly assessed. Moreover, extensive analyses of the hyperparameters of the proposed method were performed. The experimental results showed that the proposed method successfully addressed the three aforementioned gaps, besides making a few other secondary advancements in the field, all discussed in this thesis.
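The abstract describes reinforcement-learning-based parameter control only at a high level. As a purely illustrative sketch, and not the distributed method proposed in the thesis, the following Python snippet shows the basic idea: an epsilon-greedy bandit controller that dynamically selects the mutation rate of a (1+1) evolutionary algorithm on OneMax and is rewarded for fitness improvements. All names, constants, and the reward definition are assumptions made for this example.

    # Illustrative sketch only (not the thesis's method): an epsilon-greedy
    # bandit controls the mutation rate of a (1+1)-EA on OneMax, rewarding
    # the controller whenever the chosen rate yields a fitness improvement.
    import random

    N_BITS = 64
    RATES = [0.5 / N_BITS, 1.0 / N_BITS, 2.0 / N_BITS, 4.0 / N_BITS]  # actions
    EPSILON = 0.1   # exploration probability (assumed value)
    ALPHA = 0.2     # step size for the running value estimates (assumed value)

    def onemax(bits):
        return sum(bits)

    def mutate(bits, rate):
        # Flip each bit independently with probability `rate`.
        return [b ^ (random.random() < rate) for b in bits]

    def run(seed=0, budget=5000):
        random.seed(seed)
        q = [0.0] * len(RATES)  # value estimate per candidate mutation rate
        parent = [random.randint(0, 1) for _ in range(N_BITS)]
        fit = onemax(parent)
        for _ in range(budget):
            # Controller step: epsilon-greedy action selection.
            if random.random() < EPSILON:
                a = random.randrange(len(RATES))
            else:
                a = max(range(len(RATES)), key=lambda i: q[i])
            child = mutate(parent, RATES[a])
            child_fit = onemax(child)
            reward = max(0, child_fit - fit)   # reward = fitness improvement
            q[a] += ALPHA * (reward - q[a])    # incremental value update
            if child_fit >= fit:               # (1+1)-EA elitist replacement
                parent, fit = child, child_fit
            if fit == N_BITS:
                break
        return fit, q

    if __name__ == "__main__":
        best, values = run()
        print("best fitness:", best)
        print("learned rate values:", [round(v, 3) for v in values])

In this toy setting the controller typically learns to favor rates near 1/N_BITS, mirroring the general point made in the abstract: good parameter values are problem-dependent and can be learned online rather than fixed in advance.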
URI: https://repositorio.ufpe.br/handle/123456789/40461
Appears in collections: Teses de Doutorado - Ciência da Computação

Files in this item:
File: TESE Marcelo Gomes Pereira de Lacerda.pdf
Size: 5.44 MB
Format: Adobe PDF


This item is protected by original copyright



This item is licensed under a Creative Commons License