Achieving Efficient Prompt Engineering in Large Language Models Using a Hybrid and Multi-Objective Optimization Framework
Published online: 25 Jun 2025
Pages: 67 - 82
Received: 11 Mar 2025
Accepted: 04 May 2025
DOI: https://doi.org/10.2478/cait-2025-0012
© 2025 Sridevi Kottapalli Narayanaswamy et al., published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Prompt optimization is crucial for enhancing the performance of large language models. Traditional Bayesian Optimization (BO) methods face challenges such as limited local refinement, insufficient parameter tuning, and difficulty handling multiple objectives. This study introduces a hybrid multi-objective optimization framework that integrates BO for global exploration with a Genetic Algorithm (GA) for evolutionary fine-tuning of prompt hyperparameters. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is employed to identify Pareto-optimal solutions that balance accuracy, efficiency, and interpretability. The framework is evaluated on the GLUE benchmark with BERT-based tokenization for structured input representation. Experimental results show that the proposed model achieves 95% accuracy, 85% efficiency, and 79% interpretability across three benchmark datasets, outperforming conventional BO-based methods. The findings confirm that the hybrid approach substantially improves search efficiency, local refinement, and multi-objective trade-off handling, yielding more effective and robust prompt optimization.
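To make the NSGA-II component concrete, the sketch below shows the non-dominated filtering step used to identify Pareto-optimal candidates over the three objectives named above (accuracy, efficiency, interpretability). This is a minimal illustration, not the authors' implementation; the candidate scores are hypothetical.

```python
def dominates(a, b):
    """a dominates b if a is >= on every objective and > on at least one
    (all objectives are maximized here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scores):
    """Return indices of non-dominated solutions (the first NSGA-II front)."""
    return [i for i, a in enumerate(scores)
            if not any(dominates(b, a) for j, b in enumerate(scores) if j != i)]

# Hypothetical objective scores for four candidate prompt configurations:
# (accuracy, efficiency, interpretability)
scores = [
    (0.95, 0.85, 0.79),  # strong all-round candidate
    (0.90, 0.80, 0.70),  # dominated by the first candidate
    (0.88, 0.92, 0.60),  # trades accuracy for efficiency
    (0.80, 0.70, 0.95),  # trades accuracy for interpretability
]
print(pareto_front(scores))  # -> [0, 2, 3]
```

In a full NSGA-II run, successive fronts are extracted by repeating this filter on the remaining candidates, with crowding-distance sorting used to break ties within a front.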