Open Access

Through the Thicket: A Study of Number-Oriented LLMs Derived from Random Forest Models

18 Mar 2025


This paper introduces a novel approach to training Large Language Models (LLMs) using knowledge transfer from a Random Forest (RF) ensemble. By converting RF decision paths into natural language, this method enhances both the classification accuracy and the explanation capabilities of LLMs. Our approach integrates three preprocessing techniques tailored for numerical data: Relation Encoding, Integer Normalisation, and Verbal Description of Values, improving the model's ability to interpret structured inputs effectively. Leveraging RF's ensemble properties, we generate rule-based explanations that can be objectively validated, offering a cost-effective alternative to human evaluations. Experiments on well-known datasets demonstrate high classification accuracy, highlighting the potential of our framework for numerical and structured data applications. This study also contributes to Explainable Artificial Intelligence (XAI) by providing LLMs with structured, objectively verifiable explanations, making them more accessible and interpretable for real-world decision-making tasks.
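
To illustrate the general idea of converting RF decision paths into natural-language rules, the sketch below follows a single sample through one tree of a scikit-learn RandomForestClassifier trained on the Iris dataset and verbalises each split. This is only a minimal, hypothetical illustration: the verbalise_path helper, the dataset, and the phrasing are assumptions, not the paper's Relation Encoding, Integer Normalisation, or Verbal Description of Values pipeline.

# Minimal sketch (not the authors' implementation): verbalise one decision path
# of a scikit-learn Random Forest as a natural-language rule.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
X, y = iris.data, iris.target
feature_names = iris.feature_names
class_names = iris.target_names

rf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

def verbalise_path(tree, sample, feature_names, class_names):
    """Follow one sample through a single decision tree and describe
    each split as a short natural-language statement."""
    t = tree.tree_
    node = 0
    statements = []
    while t.children_left[node] != -1:  # stop at a leaf node
        feat = feature_names[t.feature[node]]
        thr = t.threshold[node]
        if sample[t.feature[node]] <= thr:
            statements.append(f"{feat} is at most {thr:.2f}")
            node = t.children_left[node]
        else:
            statements.append(f"{feat} is greater than {thr:.2f}")
            node = t.children_right[node]
    predicted = class_names[t.value[node].argmax()]
    return "Because " + " and ".join(statements) + f", the tree predicts '{predicted}'."

# Verbalise the path of the first sample through the first tree of the ensemble.
print(verbalise_path(rf.estimators_[0], X[0], feature_names, class_names))

Such rule texts, aggregated across the ensemble, could then serve as training or prompting material for an LLM and be checked objectively against the forest's own predictions.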

Language:
English
Publication frequency:
4 times per year
Journal subjects:
Computer Science, Artificial Intelligence, Databases and Data Mining