Through the Thicket: A Study of Number-Oriented LLMs Derived from Random Forest Models
Published: 18 Mar 2025
Page range: 279 - 298
Received: 07 Oct 2024
Accepted: 04 Mar 2025
DOI: https://doi.org/10.2478/jaiscr-2025-0014
Keywords
© 2025 Michał Romaszewski et al., published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
This paper introduces a novel approach to training Large Language Models (LLMs) using knowledge transfer from a Random Forest (RF) ensemble. By converting RF decision paths into natural language, this method enhances both the classification accuracy and explanation capabilities of LLMs. Our approach integrates three preprocessing techniques tailored for numerical data: Relation Encoding, Integer Normalisation, and Verbal Description of Values, improving the model's ability to interpret structured inputs effectively. Leveraging RF's ensemble properties, we generate rule-based explanations that can be objectively validated, offering a cost-effective alternative to human evaluations. Experiments on well-known datasets demonstrate high classification accuracy, highlighting the potential of our framework for numerical and structured data applications. This study also contributes to Explainable Artificial Intelligence (XAI) by providing LLMs with structured, objectively verifiable explanations, making them more accessible and interpretable for real-world decision-making tasks.
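The core idea of verbalising RF decision paths can be illustrated with a minimal sketch. The function below is a hypothetical example, not the paper's actual encoding: it assumes a decision path is given as a list of (feature index, threshold, branch) splits and renders it as an English rule, roughly in the spirit of the Relation Encoding technique named in the abstract.

```python
def verbalise_path(path, feature_names):
    """Render a decision path as a natural-language rule.

    `path` is a list of (feature_index, threshold, went_left) tuples,
    where went_left=True means the sample satisfied `feature <= threshold`.
    This format is an illustrative assumption, not the authors' exact one.
    """
    clauses = []
    for feat, thr, went_left in path:
        relation = "is at most" if went_left else "is greater than"
        clauses.append(f"{feature_names[feat]} {relation} {thr:g}")
    return "If " + " and ".join(clauses)

# A toy two-split path over made-up feature names:
path = [(0, 5.5, False), (2, 1.7, True)]
names = ["sepal length", "sepal width", "petal width"]
print(verbalise_path(path, names))
# → If sepal length is greater than 5.5 and petal width is at most 1.7
```

In a full pipeline, such rules extracted from every tree in the ensemble would form the natural-language training corpus for the LLM, and the same rule strings could serve as objectively checkable explanations.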