Published: 25 Sep 2025
Page range: 3 - 21
Received: 02 May 2025
Accepted: 26 Jun 2025
DOI: https://doi.org/10.2478/cait-2025-0019
Keywords
© 2025 Badrus Zaman et al., published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
This study develops a detoxification model for Indonesian text by leveraging Large Language Models (LLMs) to transform toxic content into neutral expressions while preserving the original meaning. Addressing the lack of effective detoxification methods for Bahasa Indonesia – due mainly to the scarcity of parallel datasets – the research applies supervised learning by fine-tuning LLaMA3-8B and Sahabat-AI on crowdsourced parallel datasets, complemented by unsupervised techniques such as masking and paraphrasing. Human evaluation shows that the structurally enhanced Sahabat-AI model outperforms the other approaches in reducing toxicity, preserving content, and ensuring fluency. While masking achieves the fastest inference time, it often fails to retain meaning; paraphrasing offers fluency but alters the intended meaning. The LLaMA3-8B model preserves meaning effectively but leaves residual toxicity. These findings underscore the effectiveness of the enhanced Sahabat-AI model in detoxifying Indonesian text, contributing to healthier digital discourse and fostering a more peaceful society.