
This paper analyses the impact of large language models (LLMs) on the fake news phenomenon. On the one hand, their text-generation capabilities can be misused for the mass production of fake news. On the other hand, LLMs trained on huge volumes of text have already accumulated information on many facts, so one may assume they could be used for fact-checking. Experiments were designed and conducted to verify how closely LLM responses align with actual fact-checking verdicts. The research methodology consists of the preparation of an experimental dataset and a protocol for interacting with ChatGPT, currently the most sophisticated LLM. A research corpus was composed explicitly for this work, consisting of several thousand claims randomly selected from claim reviews published by fact-checkers. The findings include: it is difficult to align the responses of ChatGPT with the explanations provided by fact-checkers, and prompts have a significant impact on the bias of responses. In its current state, ChatGPT can be used as a support in fact-checking but cannot verify claims directly.
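A minimal sketch of the kind of interaction protocol the abstract describes might look as follows, assuming the OpenAI chat completions client; the prompt wording, model choice, and label set are illustrative assumptions, not the paper's exact protocol.

    # Hypothetical sketch: send each claim from the corpus to ChatGPT with a
    # fixed prompt and compare the model's verdict with the fact-checker's
    # label. Prompt text, model name, and labels are assumptions for
    # illustration only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPT = (
        "Decide whether the following claim is true or false. "
        "Answer with exactly one word: TRUE, FALSE, or UNSURE.\n\nClaim: {claim}"
    )

    def ask_chatgpt(claim: str) -> str:
        """Return the model's one-word verdict for a single claim."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": PROMPT.format(claim=claim)}],
            temperature=0,  # deterministic output eases comparison across runs
        )
        return response.choices[0].message.content.strip().upper()

    def agreement_rate(claims: list[tuple[str, str]]) -> float:
        """Fraction of (claim, fact-checker verdict) pairs where the model's
        verdict matches the fact-checker's 'TRUE' or 'FALSE' label."""
        hits = sum(1 for claim, verdict in claims if ask_chatgpt(claim) == verdict)
        return hits / len(claims)

Note that forcing a one-word answer sidesteps the alignment problem the paper reports (matching free-text model responses to fact-checkers' explanations), which is precisely where the difficulty lies.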

eISSN: 2450-0097
Language: English
Publication frequency: 4 issues per year
Journal subjects: Business and Economics, Economics, Other, Finance, Mathematics and Statistics for Economists, Econometrics