An evaluation of orthodontic information quality regarding artificial intelligence (AI) chatbot technologies: A comparison of ChatGPT and Google BARD
Published online: 03 June 2024
Pages: 149 - 157
Received: 01 January 2024
Accepted: 01 March 2024
DOI: https://doi.org/10.2478/aoj-2024-0012
© 2024 Can Arslan et al., published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
Introduction
Chatbots have recently come to play an increasingly significant role in medical practice. The present study evaluated the accuracy of the responses provided by ChatGPT and Google BARD, two of the most widely used chatbot programs, when questioned about orthodontic topics.
Materials and methods
Twenty-four popular questions about conventional braces, clear aligners, orthognathic surgery, and orthodontic retainers were chosen for the study. The questions were submitted to the ChatGPT and Google BARD platforms, and an experienced orthodontist and an orthodontic resident rated each response using a five-point Likert scale, with five indicating evidence-based information, four indicating adequate information, three indicating insufficient information, two indicating incorrect information, and one indicating no response. The results were recorded in Microsoft Excel for comparison and analysis, as illustrated in the sketch below.
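The abstract does not state which statistical procedures were applied. As an illustrative sketch only, the following Python code shows one plausible way such an analysis could be organised, assuming Spearman rank correlation for the ordinal Likert scores and a Wilcoxon signed-rank test for the paired investigator ratings; all variable names and data values are hypothetical placeholders, not study data.

```python
# Hedged illustration of a possible analysis pipeline; the tests chosen
# (Spearman correlation, Wilcoxon signed-rank) are assumptions, as the
# abstract does not name the statistical methods used.
import numpy as np
from scipy.stats import spearmanr, wilcoxon

rng = np.random.default_rng(0)
N_QUESTIONS = 24  # the study used 24 questions

# Hypothetical per-question measurements (placeholders, not study data).
likert_scores = rng.integers(1, 6, size=N_QUESTIONS)      # 1 = no response ... 5 = evidence-based
word_counts = rng.integers(50, 400, size=N_QUESTIONS)     # response length in words
reference_counts = rng.integers(0, 8, size=N_QUESTIONS)   # references listed per response

# Correlation between response quality and response length.
rho_wc, p_wc = spearmanr(likert_scores, word_counts)
print(f"score vs word count:  rho={rho_wc:.2f}, p={p_wc:.3f}")

# Correlation between response quality and number of listed references.
rho_ref, p_ref = spearmanr(likert_scores, reference_counts)
print(f"score vs references:  rho={rho_ref:.2f}, p={p_ref:.3f}")

# Paired comparison of the two investigators' ratings of the same responses.
rater_a = rng.integers(1, 6, size=N_QUESTIONS)
rater_b = np.clip(rater_a + rng.integers(-1, 2, size=N_QUESTIONS), 1, 5)
stat, p_rater = wilcoxon(rater_a, rater_b)
print(f"rater A vs rater B:   W={stat:.1f}, p={p_rater:.3f}")
```

Spearman correlation is used here rather than Pearson because Likert ratings are ordinal, and the Wilcoxon signed-rank test suits paired ratings of the same responses; the actual study may have used different methods.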
Results
No correlation was found between the ChatGPT and Google BARD scores and the response word counts. However, a moderate, significant relationship was observed between the scores and the number of listed references. No significant association was found between the number of words and the number of references, and a statistically significant difference was observed between the two investigators' numerical ratings of the AI tools.
Conclusion
In general, ChatGPT and Google BARD provide satisfactory responses to common orthodontic questions that patients might ask, with ChatGPT's answers marginally surpassing those of Google BARD in quality.