Open Access

An evaluation of orthodontic information quality regarding artificial intelligence (AI) chatbot technologies: A comparison of ChatGPT and Google BARD


Introduction

In recent years, chatbots have played an increasingly prominent role in medical practice. The present study was conducted to evaluate the accuracy of the responses provided by ChatGPT and Google BARD, two of the most widely utilised chatbot programs, when queried about orthodontics.

Materials and methods

Twenty-four popular questions about conventional braces, clear aligners, orthognathic surgery, and orthodontic retainers were chosen for the study. The questions were submitted to the ChatGPT and Google BARD platforms, and the responses were rated by an experienced orthodontist and an orthodontic resident using a five-point Likert scale: five indicated evidence-based information, four adequate information, three insufficient information, two incorrect information, and one no response. The ratings were recorded in Microsoft Excel for comparison and analysis.
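The abstract notes only that the ratings were recorded in Microsoft Excel; it does not describe the data layout. The short Python sketch below is one possible way such a rating table could be organised for comparison. The column names and all values are assumptions made for illustration, not the authors' spreadsheet or data.

```python
# Illustrative sketch only: the study recorded its ratings in Microsoft Excel.
# The pandas layout, column names, and values below are assumptions made for
# illustration, not the authors' actual spreadsheet or data.
import pandas as pd

# One row per (question, chatbot) pair; Likert scores range from 1 to 5 as
# defined above (5 = evidence-based ... 1 = no response).
ratings = pd.DataFrame({
    "question_id":     [1, 1, 2, 2, 3, 3, 4, 4],   # the real study used 24 questions
    "chatbot":         ["ChatGPT", "BARD"] * 4,
    "rater1_score":    [5, 4, 4, 3, 5, 5, 3, 2],   # experienced orthodontist (placeholder values)
    "rater2_score":    [5, 4, 5, 3, 4, 4, 3, 3],   # orthodontic resident (placeholder values)
    "word_count":      [212, 180, 195, 240, 160, 150, 230, 210],
    "reference_count": [3, 1, 2, 0, 4, 3, 1, 0],
})

# Mean Likert score per chatbot, averaged over both raters and all questions.
summary = (
    ratings
    .assign(mean_score=ratings[["rater1_score", "rater2_score"]].mean(axis=1))
    .groupby("chatbot")["mean_score"]
    .mean()
)
print(summary)
```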

Results

No correlation was found between the ChatGPT and Google BARD scores and the word counts of the responses. However, a moderate to significant relationship was observed between the scores and the number of listed references. No significant association was found between the number of words and the number of references, while a statistically significant difference between the two AI tools was observed in both investigators’ numerical ratings (p = 0.014 and p = 0.030, respectively).
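The abstract reports correlations and a between-tool significance test without naming the statistical procedures used. The sketch below assumes Spearman correlations and a Wilcoxon signed-rank test on per-question paired ratings, with invented placeholder numbers; it is intended only to illustrate the kind of analysis being described, not the study's actual methods.

```python
# Minimal, hypothetical sketch of the kind of analysis summarised above.
# The specific tests (Spearman correlation, Wilcoxon signed-rank) and all
# numbers are assumptions for illustration, not the study's methods or data.
from scipy.stats import spearmanr, wilcoxon

# Hypothetical per-question values for one rater; the real study used 24 questions.
chatgpt_scores = [5, 4, 5, 3, 4, 5, 3, 4]                  # Likert ratings for ChatGPT
bard_scores    = [4, 3, 4, 2, 3, 3, 2, 3]                  # Likert ratings for Google BARD
word_counts    = [212, 195, 160, 230, 175, 150, 240, 205]  # ChatGPT answer lengths (words)
ref_counts     = [3, 2, 4, 1, 2, 3, 0, 2]                  # references listed per ChatGPT answer

# Score vs. word count (reported above as uncorrelated) and score vs. number
# of listed references (reported as moderately related).
rho_words, p_words = spearmanr(chatgpt_scores, word_counts)
rho_refs, p_refs = spearmanr(chatgpt_scores, ref_counts)

# Paired comparison of the two chatbots on the same questions for one rater.
stat, p_tools = wilcoxon(chatgpt_scores, bard_scores)

print(f"score vs word count:  rho={rho_words:.2f}, p={p_words:.3f}")
print(f"score vs references:  rho={rho_refs:.2f}, p={p_refs:.3f}")
print(f"ChatGPT vs BARD:      p={p_tools:.3f}")
```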

Conclusion

Overall, ChatGPT and Google BARD provide satisfactory responses to common orthodontic questions that patients might ask. ChatGPT’s answers marginally surpassed those of Google BARD in quality.

eISSN:
2207-7480
Language:
English
Publication frequency:
Volume Open
Journal subjects:
Medicine, Basic Medical Science, other