
IS CHATGPT'S KNOWLEDGE ON RHINOLOGY ACCURATE? CAN IT BE UTILIZED IN MEDICAL EDUCATION AND PATIENT INFORMATION?

Aykut ÖZDOĞAN, Burçay TELLİOĞLU, Oğuzhan KATAR

European Journal of Rhinology and Allergy - 2026;9(1):21-26

Department of Otorhinolaryngology, Ankara Etlik City Hospital, Ankara, Türkiye

 

Objective: ChatGPT is an artificial intelligence model designed to produce human-like conversation. As knowledge advances and the technology improves, it shows promise in medicine, especially as a resource for patients and clinicians. The aim of this study was to assess the accuracy and consistency of ChatGPT's answers to rhinology-related questions.

Methods: In March 2024, 130 rhinology questions were presented to ChatGPT (version 4). Each question was asked twice, and the consistency and reproducibility of the answers were assessed. The answers were evaluated by three ENT physicians.

Results: ChatGPT's answers were consistent in 91.5% of cases (119/130). Among the inconsistent answers, the second answer was more correct in 10 of 11 cases, a statistically significant difference (p=0.011). In the evaluators' assessments, the number of answers rated as completely correct was 99, 81, and 80 (76.2%, 62.3%, and 61.5%, respectively), while completely incorrect answers accounted for 5.4%, 4.6%, and 5.4%, respectively. There was no statistically significant difference between the evaluators (p=0.270).

Conclusion: The error rate of ChatGPT in patient information and education is considered acceptable, and the model is broadly reliable. However, ChatGPT's answers are not always correct, and it can provide misleading responses to some questions. We believe it would be safer and more accurate to use ChatGPT as an informative and educational tool for patients under expert supervision.