Sadık Emre ERGİNOĞLU, Nuri Koray ÜLGEN, Mehmet Orçun AKKURT, Ali Said NAZLIGÜL, Nihat Yiğit
Baltalimanı Dergisi - 2025;1(2):33-38
Background/Objectives: Accurate fracture diagnosis in orthopedics and traumatology is essential for optimal treatment outcomes. However, emergency department overcrowding, variable image quality, and minimally displaced fractures contribute to diagnostic errors. Artificial intelligence (AI), particularly deep learning algorithms, has emerged as a transformative tool in medical imaging. This narrative review aims to synthesize the diagnostic accuracy, clinical integration potential, and limitations of AI algorithms in the radiographic detection of appendicular skeletal fractures.

Methods: A comprehensive literature search was conducted using relevant keywords in the PubMed, Scopus, and Web of Science databases. In accordance with PRISMA criteria, 1326 records were screened. After removal of 328 duplicates, the titles and abstracts of 998 studies were evaluated. The full texts of 240 eligible studies were reviewed, and 100 studies were ultimately included in this narrative review.

Results: Meta-analysis findings indicate that AI achieves high diagnostic accuracy in fracture detection (pooled sensitivity: 87-94%; pooled specificity: 91-96%). AI sensitivity was comparable to or slightly higher than that of human readers (92-96% vs. 81-88%). Prospective studies demonstrated that AI integration reduced reporting times by an average of 30-40% in emergency department settings and significantly improved diagnostic accuracy, particularly among inexperienced physicians. However, a substantial proportion of the studies were retrospective and single-center, and dataset heterogeneity limited generalizability.

Conclusion: AI algorithms approach human reader performance in detecting appendicular fractures and have the potential to improve clinical workflows. Current evidence supports positioning AI as a "decision support tool," with ultimate responsibility remaining with the physician. Future studies should focus on multi-center prospective validations, randomized controlled trials, and explainable AI (XAI) models.
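For readers less familiar with the metrics reported above, sensitivity and specificity are computed from confusion-matrix counts of a reader study. A minimal sketch follows; the counts used are hypothetical illustrations, not data from any study included in this review:

```python
def sensitivity(tp, fn):
    """Sensitivity (recall): proportion of true fractures the model detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Specificity: proportion of fracture-free radiographs correctly cleared."""
    return tn / (tn + fp)

# Hypothetical counts for illustration only (not from the reviewed studies):
tp, fn = 92, 8   # fractures detected vs. missed
tn, fp = 95, 5   # normals cleared vs. falsely flagged
print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.92
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.95
```

With these example counts, the resulting 92% sensitivity falls within the pooled AI range reported above (92-96%), which is the sense in which AI approaches or slightly exceeds human reader sensitivity (81-88%).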