Applied Sciences, Vol. 14, Pages 10323: Automated Assessment of Reporting Completeness in Orthodontic Research Using LLMs: An Observational Study
Applied Sciences doi: 10.3390/app142210323
Authors: Fahad Alharbi, Saeed Asiri
This study evaluated the usability of Large Language Models (LLMs), specifically ChatGPT, in assessing the completeness of reporting in orthodontic research abstracts. We focused on two key areas: randomized controlled trials (RCTs) and systematic reviews, using the CONSORT-A and PRISMA guidelines for evaluation. Twenty RCTs and twenty systematic reviews published between 2018 and 2022 in leading orthodontic journals were analyzed. The results indicated that ChatGPT achieved perfect agreement with human reviewers on several fundamental reporting items; however, significant discrepancies were noted in more complex areas, such as randomization and eligibility criteria. These findings suggest that while LLMs can enhance the efficiency of literature appraisal, they should be used in conjunction with human expertise to ensure a comprehensive evaluation. This study underscores the need for further refinement of LLMs to improve their performance in assessing research quality in orthodontics and other fields.
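The abstract reports item-level agreement between ChatGPT and human reviewers on CONSORT-A/PRISMA checklist items. A minimal sketch of how such agreement might be quantified is shown below; the binary ratings, item counts, and function names are illustrative assumptions, not the study's actual data or methods.

```python
# Hypothetical sketch of LLM-vs-human agreement on checklist items.
# All ratings below are invented for illustration (1 = item judged
# adequately reported, 0 = not reported).

def percent_agreement(llm, human):
    """Fraction of checklist items where LLM and human ratings match."""
    assert len(llm) == len(human) and llm
    matches = sum(a == b for a, b in zip(llm, human))
    return matches / len(llm)

def cohens_kappa(llm, human):
    """Chance-corrected agreement for binary reported/not-reported ratings."""
    n = len(llm)
    po = percent_agreement(llm, human)
    # Expected chance agreement from each rater's marginal rate of 1s.
    p_llm = sum(llm) / n
    p_hum = sum(human) / n
    pe = p_llm * p_hum + (1 - p_llm) * (1 - p_hum)
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

llm_ratings   = [1, 1, 0, 1, 0, 1, 1, 0]
human_ratings = [1, 1, 0, 1, 1, 1, 0, 0]

print(percent_agreement(llm_ratings, human_ratings))  # raw agreement
print(cohens_kappa(llm_ratings, human_ratings))       # chance-corrected
```

Percent agreement alone can look high on items that are almost always reported, which is why a chance-corrected statistic such as Cohen's kappa is commonly reported alongside it in appraisal studies of this kind.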