AI as a patient educator: Evaluating ChatGPT’s role in disseminating information on herpes zoster ophthalmicus

Authors

Seçil Yigen İritaş, İlter İritaş
  • Seçil Yigen İritaş (Author) Department of Dermatology and Venereal Diseases, Dr. Lütfi Kırdar City Hospital, İstanbul, Türkiye https://orcid.org/0000-0003-4462-6526
  • İlter İritaş (Author) Department of Ophthalmology, Pendik State Hospital, İstanbul, Türkiye; Department of Ophthalmology, St. Mary’s Hospital, Isle of Wight NHS Trust, Newport, United Kingdom https://orcid.org/0000-0003-1789-4787
https://doi.org/10.18621/eurj.1616435
Objectives: This study evaluates the accuracy and quality of responses provided by the artificial intelligence model ChatGPT to questions about herpes zoster ophthalmicus (HZO). HZO is caused by involvement of the ophthalmic branch of the trigeminal nerve and can lead to severe ocular complications. Given the increasing use of artificial intelligence in healthcare, this study explores the capacity of ChatGPT to contribute to patient education on this disease.
Methods: Seven questions, selected by a dermatologist and an ophthalmologist from a list of twenty frequently asked questions about HZO, were posed to the ChatGPT-4.0 model. Each response was independently rated on a four-point scale as "excellent," "satisfactory with minimal explanation required," "satisfactory with moderate explanation required," or "unsatisfactory." The readability of the seven responses was assessed using the Flesch Reading Ease Score (FRES) and the Flesch-Kincaid Grade Level (FKGL).
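Both readability indices are fixed formulas over two ratios: average sentence length (words per sentence) and average syllables per word. A minimal sketch of the calculation (Python; the function takes pre-computed counts as input, since syllable counting itself is done by a separate, validated counter in readability tools):

```python
def flesch_scores(words: int, sentences: int, syllables: int) -> tuple[float, float]:
    """Return (FRES, FKGL) from raw text counts.

    FRES = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    """
    asl = words / sentences   # average sentence length
    asw = syllables / words   # average syllables per word
    fres = 206.835 - 1.015 * asl - 84.6 * asw
    fkgl = 0.39 * asl + 11.8 * asw - 15.59
    return fres, fkgl
```

For example, a passage with 100 words, 5 sentences, and 150 syllables yields a FRES of about 59.6 and an FKGL of about 9.9. Higher FRES means easier text, while FKGL maps directly onto US school grade levels, which is why the two indices move in opposite directions.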
Results: ChatGPT provided accurate and informative responses to all seven questions. Six responses were rated "excellent," and one was rated "satisfactory with minimal explanation required." Inter-rater reliability, calculated with Cohen's kappa, was 0.416 (95% confidence interval, 0.007-0.825). Readability analysis using the FRES and FKGL showed that the answers ranged from moderately difficult to challenging: FRES values ranged from 41.13 to 57.24, and FKGL scores from 9.8 to 13.3, corresponding to a high school to early college reading level.
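Cohen's kappa corrects the raw percent agreement between the two raters for the agreement expected by chance from each rater's marginal category frequencies. A minimal sketch of the point estimate (Python; the confidence interval reported above additionally requires a standard-error computation, omitted here):

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa for two raters over the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the chance agreement implied by each
    rater's marginal category frequencies.
    """
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Here the inputs would be the two raters' category labels for each of the seven responses. On the commonly used Landis-Koch benchmarks, the reported kappa of 0.416 falls in the "moderate agreement" band (0.41-0.60).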
Conclusions: ChatGPT demonstrated a strong capacity to respond effectively to patient questions about HZO, producing clear, detailed medical information suitable for readers with a high school to university education. Following prior review by dermatologists and ophthalmologists, it can serve as a complementary patient-education tool, particularly as a 24/7 resource for patients who have difficulty accessing healthcare services.
Keywords: Artificial intelligence, ChatGPT, herpes zoster ophthalmicus, patient education

1. Kunt MM, Karaca MA, Erbil B, Akpınar E. [Emergency Medicine and Artificial Intelligence]. Anatolian J Emerg Med. 2021;4(3):114-117. [Article in Turkish]

2. Şensoy E, Çıtırık M. [Impact of Language Variation English and Turkish on Artificial Intelligence Chatbot Performance in Oculofacial Plastic and Orbital Surgery: A Study of ChatGPT-3.5, Copilot, and Gemini]. Osmangazi J Med. 2024;46(5):781-786. doi: 10.20515/otd.1520495. [Article in Turkish]

3. Gaudiani MA, Castle JP, Abbas MJ, et al. ChatGPT-4 Generates More Accurate and Complete Responses to Common Patient Questions About Anterior Cruciate Ligament Reconstruction Than Google's Search Engine. Arthrosc Sports Med Rehabil. 2024;6(3):100939. doi: 10.1016/j.asmr.2024.100939.

4. Chen R, Zhang Y, Choi S, Nguyen D, Levin NA. The chatbots are coming: Risks and benefits of consumer-facing artificial intelligence in clinical dermatology. J Am Acad Dermatol. 2023;89(4):872-874. doi: 10.1016/j.jaad.2023.05.088.

5. Antaki F, Touma S, Milad D, El-Khoury J, Duval R. Evaluating the Performance of ChatGPT in Ophthalmology: An Analysis of Its Successes and Shortcomings. Ophthalmol Sci. 2023;3(4):100324. doi: 10.1016/j.xops.2023.100324.

6. Gross GE, Eisert L, Doerr HW, et al. S2k guidelines for the diagnosis and treatment of herpes zoster and postherpetic neuralgia. J Dtsch Dermatol Ges. 2020;18(1):55-78. doi: 10.1111/ddg.14013.

7. Patil A, Goldust M, Wollina U. Herpes zoster: A Review of Clinical Manifestations and Management. Viruses. 2022;14(2):192. doi: 10.3390/v14020192.

8. Mika AP, Martin JR, Engstrom SM, Polkowski GG, Wilson JM. Assessing ChatGPT Responses to Common Patient Questions Regarding Total Hip Arthroplasty. J Bone Joint Surg Am. 2023;105(19):1519-1526. doi: 10.2106/JBJS.23.00209.

9. Flesch R. A new readability yardstick. J Appl Psychol. 1948;32(3):221-233. doi: 10.1037/h0057532.

10. Kincaid JP, Fishburne RP, Rogers RL, Chissom BS. Derivation of new readability formulas (automated readability index, fog count, and flesch reading ease formula) for Navy enlisted personnel. Research Branch Report. 1975;8(75):1–37. doi: 10.21236/ADA006655.

11. Campbell DJ, Estephan LE, Mastrolonardo EV, Amin DR, Huntley CT, Boon MS. Evaluating ChatGPT responses on obstructive sleep apnea for patient education. J Clin Sleep Med. 2023;19(12):1989-1995. doi: 10.5664/jcsm.10728.

12. Lakdawala N, Channa L, Gronbeck C, et al. Assessing the Accuracy and Comprehensiveness of ChatGPT in Offering Clinical Guidance for Atopic Dermatitis and Acne Vulgaris. JMIR Dermatol. 2023;6:e50409. doi: 10.2196/50409.

13. Abdul Latheef EN, Pavithran K. Herpes zoster: a clinical study in 205 patients. Indian J Dermatol. 2011;56(5):529-532. doi: 10.4103/0019-5154.87148.

14. Zhou J, Li J, Ma L, Cao S. Zoster sine herpete: a review. Korean J Pain. 2020;33(3):208-215. doi: 10.3344/kjp.2020.33.3.208.

15. AlShehri Y, McConkey M, Lodhia P. ChatGPT Provides Satisfactory but Occasionally Inaccurate Answers to Common Patient Hip Arthroscopy Questions. Arthroscopy. 2025;41(5):1337-1347. doi: 10.1016/j.arthro.2024.06.017.

16. Şahin MF, Keleş A, Özcan R, et al. Evaluation of information accuracy and clarity: ChatGPT responses to the most frequently asked questions about premature ejaculation. Sex Med. 2024;12(3):qfae036. doi: 10.1093/sexmed/qfae036.

17. OpenAI. (n.d.). ChatGPT release notes. Retrieved January 8, 2025.

18. Campbell DJ, Estephan LE, Mastrolonardo EV, Amin DR, Huntley CT, Boon MS. Evaluating ChatGPT responses on obstructive sleep apnea for patient education. J Clin Sleep Med. 2023;19(12):1989-1995. doi: 10.5664/jcsm.10728.

19. Masalkhi M, Ong J, Waisberg E, Lee AG. Google DeepMind's gemini AI versus ChatGPT: a comparative analysis in ophthalmology. Eye (Lond). 2024;38(8):1412-1417. doi: 10.1038/s41433-024-02958-w.

Yigen İritaş S, İritaş İ. AI as a patient educator: Evaluating ChatGPT’s role in disseminating information on herpes zoster ophthalmicus. Eur Res J. 2025;11(4):769-775. doi: 10.18621/eurj.1616435.

Article Information

  • Article Type Research Article
  • Submitted February 21, 2026
  • Published July 3, 2025
  • Issue Vol. 11 No. 4 (2025)
  • Section Research Article