Dear Editor,
I read Ahmet Yigitbay's intriguing work on the artificial intelligence tool ChatGPT. The widespread use and accessibility of artificial intelligence tools will certainly affect medical education. Currently, many medical students use artificial intelligence tools that convert lecture voice recordings into text and images instead of taking notes (1). I believe that in the near future, artificial intelligence will have a place in all areas of the educational process, such as learning, teaching, and evaluation. The fact that ChatGPT was not as successful as humans in answering the questions in Yigitbay's study suggests that these tools have not yet matured enough for widespread use.
In the methods section of the study, the author states that ChatGPT was asked to answer the TOTEK exam questions from 2019 to 2023. It is understood that the questions were posed to ChatGPT in 2024, the year the study was conducted. However, the year-by-year analysis in the results section reads as if these questions had been asked to ChatGPT in each year between 2019 and 2023. This is particularly evident in the third paragraph of the Results, whose last sentence states that this variability may stem from changes in the datasets used to train the model, updates to the model itself, or differences in the complexity of the exam questions across the years (2). Since all of the questions were posed to ChatGPT at the same time (in 2024), changes in the model's training data cannot account for this variability.
Financial Disclosure: This study received no financial support.