Background: Large language models (LLMs), among the most recent advances in artificial intelligence (AI), have profoundly affected academic publishing and raised important ethical and practical concerns. This study examined the prevalence and content of AI guidelines in Korean medical journals to assess the current landscape and inform future policy implementation.
Methods: The top 100 Korean medical journals, as determined by the Hirsch index, were surveyed. Author guidelines were collected and screened by a human researcher and an AI chatbot to identify AI-related content. The key components of LLM policies were extracted and compared across journals. Journal characteristics associated with the adoption of AI guidelines were also analyzed.
Results: Only 18% of the surveyed journals had LLM guidelines, a rate much lower than that previously reported for international journals. However, adoption increased over time, reaching 57.1% in the first quarter of 2024. High-impact journals were more likely to have AI guidelines. All journals with LLM guidelines required authors to declare the use of LLM tools, and 94.4% prohibited AI authorship. Other key policy components included emphasizing human responsibility (72.2%), discouraging AI-generated content (44.4%), and exempting basic AI tools (38.9%).
Conclusion: Although the adoption of LLM guidelines among Korean medical journals lags behind the global trend, implementation has clearly increased over time. The key components of these guidelines align with international standards, but greater standardization and collaboration are needed to ensure the responsible and ethical use of LLMs in medical research and writing.
Citations to this article, as recorded by Crossref:
- Teixeira da Silva JA, Wang J. Sense and sensibility of article submission platforms are needed regarding verification of AI use: a stakeholders' perspective. AI and Ethics. 2025;5(6):6127.
- Rhee I. Large language models (LLMs) in political science research: analysis of topical trends and usage patterns. The Korean Journal of International Relations. 2025;65(3):257.
- Biswas R, Mukhopadhyay A, Mukhopadhyay S. Performance of large language models in fluoride-related dental knowledge: a comparative evaluation study of ChatGPT-4, Claude 3.5 Sonnet, Copilot, and Grok 3. Journal of Yeungnam Medical Science. 2025;42:53.
- Huh S. Role of medical editors in the age of generative artificial intelligence. Healthcare Informatics Research. 2025;31(4):317.
- Chang MC. What should researchers do in the era of artificial intelligence? Journal of Yeungnam Medical Science. 2025;43:2.