Background Large language models (LLMs), among the most recent advances in artificial intelligence (AI), have profoundly affected academic publishing and raised important ethical and practical concerns. This study examined the prevalence and content of AI guidelines in Korean medical journals to assess the current landscape and inform future policy implementation.
Methods The top 100 Korean medical journals, as determined by the Hirsch index (h-index), were surveyed. Author guidelines were collected and screened by a human researcher and an AI chatbot to identify AI-related content. The key components of LLM policies were extracted and compared across journals. Journal characteristics associated with the adoption of AI guidelines were also analyzed.
Results Only 18% of the surveyed journals had LLM guidelines, a rate much lower than previously reported for international journals. However, the adoption rate increased over time, reaching 57.1% in the first quarter of 2024. High-impact journals were more likely to have AI guidelines. All journals with LLM guidelines required authors to declare LLM tool use, and 94.4% prohibited AI authorship. The key policy components included emphasizing human responsibility (72.2%), discouraging AI-generated content (44.4%), and exempting basic AI tools (38.9%).
Conclusion While the adoption of LLM guidelines among Korean medical journals lags behind the global trend, implementation has clearly increased over time. The key components of these guidelines align with international standards, but greater standardization and collaboration are needed to ensure the responsible and ethical use of LLMs in medical research and writing.