RT Journal Article
SR Electronic
T1 A Review of the Opportunities and Challenges with Large Language Models in Radiology: The Road Ahead
JF American Journal of Neuroradiology
JO Am. J. Neuroradiol.
FD American Society of Neuroradiology
SP 1292
OP 1299
DO 10.3174/ajnr.A8589
VO 46
IS 7
A1 Soni, Neetu
A1 Ora, Manish
A1 Agarwal, Amit
A1 Yang, Tianbao
A1 Bathla, Girish
YR 2025
UL http://www.ajnr.org/content/46/7/1292.abstract
AB SUMMARY: In recent years, generative artificial intelligence (AI), particularly large language models (LLMs) and their multimodal counterparts, multimodal large language models (including vision language models), have generated considerable interest in the global AI discourse. LLMs, or pre-trained language models (such as ChatGPT, Med-PaLM, and LLaMA), are neural network architectures trained on extensive text data that excel at language comprehension and generation. Multimodal LLMs, a subset of foundation models, are trained on multimodal data sets that integrate text with another modality, such as images, to better learn universal representations akin to human cognition. This versatility enables them to excel in tasks such as chatbots, translation, and creative writing while facilitating knowledge sharing through transfer learning, federated learning, and synthetic data creation. Several of these models have potentially appealing applications in the medical domain, including, but not limited to, enhancing patient care by processing patient data; summarizing reports and relevant literature; providing diagnostic, treatment, and follow-up recommendations; and performing ancillary tasks such as coding and billing. As radiologists enter this promising but uncharted territory, it is imperative that they be familiar with the basic terminology and processes of LLMs. Herein, we present an overview of LLMs and their potential applications and challenges in the imaging domain. Abbreviations: AI = artificial intelligence; BERT = bidirectional encoder representations from transformers; CLIP = contrastive language-image pre-training; FM = foundation model; GPT = generative pre-trained transformer; LLM = large language model; NLP = natural language processing; PLM = pre-trained language model; RAG = retrieval augmented generation; SAM = segment anything model; VLM = vision language model