PT - JOURNAL ARTICLE
AU - Soni, Neetu
AU - Ora, Manish
AU - Agarwal, Amit
AU - Yang, Tianbao
AU - Bathla, Girish
TI - A Review of the Opportunities and Challenges with Large Language Models in Radiology: The Road Ahead
AID - 10.3174/ajnr.A8589
DP - 2025 Jul 01
TA - American Journal of Neuroradiology
PG - 1292-1299
VI - 46
IP - 7
4099 - http://www.ajnr.org/content/46/7/1292.short
4100 - http://www.ajnr.org/content/46/7/1292.full
SO - Am. J. Neuroradiol. 2025 Jul 01; 46
AB - SUMMARY: In recent years, generative artificial intelligence (AI), particularly large language models (LLMs) and their multimodal counterparts, including vision language models, has generated considerable interest in the global AI discourse. LLMs, or pre-trained language models (such as ChatGPT, Med-PaLM, and LLaMA), are neural network architectures trained on extensive text data that excel in language comprehension and generation. Multimodal LLMs, a subset of foundation models, are trained on multimodal data sets that integrate text with another modality, such as images, to better learn universal representations akin to human cognition. This versatility enables them to excel in tasks like chatbots, translation, and creative writing, while facilitating knowledge sharing through transfer learning, federated learning, and synthetic data creation. Several of these models may have appealing applications in the medical domain, including, but not limited to, enhancing patient care by processing patient data; summarizing reports and relevant literature; providing diagnostic, treatment, and follow-up recommendations; and performing ancillary tasks like coding and billing. As radiologists enter this promising but uncharted territory, it is imperative for them to be familiar with the basic terminology and processes of LLMs.
Herein, we present an overview of LLMs and their potential applications and challenges in the imaging domain.

Abbreviations: AI = artificial intelligence; BERT = bidirectional encoder representations from transformers; CLIP = contrastive language-image pre-training; FM = foundation model; GPT = generative pre-trained transformer; LLM = large language model; NLP = natural language processing; PLM = pre-trained language model; RAG = retrieval augmented generation; SAM = segment anything model; VLM = vision language model