Abstract
The rapid proliferation of Large Language Models (LLMs) has heralded a new era in artificial intelligence, with these models demonstrating remarkable capabilities in understanding, generating, and reasoning over human language. Their potential to revolutionize scientific discovery, particularly in chemistry, is immense. However, standalone LLMs are inherently limited by their reliance on static pre-training data, leading to issues such as factual hallucination, outdated knowledge, and a lack of transparency in their reasoning. Retrieval-Augmented Generation (RAG) has emerged as a powerful paradigm for mitigating these limitations by grounding LLM responses in external, up-to-date, and verifiable knowledge sources. This survey provides a comprehensive overview of the intersection of RAG and LLMs in the chemical sciences. We delve into the foundational concepts of LLMs and RAG, detail the architectures and methodologies required to handle diverse chemical data, and systematically review their applications across drug discovery, materials science, reaction prediction, and chemical literature mining. Furthermore, we critically examine the challenges, limitations, and ethical considerations inherent in deploying RAG-LLMs in chemistry. Finally, we discuss promising future directions, emphasizing the need for robust evaluation benchmarks and advanced multimodal RAG systems to unlock the full potential of these transformative technologies in accelerating chemical innovation.