Abstract
Molecular representation learning (MRL) has recently emerged as a fundamental domain in cheminformatics. It aims to replace traditional handcrafted molecular descriptors with machine-learned representations derived from raw chemical data. This survey presents a comprehensive overview of MRL approaches, tracing the evolution from unimodal methods—such as graph-, string-, and image-based encoders—to recent multimodal frameworks that integrate several molecular data types, including structural, textual, and experimental inputs. We categorize existing multimodal methodologies by their integration strategies—alignment, translation, and fusion—and examine their training strategies. These models are discussed in light of the emerging concept of chemical foundation models, which seek to unify multiple chemical modalities through large-scale self-supervised learning, thereby enabling robust, transferable representations applicable across a wide range of chemical tasks. We conclude by identifying the defining characteristics of chemical foundation models, reviewing recent efforts in this developing field, and outlining future directions toward a universal chemical foundation model.