A Robust and Versatile Generative Model for Inverse Design of Polymers

17 November 2025, Version 1
This content is an early or alternative research output and has not been peer-reviewed by Cambridge University Press at the time of posting.

Abstract

Efficiently designing polymers to meet specific requirements can expedite their translation into practical applications and lower development costs. Although generative inverse design is more efficient than trial-and-error or forward prediction-and-screening strategies, the imperfect validity of current polymer generative models prevents their seamless integration into scientific discovery workflows. Moreover, their limited controllability, such as the inability to reliably generate polymers containing specific functional groups or belonging to specific classes, further constrains their practical utility. In this work, we integrate the robust Group SELFIES representation with the state-of-the-art polymer generator PolyTAO to generate 100% chemically valid polymer structures, removing a longstanding bottleneck in polymer design. Unlike previous polymer generation models, our model can generate, on demand, polymers that match specified chemical motifs, polymer classes, and target properties across an effectively unbounded chemical space. We further introduce a task-agnostic continuous pretraining strategy that combines physics-informed heuristics with reinforcement learning; this approach preserves strong generative performance on user-defined tasks even in low-data regimes. As a proof of concept, we rigorously validated the dielectric constants of 30 polyimides generated via controlled, on-demand design using first-principles calculations, finding deviations of less than 10% from their target values. Designed as a powerful backend engine for polymer inverse design, our model is deployment-ready and integrates seamlessly with high-throughput self-driving laboratories and industrial synthesis pipelines.
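The 100% validity claim rests on the representation itself: in SELFIES-style encodings, every possible token string decodes to a chemically valid structure, so a generative model cannot emit an invalid output no matter what it samples. A minimal stdlib-only sketch of this decode-anything principle, using hypothetical tokens and balanced brackets as a stand-in for a valid molecular graph (not the actual Group SELFIES grammar or the paper's implementation):

```python
import random

# Toy illustration of a "robust" representation: the decoder repairs or
# ignores tokens that would break structure, so *every* random token
# string maps to a valid (balanced) output. Token names are hypothetical.
TOKENS = ["[open]", "[close]", "[atom]"]

def decode(tokens):
    """Decode any token list into a balanced bracket string (always valid)."""
    out, depth = [], 0
    for t in tokens:
        if t == "[open]":
            out.append("(")
            depth += 1
        elif t == "[close]" and depth > 0:  # ignore closes with nothing open
            out.append(")")
            depth -= 1
        elif t == "[atom]":
            out.append("C")
    out.extend(")" * depth)                 # auto-close anything left open
    return "".join(out)

def is_valid(s):
    """Check that brackets in s are balanced and never close prematurely."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

random.seed(0)
for _ in range(1000):
    seq = [random.choice(TOKENS) for _ in range(random.randint(1, 20))]
    assert is_valid(decode(seq))            # 100% of random strings decode validly
```

The real Group SELFIES grammar applies the same idea at the level of chemical fragments (groups), which is what lets the paper's generator guarantee valid polymer repeat units by construction.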

Keywords

Polymer Design
Machine Learning
Generative Models
Group SELFIES
Polymer Representation Learning
Continuous Pretraining
Reinforcement Learning
