
Numerical simulation, clustering, and prediction of multicomponent polymer precipitation

Published online by Cambridge University Press:  17 November 2020

Pavan Inguva
Affiliation:
Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA. Department of Chemical Engineering, Imperial College London, London SW7 2AZ, United Kingdom.
Lachlan R. Mason
Affiliation:
Data Centric Engineering Program, The Alan Turing Institute, London NW1 2DB, United Kingdom.
Indranil Pan
Affiliation:
Data Centric Engineering Program, The Alan Turing Institute, London NW1 2DB, United Kingdom. Centre for Environmental Policy, Imperial College London, London SW7 2AZ, United Kingdom.
Miselle Hengardi
Affiliation:
Department of Chemical Engineering, Imperial College London, London SW7 2AZ, United Kingdom.
Omar K. Matar*
Affiliation:
Department of Chemical Engineering, Imperial College London, London SW7 2AZ, United Kingdom.
*
*Corresponding author. E-mail: o.matar@imperial.ac.uk

Abstract

Multicomponent polymer systems are of interest in organic photovoltaic and drug delivery applications, among other areas in which diverse morphologies influence performance. An improved understanding of morphology classification, driven by composition-informed prediction tools, will aid polymer engineering practice. We use a modified Cahn–Hilliard model to simulate polymer precipitation. Such physics-based models require high-performance computations that prevent rapid prototyping and iteration in engineering settings. To reduce the required computational costs, we apply machine learning (ML) techniques, in conjunction with the simulations, to cluster and subsequently predict the simulated polymer-blend images. Integrating ML and simulations in this manner reduces the number of simulations needed to map out the morphology of polymer blends as a function of input parameters, and also generates a data set that others can use to this end. We explore dimensionality reduction, via principal component analysis and autoencoder techniques, and analyze the resulting morphology clusters. Supervised ML using Gaussian process classification was subsequently used to predict morphology clusters from species molar fraction and interaction parameter inputs. Manual pattern clustering yielded the best results, but ML techniques were able to predict the morphology of polymer blends with ≥90% accuracy.
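The pipeline the abstract describes (dimensionality reduction, clustering of the reduced representation, then supervised prediction of cluster labels from simulation inputs) can be sketched end to end with scikit-learn. Everything below is a minimal illustration on synthetic stand-in data: the array sizes, the input ordering, and the random features are assumptions for demonstration, not the authors' code or data set.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessClassifier

rng = np.random.default_rng(0)

# Stand-in "morphology images": 60 samples of 32x32 fields, flattened.
images = rng.random((60, 32 * 32))

# 1) Dimensionality reduction: retain 2 principal components.
pcs = PCA(n_components=2).fit_transform(images)

# 2) Unsupervised clustering of the reduced representation
#    (6 clusters, matching the k-means configuration of Figure 5).
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(pcs)

# 3) Supervised prediction: map simulation inputs, e.g.
#    (a0, b0, chi_ij, chi_ik, chi_jk), to the morphology cluster label.
inputs = rng.random((60, 5))
clf = GaussianProcessClassifier(random_state=0).fit(inputs, labels)
pred = clf.predict(inputs)
print(pred.shape)  # (60,)
```

In practice the classifier would be trained on one subset of simulations and evaluated on a held-out subset; the in-sample prediction here only shows the data flow.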

Information

Type
Research Article
Creative Commons
Creative Commons License: CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2020. Published by Cambridge University Press

Figure 1. Workflow for integrating the physics-based simulation set with machine learning (ML) dimensionality reduction, clustering, and prediction algorithms.


Table 1. Different states that a simulation could take. Images from states 1 to 3b are used in the data set.


Figure 2. Representative solutions for each Gibbs energy state. The state ID for each line is annotated next to the line for reference.


Figure 3. Dimensionality reduction and clustering results. Red: method unable to yield useful results; yellow: method yields results of some significance but remains inadequate; green: method that yielded the best results.


Figure 4. Captured variance and optimal cluster number: the optimal number of clusters remained between 5 and 6 independent of the number of principal components (PCs) retained.
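The selection behind this caption, an optimal cluster count determined for each number of retained principal components, can be sketched as follows. The paper's exact selection criterion is not stated in this caption, so the silhouette score is used here as one common stand-in, again on synthetic stand-in data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
images = rng.random((60, 32 * 32))  # stand-in flattened images


def best_k(data, k_range=range(2, 9)):
    """Return the cluster count with the highest silhouette score."""
    scores = {
        k: silhouette_score(
            data,
            KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data),
        )
        for k in k_range
    }
    return max(scores, key=scores.get)


# Repeat the search for several numbers of retained components to check
# whether the preferred cluster count is stable, as Figure 4 reports.
for n_pc in (2, 5, 10):
    reduced = PCA(n_components=n_pc).fit_transform(images)
    print(n_pc, best_k(reduced))
```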


Figure 5. Principal component analysis (PCA) dimensionality reduction (2 principal components retained) with $ k $-means clustering (6 clusters). Sample images from each cluster are shown.


Figure 6. Number of clusters as a function of number of embedding dimensions and perplexity. Configurations with 4, 6, and 9 embedding dimensions have been omitted for clarity. The variance in the optimal number of clusters is shown in parentheses below the $ x $-axis. The variance generally increases with the number of embedding dimensions.


Figure 7. t-distributed stochastic neighbor embedding (t-SNE) results in two dimensions (2D) with perplexity 30 and 5 clusters. Sample images from each cluster are shown. The clustering performance is similar to the results from using principal component analysis (PCA). There is a variety of different morphologies present in each cluster, but the species of the continuous phase is comparatively consistent.
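The t-SNE configuration described in this caption (a 2D embedding at perplexity 30, followed by k-means with 5 clusters) can be sketched with scikit-learn. The synthetic input below is an assumed stand-in for the flattened simulation images; note that t-SNE requires the perplexity to be smaller than the number of samples.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
images = rng.random((60, 32 * 32))  # stand-in flattened images

# 2D t-SNE embedding at perplexity 30 (perplexity < 60 samples).
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(images)

# Cluster the embedded points into 5 groups, as in Figure 7.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(emb)
```

Unlike PCA, t-SNE is non-parametric here: the embedding cannot be reused to project new simulations without refitting, which is one reason a separate supervised classifier is needed for prediction.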


Table 2. Autoencoder performance for each architecture. Results for Dense–2 and Conv–Dense architectures have been omitted due to similarity with other Dense autoencoders. Dense and Conv–Dense autoencoders were observed to have lower accuracies and produce poorer image reconstructions than Conv autoencoders.


Figure 8. $ k $-means clustering on Conv–4 (4 filters) embedding (a plane of $ k=4 $ clusters is shown for reference). Performing $ k $-means clustering directly on the Conv autoencoder bottleneck values consistently resulted in 4–6 clusters.


Figure 9. (a) Reduced datapoints labeled by initial composition: each group of constant $ \left({a}_0,{b}_0\right) $ contains multiple simulations with varying $ {\chi}_{ij} $ (half of the cluster labels are omitted for clarity); (b) affinity propagation clustering on Conv–4 (4 filters) with t-distributed stochastic neighbor embedding (t-SNE); (c) manual clustering on Conv–4 (4 filters) with t-SNE embedding: applying t-SNE to the bottleneck values of Conv autoencoders arranged the datapoints based on initial composition. Affinity propagation yielded $ k=21 $ clusters, while manual clustering following the trend yielded $ k=24 $ clusters.


Figure 10. Prediction of blend morphology for (a) $ {\chi}_{ij}={\chi}_{ik}={\chi}_{jk}=0.003 $ and (b) $ {\chi}_{ij}={\chi}_{jk}=0.006,{\chi}_{ik}=0.003 $.

Supplementary material: PDF

Inguva et al. supplementary material

Download Inguva et al. supplementary material (PDF, 4.7 MB)