Abstract
Machine learning (ML) models have been widely used as efficient surrogates to predict adsorption in metal-organic frameworks (MOFs) for gas storage, chemical separations, and catalysis applications. The "black box" nature of these ML models, however, remains a significant barrier between prediction and the design of novel MOFs. In this work, we introduce PoroNet, an intrinsically interpretable graph neural network architecture built on a graph representation of the pore network (i.e., the pore graph), in which nodes represent individual pores and edges represent pore connections. PoroNet accurately predicts hydrogen (H2) uptake and deliverable capacity in MOFs, and it performs well on a benchmark simulated adsorption dataset containing spherical and linear alkane adsorbates. More importantly, PoroNet learns accurate pore-level contributions to total adsorption, either through direct supervised learning or as latent representations obtained while fitting the total adsorption capacity. In direct supervised learning with explicit pore-level labels, we show that the PoroNet architecture is data-efficient, achieving performance comparable to the standard approach with only a fraction of the simulation runs needed for model training. The pore-level contributions help interpret the ML predictions of total adsorption behavior, identify the key pore properties that govern adsorption, and provide significant insights into pore engineering. We demonstrate that PoroNet is a powerful tool for high-throughput pore screening and for deriving valuable MOF design rules for cryogenic H2 storage applications. Lastly, we discuss the potential of leveraging interpretable ML for scalable scientific and materials discovery.
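To illustrate the pore-graph idea described above, the sketch below (not the authors' implementation) shows a minimal graph neural network in which each node is a pore, messages are passed along pore connections, and a per-node readout yields non-negative pore-level uptake contributions whose sum gives the total adsorption. The feature choices, layer sizes, and message-passing scheme are assumptions for illustration only.

```python
# Minimal sketch (not PoroNet itself): a pore-graph GNN whose node-level
# readout gives per-pore uptake contributions that sum to the total uptake.
# Node features, layer sizes, and the aggregation scheme are assumptions.
import torch
import torch.nn as nn


class PoreGNN(nn.Module):
    def __init__(self, n_pore_features: int = 4, hidden: int = 32, n_layers: int = 3):
        super().__init__()
        self.embed = nn.Linear(n_pore_features, hidden)
        # Simple sum-aggregation message-passing layers over the pore graph.
        self.layers = nn.ModuleList(
            [nn.Linear(2 * hidden, hidden) for _ in range(n_layers)]
        )
        # Per-pore readout: one non-negative uptake contribution per node.
        self.readout = nn.Sequential(nn.Linear(hidden, 1), nn.Softplus())

    def forward(self, x, edge_index):
        # x: (n_pores, n_pore_features), e.g. pore diameter, volume, surface area.
        # edge_index: (2, n_edges) pairs of connected pores (both directions listed).
        h = torch.relu(self.embed(x))
        src, dst = edge_index
        for layer in self.layers:
            # Sum neighbor messages into each destination pore, then update.
            agg = torch.zeros_like(h).index_add_(0, dst, h[src])
            h = torch.relu(layer(torch.cat([h, agg], dim=-1)))
        pore_contrib = self.readout(h).squeeze(-1)   # interpretable per-pore terms
        return pore_contrib, pore_contrib.sum()      # pore split + total uptake


# Toy usage: three pores in a chain (pore 1 connects pores 0 and 2).
x = torch.rand(3, 4)
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
model = PoreGNN()
per_pore, total = model(x, edge_index)
# Training only on the total uptake still leaves per-pore terms to inspect.
```

In such a setup, supervising on pore-level labels corresponds to fitting `per_pore` directly, whereas supervising on the total capacity treats the pore-level terms as latent contributions, consistent with the two training modes discussed in the abstract.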
Supplementary materials
Supporting Information: Details on GCMC simulations, the PoroNet architecture, predictions using the PoroNet and PoroNet-Base models, data efficiency, high-throughput pore screening, and a method benchmark for extracting pore-level data.