Distilling Knowledge from Catalysis Literature with Long-Context LLM Agents

05 September 2025, Version 2
This content is an early or alternative research output and has not been peer-reviewed by Cambridge University Press at the time of posting.

Abstract

Decades of catalysis knowledge remain locked in unstructured prose, hindering data-driven discovery. Existing text-mining tools struggle to establish the synthesis-structure-performance relationships critical for catalyst knowledge discovery, as they rarely connect synthesis protocols in one section with the resulting material properties and performance outcomes reported elsewhere. Here, we present CATDA (Corpus-aware Automated Text-to-Graph Catalyst Discovery Agent), a long-context large language model (LLM)-driven agentic framework that reads full documents and distills them into actionable, provenance-tracked knowledge graphs linking material properties, multi-step synthesis protocols, reaction conditions, and testing outcomes. Applied at corpus scale, CATDA extracts data with near-human fidelity (F1 = 0.983) and a 12-fold speedup over manual curation. This structured knowledge is made accessible through two synergistic applications: a DatasetAgent for exporting machine-learning-ready tables, and a CatAgent providing a conversational, citation-linked interface for interactive discovery. The resulting high-quality dataset enabled the training of a predictive model for ethylbenzene conversion, while simultaneously exposing systemic challenges in the source literature, such as feature sparsity and protocol heterogeneity. By transforming the literature into a queryable and computable resource, CATDA offers a scalable route to accelerating large-scale data analysis, quantitative modeling, and rational catalyst design.
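To make the abstract's data model concrete, the sketch below shows one way a provenance-tracked extraction record linking synthesis steps, conditions, and performance could be represented and then flattened into a machine-learning-ready table. This is an illustrative assumption only: CATDA's actual schema and the DatasetAgent API are not described here, and all class and field names (CatalystRecord, SynthesisStep, PerformanceResult, Provenance, to_ml_table) are hypothetical.

```python
# Hypothetical sketch of a provenance-tracked catalyst record and its export
# to a flat, ML-ready table. Names and fields are illustrative, not CATDA's API.
from dataclasses import dataclass, field, asdict
from typing import Optional
import pandas as pd


@dataclass
class Provenance:
    doi: str        # source article identifier
    section: str    # where the fact was stated, e.g. "Experimental"
    sentence: str   # verbatim supporting text for citation linking


@dataclass
class SynthesisStep:
    method: str                          # e.g. "incipient wetness impregnation"
    temperature_c: Optional[float] = None
    duration_h: Optional[float] = None
    provenance: Optional[Provenance] = None


@dataclass
class PerformanceResult:
    reaction: str                        # e.g. "ethylbenzene dehydrogenation"
    metric: str                          # e.g. "conversion"
    value_pct: float
    conditions: dict = field(default_factory=dict)
    provenance: Optional[Provenance] = None


@dataclass
class CatalystRecord:
    material: str
    composition: dict                    # element -> weight fraction
    synthesis: list = field(default_factory=list)    # list of SynthesisStep
    performance: list = field(default_factory=list)  # list of PerformanceResult


def to_ml_table(records):
    """Flatten graph-like records into one row per performance measurement,
    mimicking what a DatasetAgent-style export might produce."""
    rows = []
    for rec in records:
        for perf in rec.performance:
            rows.append({
                "material": rec.material,
                **{f"wt_{el}": frac for el, frac in rec.composition.items()},
                "n_synthesis_steps": len(rec.synthesis),
                "metric": perf.metric,
                "value_pct": perf.value_pct,
                "source_doi": perf.provenance.doi if perf.provenance else None,
            })
    return pd.DataFrame(rows)
```

Keeping a Provenance object on every extracted field is what makes each table cell traceable back to a sentence in the source article, and flattening only at export time preserves the graph structure for interactive querying.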

Keywords

large language model
literature mining
knowledge graph
dataset extraction

Supplementary materials

Supplementary Information: Supplementary Information for Distilling Knowledge from Catalysis Literature with Long-Context LLM Agents
