Return on Experiment: Information-per-Cost Metrics for Self-Driving Chemical Laboratories

06 January 2026, Version 1
This content is an early or alternative research output and has not been peer-reviewed by Cambridge University Press at the time of posting.

Abstract

Self-driving laboratories (SDLs) integrate artificial intelligence with laboratory automation to accelerate chemical optimization through iterative experiment design, execution, and analysis. Current evaluation practices emphasize outcome-centric metrics such as best-achieved yield, number of experiments to reach a target, and Pareto-front advancement. However, these summaries do not explicitly quantify learning efficiency (how much model-relevant information is gained per unit resource), nor do they credit experiments that reduce uncertainty without improving observed outcomes, including negative or null results that delineate the boundaries of viable chemical space. We propose Return on Experiment (RoE), a resource-normalized metric that couples information return to total cost. RoE is defined as ΔI/C, where ΔI denotes an information-gain term quantified through reduction in posterior predictive uncertainty (operationalized as domain-averaged variance reduction, entropy decrease, or improvement in held-out predictive log-score), and C denotes audited resource costs spanning both experimental expenditures (time, reagents, monetary cost, environmental impact) and computational expenditures (token usage, compute hours). RoE is reported either as per-resource values preserving interpretability across cost dimensions, or as a scalar following explicit normalization and weighting protocols. We formalize standardized ΔI definitions with objective normalization and reference-set specifications enabling cross-study comparison, establish calibration reporting as a gating requirement for uncertainty-based RoE interpretation, and propose a minimum reporting standard encompassing selection criteria, resource boundaries, and multi-objective handling. Illustrative applications demonstrate that RoE discriminates optimization strategies masked by single-metric reporting, credits informative negative experiments, and preserves trade-off structure in multi-objective campaigns. RoE complements existing outcome metrics by making learning efficiency an explicit, auditable, and reportable quantity for autonomous chemistry evaluation.
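The scalar form of the metric described above can be sketched in a few lines. This is an illustrative example only, not the paper's implementation: all function names, reference-set values, cost categories, and weights below are hypothetical, ΔI is operationalized as domain-averaged posterior predictive variance reduction over a fixed reference set, and C as an explicitly weighted sum of audited per-resource costs.

```python
# Hypothetical sketch of a scalar Return on Experiment (RoE) = delta_I / C.
# delta_I: domain-averaged reduction in posterior predictive variance,
#          evaluated on a fixed reference set of candidate points.
# C:       weighted scalarization of audited experimental and
#          computational costs (weights are a reporting choice).

def delta_info(var_before, var_after):
    """Average variance reduction across the reference set."""
    assert len(var_before) == len(var_after)
    return sum(b - a for b, a in zip(var_before, var_after)) / len(var_before)

def total_cost(costs, weights):
    """Weighted sum of audited per-resource costs."""
    return sum(weights[k] * v for k, v in costs.items())

def roe(var_before, var_after, costs, weights):
    return delta_info(var_before, var_after) / total_cost(costs, weights)

# Hypothetical campaign step: model variances before/after one experiment,
# with made-up costs (instrument hours, reagent spend, compute hours).
var_before = [0.9, 0.8, 1.0, 0.7]
var_after = [0.5, 0.6, 0.4, 0.5]
costs = {"hours": 2.0, "reagent_usd": 15.0, "compute_hours": 0.5}
weights = {"hours": 1.0, "reagent_usd": 0.1, "compute_hours": 0.2}

print(round(roe(var_before, var_after, costs, weights), 4))  # prints 0.0972
```

Reporting per-resource values instead of this scalar simply means dividing the same ΔI by each cost component separately, avoiding the weight choice at the expense of a single comparable number.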

Keywords

Self-driving laboratories
Return on Experiment
Autonomous chemistry
Bayesian optimization
Uncertainty quantification
