Abstract
Self-driving laboratories (SDLs) integrate artificial intelligence with laboratory automation to accelerate chemical optimization through iterative experiment design, execution, and analysis. Current evaluation practices emphasize outcome-centric metrics such as best-achieved yield, number of experiments to reach a target, and Pareto-front advancement. However, these summaries do not explicitly quantify learning efficiency—how much model-relevant information is gained per unit resource—nor do they credit experiments that reduce uncertainty without improving observed outcomes, including negative or null results that delineate the boundaries of viable chemical space.
We propose Return on Experiment (RoE), a resource-normalized metric that couples information return to total cost. RoE is defined as ΔI/C, where ΔI denotes an information-gain term quantified through reduction in posterior predictive uncertainty (operationalized as domain-averaged variance reduction, entropy decrease, or improvement in held-out predictive log-score), and C denotes audited resource costs spanning both experimental expenditures (time, reagents, monetary cost, environmental impact) and computational expenditures (token usage, compute hours). RoE is reported either as per-resource values, which preserve interpretability across cost dimensions, or as a single scalar obtained through explicit normalization and weighting protocols.
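As a concrete illustration, the sketch below computes RoE with ΔI operationalized as domain-averaged variance reduction over a fixed reference set. The function name, the dictionary-based cost accounting, and the optional weighting scheme are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

def roe_variance_reduction(var_before, var_after, costs, weights=None):
    """Return on Experiment as domain-averaged variance reduction per unit cost.

    var_before, var_after: posterior predictive variances of a surrogate
        model over a fixed reference set of candidate conditions, evaluated
        before and after the new batch of experiments (hypothetical inputs).
    costs: audited resource costs for the batch, one entry per dimension,
        e.g. {"hours": 3.0, "reagent_usd": 120.0, "gpu_hours": 0.5}.
    weights: optional mapping from each cost dimension to a weight on a
        common scale; if omitted, per-resource RoE values are returned
        instead of a single scalar.
    """
    # ΔI: domain-averaged reduction in posterior predictive variance.
    delta_i = float(np.mean(var_before) - np.mean(var_after))
    if weights is None:
        # Per-resource reporting: one RoE value per cost dimension.
        return {k: delta_i / v for k, v in costs.items()}
    # Scalar reporting: explicit weights collapse costs to one denominator.
    total_cost = sum(weights[k] * v for k, v in costs.items())
    return delta_i / total_cost
```

Evaluating ΔI against a fixed, pre-registered reference set, rather than whatever candidate pool remains mid-campaign, is what allows the resulting values to be compared across studies.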
We formalize standardized ΔI definitions with objective normalization and reference-set specifications that enable cross-study comparison, establish calibration reporting as a gating requirement for uncertainty-based RoE interpretation, and propose a minimum reporting standard encompassing selection criteria, resource boundaries, and multi-objective handling. Illustrative applications demonstrate that RoE discriminates between optimization strategies whose differences are masked by single-metric reporting, credits informative negative experiments, and preserves trade-off structure in multi-objective campaigns. RoE complements existing outcome metrics by making learning efficiency an explicit, auditable, and reportable quantity for autonomous chemistry evaluation.
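To make the calibration gate concrete, a minimal sketch of one such check follows; the central-interval coverage diagnostic, the 95% nominal level, and the function name are assumptions, and the paper's reporting standard may specify other diagnostics.

```python
import numpy as np

def interval_coverage(y_true, mean, std, z=1.96):
    """Empirical coverage of nominal 95% central predictive intervals.

    Computed on held-out experiments. If empirical coverage diverges
    badly from the nominal level, the model's uncertainties are
    miscalibrated, and variance-based ΔI (hence RoE) values should not
    be interpreted or compared across models.
    """
    lower = mean - z * std
    upper = mean + z * std
    return float(np.mean((y_true >= lower) & (y_true <= upper)))
```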