This article describes a formal proof of the Kepler conjecture on dense sphere packings in a combination of the HOL Light and Isabelle proof assistants. This paper constitutes the official published account of the now completed Flyspeck project.
We present a new approach to graph limit theory that unifies and generalizes the two most well-developed directions, namely dense graph limits (even the more general $L^p$ limits) and Benjamini–Schramm limits (even in the stronger local-global setting). We illustrate by examples that this new framework provides a rich limit theory with natural limit objects for graphs of intermediate density. Moreover, it provides a limit theory for bounded operators (called P-operators) of the form $L^\infty (\Omega )\to L^1(\Omega )$ for probability spaces $\Omega $. We introduce a metric to compare P-operators (for example, finite matrices) even if they act on different spaces. We prove a compactness result, which implies that, in appropriate norms, limits of uniformly bounded P-operators can again be represented by P-operators. We show that limits of operators, representing graphs, are self-adjoint, positivity-preserving P-operators called graphops. Graphons, $L^p$ graphons, and graphings (known from graph limit theory) are special examples of graphops. We describe a new point of view on random matrix theory using our operator limit framework.
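For orientation, the following shows how a finite graph yields a P-operator; the notation here is illustrative and not quoted from the paper.

```latex
% How a finite graph becomes a P-operator (illustrative notation).
% Let G have vertex set \{1,\dots,n\} with adjacency matrix A, and equip
% \Omega = \{1,\dots,n\} with the uniform probability measure. Then
\[
  (A_G f)(i) \;=\; \frac{1}{n} \sum_{j=1}^{n} A_{ij}\, f(j),
  \qquad f \in L^\infty(\Omega),
\]
% defines a bounded operator L^\infty(\Omega) \to L^1(\Omega)
% (indeed \|A_G f\|_1 \le \|f\|_\infty) that is self-adjoint and
% positivity-preserving: a graphop.
```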
This paper examines Euler characteristics and characteristic classes in the motivic setting. We establish a motivic version of the Becker–Gottlieb transfer, generalizing a construction of Hoyois. By computing the Euler characteristic of the scheme of maximal tori in a reductive group, we prove a generalized splitting principle for the reduction from $\operatorname{GL}_{n}$ or $\operatorname{SL}_{n}$ to the normalizer of a maximal torus (in characteristic zero). Ananyevskiy’s splitting principle reduces questions about characteristic classes of vector bundles in $\operatorname{SL}$-oriented, $\eta$-invertible theories to the case of rank-two bundles. We refine the torus-normalizer splitting principle for $\operatorname{SL}_{2}$ to help compute the characteristic classes in Witt cohomology of symmetric powers of a rank-two bundle, and then generalize this to develop a general calculus of characteristic classes with values in Witt cohomology.
Previous studies on emergency management of large-scale urban networks have commonly concentrated on developing systems that off-load intensive computations to remote cloud servers, or on improving communication quality during a disaster, while ignoring the energy consumption of vehicles, which can play a vital role in large-scale evacuation owing to disruptions in energy supply. Hence, in this paper we propose a cloud-enabled navigation system that directs vehicles to safe areas in the aftermath of a disaster in an energy- and time-efficient fashion. A G-network model is employed to capture the behaviors of, and interactions between, individual vehicles and the navigation system, and to analyze the effect of re-routing decisions on the vehicles. A gradient descent optimization algorithm gradually reduces the evacuation time and fuel consumption of vehicles by optimizing the probabilistic choices among linked road segments at each intersection. Re-routing decisions arrive at the intersections periodically and expire after a short period: when a vehicle reaches an intersection, it follows the latest re-routing decision if that decision has not expired; otherwise, it sticks to the shortest path to its destination. The experimental results indicate that the proposed algorithm reduces the evacuation time and the overall fuel consumption, especially when the number of evacuated vehicles is large.
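The intersection decision rule described above can be sketched as follows. This is a minimal illustration of the expiry logic only; the function and parameter names, and the advice time-to-live, are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the intersection decision rule: a vehicle
# follows the latest cloud re-routing advice only if it has not expired;
# otherwise it falls back to its shortest path. All names are
# illustrative assumptions.

ADVICE_TTL = 5.0  # advice expires after a short period (seconds, assumed)

def choose_next_segment(advice, now, shortest_path_segment):
    """advice: a (segment, issued_at) pair, or None if no advice arrived."""
    if advice is not None:
        segment, issued_at = advice
        if now - issued_at <= ADVICE_TTL:
            return segment           # fresh advice: follow it
    return shortest_path_segment     # expired or absent: shortest path
```

For example, advice issued at time 0 is followed at time 3 but ignored at time 10.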
This article addresses the inference of physics models from data, from the perspectives of inverse problems and model reduction. These fields develop formulations that integrate data into physics-based models while exploiting the fact that many mathematical models of natural and engineered systems exhibit an intrinsically low-dimensional solution manifold. In inverse problems, we seek to infer uncertain components of the inputs from observations of the outputs, while in model reduction we seek low-dimensional models that explicitly capture the salient features of the input–output map through approximation in a low-dimensional subspace. In both cases, the result is a predictive model that reflects data-driven learning yet deeply embeds the underlying physics, and thus can be used for design, control and decision-making, often with quantified uncertainties. We highlight recent developments in scalable and efficient algorithms for inverse problems and model reduction governed by large-scale models in the form of partial differential equations. Several illustrative applications to large-scale complex problems across different domains of science and engineering are provided.
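The low-dimensional-subspace idea above can be illustrated with a generic projection-based reduction, proper orthogonal decomposition (POD); this is a toy sketch of the general technique, not the article's specific algorithms, and the synthetic data are an assumption.

```python
import numpy as np

# Minimal POD sketch: snapshots of a full-order model are compressed
# with an SVD, and states are approximated in the span of the leading
# left singular vectors (the low-dimensional subspace).

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: 100-dimensional states that actually lie
# in a 3-dimensional subspace (an assumed toy example).
basis_true = rng.standard_normal((100, 3))
snapshots = basis_true @ rng.standard_normal((3, 50))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :3]                # POD basis: leading left singular vectors

x = snapshots[:, 0]
x_reduced = V.T @ x         # 3 reduced coordinates
x_approx = V @ x_reduced    # reconstruction in the subspace

rel_err = np.linalg.norm(x - x_approx) / np.linalg.norm(x)
```

Because the toy snapshots lie exactly in a 3-dimensional subspace, the relative reconstruction error is at the level of round-off.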
We analyze the hereditarily ordinal definable sets $\operatorname {HOD} $ in $M_n(x)[g]$ for a Turing cone of reals x, where $M_n(x)$ is the canonical inner model with n Woodin cardinals built over x and g is generic over $M_n(x)$ for the Lévy collapse up to its bottom inaccessible cardinal. We prove that assuming $\boldsymbol \Pi ^1_{n+2}$-determinacy, for a Turing cone of reals x, $\operatorname {HOD} ^{M_n(x)[g]} = M_n(\mathcal {M}_{\infty } | \kappa _{\infty }, \Lambda ),$ where $\mathcal {M}_{\infty }$ is a direct limit of iterates of $M_{n+1}$, $\delta _{\infty }$ is the least Woodin cardinal in $\mathcal {M}_{\infty }$, $\kappa _{\infty }$ is the least inaccessible cardinal in $\mathcal {M}_{\infty }$ above $\delta _{\infty }$, and $\Lambda $ is a partial iteration strategy for $\mathcal {M}_{\infty }$. It will also be shown that under the same hypothesis $\operatorname {HOD}^{M_n(x)[g]} $ satisfies $\operatorname {GCH} $.
Over the past few years, deep learning has risen to the foreground as a topic of massive interest, mainly as a result of successes obtained in solving large-scale image processing tasks. Applying deep learning involves multiple challenging mathematical problems: most deep learning methods require the solution of hard optimisation problems, and a good understanding of the trade-off between computational effort, amount of data and model complexity is required to successfully design a deep learning approach for a given problem. Much of the progress in deep learning has been based on heuristic exploration, but there is a growing effort to mathematically understand the structure in existing deep learning methods and to systematically design new methods that preserve certain types of structure. In this article, we review a number of these directions: some deep neural networks can be understood as discretisations of dynamical systems; neural networks can be designed to have desirable properties such as invertibility or group equivariance; and new algorithmic frameworks based on conformal Hamiltonian systems and Riemannian manifolds have been proposed to solve the optimisation problems. We conclude our review of each of these topics by discussing some open problems that we consider interesting directions for future research.
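The dynamical-systems viewpoint mentioned above can be made concrete: a chain of residual blocks $x_{k+1} = x_k + h\,f(x_k)$ is exactly a forward-Euler discretisation of the ODE $\dot{x} = f(x)$. The sketch below uses an assumed toy residual branch (a single tanh layer with random weights), not any architecture from the article.

```python
import numpy as np

# A stack of residual blocks as a forward-Euler discretisation of an
# ODE dx/dt = f(x). The vector field f (one tanh layer with fixed
# random weights) is an illustrative assumption.

rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((4, 4))

def f(x):
    # one residual branch (assumed toy architecture)
    return np.tanh(W @ x)

def resnet_forward(x0, depth, h):
    x = x0
    for _ in range(depth):
        x = x + h * f(x)   # residual block == one Euler step
    return x

x0 = rng.standard_normal(4)
out = resnet_forward(x0, depth=20, h=0.05)
```

Viewing depth as integration time suggests, for example, designing stable networks by constraining the vector field $f$, which is one of the structure-preserving ideas reviewed in the article.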
Building upon ideas of the second and third authors, we prove that at least $2^{(1-\varepsilon)(\log s)/(\log \log s)}$ values of the Riemann zeta function at odd integers between 3 and $s$ are irrational, where $\varepsilon$ is any positive real number and $s$ is large enough in terms of $\varepsilon$. This lower bound is asymptotically larger than any power of $\log s$; it improves on the bound $(1-\varepsilon)(\log s)/(1+\log 2)$ that follows from the Ball–Rivoal theorem. The proof is based on the construction of several linear forms in odd zeta values with related coefficients.
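A quick numerical illustration (ours, not from the paper) of how the two bounds compare: already at $s = 10^{100}$ the new bound exceeds $(\log s)^3$ and dwarfs the Ball–Rivoal bound, and for any fixed power of $\log s$ the crossover happens for $s$ large enough.

```python
import math

# Compare the two lower bounds from the abstract at a sample value of s.
# eps = 0.1 is an arbitrary choice of the positive real number epsilon.

def new_bound(s, eps=0.1):
    # 2**((1-eps) * log s / log log s)
    return 2.0 ** ((1 - eps) * math.log(s) / math.log(math.log(s)))

def ball_rivoal_bound(s, eps=0.1):
    # (1-eps) * log s / (1 + log 2)
    return (1 - eps) * math.log(s) / (1 + math.log(2))

s = 1e100
# new_bound(s) is on the order of 1e11, while ball_rivoal_bound(s) is
# only about 120 and (log s)**3 is about 1.2e7.
```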
It is proved that if $\varphi \colon A\to B$ is a local homomorphism of commutative noetherian local rings and N is a nonzero finitely generated B-module whose flat dimension over A is at most $\operatorname {edim} A - \operatorname {edim} B$, then N is free over B and $\varphi $ is a special type of complete intersection. This result is motivated by a ‘patching method’ developed by Taylor and Wiles and a conjecture of de Smit, proved by the first author, dealing with the special case when N is flat over A.
Ergodic optimization is the study of problems relating to maximizing orbits, maximizing invariant measures, and maximum ergodic averages. An orbit of a dynamical system is called $f$-maximizing if the time average of the real-valued function $f$ along the orbit is larger than along all other orbits, and an invariant probability measure is called $f$-maximizing if it gives $f$ a larger space average than any other invariant probability measure. In this paper, we consider the main strands of ergodic optimization, beginning with an influential model problem, and the interpretation of ergodic optimization as the zero-temperature limit of thermodynamic formalism. We describe typical properties of maximizing measures for various spaces of functions, the key tool of adding a coboundary so as to reveal properties of these measures, and certain classes of functions for which the maximizing measure is known to be Sturmian.
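The maximizing objects described above can be formalised as follows; the notation here is standard in the field but not quoted from the paper.

```latex
% Standard formalisation of the maximizing objects (notation assumed).
% For a continuous map T : X \to X on a compact metric space and a
% continuous f : X \to \mathbb{R}, the maximum ergodic average is
\[
  \beta(f) \;=\; \sup_{\mu \in \mathcal{M}_T} \int f \, d\mu ,
\]
% where \mathcal{M}_T is the set of T-invariant Borel probability
% measures. A measure \mu is f-maximizing when \int f\,d\mu = \beta(f),
% and the orbit of x is f-maximizing when
\[
  \lim_{n\to\infty} \frac{1}{n} \sum_{i=0}^{n-1} f(T^{i} x) \;=\; \beta(f).
\]
```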