We consider the problem of finding the minimum of inhomogeneous Gaussian lattice sums: Given a lattice $L \subseteq \mathbb {R}^n$ and a positive constant $\alpha $, the goal is to find the minimizers of $\sum _{x \in L} e^{-\alpha \|x - z\|^2}$ over all $z \in \mathbb {R}^n$.
By a result of Bétermin and Petrache from 2017 it is known that for steep potential energy functions—when $\alpha $ tends to infinity—the minimizers in the limit are found at deep holes of the lattice. In this paper, we consider minimizers which already stabilize for all $\alpha \geq \alpha _0$ for some finite $\alpha _0$; we call these minimizers stable cold spots.
Generic lattices do not have stable cold spots. For several important lattices, like the root lattices, the Coxeter–Todd lattice, and the Barnes–Wall lattice, we show how to apply the linear programming bound for spherical designs to prove that the deep holes are stable cold spots. We also show, somewhat unexpectedly, that the Leech lattice does not have stable cold spots.
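As a concrete illustration (a minimal sketch, not from the paper), the objective $\sum_{x \in L} e^{-\alpha \|x - z\|^2}$ can be evaluated numerically for the integer lattice $\mathbb{Z}^2$, whose deep holes are the half-integer points such as $(1/2, 1/2)$; the truncation radius is an assumption chosen so the neglected tail is negligible:

```python
import math
from itertools import product

def gaussian_lattice_sum(z, alpha, radius=12):
    """Truncated Gaussian lattice sum over Z^2:
    sum_{x in Z^2, |x_i| <= radius} exp(-alpha * ||x - z||^2).
    For alpha >= 1 the tail beyond this radius is negligible."""
    return sum(
        math.exp(-alpha * ((x0 - z[0]) ** 2 + (x1 - z[1]) ** 2))
        for x0, x1 in product(range(-radius, radius + 1), repeat=2)
    )

alpha = 2.0
at_deep_hole = gaussian_lattice_sum((0.5, 0.5), alpha)      # deep hole of Z^2
at_lattice_point = gaussian_lattice_sum((0.0, 0.0), alpha)  # a lattice point
# The deep hole yields the smaller ("colder") value of the two.
```

For $\mathbb{Z}^2$ the sum factors over coordinates, so the minimizer can also be checked one dimension at a time; the point of the paper is that for general lattices such direct arguments fail and stronger tools (like the linear programming bound) are needed.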
A numerical method is proposed for a class of one-dimensional stochastic control problems with unbounded state space. This method solves an infinite-dimensional linear program, equivalent to the original formulation based on a stochastic differential equation, using a finite element approximation. The discretization scheme itself and the necessary assumptions are discussed, and a convergence argument for the method is presented. Its performance is illustrated by examples featuring long-term average and infinite horizon discounted costs, and additional optimization constraints.
The Bruss–Robertson–Steele (BRS) inequality bounds the expected number of items of random size that can be packed into a given suitcase. Remarkably, no independence assumptions are needed on the random sizes, which points to a simple explanation: as this note proves, the inequality is the integrated form of an $\omega$-by-$\omega$ inequality.
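For intuition about the quantity being bounded (a hypothetical helper, not from the note): for any realized sizes, the maximum number of items that fit in the suitcase is obtained by packing in increasing order of size, since only the count matters, not which items are chosen:

```python
def max_packed(sizes, capacity):
    """Largest number of items whose total size fits within `capacity`.
    Greedy smallest-first packing maximizes the count of packed items."""
    total, count = 0.0, 0
    for s in sorted(sizes):
        if total + s > capacity:
            break
        total += s
        count += 1
    return count

# Example: three of the four items fit into a suitcase of capacity 1.
assert max_packed([0.5, 0.2, 0.9, 0.1], 1.0) == 3
```

The BRS inequality bounds the expectation of this count when the sizes are random, and the note's point is that it does so realization by realization.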
We propose a variation of the pointwise residual method for solving primal and dual ill-posed linear programming problems with approximate data that are sensitive to small perturbations. The method leads to an auxiliary problem, which is itself a linear programming problem. Theorems on the existence and convergence of approximate solutions are established, and optimal estimates for the approximation of solutions to the initial problem are obtained.
In this paper a feasible direction method is presented for finding all efficient extreme points of a special class of multiple objective linear fractional programming problems in which all denominators are equal. The method is based on the conjugate gradient projection method: starting from a feasible point, it generates a sequence of feasible directions toward all efficient adjacent extreme points of the problem. Since methods based on vertex information may encounter difficulties as the problem size increases, we expect this method to be less sensitive to problem size. A simple production example is given to illustrate the method.
We propose a new primal-dual interior-point algorithm for linear optimization problems, based on a new kernel function. New search directions and proximity functions are derived from this kernel function. We show that the new algorithm attains, for large-update and small-update methods respectively, iteration bounds that are currently the best known for such methods.
In this paper, using the framework of self-regularity, we propose a hybrid adaptive algorithm for the linear optimization problem. If the current iterate is far from the central path, the algorithm employs a self-regular search direction; otherwise, the classical Newton search direction is employed. This feature of the algorithm allows us to prove a worst-case iteration bound. Our result matches the best iteration bound obtained by the pure self-regular approach and improves on the worst-case iteration bound of the classical algorithm.
Degeneracy occurs with increasing frequency in some large-scale linear programming problems, but with a simple change to the (revised) simplex method the resulting stalling of the algorithm can be reduced. The method introduced here may also prevent cycling, in which case neither the lexicographic refinement of Dantzig, Orden and Wolfe nor the perturbation technique of Charnes is required.
A method is proposed for driving degenerate feasible solutions to linear programming problems away from essential degeneracy and, in particular, for identifying essentially degenerate optimal solutions. An example of essentially degenerate cycling is also given, thereby answering a question raised earlier.