This chapter gives a “look under the hood” at the algorithm that actually lets us perform computations over a polynomial ring. In order to work with polynomials, we need to be able to answer the ideal membership question. For example, there is no chance of writing down a minimal free resolution if we cannot even find a minimal set of generators for an ideal. How might we do this? If R = k[x], then the Euclidean algorithm allows us to solve the problem. What makes things work is that there is an invariant (degree) and a process which reduces the invariant. Then ideal membership can be decided by the division algorithm. When we run the univariate division algorithm, we “divide into” the initial (or lead) term. In the multivariate case we'll have to come up with some notion of initial term: for example, what is the initial term of x²y + y²x? It turns out that this means we have to produce an ordering of the monomials of R = k[x₁, …, xₙ]. This is pretty straightforward. Unfortunately, we will find that even once we have a division algorithm in place, we still cannot solve the question of ideal membership. The missing piece is a multivariate analog of the Euclidean algorithm, which gave us a good set of generators (one!) in the univariate case. But there is a simple and beautiful solution to our difficulty: the Buchberger algorithm is a systematic way of producing a set of generators (a Gröbner basis) for an ideal or module over R so that the division algorithm works.
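The chapter's computations are carried out in Macaulay 2; purely as a quick illustration of why a Gröbner basis is needed, here is a small sketch in Python using sympy (the ring, ideal, and polynomial are invented for this example, not taken from the text). Dividing an ideal member by the raw generators can leave a nonzero remainder, while division by a Gröbner basis detects membership.

```python
from sympy import symbols, groebner, reduced

x, y = symbols('x y')
gens = [x**2 + y, x**2 * y]   # generators of an ideal I in k[x, y]
f = y**2                      # f = y*(x**2 + y) - x**2*y, so f lies in I

# Division by the raw generators (lex order): no lead term of a generator
# divides y**2, so the remainder is y**2 != 0 even though f is in I.
_, r = reduced(f, gens, x, y, order='lex')
print(r)                      # y**2

# Against a Groebner basis, ideal members always reduce to zero.
G = groebner(gens, x, y, order='lex')
print(G.contains(f))          # True
print(reduced(f, list(G.exprs), x, y, order='lex')[1])  # 0
```

Here the reduced Gröbner basis works out to {x² + y, y²}: conceptually, Buchberger's algorithm adjoins the S-polynomial y·(x² + y) − x²y = y², after which every S-pair reduces to zero.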
Although the title of this book is “Computational Algebraic Geometry”, it could also be titled “Snapshots of Commutative Algebra via Macaulay 2”. The aim is to bring algebra, geometry, and combinatorics to life by examining the interplay between these areas; the book also gives the reader a taste of algebra different from the usual beginning graduate student diet of groups and field theory. The prerequisite is a decent grounding in abstract algebra at the level of [56]; familiarity with some topology and complex analysis would be nice but is not indispensable. The snapshots included here come from commutative algebra, algebraic geometry, algebraic topology, and algebraic combinatorics. All are set against a backdrop of homological algebra. There are several reasons for this: first and foremost, homological algebra is the common thread which ties everything together. The second reason is that many computational techniques involve homological algebra in a fundamental way; for example, a recurring motif is the idea of replacing a complicated object with a sequence of simple objects. The last reason is personal: I wanted to give the staid and abstract constructs of homological algebra (e.g. derived functors) a chance to get out and strut their stuff. This is said only half jokingly; in the first class I ever had in homological algebra, I asked the professor what good Tor was, and the answer that Tor is the derived functor of the tensor product did not grip me.
In this chapter we give a quick introduction to sheaves, Čech cohomology, and divisors on curves. The first main point is that many objects, in mathematics and in life, are defined by local information: imagine a road atlas where each page shows a state and a tiny fraction of the adjacent states. If you have two different local descriptions, how can you relate them? In the road map analogy, when you switch pages, where are you on the new page? Roughly speaking, a sheaf is a collection of local data, and cohomology is the mechanism for “gluing” local information together. The second main point is that geometric objects do not necessarily live in a fixed place. They have a life of their own, and we can embed the same object in different spaces. For an algebraic curve C, it turns out that the ways in which we can map C to ℙⁿ are related to studying sets of points (divisors) on the curve. If the ground field is ℂ, the maximum principle tells us that there are no nonconstant global holomorphic functions on C, so it is natural to consider meromorphic functions. Hence, we'll pick a bunch of points on the curve and study functions on C with poles only at those points. Sheaves and cohomology enter the picture because, while it is easy to describe a meromorphic function locally, it is hard to get a global understanding of such things.
We study a stochastic scheduling problem with a single machine subject to random breakdowns. We address the preemptive-repeat model; that is, if a breakdown occurs during the processing of a job, the work done on this job is completely lost and the job has to be processed from the beginning when the machine resumes its work. The objective is to complete all jobs so that the expected weighted flow time is minimized. Limited results have been published in the literature on this problem, all with the assumption that the machine uptimes are exponentially distributed. This article generalizes the study to settings where (1) the uptimes and downtimes of the machine follow general probability distributions; (2) the breakdown patterns of the machine may be affected by the job being processed and are thus job dependent; (3) the processing times of the jobs are random variables following arbitrary distributions; and (4) after a breakdown, the processing time of a job may either remain the same (but unknown) amount or be resampled according to its probability distribution. We derive the necessary and sufficient condition that ensures that the problem with the flow-time criterion is well posed under the preemptive-repeat breakdown model. We then develop an index policy that is optimal for the problem. Several important situations are further considered and their optimal solutions are obtained.
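To make the preemptive-repeat mechanic concrete, here is a small Monte Carlo sketch in Python; the distributions and parameters are illustrative assumptions rather than the article's data, and the `resample` flag mirrors case (4) of the abstract.

```python
import random

def complete_one_job(proc_time, uptime, downtime, resample=False):
    """Time to finish one job under preemptive-repeat breakdowns: if the
    machine fails before the job completes, all work done is lost and the
    job restarts from scratch once the machine is repaired."""
    t = 0.0
    p = proc_time()
    while True:
        u = uptime()
        if u >= p:                 # the job finishes within this uptime
            return t + p
        t += u + downtime()        # work lost; wait out the repair
        if resample:               # processing time redrawn after a breakdown
            p = proc_time()
        # otherwise p remains the same (but unknown) amount

# Illustrative distributions (assumptions): uniform processing times,
# Weibull uptimes, exponential downtimes.
random.seed(1)
runs = [complete_one_job(lambda: random.uniform(2, 4),
                         lambda: random.weibullvariate(5, 1.5),
                         lambda: random.expovariate(1.0))
        for _ in range(100_000)]
print(sum(runs) / len(runs))       # estimate of the expected completion time
```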
For a compound Poisson process (CPP) with only positive jumps, an elegant formula connects the density of the hitting time for a lower straight line with that of the process itself at time t, h(x; t), considered as a function of time and position jointly. We prove an analogous (albeit more complicated) result for the first time the CPP crosses an upper straight line. We also consider the conditional density of the CPP at time t, given that the upper line has not been reached before t. Finally, it is shown how to compute certain moment integrals of h.
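The article's results are analytic; purely as an illustration of the setup, here is a hedged Monte Carlo sketch in Python that estimates the first time a compound Poisson process with positive jumps crosses an upper straight line t ↦ a + bt (the rate, jump law, and line parameters are invented for the example).

```python
import random

def first_crossing_time(rate, jump, a, b, horizon):
    """First time a compound Poisson process with positive jumps crosses
    the upper line t -> a + b*t. The path only increases at jump epochs,
    so a crossing can only happen at a jump; returns None if no crossing
    occurs before `horizon`."""
    t = x = 0.0
    while t < horizon:
        t += random.expovariate(rate)   # next jump epoch
        x += jump()                     # positive jump size
        if x >= a + b * t:
            return t
    return None

# Illustrative parameters (assumptions): jump rate 2 and mean-1 jumps give
# drift 2, which exceeds the slope 0.5, so the line is eventually crossed.
random.seed(2)
samples = [first_crossing_time(2.0, lambda: random.expovariate(1.0),
                               a=3.0, b=0.5, horizon=1_000.0)
           for _ in range(50_000)]
hits = [s for s in samples if s is not None]
print(sum(hits) / len(hits))            # estimated mean crossing time
```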
We analyze a feedback system consisting of a finite buffer fluid queue and a responsive source. The source alternates between silence periods and active periods. At random epochs, the source becomes ready to send a burst of fluid. The lengths of the bursts (i.e., of the active periods) are independent and identically distributed with some general distribution. The queue employs a threshold discarding policy: a burst is accepted only if, at its commencement epoch (the instant at which the source is ready to send), the workload (i.e., the amount of fluid in the buffer) is less than a preset threshold. If a burst is rejected, the source backs off from sending. We work within the framework of Poisson counter-driven stochastic differential equations and obtain the moment generating function, and hence the probability density function, of the stationary workload process. We then comment on the stability of this fluid queue. Our explicit characterizations further provide useful insights and “engineering” guidelines for better network design.
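As a simplified companion to the model description, here is a discrete-event Monte Carlo sketch in Python; the article itself proceeds analytically via Poisson counter-driven SDEs, and every distribution and parameter below is an invented assumption. During silence the buffer drains at rate c; an accepted burst feeds fluid in at rate r_in > c for a random duration, capped at the buffer size.

```python
import random

def drain(w, s, c):
    """Workload and its time-integral after draining at rate c for time s."""
    if w > c * s:
        return w - c * s, w * s - 0.5 * c * s * s
    return 0.0, w * w / (2.0 * c)          # buffer empties at time w/c

def fill(w, b, net, cap):
    """Workload and its time-integral while filling at rate `net` during a
    burst of length b; fluid arriving above the buffer capacity is lost."""
    t_hit = (cap - w) / net                # time until the buffer is full
    if b <= t_hit:
        return w + net * b, w * b + 0.5 * net * b * b
    return cap, w * t_hit + 0.5 * net * t_hit**2 + cap * (b - t_hit)

def simulate(lam, burst, r_in, c, theta, cap, cycles):
    w = t = area = 0.0
    accepted = 0
    for _ in range(cycles):
        s = random.expovariate(lam)        # silence before the source is ready
        w, a = drain(w, s, c)
        t, area = t + s, area + a
        if w < theta:                      # threshold discarding policy
            accepted += 1
            b = burst()                    # general burst-length distribution
            w, a = fill(w, b, r_in - c, cap)
            t, area = t + b, area + a
    return area / t, accepted / cycles     # time-avg workload, acceptance rate

# Illustrative parameters (assumptions): Pareto bursts, threshold 3, buffer 5.
random.seed(3)
mean_w, p_acc = simulate(lam=1.0, burst=lambda: random.paretovariate(2.5),
                         r_in=2.0, c=1.0, theta=3.0, cap=5.0, cycles=200_000)
print(mean_w, p_acc)
```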