In the early 1980s, when Gröbner bases and the Buchberger Algorithm spread through the research community, there were two main approaches to their introduction: the most common was (and still is) presenting these notions in the framework of rewriting rules, showing their relationship to the Knuth–Bendix Algorithm, and stressing their rôle in giving a canonical representation for the elements of commutative finite algebras over a field. I was among the standard-bearers of the alternative approach, which saw Gröbner bases as a generalization of Macaulay's H-bases and Hironaka's standard bases and stressed their ability to lift properties to a polynomial algebra from its graded algebra.
While both these aspects of Gröbner theory and the related results will be discussed in depth in this text, I have for several years stressed its relation to elementary linear algebra: Gröbner bases can be described as a finite model of an infinite Gauss-reduced linear basis of an ideal viewed as a vector space, and Buchberger's algorithm can be presented as the corresponding generalization of the Gaussian elimination algorithm. This approach also allows me to link Gröbner theory directly to the Duality Theory which will be discussed in Part five, mainly to the Möller algorithm and (in the next volume) to the Auzinger–Stetter resolution.
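To make the linear-algebra analogy concrete, the following is a minimal sketch of Buchberger's algorithm, assuming sympy for the polynomial arithmetic; the example ideal and the monomial order are illustrative choices, and sympy's built-in groebner() is called at the end purely as a cross-check.

```python
# A naive Buchberger loop: S-polynomials play the role of subtracting
# scaled rows in Gaussian elimination, and every non-zero normal form
# is adjoined to the basis until all S-polynomials reduce to zero.
from sympy import symbols, LT, expand, lcm, reduced, groebner

x, y = symbols('x y')
ORDER = 'grevlex'

def s_polynomial(f, g):
    """Cancel the leading terms of f and g against each other."""
    lt_f, lt_g = LT(f, x, y, order=ORDER), LT(g, x, y, order=ORDER)
    m = lcm(lt_f, lt_g)
    return expand(m / lt_f * f - m / lt_g * g)

def buchberger(F):
    G = list(F)
    pairs = [(i, j) for i in range(len(G)) for j in range(i)]
    while pairs:
        i, j = pairs.pop()
        _, r = reduced(s_polynomial(G[i], G[j]), G, x, y, order=ORDER)
        if r != 0:                 # a "row" the current basis cannot reduce
            pairs += [(k, len(G)) for k in range(len(G))]
            G.append(r)
    return G

F = [x**2 + y, x*y - 1]
print(buchberger(F))
print(groebner(F, x, y, order=ORDER))  # cross-check with sympy
```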
The purpose of this paper is to investigate whether the knowledge transformers featured in the learning process are also present in the creative process. This was achieved, first, by reviewing accounts of inventions and discoveries with a view to explaining them in terms of knowledge transformers and, second, by reviewing models and theories of creativity and identifying knowledge transformers within them. The investigation shows that there is some evidence that the creative process can be explained through knowledge transformers. Hence, it is suggested that one of the links between learning and creativity is through the knowledge transformers.
The method known as the analysis of interconnected decision areas (AIDA) has been in use for nearly 40 years, but has made little headway into engineering design. This paper describes an implementation of AIDA that is useful to engineering designers wishing to combine the solution principles of various subfunctions within a product in new ways. Traditionally, the method is used to understand how one decision affects the options available to other decisions in a large-scale project. The method is to be used interactively with designers participating in a brainstorming session, so that ideas are added to AIDA and immediately combined with other compatible ideas. The existing implementation has been tested in a classroom setting in which upper-level undergraduates have successfully used the AIDA method along with numerous other design methods to solve conceptual design problems.
Special Issue Part 1 (Issue 3) and Part 2 (Issue 4) of AIEDAM are based on a workshop on Learning and Creativity held at the 2002 conference on Artificial Intelligence in Design, AID '02 (www.cad.strath.ac.uk/AID02_workshop/Workshop_webpage.html; Gero, 2002). It was the sixth in a series of similar workshops, the previous five focusing on Machine Learning in Design and held at AID '92, '94, '96, '98, and '00 (Gero, 1992, 2000; Gero & Sudweeks, 1994, 1996, 1998). The first three workshops also resulted in special issues of AIEDAM (Maher et al., 1994; Duffy et al., 1996, 1998).
Some designs are sufficiently creative that they are considered to be inventions. The invention process is typically characterized by a singular moment when the prevailing thinking concerning a long-standing problem is, in a “flash of genius,” overthrown and replaced by a new approach that could not have been logically deduced from what was previously known. This paper discusses such logical discontinuities using an example based on the history of one of the most important inventions of the 20th century in electrical engineering, namely, the invention of negative feedback by AT&T's Harold S. Black. This 1927 invention overthrew the then prevailing idiom of positive feedback championed by Westinghouse's Edwin Howard Armstrong. The paper then shows how this historically important discovery can be readily replicated by an automated design and invention technique patterned after the evolutionary process in nature, namely, genetic programming. Genetic programming employs Darwinian natural selection along with analogs of recombination (crossover), mutation, gene duplication, gene deletion, and mechanisms of developmental biology to breed an ever-improving population of structures. Genetic programming rediscovers negative feedback by conducting an evolutionary search for a structure that satisfies Black's stated high-level goal (i.e., reduction of distortion in amplifiers). Like evolution in nature, genetic programming conducts its search probabilistically, without resort to logic, using a process that is replete with logical discontinuities. The paper then shows that genetic programming can routinely produce many additional inventive and creative results. In this regard, the paper discusses the automated rediscovery of numerous 20th-century patented inventions involving analog electrical circuits and controllers, the Sallen–Key filter, and six 21st-century patented inventions. In addition, two patentable new inventions (controllers) have been created in the same automated way by means of genetic programming. The paper discusses the promising future of automated invention by means of genetic programming in light of the fact that, to date, increased computer power has yielded progressively more substantial results, including numerous human-competitive results, in synchrony with Moore's law. The paper argues that evolutionary search by means of genetic programming is a promising approach for achieving creative, human-competitive, automated design because illogic and creativity are inherent in the evolutionary process.
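For readers unfamiliar with the mechanics sketched above, the following is a minimal genetic-programming loop in Python: it breeds arithmetic expression trees by selection, crossover, and mutation. The target function, operator set, and all parameters are illustrative assumptions and have nothing to do with Koza's circuit-synthesis experiments.

```python
# A toy GP run: evolve expression trees toward the target x^2 + x + 1.
import random, operator

OPS = [(operator.add, '+'), (operator.sub, '-'), (operator.mul, '*')]
TERMS = ['x', 1.0, 2.0]          # terminal set: the variable and constants

def random_tree(depth=3):
    """Grow a random expression tree of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(t, x):
    if t == 'x':
        return x
    if isinstance(t, float):
        return t
    (fn, _), left, right = t
    return fn(evaluate(left, x), evaluate(right, x))

def fitness(t):
    """Sum of squared errors against the target (lower is better)."""
    return sum((evaluate(t, x) - (x * x + x + 1)) ** 2
               for x in [i / 10.0 for i in range(-10, 11)])

def random_subtree(t):
    while isinstance(t, tuple) and random.random() < 0.7:
        t = random.choice([t[1], t[2]])
    return t

def crossover(a, b):
    """Graft a random subtree of b at a random position in a."""
    if not isinstance(a, tuple) or random.random() < 0.2:
        return random_subtree(b)
    op, left, right = a
    if random.random() < 0.5:
        return (op, crossover(left, b), right)
    return (op, left, crossover(right, b))

def mutate(t):
    """Occasionally replace a node with a fresh random subtree."""
    if random.random() < 0.1:
        return random_tree(2)
    if not isinstance(t, tuple):
        return t
    op, left, right = t
    return (op, mutate(left), mutate(right))

random.seed(0)
pop = [random_tree() for _ in range(300)]
for gen in range(40):
    pop.sort(key=fitness)            # rank by error
    parents = pop[:60]               # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(240)]
best = min(pop, key=fitness)
print(fitness(best), best)
```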
Can artificial systems be creative? Can they be designed to be creative on their own? What are the requirements of such creative artificial systems? To support humans who are expected to deliver creative solutions, or to automate part of their tasks, this paper proposes creativity requirements that provide a basis for designing creative artificial systems.
This chapter introduces the notion of a ring, more specifically, a commutative ring with unity. The theory of rings provides a useful conceptual framework for reasoning about a wide class of interesting algebraic structures. Intuitively speaking, a ring is an algebraic structure with addition and multiplication operations that behave as we expect addition and multiplication to behave. While there is a lot of terminology associated with rings, the basic ideas are fairly simple.
Definitions, basic properties, and examples
Definition 9.1. A commutative ring with unity is a set R together with addition and multiplication operations on R, such that:
(i) the set R under addition forms an abelian group, and we denote the additive identity by 0R;
(ii) multiplication is associative; that is, for all a, b, c ∈ R, we have a(bc) = (ab)c;
(iii) multiplication distributes over addition; that is, for all a, b, c ∈ R, we have a(b + c) = ab + ac and (b + c)a = ba + ca;
(iv) there exists a multiplicative identity; that is, there exists an element 1R ∈ R, such that 1R · a = a = a · 1R for all a ∈ R;
(v) multiplication is commutative; that is, for all a, b ∈ R, we have ab = ba.
There are other, more general (and less convenient) types of rings: one can drop properties (iv) and (v), and still have what is called a ring.
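As a concrete check of Definition 9.1, the following sketch models in Python the ring ℤ_12 of integers modulo 12 and verifies the multiplicative axioms by brute force; the class name Zm and the modulus are illustrative assumptions.

```python
# A brute-force check of the ring axioms for Z_12.
from dataclasses import dataclass

M = 12

@dataclass(frozen=True)
class Zm:
    v: int
    def __post_init__(self):
        object.__setattr__(self, 'v', self.v % M)
    def __add__(self, other): return Zm(self.v + other.v)
    def __mul__(self, other): return Zm(self.v * other.v)

ONE = Zm(1)
R = [Zm(i) for i in range(M)]

# (ii) associativity, (iii) distributivity, (iv) unity, (v) commutativity;
# property (i), that (R, +) is an abelian group, can be checked the same way.
assert all(a * (b * c) == (a * b) * c for a in R for b in R for c in R)
assert all(a * (b + c) == a * b + a * c and (b + c) * a == b * a + c * a
           for a in R for b in R for c in R)
assert all(ONE * a == a == a * ONE for a in R)
assert all(a * b == b * a for a in R for b in R)
print("Z_12 passes the checks for Definition 9.1")
```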
This chapter introduces the notion of an abelian group. This is an abstraction that models many different algebraic structures, and yet despite the level of generality, a number of very useful results can be easily obtained.
Definitions, basic properties, and examples
Definition 8.1. An abelian group is a set G together with a binary operation ⋆ on G such that
(i) for all a, b, c ∈ G, a ⋆ (b ⋆ c) = (a ⋆ b) ⋆ c (i.e., ⋆ is associative),
(ii) there exists e ∈ G (called the identity element) such that for all a ∈ G, a ⋆ e = a = e ⋆ a,
(iii) for all a ∈ G there exists a′ ∈ G (called the inverse of a) such that a ⋆ a′ = e = a′ ⋆ a,
(iv) for all a, b ∈ G, a ⋆ b = b ⋆ a (i.e., ⋆ is commutative).
While there is a more general notion of a group, which may be defined simply by dropping property (iv) in Definition 8.1, we shall not need this notion in this text. The restriction to abelian groups helps to simplify the discussion significantly. Because we will only be dealing with abelian groups, we may occasionally simply say “group” instead of “abelian group.”
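As a quick sanity check of Definition 8.1, the following few lines verify the four axioms by brute force for (ℤ_n, +), the integers modulo n under addition; the choice n = 7 is illustrative.

```python
# A brute-force check of the abelian group axioms for (Z_7, +).
n = 7
G = range(n)
star = lambda a, b: (a + b) % n    # the binary operation
e = 0                              # candidate identity
inv = lambda a: (-a) % n           # candidate inverse

assert all(star(a, star(b, c)) == star(star(a, b), c)
           for a in G for b in G for c in G)                    # (i) associativity
assert all(star(a, e) == a == star(e, a) for a in G)            # (ii) identity
assert all(star(a, inv(a)) == e == star(inv(a), a) for a in G)  # (iii) inverses
assert all(star(a, b) == star(b, a) for a in G for b in G)      # (iv) commutativity
print("(Z_7, +) passes the checks for Definition 8.1")
```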
In this chapter, we discuss Euclid's algorithm for computing greatest common divisors. It turns out that Euclid's algorithm has a number of very nice properties, and has applications far beyond that purpose.
The basic Euclidean algorithm
We consider the following problem: given two non-negative integers a and b, compute their greatest common divisor, gcd(a, b). We can do this using the well-known Euclidean algorithm, also called Euclid's algorithm.
The basic idea of Euclid's algorithm is the following. Without loss of generality, we may assume that a ≥ b ≥ 0. If b = 0, then there is nothing to do, since in this case, gcd(a, 0) = a. Otherwise, if b > 0, we can compute the integer quotient q ≔ ⌊a/b⌋ and remainder r ≔ a mod b, where 0 ≤ r < b. From the equation

a = bq + r,

it is easy to see that if an integer d divides both b and r, then it also divides a; likewise, if an integer d divides a and b, then it also divides r. From this observation, it follows that gcd(a, b) = gcd(b, r), and so by performing a division, we reduce the problem of computing gcd(a, b) to the “smaller” problem of computing gcd(b, r).
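The recursion gcd(a, b) = gcd(b, r) translates directly into code; here is a minimal iterative version, assuming non-negative integer inputs (the test values are illustrative).

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b > 0:
        a, b = b, a % b   # a = bq + r, so gcd(a, b) = gcd(b, r)
    return a

assert gcd(100, 35) == 5
assert gcd(7, 0) == 7
```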
In this chapter, we discuss basic definitions and results concerning matrices. We shall start out with a very general point of view, discussing matrices whose entries lie in an arbitrary ring R. Then we shall specialize to the case where the entries lie in a field F, where much more can be said.
One of the main goals of this chapter is to discuss “Gaussian elimination,” which is an algorithm that allows us to efficiently compute bases for the image and kernel of an F-linear map.
In discussing the complexity of algorithms for matrices over a ring R, we shall treat a ring R as an “abstract data type,” so that the running times of algorithms will be stated in terms of the number of arithmetic operations in R. If R is a finite ring, such as ℤ_m, we can immediately translate this into a running time on a RAM (in later chapters, we will discuss other finite rings and efficient algorithms for doing arithmetic in them).
If R is, say, the field of rational numbers, a complete running time analysis would require an additional analysis of the sizes of the numbers that appear in the execution of the algorithm. We shall not attempt such an analysis here—however, we note that all the algorithms discussed in this chapter do in fact run in polynomial time when R = ℚ, assuming we represent rational numbers as fractions in lowest terms. Another possible approach for dealing with rational numbers is to use floating point approximations.
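As a hedged sketch of the elimination step (not the book's own pseudocode), the following Python function reduces a matrix over ℚ to row echelon form, using the standard-library Fraction type so that entries stay as fractions in lowest terms, as suggested above; the pivot columns index a basis of the image, and parametrizing the non-pivot columns yields the kernel. The example matrix is an illustrative assumption.

```python
# Gaussian elimination over Q with exact rational arithmetic.
from fractions import Fraction

def row_echelon(A):
    """Reduce A (a list of rows of Fractions) in place to row echelon
    form; returns the list of pivot column indices."""
    rows, cols = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(cols):
        # Find a row at or below r with a non-zero entry in column c.
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        # Scale the pivot row to make the pivot 1, then eliminate below.
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(r + 1, rows):
            A[i] = [x - A[i][c] * y for x, y in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    return pivots

A = [[Fraction(x) for x in row]
     for row in [[2, 1, 1], [4, 3, 3], [8, 7, 9]]]
print(row_echelon(A), A)  # pivot columns and the echelon form
```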
This chapter concerns itself with the question: how many primes are there? In Chapter 1, we proved that there are infinitely many primes; however, we are interested in a more quantitative answer to this question; that is, we want to know how “dense” the prime numbers are.
This chapter has a bit more of an “analytical” flavor than other chapters in this text. However, we shall not make use of any mathematics beyond that of elementary calculus.
Chebyshev's theorem on the density of primes
The natural way of measuring the density of primes is to count the number of primes up to a bound x, where x is a real number. For a real number x ≥ 0, the function π(x) is defined to be the number of primes up to x. Thus, π(1) = 0, π(2) = 1, π(7.5) = 4, and so on. The function π is an example of a “step function,” that is, a function that changes values only at a discrete set of points. It might seem more natural to define π only on the integers, but it is traditional to define it over the real numbers (and there are some technical benefits in doing so).
Let us first take a look at some values of π(x). Table 5.1 shows values of π(x) for x = 10^{3i} and i = 1, …, 6.
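The small values of π(x) quoted above, and the first entries of a table like Table 5.1, are easy to reproduce with a sieve of Eratosthenes; the bounds below are illustrative (a plain sieve is impractical for the largest entries of such a table).

```python
def primes_up_to(n: int):
    """Sieve of Eratosthenes: return all primes <= n."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = [False] * len(is_prime[p * p :: p])
    return [i for i, b in enumerate(is_prime) if b]

def pi(x: float) -> int:
    """The prime-counting function: the number of primes up to x."""
    return len(primes_up_to(int(x)))

assert pi(1) == 0 and pi(2) == 1 and pi(7.5) == 4
print(pi(10 ** 3), pi(10 ** 6))  # 168 and 78498
```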
In this chapter, we review standard asymptotic notation, introduce the formal computational model we shall use throughout the rest of the text, and discuss basic algorithms for computing with large integers.
Asymptotic notation
We review some standard notation for relating the rate of growth of functions. This notation will be useful in discussing the running times of algorithms, and in a number of other contexts as well.
Suppose that x is a variable taking non-negative integer or real values, and let g denote a real-valued function in x that is positive for all sufficiently large x; also, let f denote any real-valued function in x. Then
f = O(g) means that |f(x)| ≤ cg(x) for some positive constant c and all sufficiently large x (read, “f is big-O of g”),
f = Ω(g) means that f(x) ≥ cg(x) for some positive constant c and all sufficiently large x (read, “f is big-Omega of g”),
f = Θ(g) means that cg(x) ≤ f(x) ≤ dg(x) for some positive constants c and d and all sufficiently large x (read, “f is big-Theta of g”),
f = o(g) means that f/g → 0 as x → ∞ (read, “f is little-o of g”), and
f ∼ g means that f/g → 1 as x → ∞ (read, “f is asymptotically equal to g”).
Example 3.1. Let f(x) ≔ x² and g(x) ≔ 2x² − x + 1. Then f = O(g) and f = Ω(g). Indeed, f = Θ(g). ▪
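As a quick verification of Example 3.1, explicit constants witnessing f = Θ(g) can be computed directly; the particular constants below are one convenient choice, not the only one.

```latex
% Explicit constants for Example 3.1: f(x) = x^2, g(x) = 2x^2 - x + 1.
\[
  g(x) - f(x) = x^2 - x + 1 > 0 \quad\text{for all } x,
  \qquad\text{so } f(x) \le 1 \cdot g(x);
\]
\[
  2f(x) - g(x) = x - 1 \ge 0 \quad\text{for } x \ge 1,
  \qquad\text{so } f(x) \ge \tfrac{1}{2}\, g(x).
\]
\[
  \text{Hence } \tfrac{1}{2}\, g(x) \le f(x) \le g(x)
  \text{ for all } x \ge 1, \text{ i.e., } f = \Theta(g).
\]
```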