Although programming languages are commonly thought to have their early history in the 1940s and 1950s, when the first digital computers came about, they in fact go back much earlier, to the Jacquard loom (1801) and the player piano (ca. 1870s), both of which used physical punch cards to encode instructions. Regardless of whether you use punch cards or a more modern means to create and store your programs, programming languages are described by specifying their syntax and semantics. Syntax refers to form, indicating what symbols can be put together, and in what order, to make a meaningful construct in a given language. In other words, the syntax of a programming language is the set of rules that define what is a valid input or program. Semantics, on the other hand, refers to the meaning of expressions within a language. Although we will not give a complete, rigorous description of the syntax of the Mathematica language here, it is important to understand some of the basic structures, and their syntax, upon which everything is built. Fortunately, the Mathematica language can be understood quickly by learning about just a few basic objects. In this chapter we will focus on the Mathematica language, with particular emphasis on expressions. We will also look at how to define and name new expressions, how to combine them using logical operators, and how to control properties of expressions through the use of attributes.
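To give a brief illustration of what is meant by these basic structures (a sketch using only built-in functions; the attribute list shown may vary slightly between versions): every Mathematica expression has the uniform form head[arg1, arg2, …], which you can inspect with FullForm and Head, and whose behavior is partly governed by attributes.

In[1]:= FullForm[a + b^2]    (* the internal form of an ordinary input *)
Out[1]= Plus[a, Power[b, 2]]

In[2]:= Head[a + b^2]        (* the head identifies the kind of expression *)
Out[2]= Plus

In[3]:= Attributes[Plus]     (* attributes control properties such as order independence *)
Out[3]= {Flat, Listable, NumericFunction, OneIdentity, Orderless, Protected}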
Well-designed tools are not simply things of beauty to be admired. They are, above all, a joy to use. They seem to have their own consistent and readily apparent internal logic; using them seems natural – intuitive even – in that it is hard to imagine using any other tool, and, typically, a minimal amount of effort is required to solve the problem for which those tools were designed. You might even begin to think that your problems were designed for the tool rather than the other way around.
Programming with Mathematica is, first and foremost, a joy. Having used various programming languages throughout my life (starting with Algol and Fortran), it is now hard for me to imagine using a tool other than Mathematica to solve most of the computational problems that I encounter. Having at my fingertips an extremely well-thought-out language, combined with tools for analysis, modeling, simulation, visualization, interface creation, connections to other technologies, import and export, seems to give me everything I might need.
Ultimately though, no tool can solve every problem you might encounter; what really makes Mathematica the indispensable tool for many computational scientists, engineers, and even artists and musicians, is its capability for infinite extension through programming. As a language built upon the shoulders of such giants as Lisp, Prolog, APL, and C++, Mathematica has extended some of the best ideas from these languages and created some new ones of its own.
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
— Donald E. Knuth (Knuth 1992)
When you are first learning to program in a language, your emphasis is usually on correctness, that is, on getting your programs to run and return accurate, error-free results – and rightly so. There is little point in trying to speed up a program that returns incorrect answers! You develop your programs by prototyping with simple inputs so that you can see at a glance how things are progressing. At some point in the development process you start to increase the size or complexity of the inputs to your program and, if all goes well, the program scales gracefully. But commonly there are bottlenecks at various stages of the computation that slow things down. Some of these may be unavoidable, but often you can find optimizations that improve the efficiency and running time of your programs. This chapter introduces some of the optimization principles to think about both during the development process and after your programs are complete and you are satisfied that they produce the desired output.
There are two measures we will focus on – timing and memory footprint. Sometimes one plays a more prominent role than the other. But ultimately, squeezing another tenth of a second out of a computation that is only going to be run once or twice does not make a lot of sense.
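As a quick sketch of how these two measures might be checked in practice (the inputs and the numerical outputs below are illustrative only and will differ by machine and version): AbsoluteTiming reports elapsed wall-clock time, and ByteCount reports the storage used by an expression.

In[1]:= data = RandomReal[1, 10^6];   (* one million machine-precision reals *)

In[2]:= AbsoluteTiming[Total[data]]   (* {elapsed seconds, result} *)
Out[2]= {0.0031, 500214.}

In[3]:= ByteCount[data]               (* approximate memory footprint, in bytes *)
Out[3]= 8000144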
Many important and classical problems in mathematics and computer science are defined by, or have solutions in terms of, recursive definitions: the factorial function, the natural numbers, many divide-and-conquer algorithms, and parsers for programming languages all use recursion in fundamental ways. A function is defined using recursion if, in its definition, it makes calls to itself. The great advantage of recursive definitions is their simplicity and directness. Their one major drawback, however, is how quickly the depth and complexity of the recursion can increase, to the point of making your computations intractable.
This programming paradigm is easily implemented in Mathematica in a manner that is both natural and quite efficient. In fact, many of the built-in operations of Mathematica could be written in Mathematica itself using recursion. In this chapter, we will present several examples of recursion and explain how recursive functions are written and what you can do to work around some of their potential inefficiencies.
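As a small sketch of the paradigm (the name fact is illustrative; the built-in Factorial already provides this), a recursive definition consists of one or more base cases plus a rule that calls itself on a smaller argument:

In[1]:= fact[0] = 1;                               (* base case *)
In[2]:= fact[n_Integer?Positive] := n fact[n - 1]  (* recursive case *)

In[3]:= fact[10]
Out[3]= 3628800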
Fibonacci numbers
Recursive definitions of mathematical quantities were used by mathematicians for centuries before computers even existed. One famous example is the definition of a special sequence of numbers first studied in the Middle Ages by the Italian mathematician Leonardo Fibonacci (ca. 1170 – ca. 1250). The Fibonacci numbers have since been studied extensively, finding application in such diverse areas as random number generation, compression algorithms, musical tunings, phyllotaxy in plants, population generation, and much more. See Knuth (1997) for a detailed discussion.
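The defining recurrence is F(1) = F(2) = 1 and F(n) = F(n - 1) + F(n - 2) for n > 2. As a preview of how this might be written (the name fib is illustrative; the chapter develops the details), the recurrence translates directly into rewrite rules, and storing each value as it is computed (fib[n] = …, a form of memoization) avoids the exponential blowup of the naive recursion:

In[1]:= fib[1] = fib[2] = 1;                                          (* base cases *)
In[2]:= fib[n_Integer?Positive] := fib[n] = fib[n - 1] + fib[n - 2]   (* remember each value *)

In[3]:= fib[20]
Out[3]= 6765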
When you have developed several programs for some related tasks, you will find it convenient to group them together and make them available as a cohesive application that can easily be used and incorporated in your work. Packages and applications are part of the framework in Mathematica that makes this possible. A package is simply a text file containing Mathematica code. Typically you put related functions in a package. So there might be a computational geometry package or a random walks package that includes functions in support of those tasks. An application, in Mathematica, is a set of packages together with various user-interface elements such as documentation, palettes, and perhaps stylesheets.
When you develop an application, it is important to think about how your functions work with each other as well as how well they integrate with the rest of Mathematica. The user's interface to your programs should be as close as possible to that of the built-in functions in Mathematica so that users can more easily pick up the syntax and usage. Packages provide the framework to do this. In this chapter, features such as options, argument checking, messaging, and documentation are all discussed in the context of a larger application – random walks. We will gather many of the code fragments from earlier chapters and add an options framework, error and usage messages, and some new interactive visualization tools as we develop the RandomWalks package.
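To make the structure concrete, here is a minimal sketch of the anatomy of a package file (the function, message, and implementation are illustrative placeholders, not the actual RandomWalks code developed in the chapter): a usage message exposes the public symbol, the implementation lives in a private context, and a catch-all definition issues a message for invalid arguments.

BeginPackage["RandomWalks`"]

RandomWalk::usage = "RandomWalk[n] gives the list of positions of an n-step lattice random walk.";

Begin["`Private`"]

RandomWalk::badarg = "The argument `1` should be a positive integer.";

RandomWalk[n_Integer?Positive] := Accumulate[RandomChoice[{-1, 1}, n]]

RandomWalk[x_] := (Message[RandomWalk::badarg, x]; $Failed)

End[]
EndPackage[]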
Functional programming, the use and evaluation of functions as a programming paradigm, has a long and rich history in programming languages. Lisp came about in the search for a convenient language for representing mathematical concepts in programs, borrowing from the lambda calculus of the logician Alonzo Church. More recent languages have in turn embraced many aspects of Lisp – in addition to Lisp's offspring such as Scheme and Haskell, you will find elements of functional constructs in Java, Python, Ruby, and Perl. Mathematica itself has clear bloodlines to Lisp, including the ability to operate on data structures such as lists as single objects and the representation of mathematical properties through rules. The integration of these tools makes it much easier to express ideas in science, mathematics, and engineering in a language that naturally mirrors those fields.
Functions not only offer a familiar paradigm to those representing ideas in science, mathematics, and engineering, they also provide a consistent and efficient mechanism for computation and programming. In Mathematica, unlike in many other languages, functions are considered “first-class” objects, meaning that they can be used as arguments to other functions, they can be returned as values, and they can be part of many other kinds of data objects such as arrays. In addition, you can create and use functions at runtime, that is, when you evaluate an expression. This functional style of programming distinguishes Mathematica from traditional procedural languages like C and Fortran.
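As a brief illustration of these three points (a sketch; the name adder and the particular inputs are illustrative only): a function can be passed as an argument, returned as a value, and created at the point of use as a pure function.

In[1]:= Map[Function[x, x^2], {1, 2, 3, 4}]      (* a function passed as an argument *)
Out[1]= {1, 4, 9, 16}

In[2]:= adder[n_] := Function[x, x + n]          (* a function returned as a value *)

In[3]:= adder[5][10]
Out[3]= 15

In[4]:= Select[Range[10], # > 6 &]               (* a pure function created at the point of use *)
Out[4]= {7, 8, 9, 10}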
The analysis of randomized search heuristics on classes of functions is fundamental to the understanding of the underlying stochastic process and the development of suitable proof techniques. Recently, remarkable progress has been made in bounding the expected optimization time of a simple evolutionary algorithm, called (1+1) EA, on the class of linear functions. We improve the previously best known bound in this setting from (1.39 + o(1)) e n ln n to e n ln n + O(n) in expectation and with high probability, which is tight up to lower-order terms. Moreover, upper and lower bounds for arbitrary mutation probabilities p are derived, which imply expected polynomial optimization time as long as p = O((ln n)/n) and p = Ω(n^(-C)) for a constant C > 0, and which are tight if p = c/n for a constant c > 0. As a consequence, the standard mutation probability p = 1/n is optimal for all linear functions, and the (1+1) EA is found to be an optimal mutation-based algorithm. Furthermore, the algorithm turns out to be surprisingly robust since the large neighbourhood explored by the mutation operator does not disrupt the search.