The purpose of object-oriented programming is to build your actual system, to develop the code that fulfills your system's design. Your design artifacts, such as unified modeling language (UML) class and state machine diagrams, drive the development of your source code. Design and programming are highly interrelated and iterative, and you will often move back and forth between them. Your programming efforts will quickly reveal weaknesses in your design that need to be addressed, and your design efforts will reveal potential strategies to code the system effectively.
Developers will typically focus on two types of source code: object-oriented code, such as Java or C#, and database-oriented code, such as data definition language (DDL), data manipulation language (DML), stored procedures, and triggers. Section 13.4 describes how to implement common object-oriented concepts in Java and Chapter 14 describes database coding. The end goal of your programming efforts is to produce a component, subsystem, or even a full-fledged application that can undergo testing in the large (described in Chapter 3).
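As a minimal illustration of the first kind of source code, consider the small Java class below. The Student class and its attributes are hypothetical stand-ins rather than an example from this book's case studies; the point is simply that each attribute of such a class would typically map to a column of a relational table defined with DDL.

    public class Student {
        // Each attribute would typically map to a column of a
        // corresponding Student table defined with DDL and
        // manipulated with DML.
        private String name;
        private String studentNumber;

        public Student(String name, String studentNumber) {
            this.name = name;
            this.studentNumber = studentNumber;
        }

        public String getName() { return name; }
        public String getStudentNumber() { return studentNumber; }
    }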
This chapter discusses these topics:
Philosophies for effective programming;
Programming tips and techniques for writing high-quality code;
Test-driven development (TDD);
From object design to Java code; and
What you have learned.
PHILOSOPHIES FOR EFFECTIVE PROGRAMMING
Over the years I have found that the following philosophies have helped improve my effectiveness as a programmer:
Always write high-quality, clean code. Quality work is one of the practices of extreme programming (XP) (Beck 2000), and my experiences confirm the importance of writing high-quality code at all times. High-quality code is easier to write, easier to understand, and easier to enhance. Sloppy code will only slow you down.
Worrying about the increasing rate of change in the IT industry is so 15 minutes ago.
Your skills, and how you apply them, are significant determinants of your success as a software professional. By reading this book, you have gained the fundamental knowledge required to begin learning object technology and agile techniques. The bottom line is that software development is hard; it takes skilled people to be successful at it. By making it to the end of this book you have learned the basics of modern software development; you have learned some useful concepts and techniques; you have started to get a handle on how they fit together; and you have applied them by working through the review questions and case studies. You are now in a good position to continue your learning process, a process that will last throughout your entire career. This chapter provides insight into where the software profession currently is, where it is going, what skills will be needed over the next few years, and how you can obtain those skills.
BECOME A GENERALIZING SPECIALIST
I believe that one of the biggest problems the IT industry faces is overspecialization of skills. Many organizations have IT departments filled with Java specialists, database specialists, business analyst specialists, project management specialists, and so on. This approach is based on an organizational behavior paradigm called Taylorism, named after Frederick Taylor, who in his 1911 book The Principles of Scientific Management set the stage for large-scale manufacturing.
Modeling and documentation are critical aspects of any software project. Modeling is the act of creating an abstraction of a concept, and documentation is a permanent record of information. In traditional software processes, such as IEEE 12207 (http://www.ieee.org), modeling is included as one or more serial phases. Modern prescriptive processes, such as the rational unified process (RUP) (Kruchten 2000) or the enterprise unified process (EUP) (http://www.enterpriseunifiedprocess.info), which describe in specific detail the activities required to develop software, include modeling disciplines that you work through in an evolutionary manner. Agile software processes, such as feature-driven development (FDD) (Palmer and Felsing 2002) and extreme programming (XP) (Beck 2000), also include evolutionary modeling efforts, although in FDD, modeling is an explicit activity, whereas in XP it is implicit. The point is that modeling and documentation are important parts of software development, so it makes sense to want to be as effective and efficient at them as possible.
This chapter describes agile model–driven development (AMDD), an approach to software development where your implementation efforts are guided by agile models that are just barely good enough. This chapter addresses the following topics:
The Object Primer is a straightforward, easy-to-understand introduction to agile software development (ASD) using object-oriented (OO) and relational database technologies. It covers the fundamental concepts of ASD and OO and describes how to take an agile approach to requirements, analysis, and design techniques, applying the unified modeling language (UML) 2 as well as other leading-edge techniques, including agile model–driven development (AMDD) and test-driven development (TDD) approaches. During the 1990s OO superseded the structured paradigm as the primary technology paradigm for software development. Now during the 2000s ASD is superseding traditional, prescriptive approaches to software development. While OO and ASD are often used to develop complex systems, learning them does not need to be complicated. This book is different from many other introductory books about these topics—it is written from the point of view of a real-world developer, someone who has lived through the difficulty of learning these concepts.
WHO SHOULD READ The Object Primer?
This book is aimed at two primary audiences—existing developers and university/college students who want to gain the fundamental skills required to succeed on modern software development projects. Throughout this book I use the term “developer” broadly: a developer is anyone involved in the development of a software application. This includes programmers, analysts, designers, business stakeholders, database administrators, support engineers, and so on.
We compare two deforestation techniques: short cut fusion formalised in category theory and the syntactic composition of tree transducers. The former strongly depends on types and uses the parametricity property, or free theorem, whereas the latter makes no use of types at all and allows more general compositions. We introduce the notion of a categorical transducer, which is a generalisation of a catamorphism, and show a corresponding fusion result, which is a generalisation of the ‘acid rain theorem’. We prove the following main theorems: (i) the class of all categorical transducers forms a category in which composition is fusion; (ii) the semantics of categorical transducers is a functor; (iii) the subclass of top-down categorical transducers is a subcategory; and (iv) the syntactic composition of top-down tree transducers is equivalent to the fusion of top-down categorical transducers.
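To give the flavour of the fusion being generalised, recall two standard facts (quoted here as background, not as statements of this paper's theorems). A catamorphism $\mathit{cata}\,\varphi$ folds over the initial algebra of a functor $F$, and the classical catamorphism fusion law states that

$$h \circ \varphi = \psi \circ F\,h \;\Longrightarrow\; h \circ \mathit{cata}\,\varphi = \mathit{cata}\,\psi.$$

Short cut fusion is the corresponding type-directed rule familiar from functional programming, $\mathit{foldr}\;k\;z\;(\mathit{build}\;g) = g\;k\;z$, whose correctness rests on the parametricity property mentioned above.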
In Girard (2001), J.-Y. Girard presents a new theory, ludics, which is a realisability model of logic that associates proofs with designs, and formulas with behaviours. In this article we study the interpretation in this semantics of formulas with first-order quantifications and their proofs. We extend to the first-order quantifiers the full completeness theorem obtained in Girard (2001) for $MALL_2$. A significant part of this article is devoted to the study of a uniformity property for the families of designs that represent proofs of formulas depending on a first-order free variable.
Computers are changing almost every aspect of our lives. They're changing how we relate to one another and even changing how we think of ourselves. The very idea that my brain is a biological computer that could be, in some fundamental mathematical sense, no more powerful than the laptop on which I'm typing these words is mind-boggling. The fact that I can program a computer to control a robot, play chess, or find a cure for disease is tremendously empowering.
This book is organized as a series of essays that explore interesting and fundamental topics in computer science with the aim of showing how computers and computer programs work and how the various aspects of computer science are connected. Along the way I hope to convey to you some of my fascination with computers and my enthusiasm for working in a field whose explosive growth is fueled in no small measure by the ability of computers to support collaboration and information sharing.
While not meant to be exhaustive, this book examines a wide range of topics, from digital logic and machine language to artificial intelligence and searching the World Wide Web. These topics are explored by interacting with programs and experimenting with short fragments of code while considering such questions as:
How can a computer learn to recognize junk email?
What happens when you click on a link in a browser?
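As a taste of the first question, here is a deliberately tiny Java sketch of junk-mail detection. The word list and the ten-percent threshold are invented for illustration; a real filter would learn its word statistics from examples of junk and legitimate mail rather than rely on a hand-built list.

    import java.util.*;

    public class SpamScorer {
        // Toy word list; a real filter would learn these weights.
        private static final Set<String> SPAM_WORDS =
            new HashSet<>(Arrays.asList("free", "winner", "offer", "prize"));

        // Flag a message when more than 10% of its words are spammy.
        public static boolean looksLikeSpam(String message) {
            int hits = 0, total = 0;
            for (String word : message.toLowerCase().split("\\W+")) {
                if (word.isEmpty()) continue;
                total++;
                if (SPAM_WORDS.contains(word)) hits++;
            }
            return total > 0 && hits * 10 > total;
        }

        public static void main(String[] args) {
            System.out.println(looksLikeSpam("You are a winner! Free prize offer inside"));
            System.out.println(looksLikeSpam("Meeting moved to Tuesday at noon"));
        }
    }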
It's difficult to work with computers for long without thinking about the brain as a computer and the mind as a program. The very idea of explaining mental phenomena in computational terms is obvious and exciting to some and outrageous and disturbing to others. For those of us working in artificial intelligence, it's easy to get carried away and extrapolate from current technology to highly capable, intelligent and even emotional robots. These extrapolations may turn out to be fantasies and our robots may never exhibit more than rudimentary intelligence and sham emotions, but I think not.
Scientific knowledge about brains and cognition has expanded dramatically in recent years and scholars interested in the nature of mind are scrambling to make sense of mountains of data. Computer software is surpassing human ability in strategic planning, mathematical reasoning and medical diagnosis. Computer-controlled robots are cleaning offices, conducting guided tours in museums, and delivering meals in hospitals. However, a lot more is at stake here than solving engineering problems or unraveling scientific puzzles.
The various claims concerning the nature of mind are complicated and hotly contested. Many of them date back thousands of years. Some are believed to threaten the moral, religious and social fabric of society. It's feared by some that if individuals believe certain claims – whether or not they are true – they will no longer be bound by prevailing social and moral contracts.
The technology supporting the World Wide Web enables computations and data to be distributed over networks of computers, and this lets you use programs that you don't have to install or maintain and gain access to information that you don't have to store or worry about backing up. The World Wide Web is essentially a global computer with a world-wide file system.
Many of the documents in this huge file system are accessible to anyone with a browser and a network connection. Indeed, their authors really want you to take a look at what they have to offer in the way of advice, information, merchandise and services. Some of these documents, we tell ourselves, contain exactly what we're looking for, but it isn't always easy to find them.
It's difficult enough to search through files full of your own personal stuff – now there are billions of files generated by millions of authors. Some of those authors are likely to be somewhat misleading in promoting whatever they're offering, others may be downright devious, and some will just be clueless, ignorant about whatever they're talking about. How can we sift through this morass of data and find what we're looking for?
SPIDERS IN THE WEB
Billions of files are stored on web servers all over the world and you don't need a browser to access them.
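For instance, a few lines of Java suffice to fetch a page directly from a web server, exactly as a spider (or crawler) does. This is only a minimal sketch; example.com stands in for any web address you might want to visit.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class FetchPage {
        public static void main(String[] args) throws Exception {
            // Connect to a web server directly; no browser involved.
            URL url = new URL("http://example.com/");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // the raw HTML of the page
                }
            }
        }
    }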
In Daniel Dennett's Darwin's Dangerous Idea: Evolution and the Meanings of Life, the dangerous idea is natural selection, Charles Darwin's name for the mechanism governing the evolution of species that he codiscovered with Alfred Russel Wallace. Natural selection is the “process by which individual characteristics that are more favorable to reproductive success are ‘chosen,’ because they are passed on from one generation to the next, over characteristics that are less favorable”.
Natural selection explains how characteristics that promote reproductive success persist and under certain circumstances can come to dominate less favorable characteristics, but it doesn't explain how those characteristics are passed on or how new characteristics come into being. It was Gregor Mendel, living around Darwin's time, who suggested that heritable characteristics are packaged in the discrete units we now call genes and that offspring inherit a combination of their parents' genes.
When you combine Darwin's and Mendel's ideas, you have the basis for genetic algorithms, an interesting class of algorithms generally attributed to John Holland that take their inspiration from evolutionary biology. These algorithms simulate some aspects of biological evolution – while ignoring others – to tackle a wide range of problems from designing circuits to scheduling trains. Genetic algorithms allow the algorithm designer to specify the criterion for reproductive success by supplying a fitness function, alluding to the idea that in a competitive environment only the fittest survive to reproduce.
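The Java sketch below shows the bare bones of the approach. The fitness function here merely counts 1 bits, a toy stand-in for whatever criterion of reproductive success a designer might supply, and the population size, mutation rate and other parameters are invented for illustration.

    import java.util.Random;

    public class TinyGeneticAlgorithm {
        static final Random RNG = new Random();
        static final int LENGTH = 20, POP = 30, GENERATIONS = 100;

        // Designer-supplied fitness function; here, the number of 1 bits.
        static int fitness(boolean[] genome) {
            int f = 0;
            for (boolean gene : genome) if (gene) f++;
            return f;
        }

        public static void main(String[] args) {
            // Start from a random population of bit-string genomes.
            boolean[][] pop = new boolean[POP][LENGTH];
            for (boolean[] g : pop)
                for (int i = 0; i < LENGTH; i++) g[i] = RNG.nextBoolean();

            for (int gen = 0; gen < GENERATIONS; gen++) {
                boolean[][] next = new boolean[POP][];
                for (int k = 0; k < POP; k++) {
                    // Selection: the fitter of two random individuals reproduces.
                    boolean[] a = pop[RNG.nextInt(POP)], b = pop[RNG.nextInt(POP)];
                    boolean[] p1 = fitness(a) >= fitness(b) ? a : b;
                    boolean[] p2 = pop[RNG.nextInt(POP)];
                    // Crossover: a prefix from one parent, the rest from another.
                    int cut = RNG.nextInt(LENGTH);
                    boolean[] child = new boolean[LENGTH];
                    for (int i = 0; i < LENGTH; i++)
                        child[i] = i < cut ? p1[i] : p2[i];
                    // Mutation: occasionally flip a single bit.
                    if (RNG.nextInt(10) == 0)
                        child[RNG.nextInt(LENGTH)] ^= true;
                    next[k] = child;
                }
                pop = next;
            }

            int best = 0;
            for (boolean[] g : pop) best = Math.max(best, fitness(g));
            System.out.println("Best fitness after evolving: " + best + "/" + LENGTH);
        }
    }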
On the morning I began this chapter, I was driving into work and passed a sign near a grade school advertising a “Bug Safari”. When I got to work, I found out from the Web that “Butterfly Walks”, “Spider Safaris”, “Pond Dips” and similar excursions are regular fare at summer camps and neighborhood activity centers. This intrigued me: who would lead these tours? I initially thought that the most likely candidates would be trained entomologists or specialists such as lepidopterists. An entomologist studies bugs, beetles, butterflies, spiders and the like (not to be confused – as I usually do – with the similar-sounding etymologist, someone who studies the derivation of words), while lepidopterists specialize in lepidoptera, butterflies and moths.
But then I realized that it's a rare specialist who can resist being distracted by his or her own specialty. I thought about what this tendency might portend for my attempt to speak to a general audience about concepts in computer science. By most accounts, I'm an expert in computer science and a specialist in artificial intelligence. But just as a trained and practicing physicist is not an expert in all matters spanning the length and breadth of physics, from quantum electrodynamics to cosmology, so I'm not an expert in all areas of computer science.
In this chapter, I'll be talking about programming languages, an area of computer science about which I know a fair bit but certainly am not an expert.
Have you ever wondered what's going on when you click your mouse on some underlined text in a browser window and suddenly the screen fills with text and graphics that clearly come from some other faraway place? It's as if you've been transported to another location, as if a window has opened up on another world. If you're on a fast cable or DSL (“digital subscriber line”, the first of many acronyms in this chapter) connection, the transformation is almost instantaneous; if you're using a slow modem, then updating the screen can take several seconds or even minutes, but in any case the behind-the-scenes machinations making this transformation possible leave little evidence. Occasionally, however, you'll catch fleeting glimpses of the machinery through little cracks in the user interface.
If you use a dial-up connection and modem to connect with your Internet service provider, you may hear an awful squawking as your modem and the service provider's modem initiate two-way communication. Similar noisy exchanges can occur when one fax machine attempts to communicate with a second. In both cases, computer programs at each end of a telephone connection are validating, handshaking, synchronizing and otherwise handling the transmission of information. As smarter modems and fax machines replace older technology, these noisy accompaniments are being silenced, since a human need no longer overhear them to check that things are proceeding as desired.
There is a reason I liken computations to evanescent spirits. They are called into being with obscure incantations, spin filaments of data into complex webs of information, and then vanish with little trace of their ephemeral calculations. Of course, there are observable side effects, say when a computation activates a physical device or causes output to be sent to a printer, displayed on a screen or stored in a file. But the computations themselves are difficult to observe in much the way your thoughts are difficult to observe.
I'm exaggerating somewhat; after all, unlike our brains, computers are designed by human beings. We can trace every signal and change that occurs in a computer. The problem is that a complete description of all those signals and changes does not help much in understanding what's going on during a complex computation; in particular, it doesn't help figure out what went wrong when the results of a computation don't match our expectations.
An abstraction is a way of thinking about problems or complex phenomena that ignores some aspects in order to simplify thinking about others. The idea of a digital computer is itself an abstraction. The computers we work with are electrical devices whose circuits propagate continuously varying signals; they don't operate with discrete integers 0, 1, 2, …, or for that matter with binary digits 0 and 1.
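One way to make the abstraction concrete: digital logic treats any voltage above some threshold as the bit 1 and anything below it as 0. The Java sketch below assumes 5-volt logic with a 2.5-volt threshold; the numbers are chosen purely for illustration.

    public class DigitalAbstraction {
        // A circuit's voltage varies continuously; the digital abstraction
        // collapses it to two values by comparing against a threshold.
        static int toBit(double volts) {
            return volts > 2.5 ? 1 : 0; // assumed threshold for 5V logic
        }

        public static void main(String[] args) {
            double[] samples = {0.3, 1.9, 2.7, 4.8};
            for (double v : samples)
                System.out.println(v + " volts reads as the bit " + toBit(v));
        }
    }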