If you are like most other programmers, you have probably been thinking about updating your technical skills. You have been hearing a lot about Java, object-oriented development, and Internet applications. These topics have been getting a tremendous amount of press lately. In the fanfare, you may have heard someone suggest that COBOL programmers will be obsolete and can't possibly make the switch to OO. Can this possibly be true? We don't think so.
We wrote this book because we believe it is important that you learn Java and OO development. Although we don't claim learning a new programming language is a trivial task, the fact that you already know COBOL gives you a head start on learning Java. Don't let what others say bother you.
We work with COBOL as consultants for industry, in our classrooms, and as authors. However, we also work with Java and object-oriented development. From our perspective, we believe COBOL and Java are highly complementary development tools in the evolving computing environment. COBOL does a great job of processing and maintaining a firm's data. Java plays an equally important role of capturing and reporting data by connecting clients to the server across a variety of networked computers, with little concern about the specific hardware and operating systems involved.
This chapter shows you how to work with arrays. You will learn how to define and manipulate both one-dimensional and multidimensional arrays. In this chapter, as in others, we will develop real working programs to illustrate using Java arrays.
NOTES
COBOL uses the term single-level table; Java uses the term one-dimensional array.
In keeping with the spirit of Java, here we will use array and dimension.
This chapter begins with the declaration and population of one-dimensional arrays, then illustrates how to declare and populate two-dimensional arrays. The examples use both numeric and string data values.
The chapter also describes how to search an array using Java and how to pass arrays as arguments to methods.
You will see that, although internally Java treats array processing somewhat differently than COBOL, Java array handling looks a lot like COBOL table processing to the programmer. Of course, we will continue to point out significant differences between Java and COBOL and the pitfalls to avoid when writing Java from a COBOL programmer's perspective.
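As a preview of two of those topics, here is a minimal sketch of passing an array to a method and searching it sequentially; the class, method, and data names below are ours for illustration and are not taken from the case study.

    public class ArraySearchDemo {
        // the method receives a reference to the caller's array
        static int findAccount(int[] accountNumbers, int target) {
            for (int i = 0; i < accountNumbers.length; i++) {
                if (accountNumbers[i] == target) {
                    return i;              // subscript of the matching element
                }
            }
            return -1;                     // not found
        }

        public static void main(String[] args) {
            int[] accounts = {1001, 1002, 1005, 1009};
            System.out.println(findAccount(accounts, 1005));   // prints 2
        }
    }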
This chapter assumes you understand the following:
COBOL:
Defining one- and two-level tables
Initializing one- and two-level tables
Table lookup techniques
Using subscripts and indexes
Perform-varying statement
Java:
OO concepts (Chapter 2)
Java program structure (Chapter 3)
Defining data (Chapter 4)
Decision making (Chapter 6)
Looping (Chapter 7)
DECLARING ONE-DIMENSIONAL ARRAYS
We begin our discussion of one-dimensional arrays by looking at loan processing for the Community National Bank (CNB).
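Before turning to the case study itself, here is a minimal sketch of declaring, populating, and stepping through a one-dimensional array; the names and sample values are ours, not CNB's.

    public class LoanArrayDemo {
        public static void main(String[] args) {
            // declare and populate in one statement
            double[] loanAmounts = {1000.00, 2500.50, 4200.75};

            // or declare first, then allocate and populate element by element
            int[] loanNumbers = new int[3];
            loanNumbers[0] = 101;
            loanNumbers[1] = 102;
            loanNumbers[2] = 103;

            // Java subscripts begin at 0, not 1 as in COBOL
            for (int i = 0; i < loanAmounts.length; i++) {
                System.out.println(loanNumbers[i] + ": " + loanAmounts[i]);
            }
        }
    }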
This chapter introduces you to Java data access techniques using classes supplied in three packages: java.io, java.sql, and java.net. You are undoubtedly familiar with reading and writing files using COBOL. Java, however, takes a somewhat different approach, and therefore you will not find the clear parallels between COBOL and Java in this chapter that you have seen in the previous chapters.
Here we develop programs to demonstrate sequential file input-output (I-O) and database access. In addition, we demonstrate a technique called object serialization, which Java uses to store intact objects in files for later retrieval. Although a demonstration of network access is beyond the scope of this book, we will present an overview.
The chapter begins with a brief description of the I-O classes contained in the java.io package and their hierarchy. Then, a relatively simple sequential file I-O demonstration is presented. Next we repeat the example using a relational database. Object persistence is then discussed and illustrated using Java's Object Serialization classes. The chapter concludes with a discussion of network access using classes in the java.net package.
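To give a flavor of the java.io style before the full demonstration, here is a minimal sketch of sequential input using BufferedReader and FileReader; the file name accounts.txt is only an example.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class ReadSequentialFile {
        public static void main(String[] args) {
            try {
                BufferedReader in = new BufferedReader(new FileReader("accounts.txt"));
                String record = in.readLine();      // one "record" per line of text
                while (record != null) {            // null signals end of file
                    System.out.println(record);
                    record = in.readLine();
                }
                in.close();
            } catch (IOException e) {
                System.out.println("I-O error: " + e.getMessage());
            }
        }
    }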
The purpose of this chapter is to introduce you to Java computation. The basic arithmetic operators add, subtract, multiply, and divide (+, -, *, /) are the same as those we use in COBOL's COMPUTE statement. However, Java operations such as exponentiation (raising a value to a power) and rounding are accomplished using methods in the Math class. Java also has operators that enable us to write computation shortcuts if we wish.
Also, you will recall from Chapter 4 that Java is very particular about data types. This sensitivity is raised to a higher level when we do computation, as discussed more thoroughly in this chapter.
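The following brief sketch illustrates these points; the variable names and values are ours and are not from the case study.

    public class ArithmeticDemo {
        public static void main(String[] args) {
            double principal = 1000.00;
            double rate = 0.05;

            double interest = principal * rate;                      // basic operator
            double compounded = principal * Math.pow(1 + rate, 3);   // exponentiation via Math
            long wholeDollars = Math.round(compounded);              // rounding via Math

            // data type sensitivity: Math.round returns a long,
            // so storing the result in an int requires an explicit cast
            int rounded = (int) Math.round(interest);

            System.out.println(interest + " " + compounded + " "
                               + wholeDollars + " " + rounded);
        }
    }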
The chapter begins with a discussion of Java exceptions. An exception is Java's way of informing us of some condition that occurs while our program is running. While doing computation, for example, this could be an error like dividing by zero. Exceptions are not limited to arithmetic, however. You will see how exceptions are used to signal us about any condition, ranging from computation errors to hardware problems. In this chapter we will illustrate the use of custom exceptions using the familiar CheckingAccount class developed in Chapter 3.
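As a first taste of the mechanism (custom exceptions and the CheckingAccount example come later in the chapter), here is a minimal sketch of catching the exception Java throws for integer division by zero; the variable names are ours.

    public class DivideDemo {
        public static void main(String[] args) {
            int totalBalance = 100;
            int numberOfAccounts = 0;
            try {
                int average = totalBalance / numberOfAccounts;   // throws ArithmeticException
                System.out.println(average);
            } catch (ArithmeticException e) {
                System.out.println("Computation error: " + e.getMessage());
            }
        }
    }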
This chapter shows you how to write Java loops. You will learn how to write loops that mirror the familiar COBOL PERFORM statement, including PERFORM-UNTIL, PERFORM-VARYING-UNTIL, and PERFORM-VARYING-UNTIL-AFTER. You will see that Java has three different types of loops: while, do, and for. We will use each of these to duplicate the work done by the COBOL PERFORM statement. In addition, we will review loops that test for the terminating condition at the beginning and at the end of the loop. We will also demonstrate writing nested loops in Java.
The chapter begins with the simple COBOL PERFORM-UNTIL statement and shows you how to accomplish the same thing in Java. Then we work with the PERFORM-VARYING-UNTIL and finally the PERFORM-VARYING-UNTIL-AFTER statement. Working programs are developed to illustrate the Java loop statements in action. At the end of the chapter, we design and develop a small program for the Community National Bank to compute a loan amortization, using some of the Java looping statements.
This chapter assumes you understand the following:
COBOL:
Perform-until
Inline perform (COBOL-85)
Perform-varying-until
Perform-varying-until-after
With test after (COBOL-85)
Java:
OO concepts (Chapter 2)
Java program structure (Chapter 3)
Defining data (Chapter 4)
Arithmetic (Chapter 5)
Decision making (Chapter 6)
LOOP STRUCTURE
We write program loops to repeat a sequence of instructions.
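As a minimal sketch of the parallel developed in this chapter, the inline PERFORM-UNTIL shown in the comments below is mirrored by a Java while loop; WS-COUNT and count are hypothetical data names. Note that PERFORM UNTIL repeats until its condition becomes true, whereas while repeats as long as its condition remains true.

    public class LoopDemo {
        public static void main(String[] args) {
            // COBOL equivalent (WS-COUNT assumed to start at 1):
            //     PERFORM UNTIL WS-COUNT > 5
            //         ADD 1 TO WS-COUNT
            //     END-PERFORM
            int count = 1;
            while (count <= 5) {            // while repeats as long as the condition is true;
                count = count + 1;          // PERFORM UNTIL repeats until its condition is true
            }
            System.out.println("final count = " + count);   // prints 6
        }
    }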
In this chapter you will learn how to implement the selection and case structure using Java. You will see how to write if and switch statements. Java and COBOL if statements are very similar and therefore straightforward. However, switch is a distant cousin of the COBOL EVALUATE verb and has some similarities, but it also has many important differences, which will be discussed and illustrated in this chapter.
We will first examine the Java logical operators, conditions, and the if statement. This discussion includes the emulation of COBOL condition names using Java. Next we present the switch statement and explain its use. The chapter concludes with the development of a new method, computeServiceCharge(), for CheckingAccount. This method is first written using nested if statements, then written again using switch.
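As a preview, here is a minimal switch sketch; the account-type codes are invented for illustration. Note that, unlike EVALUATE, switch selects on a single integer-compatible value and needs break to prevent fall-through.

    public class SwitchDemo {
        public static void main(String[] args) {
            int accountType = 2;                  // hypothetical account-type code
            switch (accountType) {
                case 1:
                    System.out.println("Regular checking");
                    break;                        // without break, execution falls through
                case 2:
                    System.out.println("Premium checking");
                    break;
                default:                          // roughly the role of WHEN OTHER
                    System.out.println("Unknown account type");
            }
        }
    }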
The chapter assumes you understand the following:
COBOL:
Condition names
Logical operators
IF-ELSE-END-IF
EVALUATE
CONTINUE
Java:
OO concepts (Chapter 2)
Java program structure (Chapter 3)
Defining data (Chapter 4)
Computation (Chapter 5)
SERVICE CHARGES AT COMMUNITY NATIONAL BANK
The Community National Bank system computes a service charge for each checking account each month.
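To make the starting point concrete, here is a sketch of what a nested-if version of computeServiceCharge() might look like. The balance brackets, the charge amounts, and the currentBalance field are invented for illustration; CNB's actual rules are developed later in the chapter.

    // sketch only: invented brackets and amounts; currentBalance is an assumed field
    public double computeServiceCharge() {
        double serviceCharge;
        if (currentBalance >= 1000.00) {
            serviceCharge = 0.00;                  // no charge for large balances
        } else {
            if (currentBalance >= 500.00) {
                serviceCharge = 2.50;
            } else {
                serviceCharge = 5.00;
            }
        }
        return serviceCharge;
    }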
This chapter is devoted to the introduction of the key concepts essential to your understanding of the OO paradigm in general and writing OO code in particular. This chapter describes the primary OO concepts: objects, classes, inheritance, encapsulation, polymorphism, and dynamic binding. A basic understanding of these ideas will enable you to begin writing Java programs and developing object-oriented systems.
Several examples are used throughout the chapter to illustrate the concepts. In most cases, the examples are based on everyday things with which you are already familiar. A case study (The Community National Bank Case) is introduced at the beginning of the chapter and will be used to illustrate OO concepts. The case study will continue to be used throughout the remainder of the book.
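To make the vocabulary a little more concrete before the formal definitions, here is an informal Java sketch in the spirit of the banking case; the class and method names here are ours, not the book's.

    class Account {                               // a class describes one kind of object
        private double balance;                   // encapsulation: data hidden behind methods
        public void deposit(double amount) { balance = balance + amount; }
        public double getBalance() { return balance; }
        public String describe() { return "generic account"; }
    }

    class CheckingAccount extends Account {       // inheritance: a CheckingAccount is an Account
        public String describe() {                // overriding the inherited method
            return "checking account";
        }
    }

    // polymorphism and dynamic binding: with
    //     Account anAccount = new CheckingAccount();
    // the call anAccount.describe() runs CheckingAccount's version at run time.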
NOTES
OO development is a different way of thinking about the development of systems—it is not simply a programming technique.
Although programming languages were the first to utilize the ideas of OO, the concepts of OO are prevalent throughout all phases of development—analysis, design, and programming.
It is very important that you understand the basics of OO before attempting to write a Java program.
So, you want to learn Java. Why? You probably want to for one or all of the following reasons:
As a COBOL programmer you feel a need to update your skills;
Java is hot and so are the jobs for Java programmers;
Object technology is hot and Java fits perfectly with the object-oriented development your boss is requiring you to learn; and
You are a naturally curious person who wants to see what this Java stuff is all about.
Whatever your reasons, we're glad you're here!
In this book, we place the emphasis on learning Java from a COBOL perspective. Of the several million COBOL programmers worldwide, a significant number are, or soon will be, learning Java. We have designed this book to help you with that task. We will take what you know about COBOL and apply as much of it as possible to learning Java. But first, let's learn a little about Java in general, nontechnical terms and about object-oriented development. Chapters 1 and 2, respectively, have been set aside for these purposes.
We introduce a typed λ-calculus which allows the use of exceptions in the ML style. It is an extension of the system $AF_2$ of Krivine & Leivant (Krivine, 1990; Leivant, 1983). We show its main properties: confluence, strong normalization and weak subject reduction. The system satisfies the “proof as program” paradigm, as in $AF_2$. Moreover, the underlying logic of our system is intuitionistic logic.
This pearl explains Church numerals, twice. The first explanation links Church numerals to Peano numerals via the well-known encoding of data types in the polymorphic λ-calculus. This view suggests that Church numerals are folds in disguise. The second explanation, which is more elaborate, but also more insightful, derives Church numerals from first principles, that is, from an algebraic specification of addition and multiplication. Additionally, we illustrate the use of the parametricity theorem by proving exponentiation as reverse application correct.
The parallel-functional language Eden has a non-deterministic construct, the process abstraction merge, which interleaves a set of input lists to produce a single non-deterministic list. Its non-deterministic behaviour is a consequence of its reactivity: it immediately copies to the output list any value appearing at any of the input lists. This feature is essential in reactive systems and very useful in some deterministic parallel algorithms. The presence of non-determinism creates some problems; for instance, some internal transformations in the compiler must be disallowed. The paper describes several non-determinism analyses developed for Eden, aimed at detecting the parts of the program that, even in the presence of a merge process, still exhibit deterministic behaviour. A polynomial-cost algorithm which annotates Eden expressions is described in detail. A denotational semantics is described for Eden, and the correctness of all the analyses is proved with respect to this semantics.
Functional programming fits well with the use of descriptive markup in HTML and XML. There is also a good fit between S-expressions in Lisp and the XML data set. These similarities are exploited in LAML which is a software package for Scheme. LAML supports exact mirrors of the three variants of XHTML 1.0, SVG 1.0, and a number of more specialized XML languages. The mirrors are all synthesized from document type definitions (DTDs). Each element in a mirror is represented by a named function in Scheme. The mirror functions validate the XML document while it is generated. The validation is based on finite state automata automatically derived from the DTD.
We analyse machines that implement the call-by-value reduction strategy of the λ-calculus: two environment machines – CAM and SECD – and two encodings into the $\pi$-calculus – due to Milner and Vasconcelos. To establish the relation between the various machines, we set up a notion of reduction machine and two notions of correspondences: operational – in which a reduction step in the source machine is mimicked by a sequence of steps in the target machine – and convergent – where only reduction to normal form is simulated. We show that there are operational correspondences from the λ-calculus into CAM, and from CAM and from SECD into the $\pi$-calculus. Plotkin completes the picture by showing that there is a convergent correspondence from the λ-calculus into SECD.
The functional programming language Haskell and its type system are used to define and analyse the nature of some problems and tools in machine learning and data mining. Data types and type-classes for statistical models are developed that allow models to be manipulated in a precise, type-safe and flexible way. The statistical models considered include probability distributions, mixture models, function-models, time-series, and classification- and function-model-trees. The aim is to improve ways of designing and programming with models, not only of applying them.
In this article the focus is on methodology for analysing learner-learner oral conversations mediated by computers. With the increasing availability of synchronous voice-based groupware and the additional facilities offered by audio-graphic tools, language learners have opportunities for collaborating on oral tasks, supported by visual and textual stimuli via computer-conferencing. Used synchronously with real-time voice-based work, these tools present learners with the challenge of learning a new type of oral interaction, and researchers with the need for developing methodologies for redefining L2 oral competence in these environments. In this paper we address the latter. We examine approaches from the interactionist branch of Second Language Acquisition research, and we question the ability of this model of language learning to fully account for the processes that take place when learners are interacting with machines while talking to each other. To complement the socio-cognitive insights of that school, we look to interactional linguistics and to social semiotics. Building on findings from these fields, we offer a qualitative discussion of the discourses evidenced in conversational data from two distance-learning projects that use synchronous voice in conjunction with other stimuli, in an intermediate French programme at the UK Open University. We then present detailed conclusions about the methodological challenges involved in analysing the oral competence of students who use these tools.
If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?
Seymour Cray (1925–1996), father of supercomputing
Instruction partitioning for pipelining
In Section 4.3 we learned that the execution of a processor instruction consists of two basic steps: fetching the instruction from memory, and then executing it. In the simplest implementation of a processor, the complete fetch-execute cycle would be completed for one instruction before the next one begins. We saw this type of one-instruction-at-a-time operation in the algorithmic behavioral model in Chapter 4. To speed up the execution of instructions, however, we can break the fetch-execute cycle into several simpler sub-operations. We then can overlap the execution of different instructions in an assembly line fashion where each step in the assembly line is dedicated to performing one specific operation for each instruction. This assembly line processing is called pipelining.
The first step in designing a pipeline for a processor is to determine the smaller sub-operations within the fetch-execute cycle that must be performed to execute an instruction. For the VeSPA processor, each instruction will perform some of the following sub-operations, although not all of the instructions will perform all of the operations:
Fetch the instruction from memory.
Increment the program counter.
Fetch the operands from the registers.
Compute a memory address.
Read an operand from memory.
Write a result to the memory or to a register.
The next step in designing the pipeline for VeSPA, or for any other processor ISA, is to determine how these individual operations should be partitioned into the different stages of the pipeline.
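Purely as a software illustration of the assembly-line idea (this is not VeSPA design code, and the five-stage split below is an assumed partition chosen only for the sketch), the following Java fragment prints which instruction occupies which pipeline stage on each clock cycle.

    public class PipelineSketch {
        public static void main(String[] args) {
            // an assumed five-stage partition of the sub-operations listed above
            String[] stages = {"Fetch", "Decode/Read", "Execute", "Memory", "Write"};
            int instructionCount = 4;
            int totalCycles = instructionCount + stages.length - 1;

            for (int cycle = 0; cycle < totalCycles; cycle++) {
                System.out.print("Cycle " + (cycle + 1) + ":");
                for (int i = 0; i < instructionCount; i++) {
                    int stage = cycle - i;                 // stage occupied by instruction i
                    if (stage >= 0 && stage < stages.length) {
                        System.out.print("  I" + (i + 1) + "=" + stages[stage]);
                    }
                }
                System.out.println();
            }
        }
    }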