The Internet has fostered rapid growth in the use of application servers. Previously inaccessible outside private intranets, application servers are increasingly appearing as the middle layer of three-tiered network applications. A GUI executing on a desktop establishes a session with an application server that implements product features on top of a third tier of legacy systems or databases. Supported by growing customer access to the Internet, the application server allows a business to rapidly deploy information products and services. Java catalyzes the process by speeding the development of both the GUI and the server software, as well as making the GUI platform-independent.
Application server development is a complex undertaking. Supporting simultaneous GUI connections, application servers must protect the integrity of system data from malicious clients and the privacy of clients from each other. Traditionally this has been accomplished by guarding sensitive data with access control checks. Associated with each protected object, an access control list (ACL) names authorized principals and permitted operations. The server checks the ACL before taking potentially damaging actions. Although this is called an access list approach, its essential characteristic is not the use of a list, but the checking of permissions after granting a reference to the protected object. In this approach, the reference does not imply a right to use the protected object.
Described here is an alternative way of protecting objects, based on a capability approach: rather than checking permissions on each operation, the server hands out a reference only to clients entitled to use the object, so possession of the reference itself conveys the right to use it.
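To make the contrast concrete, here is a small, hypothetical Java sketch (the class names Account, AccessControlList, GuardedAccount, and WithdrawCapability are my own, not the article's): in the ACL style every operation is re-checked against the caller's permissions, while in the capability style the server simply hands a WithdrawCapability only to authorized clients.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: ACL-style checking versus a capability-style wrapper.
class Account {
    private double balance;
    void withdraw(double amount) { balance -= amount; }
}

// ACL approach: the caller holds a reference, but every operation is checked.
class AccessControlList {
    private final Map<String, Set<String>> permitted; // principal -> operations

    AccessControlList(Map<String, Set<String>> permitted) { this.permitted = permitted; }

    boolean allows(String principal, String operation) {
        Set<String> ops = permitted.get(principal);
        return ops != null && ops.contains(operation);
    }
}

class GuardedAccount {
    private final Account account;
    private final AccessControlList acl;

    GuardedAccount(Account account, AccessControlList acl) {
        this.account = account;
        this.acl = acl;
    }

    void withdraw(String principal, double amount) {
        // The check happens after the reference has already been handed out.
        if (!acl.allows(principal, "withdraw"))
            throw new SecurityException("not permitted");
        account.withdraw(amount);
    }
}

// Capability approach: the server gives this object only to authorized
// clients; holding the reference *is* the right to withdraw.
class WithdrawCapability {
    private final Account account;

    WithdrawCapability(Account account) { this.account = account; }

    void withdraw(double amount) { account.withdraw(amount); }
}
```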
There is a widespread myth that computers, being fast adding machines, do math well. We all know that this is not true, but sometimes we believe it anyway. Sometimes we forget that the numeric answers we get have high precision but perhaps no accuracy. I know that I should do the error analysis on each and every floating-point operation, but sometimes I don't.
I have always believed in “consumer arithmetic” (i.e., I don't care how fast I get the wrong answer; I care how fast I get the right answer). And I really believe that programming language libraries should support math at least as well as high school algebra does. When I get a numeric problem, I want the computer to tell me that it can get the answer, that it can get close, or that it can't solve the problem to the accuracy I want given the data I have. What I don't want is a string of digits that may or may not have any meaning, with no indication of which.
I will explore one small corner of the universe and how math happens in the Java language. I use a specific example from a talk I heard on Interval Arithmetic—more on that later. I'll warn you now that I consider myself a user of numbers rather than a mathematician, but we'll definitely do some math along the way.
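To give a flavor of the idea before we get to it, here is a minimal interval-arithmetic sketch in Java. It is my own illustration, not code from the talk, and it uses Math.nextDown/Math.nextUp to widen results outward in place of true directed rounding, which the standard library does not expose.

```java
// Minimal interval-arithmetic sketch (illustrative only).
// An interval is a pair of doubles guaranteed to contain the true value;
// each operation widens its result outward by one unit in the last place.
final class Interval {
    final double lo, hi;

    Interval(double lo, double hi) {
        if (lo > hi) throw new IllegalArgumentException("empty interval");
        this.lo = lo;
        this.hi = hi;
    }

    Interval add(Interval other) {
        return new Interval(Math.nextDown(lo + other.lo),
                            Math.nextUp(hi + other.hi));
    }

    Interval sub(Interval other) {
        return new Interval(Math.nextDown(lo - other.hi),
                            Math.nextUp(hi - other.lo));
    }

    boolean contains(double x) { return lo <= x && x <= hi; }

    @Override public String toString() { return "[" + lo + ", " + hi + "]"; }

    public static void main(String[] args) {
        // 1/3 is not exactly representable, so start with a tight interval around it.
        Interval third = new Interval(Math.nextDown(1.0 / 3.0), Math.nextUp(1.0 / 3.0));
        // Adding it three times yields a narrow interval that must contain 1.0;
        // the interval's width reports how much accuracy was lost.
        Interval sum = third.add(third).add(third);
        System.out.println(sum + " contains 1.0? " + sum.contains(1.0));
    }
}
```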
Last month, Tim Matthews described how JavaSoft is developing a Java Cryptography Architecture (JCA) and extensions (the Java Cryptography Extensions, or JCE). He described their contents and structure in the java.security package, and outlined their uses. This month I will present some actual code using the base functionality in the JCA, and next month I will program using the JCE and multiple Providers.
After reading this article, you will, I trust, be able to write a program in Java (an application or applet) that can sign or verify data using DSA with the security package. Beyond the specific DSA example presented here, though, I hope you will understand the JCA model enough to be able to quickly write code for any operation in the package.
Before beginning, however, it is important to note that the java.security package is not part of the JDK 1.0.2, only JDK 1.1 and above. Furthermore, there are significant differences between the security packages in JDK 1.1 and 1.2. This article (and next month's) describes features in 1.2. If you have not yet left 1.0.2 behind, now would be a good time to do so. After all, with 1.2, you are not only getting the security package, you are also getting improved cloning, serialization and many other features.
Now let's look at what a Java program needs to do to use the JCA. Almost everything in cryptography begins with the random number generator.
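To preview the style of code the rest of this article walks through, here is a minimal sketch of generating a DSA key pair and signing and verifying data with java.security; it assumes a provider that supports the standard "DSA" and "SHA1withDSA" algorithm names.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import java.security.Signature;

// Minimal sketch of DSA signing and verification with the JCA.
public class DsaExample {
    public static void main(String[] args) throws Exception {
        // A cryptographically strong random number generator seeds key generation.
        SecureRandom random = new SecureRandom();

        // Generate a DSA key pair using the default provider.
        KeyPairGenerator keyGen = KeyPairGenerator.getInstance("DSA");
        keyGen.initialize(1024, random);
        KeyPair keyPair = keyGen.generateKeyPair();

        byte[] data = "message to be signed".getBytes("UTF-8");

        // Sign with the private key.
        Signature signer = Signature.getInstance("SHA1withDSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(data);
        byte[] signature = signer.sign();

        // Verify with the public key.
        Signature verifier = Signature.getInstance("SHA1withDSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(data);
        System.out.println("signature verifies: " + verifier.verify(signature));
    }
}
```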
You have probably heard or read something about software agents; they have been a hot topic for some time now, and agent-based commercial products are available. There are several schools of thought on what the term “agent” really means. For example, in the context of artificial intelligence, an agent is often described as an autonomous, intelligent entity that can make decisions and perform actions based on perceived inputs in order to achieve some goal (Russell, S., and P. Norvig, Artificial Intelligence: A Modern Approach, 1995).
Mobility is a common characteristic of many agent definitions. A mobile agent is an active object that can move both data and functionality (code) to multiple places within a distributed system. It doesn't matter what the ultimate purpose of the agent is or whether it can be classified as “intelligent.”
A mobile agent should be able to execute on any machine within a network, regardless of the processor type or operating system. In addition, the agent code should not have to be installed on every machine that the agent could potentially visit; it should move with the agent's data automatically. Therefore, it is desirable to implement agents on top of a mobile code system, such as the Java virtual machine (VM). The dynamic nature of Java classes and objects, combined with advanced networking capabilities, makes Java well suited for use as a mobile agent platform.
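As a rough illustration of the idea, and not the API of any particular agent product, a mobile agent can be modeled as a serializable Java object: a host serializes the agent's state, ships it (together with its bytecode, if the destination lacks it) to the next host, loads the class there, and resumes execution. The Agent and AgentHost interfaces below are hypothetical names of my own.

```java
import java.io.Serializable;

// Hypothetical sketch of a mobile-agent interface.
interface Agent extends Serializable {
    // Called by the host after the agent (code + data) arrives.
    void run(AgentHost host);
}

interface AgentHost {
    // Serialize this agent and send it, with its bytecode if the destination
    // does not already have it, to another host.
    void dispatch(Agent agent, String destination);
}

// An agent that visits a list of hosts, carrying its partial results with it.
class SurveyAgent implements Agent {
    private final String[] itinerary;
    private int nextStop = 0;
    private final StringBuilder results = new StringBuilder();

    SurveyAgent(String[] itinerary) { this.itinerary = itinerary; }

    public void run(AgentHost host) {
        results.append("visited stop ").append(nextStop).append('\n');
        if (nextStop < itinerary.length) {
            String destination = itinerary[nextStop++];
            host.dispatch(this, destination);   // state moves with the code
        } else {
            System.out.println(results);        // itinerary complete
        }
    }
}
```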
Java has rapidly emerged as an important tool for the development of cross-platform, distributed applications. While Java has been primarily associated with the Web, it is a much broader technology with applications far beyond the Internet. To take advantage of the capabilities of Java and its wide range of application programming services, an organization must enhance the skill base of its workforce in several critical areas, among them object-oriented programming, interactive and event-driven application development, distributed (client/server) systems architecture, and, increasingly, the design and use of components (beans) for Java's component architecture, JavaBeans.
To create an effective and successful migration strategy for its development and project management staff, a company must consider several factors. These factors are discussed in detail in this article.
AUDIENCE
Each group of potential Java users will likely need a different migration strategy. While one group, such as software developers, may need in-depth knowledge of the Java programming facilities, another group, such as project managers, needs only a superficial understanding of the language features but a strong understanding of the impact that the use of Java has on a project. The audience for a migration effort can be broken into five distinct groups:
The control structure of an interactive program can be quite complex when implemented in a sequential language. This problem arises because interactive programs must be able to deal with asynchronous external events responsively, while also executing the application code. The usual solution to this problem is to structure the program around a central event loop, which dispatches control to parts of the application in reaction to external events. Often, the event loop is part of a library, and applications are programmed using the so-called “inverted program structure.” This is not too bad when the application is purely reactive, that is, it only does something as a reaction to user input, but many interactive applications do not fit this model. In such cases, the application must define special call-backs to perform computation when the system is otherwise idle. In order to guarantee responsiveness, these call-backs must execute quickly and then pass control back to the event loop. In effect, the use of an event loop is a “poor man's concurrency.” This structure makes programming computationally significant algorithms difficult, and leads to a bias towards reactive, or input-driven, systems.
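The inverted structure is easy to see in a sketch. The following illustrative Java (the EventLoop, Handler, and IdleWork names are mine, not from any particular toolkit) shows the event loop owning control, with application code reduced to callbacks that must return quickly and long computations chopped into idle-time steps.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch of the "inverted" event-loop structure.
class EventLoop {
    interface Handler  { void handle(String event); }
    interface IdleWork { boolean step(); }   // returns false when finished

    private final Queue<String> events = new ArrayDeque<>();
    private final Queue<IdleWork> idleWork = new ArrayDeque<>();
    private Handler handler = e -> { };

    void post(String event)      { events.add(event); }
    void onEvent(Handler h)      { handler = h; }
    void whenIdle(IdleWork work) { idleWork.add(work); }

    // The loop owns control; application code runs only in short bursts.
    void run() {
        while (!events.isEmpty() || !idleWork.isEmpty()) {
            if (!events.isEmpty()) {
                handler.handle(events.remove());     // callbacks must return quickly
            } else {
                IdleWork work = idleWork.remove();   // long computations are chopped
                if (work.step()) idleWork.add(work); // into steps run when idle
            }
        }
    }
}
```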
A more natural way to program these systems is as a set of communicating processes. Some window systems, such as the X Window System, provide concurrency between applications by using multiple system processes, but the individual applications are sequential and suffer from the problems mentioned above.
This book is about the union of two important paradigms in programming languages, namely, higher-order languages and concurrent languages. Higher-order programming languages, often referred to as “functional programming” languages, are languages that support functions as first-class values. The language used here is the popular higher-order language Standard ML (SML) [MTH90, MTHM97], which is the most prominent member of the ML family of languages. In particular, the bulk of this book focuses on concurrent programming using the language Concurrent ML (CML), which extends SML with independent processes and higher-order communication and synchronization primitives. The power of CML is that a wide range of communication and synchronization abstractions can be programmed using a small collection of primitives.
A concurrent program is composed of two or more sequential programs, called processes, that execute (at least conceptually) in parallel. The sequential execution of each process is independent, but the processes must also interact via shared resources in order to collaborate on achieving their common purpose. In this book, we are concerned with the situation in which the concurrency and process interaction are explicit. This is in contrast with implicitly parallel languages, such as parallel functional languages [Hud89, Nik91, PvE93] and concurrent logic programming languages [Sha89]. The choice of language mechanisms used for process interaction is the key issue in concurrent programming language design.
A natural application of concurrency is the management of multiple independent tasks. For example, building a large C program involves a number of individual compilations, each of which is run as a separate UNIX command. Since these compilations are independent, they may be run at the same time (possibly on different machines). In this chapter, we describe the implementation of a “parallel” build system using CML.
The problem
The basic problem is that we are given a set of objects, and a set of dependencies between the objects. Associated with some objects is an action that describes how to build the object; other objects (e.g., source files) do not have associated actions. Taken together, they form an acyclic dependency graph, whose topological order defines the order in which objects should be built. We use the term antecedents to denote the nodes that a node depends on, and successors to denote the nodes that depend on it. The nodes of the graph are classified into internal nodes (those that have non-zero in-degree), leaf nodes (those with no antecedents), and the root node (which has no successors). We restrict ourselves to graphs with exactly one root. For the graph to be well formed, any internal node should have an associated action. Leaf nodes may also have actions.
Also associated with each object is a timestamp that tells when the object was last built or modified.
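The book builds this system in CML; purely to make the data concrete, here is a sketch of one graph node in Java, with names of my own choosing: each node records its antecedents, its successors, an optional build action, and a timestamp. A CML version might instead make each node a process that waits for messages from its antecedents before building.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not the book's CML code) of a node in the
// dependency graph used by the parallel build system.
class BuildNode {
    final String name;
    final List<BuildNode> antecedents = new ArrayList<>(); // nodes this node depends on
    final List<BuildNode> successors  = new ArrayList<>(); // nodes that depend on this node
    final Runnable action;        // how to build this object; null for plain source files
    long timestamp;               // when the object was last built or modified

    BuildNode(String name, Runnable action, long timestamp) {
        this.name = name;
        this.action = action;
        this.timestamp = timestamp;
    }

    // An object is out of date if any antecedent is newer than it is.
    boolean outOfDate() {
        for (BuildNode dep : antecedents)
            if (dep.timestamp > timestamp) return true;
        return false;
    }

    // Rebuild once every antecedent is up to date.
    void build() {
        if (action != null && outOfDate()) {
            action.run();
            timestamp = System.currentTimeMillis();
        }
    }
}
```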
The design of CML has been driven by practical experience. In particular, the mechanism of first-class synchronous operations is motivated by the fundamental conflict between selective communication and abstraction. This chapter explains the rationale for the design of CML, and especially for first-class synchronous operations. It is aimed at people interested in language design and is not required for understanding the remainder of the book. In this chapter, we focus on core CML, that is, synchronous message passing plus the event combinators; the other synchronization mechanisms found in CML, such as mailboxes, I-variables, and M-variables, can be viewed as derived forms. Some of the discussion here repeats earlier arguments, but it is included for coherence.
Basic design choices
As surveyed in Chapter 2, there are many possible choices for the design of a concurrent language. CML chooses message passing over shared memory, synchronous communication over asynchronous, and simple rendezvous over extended rendezvous. This section argues in favor of these choices.
While SML is an imperative language, its design greatly encourages a mostly functional style of programming. Mutable values must be declared explicitly as such, and there is syntactic overhead on their use. For these reasons, extending SML with shared-memory concurrency primitives is not true to the “spirit” of the language. Message passing, on the other hand, encourages a mostly functional programming style that fits well with ML. As we have seen, much of the state in typical CML programs is represented as immutable arguments to the tail-recursive functions that implement threads.
Concurrent programming is the task of writing programs consisting of multiple independent threads of control, called processes. Conceptually, we view these processes as executing in parallel, but in practice their execution may be interleaved on a single processor. For this reason, we distinguish between concurrency in a programming language, and parallelism in hardware. We say that operations in a program are concurrent if they can be executed in parallel, and we say that operations in hardware are parallel if they overlap in time.
Operating systems, where there is a need to allow useful computation to be done in parallel with relatively slow input/output (I/O) operations, provide one of the earliest examples of concurrency. For example, during its execution, a program P might write a line of text to a printer by calling the operating system. Since this operation takes a relatively long time, the operating system initiates it, suspends P, and starts running another program Q. Eventually, the output operation completes and an interrupt is received by the operating system, at which point it can resume executing P. In addition to introducing parallelism and hiding latency, as in the case of slow I/O devices, there are other important uses of concurrency in operating systems. Using interrupts from a hardware interval timer, the operating system can multiplex the processor among a collection of user programs, which is called time-sharing. Most time-sharing operating systems allow user programs to interact, which provides a form of user-level concurrency.