Two of the most common tasks in scientific computing are interpolation of discrete data and approximation by known functions of the numerical solution, the source terms, and the boundary or initial conditions. Therefore, we need to perform these tasks both accurately and efficiently. The data are not always nicely distributed on a uniform lattice or grid, and thus we must learn how to manage these situations as well. We often use polynomials to represent discrete data because they are easy to “manipulate,” that is, differentiate and integrate. However, sines and cosines as well as special functions called wavelets are very effective means to perform interpolation and approximation, and they have very interesting properties.
In this section, we will study various such representations and their corresponding C++ implementations. We consider cases where the data are just sufficient to determine the representation exactly (the deterministic case) as well as cases where there are more data than unknowns (the overdetermined case).
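As a first taste of these representations, here is a minimal sketch of polynomial interpolation in Lagrange form; the function name and interface are illustrative and not the class design developed later in the book.

#include <vector>

// Evaluate at x the polynomial interpolating the points (xpts[i], ypts[i]),
// written in Lagrange form. Assumes the abscissas xpts[i] are distinct.
double LagrangeInterpolant(double x, const std::vector<double>& xpts,
                           const std::vector<double>& ypts) {
  double sum = 0.0;
  for (std::size_t i = 0; i < xpts.size(); ++i) {
    double phi = 1.0;   // ith Lagrange basis polynomial evaluated at x
    for (std::size_t j = 0; j < xpts.size(); ++j)
      if (j != i) phi *= (x - xpts[j]) / (xpts[i] - xpts[j]);
    sum += phi * ypts[i];
  }
  return sum;
}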
Finally, we will present a more detailed discussion of MPI_Send and MPI_Recv, the two fundamental building blocks of MPI.
In this chapter we introduce the main themes that we will cover in this book and provide an introduction for each of them. We begin with a brief overview of C++ and define the two basic concepts of functions and classes as well as other syntactic elements of the language. We then introduce basic mathematical concepts that include elements of linear algebra, vector orthogonalization, and corresponding codes and software. Finally, we introduce parallel programming and review some generic parallel architectures as well as standard parallel algorithms for basic operations (e.g., the fan-in algorithm for recursive doubling). We also provide a brief overview of the main MPI commands.
INTRODUCTION TO C++
An ancient proverb states that a thousand-mile journey begins with a single step. For us, this single step will be a brief overview of the C++ programming language. This introduction is not designed to be all-inclusive, but rather it should provide the scaffolding from which we will build concepts throughout this book. Admittedly, what you will read now may seem daunting in its scope, but as you become more familiar with the concepts found herein, you will be able to use the C++ language as a tool for furthering your understanding of deeper mathematical and algorithmic concepts presented later in the book. With this in mind, let us begin our thousand-mile journey with this first step.
In this chapter we consider explicit discretizations of space and time derivatives. In such discretizations we can express the derivative at one grid point directly in terms of function values at adjacent grid points (spatial discretizations) or in terms of previous time levels (temporal discretizations). This, in turn, implies that there is no implicit coupling, and thus no matrix inversion is involved; instead, only simple daxpy-type operations are required.
The material in this chapter is relatively easy to program both on serial as well as on parallel computers. It is appropriate for demonstrating fundamental concepts of discretization and primary constructs of the C++ language and of the MPI library. Specifically, we will demonstrate the use of loops, arrays, functions, and passing functions to functions. In addition to presenting MPI_Send and MPI_Recv implementations for finite differences, we also introduce MPI_Sendrecv and MPI_Sendrecv_replace as alternative advanced MPI function calls for parallelizing finite differences discretizations.
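As a preview of the alternative calls, the sketch below performs one halo exchange in a one-dimensional domain decomposition with MPI_Sendrecv_replace, which sends and receives through the same buffer and avoids the careful ordering that paired MPI_Send/MPI_Recv calls require; the decomposition and variable names are illustrative.

#include <mpi.h>

// One "shift right" halo exchange on a 1D decomposition: every process
// sends its last interior value u[n-2] to the right neighbor and receives
// the matching value from the left neighbor into the same buffer.
// MPI_PROC_NULL at the ends turns the unneeded send/receive into a no-op.
void HaloShiftRight(double* u, int n, int rank, int size) {
  int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
  int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

  double buf = u[n - 2];                     // outgoing value for the right
  MPI_Sendrecv_replace(&buf, 1, MPI_DOUBLE,
                       right, 0,             // destination and send tag
                       left,  0,             // source and receive tag
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  if (left != MPI_PROC_NULL)
    u[0] = buf;                              // incoming value fills my left halo
}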
EXPLICIT SPACE DISCRETIZATIONS
Basics
The formulation of derivatives based on function values on a set of points, which we call the grid, dates back to Euler at the beginning of the eighteenth century. However, advances have of course been made since then. In this section, we will formulate ways to compute first- and higher-order derivatives of a function using discrete data points. The key idea is to use Taylor expansions at a subset of adjacent points of the grid, as shown in Figure 5.1.
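For example, subtracting the Taylor expansions f(x ± h) = f(x) ± h f'(x) + (h²/2) f''(x) ± ... cancels the even-order terms and yields the second-order central difference f'(x) ≈ [f(x + h) − f(x − h)]/(2h). A minimal sketch on a uniform grid follows (the interface is illustrative); one-sided second-order formulas, obtained the same way, handle the two endpoints.

#include <vector>

// First derivative on a uniform grid of spacing h (needs at least 3 points):
// second-order central differences inside, one-sided formulas at the ends.
std::vector<double> FirstDerivative(const std::vector<double>& f, double h) {
  const std::size_t n = f.size();
  std::vector<double> df(n);
  for (std::size_t i = 1; i + 1 < n; ++i)
    df[i] = (f[i + 1] - f[i - 1]) / (2.0 * h);
  df[0]     = (-3.0 * f[0] + 4.0 * f[1] - f[2]) / (2.0 * h);
  df[n - 1] = ( 3.0 * f[n - 1] - 4.0 * f[n - 2] + f[n - 3]) / (2.0 * h);
  return df;
}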
Science and engineering have undergone a major transformation at the research level as well as at the development and technology level. The modern scientist and engineer spend more and more time in front of a laptop, a workstation, or a parallel supercomputer and less and less time in the physical laboratory or in the workshop. The virtual wind tunnel and the virtual biology laboratory are not a thing of the future; they are here! The old approach of “cut and try” has been replaced by “simulate and analyze” in several key technological areas such as aerospace applications, synthesis of new materials, design of new drugs, and chip processing and microfabrication. The new discipline of nanotechnology will be based primarily on large-scale computations and numerical experiments. The methods of scientific analysis and engineering design are changing continuously, affecting both our approach to the phenomena that we study as well as the range of applications that we address. Whereas there is an abundance of software available to be used as almost a “black box,” working in new application areas requires good knowledge of fundamentals and mastering of effective new tools.
In the classical scientific approach, the physical system is first simplified and set in a form that suggests what type of phenomena and processes may be important and, correspondingly, what experiments are to be conducted. In the absence of any known type of governing equations, dimensional interdependence between physical parameters can guide laboratory experiments in identifying key parametric studies.
We have already discussed how to solve tridiagonal linear systems of equations using direct solvers (the Thomas algorithm) in Chapter 6 and some iterative solvers (Jacobi, Gauss-Seidel, SOR, and multigrid) in Chapter 7. We have also discussed solutions of nonlinear and linear systems and have introduced the conjugate gradient method in Chapter 4. In the current chapter we revisit this subject and present general algorithms for the direct and iterative solution of large linear systems. We start with the classical Gaussian elimination (which is a fast solver) and then proceed with more sophisticated solvers and preconditioners for symmetric and nonsymmetric systems.
In parallel computing, we introduce the broadcasting command MPI_Bcast and demonstrate its usefulness in the context of Gaussian elimination. In addition, we reiterate the use of MPI_Send, MPI_Recv, MPI_Allgather, and MPI_Allreduce through example implementations of algorithms presented in this chapter.
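As a preview, broadcasting is a one-line call: in a row-wise parallel Gaussian elimination, the process owning the current pivot row sends it to everyone else. The cyclic row-to-process mapping below is an assumption for illustration.

#include <mpi.h>

// Broadcast pivot row k (ncols entries) to all processes. Rows are assumed
// dealt out cyclically, so row k lives on process k % size; every process
// makes the identical call, and MPI_Bcast copies the buffer from that root.
void BroadcastPivotRow(double* row, int ncols, int k, int size) {
  MPI_Bcast(row, ncols, MPI_DOUBLE, k % size, MPI_COMM_WORLD);
}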
GAUSSIAN ELIMINATION
Gaussian elimination is one of the most effective ways to solve the linear system
Ax = b.
The Thomas algorithm (see Section 6.1.4) is a special case of Gaussian elimination for tridiagonal systems.
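To fix ideas, here is a minimal serial sketch of the algorithm. It omits pivoting, so every diagonal entry encountered must be nonzero; pivoting and performance refinements are taken up later in the chapter.

#include <vector>

// Solve Ax = b by Gaussian elimination without pivoting, overwriting the
// local copies of A and b. A is an n x n matrix stored by rows.
std::vector<double> GaussElimination(std::vector<std::vector<double>> A,
                                     std::vector<double> b) {
  const int n = static_cast<int>(b.size());
  // Forward elimination: zero the entries below the diagonal, column by column.
  for (int k = 0; k < n - 1; ++k)
    for (int i = k + 1; i < n; ++i) {
      const double m = A[i][k] / A[k][k];       // elimination multiplier
      for (int j = k; j < n; ++j) A[i][j] -= m * A[k][j];
      b[i] -= m * b[k];
    }
  // Back substitution on the resulting upper triangular system.
  std::vector<double> x(n);
  for (int i = n - 1; i >= 0; --i) {
    double s = b[i];
    for (int j = i + 1; j < n; ++j) s -= A[i][j] * x[j];
    x[i] = s / A[i][i];
  }
  return x;
}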
Over a number of years of working with frameworks, and being involved in their development and use, I have found that a few recurrent themes have come up. A number of issues arise again and again, and various solutions are tried. Sometimes the solutions work out, and apply to the same kind of problem when it comes up again in the future. Just like the design patterns and structure that evolve in a framework, these kinds of “best practices” that have evolved over time can save you considerable effort when applying frameworks to your own development projects.
In this chapter, we explore some of these practices and discuss how they came about, and how they can be applied to your own situation.
INITIAL ADOPTION
The first group of best practices that we discuss relates to the initial adoption stage of the use of frameworks: when you are just getting started, there are some points to consider. This is the stage where a good number of mistakes are made, sometimes resulting in an otherwise promising framework being abandoned.
Take the Plunge
The first most important strategy for using frameworks and components in your development process is to make the decision to do so in the first place. We have explained in great detail, in the preceding chapters, how frameworks bring benefits to the development process, discussed what these benefits are, and examined concrete examples of frameworks.
Scientific Computing is by its very nature a practical subject – it requires tools and a lot of practice. To solve realistic problems we need not only fast algorithms but also a combination of good tools and fast computers. This is the subject of the current book, which emphasizes equally all three: algorithms, tools, and computers. Oftentimes such concepts and tools are taught serially across different courses and different textbooks, and hence the interconnections among them are not immediately apparent. We believe that such a close integration is important from the outset.
The book starts with a heavy dosage of C++ and basic mathematical and computational concepts, and it ends by emphasizing advanced parallel algorithms that are used in modern simulations. We have tried to make this book fun to read, to somewhat demystify the subject, and thus the style is sometimes informal and personal. It may seem that this happens at the expense of rigor, and indeed we have tried to limit notation and the proving of theorems. Instead, we emphasize concepts and useful “tricks of the trade” with many code segments, remarks, reminders, and warnings throughout the book.
The material of this book has been taught at different times to students in engineering, physics, computer science, and applied mathematics at Princeton University, Brown University, and MIT over the past fifteen years. Different segments have been taught to undergraduates and graduates, and to novices as well as to experts. To this end, on all three subjects covered, we start with simple introductory concepts and proceed to more advanced topics; such bandwidth, we believe, is one strength of this book.
GENERAL PRINCIPLES OF IDES IN FRAMEWORK/COMPONENT DEVELOPMENT
Integrated Development Environments (IDEs) are often a software developer's most intensively used tool, and this is often true of Web-application development. Until recently, IDEs had few special features for Web-application development, but this has changed. IDEs have become more flexible and easily configured, and their increased capabilities make them a better fit with frameworks. In this chapter we examine IDEs and how they relate to frameworks, with a few specific examples as well as some general observations and best practices.
IDEs and frameworks actually have much in common. They are tools that allow you to build your applications. The difference is that whereas an IDE provides tools for manipulating code (editing, searching, compiling, and so forth), a framework also provides services to be used by your code. A framework does not generally help you manipulate code (although some do generate code); instead, it provides you with structure and services so that you can build your application faster and more easily.
Sometimes the line blurs a little between a framework and an IDE. Some IDEs provide libraries or APIs that are intended to be used in combination with your finished application at run time – this means they have at least some of the elements of a framework. Some frameworks provide graphical or command-line frontends to help you generate code, where the generated code is commonly a subclass of one of the abstract classes provided by the framework – in this sense, the framework provides some of the capabilities of an IDE.
For the purposes of this compilation example, we will assume that we are using the GNU g++ compiler to compile a C++ program we have written contained within the file myprog.cpp. We will also assume that the machine on which you are trying to compile is a parallel machine with some version of MPI installed. You will need to contact your system administrator to find out the exact version of MPI that is available and the paths on your local architecture. In the following examples, the argument following the “-o” flag designates the file name to be used for the output. If no “-o” option is specified, most compilers default to using the name “a.out.” We now present several different programming scenarios:
• No user-defined libraries or user-defined header files are needed, and no special system libraries (such as those associated with math.h) are needed other than the MPI libraries:
g++ -o myprog myprog.cpp -lmpi
• No user-defined libraries or user-defined header files are needed, but the special system library corresponding to math.h is needed along with the MPI libraries:
g++ -o myprog myprog.cpp -lm -lmpi
• User-defined libraries, user-defined header files, and the special system library corresponding to math.h are needed along with the MPI libraries:
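A representative command follows; the include and library paths shown are placeholders for wherever your own header files and libraries live:

g++ -I/home/myaccount/includes -L/home/myaccount/libs -o myprog myprog.cpp -lSCmathlib -lm -lmpi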
The string following the “-I” flag designates the location of the user-defined header files to be included. The string following the “-L” flag designates the location of the user-defined libraries to be included. The string “-lSCmathlib” links the program with the user-defined library we created, and the string “-lm” links the program with the system math library corresponding to math.h. The string “-lmpi” links in the MPI libraries.
In this chapter we apply the approximation theory we presented in Chapter 3 to find solutions of linear and nonlinear equations and to perform integration of general functions. Both subjects are classical, but they serve as basic tools in scientific computing operations and in solving systems of ordinary and partial differential equations. With regard to root finding, we consider both scalar as well as systems of nonlinear equations. We present different versions of the Newton-Raphson method, the steepest descent method, and the conjugate gradient method (CGM); we will revisit the latter in Chapter 9. With regard to numerical integration we present some basic quadrature approaches, but we also consider advanced quadrature rules with singular integrands or in unbounded domains.
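To make the first of these methods concrete, here is a minimal sketch of the scalar Newton-Raphson iteration; the interface, tolerance, and test function are illustrative.

#include <cmath>
#include <cstdio>

// Newton-Raphson iteration x <- x - f(x)/f'(x) for a scalar equation f(x) = 0.
// Stops when |f(x)| < tol or after maxit iterations.
double NewtonRaphson(double (*f)(double), double (*df)(double),
                     double x, double tol, int maxit) {
  for (int it = 0; it < maxit && std::fabs(f(x)) >= tol; ++it)
    x -= f(x) / df(x);
  return x;
}

double F(double x)  { return x * x - 2.0; }   // root at sqrt(2)
double dF(double x) { return 2.0 * x; }

int main() {
  std::printf("sqrt(2) is approximately %.12f\n",
              NewtonRaphson(F, dF, 1.0, 1.0e-12, 50));
  return 0;
}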
On the programming side, we first introduce the concept of passing a function to a function; in the previous chapter we were passing variables. This allows an easy implementation of recursion, which is so often encountered in scientific computing. We offer several C++ examples from root finding and numerical integration applications that make use of recursion, and we show an effective use of classes and overloaded operators. We also address parallel programming with emphasis on domain decomposition, specifically the concept of reduction operations. We introduce the MPI commands MPI_Reduce and MPI_Allreduce for accomplishing reduction operations among a collection of processes.
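The sketch below ties these threads together: the integrand is passed as an argument to a trapezoidal-rule routine, each process integrates over its own subinterval (domain decomposition), and MPI_Allreduce sums the partial results so that every process holds the global integral. The integrand, interval, and panel count are illustrative.

#include <mpi.h>
#include <cstdio>

double f(double x) { return 4.0 / (1.0 + x * x); }  // integrates to pi on [0,1]

// Composite trapezoidal rule for func over [a,b] with n panels; note that
// the integrand func is itself passed to the function as an argument.
double Trapezoid(double (*func)(double), double a, double b, int n) {
  const double h = (b - a) / n;
  double sum = 0.5 * (func(a) + func(b));
  for (int i = 1; i < n; ++i) sum += func(a + i * h);
  return h * sum;
}

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  // Domain decomposition: each process integrates its own subinterval.
  const double a = 0.0, b = 1.0;
  const double w = (b - a) / size;
  double local = Trapezoid(f, a + rank * w, a + (rank + 1) * w, 1000);

  // Reduction: sum the partial integrals; every process gets the result.
  double global = 0.0;
  MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
  if (rank == 0) std::printf("pi is approximately %.12f\n", global);

  MPI_Finalize();
  return 0;
}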
The best way to get a “feel” for different frameworks when choosing between them is to use them – to put them to work, thereby creating an actual application. In actual practice, this is seldom done. No one has enough time to spend creating throw-away applications in several different frameworks. Once a developer begins to use a particular framework, and dedicates time to learn it well enough to do the example, it is easy to continue and to take advantage of what has been learned so far. To switch tracks and use another framework is difficult, and things that have been learned about the first framework must often be unlearned to properly use another.
The comparison is still very helpful, however seldom it is done. Creating a similar application in multiple frameworks, even a very simple one such as the one used here, allows a point-for-point comparison between frameworks. If a framework you are seriously considering is not one of the ones we have chosen for a case study, you might find it helpful to implement the same sample application in it as a starting point.
In this Appendix, we use several detailed code examples, so that you can see what developing the same application with different frameworks is like. We do not, for lack of space, list every source code line here. You can download a zip file of the complete source code from http://www.cup.org/titles/catalogue.asp?isbn=0521520592.
WHY ARE COMPONENTS THE FUTURE OF WEB-APPLICATION DEVELOPMENT?
In this chapter we discuss the why of components and frameworks and the rationale behind their use. We also talk about the current state of Web-application development and where the industry perceives it is going, in order to establish a firm foundation and justification for the use and development of components and frameworks.
We also look briefly at Java's suitability for component-based development and for the development of application frameworks, as well as at the specialized features of the extended Java platform and its associated APIs that make it ideal for this task, including JavaBeans, Enterprise JavaBeans, and Reflection.
Where We Are Today
The software industry is, for the most part, still creating much of its product in a “monolithic” fashion. The products may be more modular and configurable than they used to be, but most projects cannot be said to be truly component based. Even some projects being built with component-enabled technologies are not taking full advantage of the component model. It is quite possible to misuse component capabilities and, as a result, to forfeit many of their benefits.
Many companies and organizations are becoming aware of the advantages and are getting their developers trained in the new technologies and the proper way to use them. It takes time for an organization to adopt such a significant change in its current practices.
This paper considers key questions concerning computer-based language-learning environments. Using evidence from current literature, it discusses the main characteristics of such environments including human, technical and physical resources, communicative structures, information management, and cultural contexts. It then uses data from an investigation of the universities of Cambridge, Toronto and Ulster to assess the pedagogical effectiveness of the computer-based environments currently in operation in these three institutions.
It considers, in particular, the integrative role that computer-based language learning environments seem to provide. Although each institution has integrated computer technology into language teaching and learning in different ways, a key element of each environment has been the establishment of a common computer-mediated infrastructure, enabling effective information dissemination, resource distribution, communication, and teaching and learning. No single common infrastructure would be suitable in all three institutions; however, in each case it was found that the environments created were valuable, especially in integrating elements of the teaching and learning process that would normally have remained apart.
In concluding that the creation of a computer-based language learning environment in the present climate is beneficial, it was noted that adequate technical resources, and a management that is keen to integrate computer technology into all aspects of university life, are key factors in the success of such environments.
This article discusses the applications of the theory of Computer Supported Collaborative Learning (CSCL) to teaching and learning a foreign language, in this case one of the ‘Less Commonly Taught Languages’, Finnish. The ‘Virtually Finnish’ project was based on the idea of creating a larger, virtual Finnish learning community among the second-year Finnish language students in five US universities during the fall semester of 2001. CSCL was chosen as a base, as its principal goal is to aid learners in sharing and distributing their knowledge to the whole learning community. In addition to describing the setup and the content of the project, this article examines the students’ feedback about their language learning experience in the CSCL environment, our conclusions, the realization of our goals from the perspective of CSCL, and the students’ feedback on the collaborative process. Drawing on the knowledge gained from this limited feedback, it also discusses tools to help language students become part of motivating and authentic online collaboration projects. Looking at the students’ work online, as well as the feedback received, it became clear that certain skills crucial for communication had been neglected in teaching, and that online learning strategies must be taught alongside the target language. Future plans and ideas for research in the area of CSCL are shared in anticipation of inspiring others to undertake further research.
Developing multimedia courseware for successful integration into a wider system necessitates close attention to a multitude of factors. After a brief literature survey and a critical examination of some overly enthusiastic claims concerning the benefits of multimedia, this paper proposes a flexible framework for CALL development that takes the interrelations between approach, content, and technology-based variables into account. It contends that some of these variables are predetermined by the educational context, whereas others reflect the developers’ views of what constitutes an optimal language learning environment. The main part of the paper provides a detailed account of how these development factors shaped the design of multimedia materials for a large-scale project at the Vienna University of Economics and Business Administration. It argues for a balanced approach to learner control that, while allowing learners to interact with the materials according to their needs and preferences, provides enough learner guidance for those who prefer a more structured approach to learning. Special attention is paid to the often problematical areas of learner style suitability, feedback, and transparency of structure and content.
Educators and researchers in the acquisition of L2 phonology have called for empirical assessment of the progress students make after using new methods for learning (Chun, 1998; Morley, 1991). The present study investigated whether unlimited access to a speech-recognition-based language-learning program would improve the general standard of pronunciation of a group of middle-aged immigrant professionals studying English in Sweden. Eleven students were given a copy of the program Talk to Me from Auralog as a supplement to a 200-hour course in Technical English, and were encouraged to practise on their home computers. Their development in spoken English was compared with that of a control group of fifteen students who did not use the program. The program is evaluated in this paper according to Chapelle’s (2001) six criteria for CALL assessment. Since objective human ratings of pronunciation are costly and can be unreliable, our students were pre- and post-tested with the automatic PhonePass SET-10 test from Ordinate Corp. Results indicate that practice with the program was beneficial to those students who began the course with a strong foreign accent but was of limited value for students who began the course with better pronunciation. The paper begins with an overview of the state of the art of using speech recognition in L2 applications.