This paper designs and simulates an adversarial strategy in which a swarm of quadrotor UAVs herds anti-aircraft land vehicles (AALVs) that actively oppose the swarm’s objective, potentially taking UAVs down. The main strategy is to block the AALVs’ line of sight to their goal zone (the AALVs’ objective), shifting their trajectories so that they reach a kill zone instead (the UAVs’ objective). The counter-swarm strategy performed by the AALVs consists of taking down the aerial units closest to the goal zone. In response, the UAVs execute a consensus algorithm to assess the communication network and re-group. Consensus is based on the propagation of local observations that converge to a global agreement on a communication graph. Re-grouping is done either by positioning around the kill zone vector or by preferring an anti-clockwise formation that better closes gaps. The adversarial strategy was tested in an empty arena and in an urban setting, the latter using a path-planning procedure that re-routes the AALV trajectory based on its current destination. Simulation results show a maximum UAV mission success rate converging to roughly 80% in the empty arena. When targeted elimination procedures are executed, UAV mission performance drops by 5%, with no distinction between re-grouping strategies in the empty arena. The urban setting shows lower performance due to navigation complexity but favors re-grouping based on a formation that closes gaps rather than positioning around the kill zone vector.
We first consider the implementation of nonlinear model predictive control (NMPC) on mechanical systems evolving on the special orthogonal group SO(3). Necessary conditions of optimality are derived from Lie group variational integrators, leading to a two-point boundary value problem (TPBVP) that is solved using sensitivity derivatives and indirect shooting methods. Fast Newton-like methods, referred to as fast solvers and commonly used to solve the TPBVP, are established based on the repetition of a nonlinear process. To alleviate the computational burden, the numerical schemes eliminate some constraint-related but non-essential terms in the calculation of sensitivity derivatives and in solving the TPBVP equations. Further, assuming that an initial solution of the NMPC problem is available, the problem subjected to changes in its initial conditions (due to re-planning schemes) can be re-solved cost-effectively from that solution. Instead of running the whole optimization from scratch, the optimal control inputs and states of the system are updated using the neighboring extremal (NE) method. For this purpose, two approaches are considered: applying the NE method to the first solution, which yields a neighboring optimal solution, or augmenting the latter by updating the NMPC-related optimization using the exact TPBVP equations at predefined intermediate steps. An example shows that the first method is not accurate enough due to error accumulation, whereas the second method preserves accuracy while reducing computation time significantly.
Cold rolling involves large deformation of the workpiece, leading to a temperature increase due to plastic deformation. The process is highly nonlinear, and fully modeling it requires large computation times. This paper describes the use of dimension-reduced neural networks (DR-NNs) for predicting temperature changes due to plastic deformation in a two-stage cold rolling process. The main objective of these models is to reduce computational demand, error, and uncertainty in predictions. Material properties, feed velocity, sheet dimensions, and friction models are introduced as inputs for the dimensionality reduction. Different linear and nonlinear dimensionality reduction methods reduce the input space to a smaller set of principal components, which are fed as inputs to the neural networks for predicting the output temperature change. The DR-NNs are compared against a standalone neural network and show improvements in terms of lower computational time and prediction uncertainty.
Robotic guides take visitors on a tour of a facility. Such robots must always know the position of the visitor for decision-making. Current tracking algorithms largely assume that the person will be visible nearly all the time. In the robotic guide application, a person’s visibility is often lost for prolonged periods, especially when the robot is rounding a corner or making a sharp turn; in such cases, the person cannot quickly re-enter the limited field of view of the rear camera. We propose a new algorithm that can track people for prolonged times under such conditions. The algorithm benefits from an application-level heuristic that the person will nearly always be following the robot, which can be used to guess the motion. The proposed work uses a Particle Filter with a ‘follow-the-robot’ motion model for tracking. The tracking is performed in 3D using a monocular camera. Unlike approaches in the literature, the proposed work observes from a moving base, which is especially challenging since a rotation of the robot can cause a large, sudden change in the position of the human in the image plane that the approaches in the literature would filter out. Tracking in 3D can resolve such errors. The proposed approach is tested in three different indoor scenarios. The results show that the approach is significantly better than the baselines, including tracking in the image and projecting into 3D, tracking with a randomized (non-social) motion model, tracking with a Kalman Filter, and using an LSTM for trajectory prediction.
You have the hardware and understand its architecture. You have a large problem to solve. You suspect that a parallel program may be helpful. Where do you begin? Before we can answer that question, an understanding of the software infrastructure is required. In this chapter, we will discuss general organization of parallel programs, that is, typical software architecture. Chapter 5 elaborates this further and discusses how to design solutions to different types of problems.
Question: How are execution engines and data organized into a parallel program?
As we have noted, truly sequential processors hardly exist, but processors execute sequential programs perfectly well. Some parts of a sequential program may even be executed in parallel, either directly by the hardware's design or with the help of a parallelizing compiler. On the other hand, we are likely to achieve severely sub-par performance by relying solely on the hardware and the compiler. With only a little more thought, it is often possible to organize a sequential program into multiple components and turn it into a truly parallel program.
Question: What are some common types of parallel programs?
This chapter introduces parallel programming models. Parallel programming models characterize the anatomy or structure of parallel programs. This structure is somewhat more complex than that of a sequential program, and one must understand this structure to develop parallel programs. These programming models will also provide the context for the performance analysis methodology discussed in Chapter 3 as well as the parallel design techniques described in Chapter 5.
We will see in Chapter 7 that many efficient sequential algorithms are not so efficient if trivially parallelized. Many problems instead require specially designed parallel algorithms suitable for the underlying system architecture. These parallel algorithms are often designed directly in terms of these programming models.
A program broadly consists of executable parts and memory where data is held, in addition to input and output. A large parallel program usually performs input and output through a parallel file system. We will discuss parallel file systems in Section 5.4, but in the context of the current discussion they behave much like memory – data of some size can be fetched from an address or written to an address by executable parts.
We are now ready to start implementing parallel programs. This requires us to know:
Question: Where do I begin to program? What building blocks can I program on top of?
• How to create and manage fragments (and tasks).
• How to provide the code for the fragments.
• How to organize, initialize, and access shared memory.
• How to cause tasks to communicate.
• How to synchronize among tasks.
This chapter discusses popular software tools that provide answers to these questions. It offers a broad overview of these tools in order to familiarize the reader with the core concepts employed in tools like these, and their relative strengths. This discussion must be supplemented with detailed documentation and manuals that are available for these tools before one starts to program.
The minimal requirement of a parallel programming platform is that it support the creation of multiple tasks or threads and allow data communication and synchronization among them. Modern programming languages, such as Java and Python, usually have these facilities, either as language constructs or through standard library functions. We start with OpenMP, which is designed for parallel programming on a single computing system with memory shared across the threads of a processor. It is supported by many C/C++ and Fortran compilers. We will use the C style.
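As a first taste of this shared-memory style, the sketch below sums an array across threads with OpenMP. The function name and setup are our own illustration, not an example from the text; a compiler without OpenMP support simply ignores the pragma and runs the loop sequentially, with the same result.

```c
#include <stddef.h>

/* Sum an array in parallel. The reduction clause gives each thread a
 * private partial sum and combines the partial sums at the implicit
 * barrier that ends the parallel loop. Without OpenMP support, the
 * pragma is ignored and the loop runs sequentially. */
double parallel_sum(const double *a, size_t n) {
    double total = 0.0;
    #pragma omp parallel for reduction(+:total)
    for (long i = 0; i < (long)n; i++)
        total += a[i];
    return total;
}
```

This single directive already covers the three minimal requirements named above: thread creation (parallel for), communication (the combined partial sums), and synchronization (the implicit barrier at the end of the loop).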
OpenMP
Language-based support for parallel programming is popular, especially for single-node computing systems. Compiling such programs produces a single executable, which can be loaded into a process for execution, similar to sequential programs. The process then generates multiple threads for parallel execution. OpenMP is a compiler-directive-based shared-memory programming model, which allows sequential programmers to quickly graduate to parallel programming. In fact, an OpenMP program stripped of its directives is nothing but a sequential program. A compiler that does not support the directives can simply ignore them. (For some features, OpenMP provides library functions; these are not ignored by the compiler.) Some compilers that support OpenMP pragmas still require a compile-time flag to enable that support.
Preliminaries
C/C++ employs #pragma directives to provide instructions to the compiler. All OpenMP directives are prefixed with #pragma omp, followed by the name of the directive and possibly further options for the directive as a sequence of clauses, as shown in Listing 6.1.
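The general shape (#pragma omp, then the directive name, then clauses) can be seen in a small sketch of our own; this is an illustrative toy, not the book's Listing 6.1. Here the directive name is parallel for and the clauses are num_threads, schedule, and reduction.

```c
/* Directive name: "parallel for".
 * Clauses: num_threads(2)   - request two threads,
 *          schedule(static) - split iterations into fixed chunks,
 *          reduction(+:count) - give each thread a private counter
 *                               and add the counters at the end. */
int count_evens(const int *a, int n) {
    int count = 0;
    #pragma omp parallel for num_threads(2) schedule(static) reduction(+:count)
    for (int i = 0; i < n; i++)
        if (a[i] % 2 == 0)
            count++;
    return count;
}
```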
This chapter introduces some general principles of parallel algorithm design. We will consider a few case studies to illustrate broad approaches to parallel algorithms. As already discussed in Chapter 5, the underlying goal for these algorithms is to decompose the solution into parcels of relatively independent computation with occasional interaction. In order to abstract away the details of synchronization, we will assume the parallel RAM (PRAM) or the bulk-synchronous parallel (BSP) model to describe and analyze these algorithms. It is worth remembering that going from, say, a PRAM algorithm to one that is efficient on a particular architecture requires refinement and careful design for that platform. This is particularly true when “constant time” concurrent read and write operations are assumed. Concurrent reads and writes are particularly inefficient on distributed-memory platforms, and are inefficient on shared-memory platforms as well: they require synchronization of the processors’ views of the shared memory, which can be expensive.
Question: How do parallel algorithms differ from sequential algorithms?
Recall that PRAM models focus mainly on the computational aspect of an algorithm, whereas practical algorithms also require close attention to memory, communication, and synchronization overheads. PRAM algorithms may not always be practical, but they are easier to design than algorithms for more general models. In reality, PRAM algorithms are only the first step toward more practical algorithms, particularly on distributed-memory systems.
Parallel algorithm design often seeks to maximize parallelism and minimize time complexity. Even if the number of actually available processors is limited, higher parallelism translates to higher scalability in practice. Nonetheless, the work-time scheduling principle (Section 3.5) indicates that low work complexity is paramount for fast execution in practice. In general, if the best sequential complexity of solving the given problem is, say, To(n), we would like the parallel work complexity to be O(To(n)). It is a common algorithm design pattern to assume up to To(n) processors and then try to minimize the time complexity. With maximal parallelism, the target time complexity using To(n) processors is O(1). This is not always achievable, and there is often a trade-off between time and work complexity. We then try to reduce the work complexity to O(To(n)) without significantly increasing the time complexity.
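A classical illustration of this trade-off between time and work is tree-structured summation: with one processor per pair, each round takes O(1) time, so n numbers are summed in O(log n) time with O(n) total work. The sequential sketch below is our own (with n assumed a power of two for clarity); the inner loop's updates touch disjoint pairs, so each round could run entirely in parallel.

```c
#include <stddef.h>

/* Tree reduction: in each round, disjoint pairs combine, so all the
 * a[i] += a[i + stride] updates of one round are independent and
 * could execute simultaneously. Rounds: log2(n); total additions:
 * n - 1, i.e., O(n) work. Assumes n is a power of two. */
double tree_sum(double *a, size_t n) {
    for (size_t stride = 1; stride < n; stride *= 2)
        for (size_t i = 0; i + stride < n; i += 2 * stride)
            a[i] += a[i + stride];
    return a[0];
}
```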
Parallel programming is challenging. There are many parts interacting in a complex manner: algorithm-imposed dependency, scheduling on multiple execution units, synchronization, data communication capacity, network topology, memory bandwidth limit, cache performance in the presence of multiple independent threads accessing memory, program scalability, heterogeneity of hardware, and so on. It is useful to understand each of these aspects separately. We discuss general parallel design principles in this chapter. These ideas largely apply to both shared-memory style and message-passing style programming, as well as task-centric programs.
Question: How to devise the parallel solution to a given problem?
Question: What is the detailed structure of parallel programs?
At first cut, there are two approaches to designing parallel applications:
1. Given a problem, design and implement a sequential algorithm, and then turn it into a parallel program based on the type of available parallel architecture.
2. Start ab initio: design a parallel algorithm suitable for the underlying architecture and then implement it.
In either case, performance, correctness, reusability, and maintainability are important goals. We will see that for many problems, starting with a sequential algorithm and then dividing it into independent tasks that can execute in parallel leads to a poor parallel algorithm. Instead, a different algorithm, designed to maximize independent parts, may yield better performance. If a good parallel solution cannot be found – and there do exist inherently sequential problems for which parallel solutions are not sufficiently faster than sequential ones – it may not be a problem worth solving in parallel.
Once a parallel algorithm is designed, it may yet contain parts that are sequential. Further, the parallel parts can also be executed on a sequential machine in an arbitrary sequence. Such “sequentialization” allows the developer to test parts of a parallel program. If a purely sequential version is already available, or can be implemented with only small effort, it can also serve as a starting point for parallel design. The sequential version can be exploited to develop the parallel application incrementally, gradually replacing sequential parts with their parallel versions. The sequential version also provides performance targets for the parallel version and allows debugging by comparing partial results.
Programs need to be correct. Programs also need to be fast. In order to write efficient programs, one surely must know how to evaluate efficiency. One might take recourse to our prior understanding of efficiency in the sequential context and compare observed parallel performance to observed sequential performance. Or, we can define parallel efficiency independent of sequential performance. We may yet draw inspiration from the way efficiency is evaluated in a sequential context. Into that scheme, we would need to incorporate the impact of an increasing number of processors deployed to solve the given problem.
Question: How do you reason about how long an algorithm or program takes?
Efficiency has two metrics. The first is in an abstract setting, for example, the asymptotic analysis of the underlying algorithm. The second is concrete – how well does the algorithm's implementation behave in practice on the available hardware and on data sizes of interest. Both are important.
There is no substitute for measuring the performance of the real implementation on real data. On the other hand, developing and testing iteratively on large parallel systems is prohibitively expensive. Most development occurs on a small scale: using only a few processors, p, on small input of size n. The extrapolation of these tests to a much larger scale is deceptively hard, and we often must resort to simplified models and analysis tools.
Asymptotic analysis on simple models is sometimes criticized because it oversimplifies several complex dynamics (like cache behavior, out-of-order execution on multiple execution engines, instruction dependencies, etc.) and conceals constant multipliers. Nonetheless, with the large input sizes that are common in parallel applications, asymptotic measures do have value. They can be computed somewhat easily, in a standardized setting and without requiring iterations on large supercomputers. And concealing constants is a choice to some degree: useful constants can and should be retained. Accordingly, the abstract part of our analysis will employ the big-O notation to describe the number of steps an algorithm takes, as a function of the input size n and the number of processors p.
Asymptotic notation or not, the time t(n, p) to solve a problem in parallel is a function of n and p. For this purpose, we will generally count in p the number of sequential processors – they complete their program instructions in sequence.
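Two standard metrics relate t(n, p) back to single-processor time: speedup and efficiency. The definitions are conventional, though the function names here are ours, not the book's.

```c
/* Speedup: how much faster p processors are than one,
 *          S(p) = t(n, 1) / t(n, p).
 * Efficiency: speedup per processor, E(p) = S(p) / p;
 *             1.0 means perfect scaling. */
double speedup(double t1, double tp) {
    return t1 / tp;
}
double efficiency(double t1, double tp, int p) {
    return (t1 / tp) / p;
}
```

For instance, a run that takes 8 seconds on one processor and 2 seconds on eight processors has speedup 4 but efficiency only 0.5.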
This chapter is not designed for a detailed study of computer architecture. Rather, it is a cursory review of concepts that are useful for understanding the performance issues in parallel programs. Readers may well need to refer to a more detailed treatise on architecture to delve deeper into some of the concepts.
There are two distinct facets of parallel architecture: the structure of the processors, that is, the hardware architecture, and the structure of the programs, that is, the software architecture. The hardware architecture has three major components:
Question: What are execution engines and how are instructions executed?
1. Computation engine: it carries out program instructions.
2. Memory system: it provides ways to store values and recall them later.
3. Network: it forms the connections among processors and memory.
An understanding of the organization of each architecture and their interaction with each other is important to write efficient parallel programs. This chapter is an introduction to this topic. Some of these hardware architecture details can be hidden from application programs by well-designed programming frameworks and compilers. Nonetheless, a better understanding of these generally leads to more efficient programs. One must similarly understand the components of the program along with the programming environment. In other words, a programmer must ask:
1. How do the multiple processing units operate and interact with each other?
2. How is the program organized so it can start and control all processing units? How is it split into cooperating parts and how do parts merge? How do parts cooperate with other parts (or programs)?
One way to view the organization of hardware as well as software is as graphs (see Sections 1.6 and 2.3). Vertices in these graphs represent processors or program components, and edges represent network connections or program communication. Often, implementation simplicity, higher performance, and cost-effectiveness can be achieved with restrictions on the structure of these graphs. The hardware and software architectures are, in principle, independent of each other. In practice, however, certain software organizations are more suited to certain hardware organizations. We will discuss these graphs and their relationship starting in Section 2.3.
Another way to categorize the hardware organization was proposed by Flynn and is based on the relationship between the instructions different processors execute at a time. This is popularly known as Flynn’s taxonomy.
SISD: Single Instruction, Single Data
A processor executes program instructions, operating on some input to produce some output. An SISD processor is a serial processor.
Lessons in programming often start with a definition of the term algorithm. Webster's dictionary defines algorithm as “a step-by-step procedure for solving a problem.” Not only does this definition lend itself naturally to an imperative programming style, but it often also leads to a focus on sequential programming. However, the truth is that program execution is hardly ever in a step-by-step fashion, even if it may sometimes appear to be so. This nonsequentiality can be due to multiple instructions being in flight simultaneously, that is, the instructions are in various stages of their executions at the same time. This is true even when a program is presented as a linear sequence of instructions, and its correctness depends on their execution in that exact sequence. This is also true when the program is “parallel” instead, that is, the order among instructions is not necessarily specified.
In this book, we focus on this parallel programming, where instructions are neither specified nor expected to be in a single sequence. Further, the execution of these programs is also in a parallel context, where potentially several thousand instructions, or even more, execute at any given time.
Concurrency and Parallelism
Sometimes the terms “concurrent” and “parallel” are informally used interchangeably, but it is important to recognize the distinction. Parallelism may be defined as performing two activities at the same time. These activities may be related in some manner or not. Usually, these activities are not instantaneous: each takes a finite time. Two related activities are said to be concurrent if there is no predetermined order between them – they may or may not overlap in time when they do occur. We will see that in certain situations, concurrency is not desirable, and a relative order is imposed. When such an order is enforced on two activities, they clearly cannot be executed in parallel.
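One common way to impose such an order in practice is mutual exclusion: two updates to a shared counter are concurrent (no predetermined order between them), but a critical section forbids them from overlapping in time. A minimal OpenMP sketch of our own, assuming a shared integer counter:

```c
/* The iterations run in parallel, but the critical section forces
 * the increments to execute one at a time, so none are lost.
 * Without it, overlapping read-modify-write updates of the shared
 * counter could drop counts. */
int safe_count(int n) {
    int count = 0;
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        #pragma omp critical
        count++;
    }
    return count;
}
```

Note that the critical section removes the overlap but not the concurrency: the increments may still occur in any order across threads.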
Although our focus in this book is on parallel programming, concurrency must often be managed in a parallel program, and we discuss practical aspects of concurrency as well.
Why Study Parallel Programming
Natural processes are inherently parallel, whether they be molecular and nuclear behavior, weather and geological phenomena, or biological and genetic manifestation. By no means does that imply that their simulation and computation must be parallel.
Interaction between concurrently executing fragments is an essential characteristic of parallel programs and the major source of difference between sequential programming and parallel programming. Synchronization and communication are the two ways in which fragments directly interact, and these are the subjects of this chapter. We begin with a brief review of basic operating system concepts, particularly in the context of parallel and concurrent execution. If you already have a good knowledge of operating systems concepts, browse lightly or skip ahead.
Question: Who controls the executing fragments? How do different executing fragments interact and impact each other’s execution?
Threads and Processes
Computing systems are managed by a program: an operating system. A process is the mechanism that operating systems use to start and control the execution of other programs. A process provides one or more ranges of addresses for the executing program to use. Each address has a value (which remains undefined until it is initialized). Each range is mapped to a block of memory (which may reside on one or more attached devices). These blocks of memory are under the management of the operating system. A range of addresses and the data that they map to are collectively called an address space. An address space is divided into fixed-size units called pages. Address spaces and pages provide a logical, or virtual, view of the memory, also called virtual memory. The operating system maintains a mapping between pages and their locations on the device. One advantage of virtual memory is that not all pages need to be resident in the physical memory device – some may be relegated to slower storage (not unlike the cache strategy), while others that remain undefined need not be mapped to any storage at all.
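The page size is fixed by the platform and can be queried at run time. On POSIX systems (an assumption here; the text's discussion is OS-neutral), sysconf reports it:

```c
#include <unistd.h>

/* Size in bytes of one virtual-memory page, as reported by the OS.
 * _SC_PAGESIZE is POSIX; 4096 is common on desktop Linux and macOS. */
long page_bytes(void) {
    return sysconf(_SC_PAGESIZE);
}
```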
Being an executing program itself, the operating system comprises a set of processes, which start and schedule other processes. For example, an application starts with some running process launching a new process to execute that application's code. These processes may execute concurrently, sharing the available hardware by turns. An executing process may be forced to yield to a waiting process via a mechanism of hardware interrupts.
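On POSIX systems (again an assumption; other operating systems expose different primitives), this launch-and-wait pattern appears as fork and waitpid: the parent and child execute concurrently until the parent blocks to collect the child's exit status.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* A running process launches a child; the two execute concurrently
 * until the parent blocks in waitpid to reap the child. Returns the
 * child's exit status, or -1 if it did not exit normally. */
int run_child(void) {
    pid_t pid = fork();
    if (pid == 0)
        _exit(7);                 /* child: terminate with status 7 */
    int status = 0;
    waitpid(pid, &status, 0);     /* parent: wait for the child */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```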
This study reviews 71 high-quality studies of massive open online courses focused on languages (LMOOCs) that were published from the inception of LMOOCs to 2021. The purpose of this study is to gain a deeper understanding of the current state of research and identify fruitful directions for future LMOOC research. First, we reviewed three basic sets of characteristics of these studies: (1) research trends – for example, publication types and years; (2) research contexts – for example, countries in which the studies were conducted, the subjects’ target languages, language-ability levels, skills, and whether the focal courses are for specific purposes; and (3) research design, including data collection, data analysis, and theoretical frameworks. We then utilized a text-mining approach called Latent Dirichlet Allocation that uses machine-learning techniques to identify research-topic commonalities underlying the collected studies. In this way, a total of nine topics were identified. They were: (1) core elements of LMOOCs; (2) interaction and communication in LMOOCs; (3) innovative LMOOC teaching practices; (4) LMOOC standards and quality assurance; (5) LMOOC implementation, participation, and completion; (6) LMOOC teaching plans; (7) LMOOC learning effectiveness and its drivers/obstacles; (8) learners and learning in LMOOCs; and (9) inclusiveness in LMOOCs. These were then diagrammed as a ThemeRiver, which showed the evolutionary trend of the nine identified topics. Specifically, scholarly interest in Topics 5, 7, and 9 increased over time, whereas for Topics 1 and 6, it decreased. Based on our results, we highlighted specific directions for future LMOOC research on each of the identified research topics.
The use of topology optimization in the design of fluid dynamics systems is still in its infancy. With the decreasing cost of additive manufacture, the application of topology optimization in the design of structural components has begun to increase. This paper provides a method for using topology optimization to reduce the power dissipation of fluid dynamics systems; its novelty is the first application of stochastic mechanisms in the design of 3D fluid–solid geometrical interfaces. The optimization algorithm uses the continuous adjoint method for sensitivity analysis and optimizes against an objective function for fluid power dissipation. The paper details the methodology behind a vanilla gradient descent approach before introducing stochastic behavior through a minibatch-based system. Both algorithms are then applied to a novel case study of an internal combustion engine's piston cooling gallery, and the performance of each algorithm's resulting geometry is analyzed and compared. The vanilla gradient descent algorithm achieves an 8.9% improvement in pressure loss through the case study; this is surpassed by the stochastic descent algorithm, which achieves a 9.9% improvement, though at a large additional time cost. Both approaches produced similarly unintuitive geometry solutions that successfully improve the performance of the cooling gallery.