
Chapter 6: Middleware: The Practice of Parallel Programming


Authors

, Indian Institute of Technology, Delhi

Extract

We are now ready to start implementing parallel programs. This requires us to know:

Question: Where do I begin to program? What building blocks can I program on top of?

  • How to create and manage fragments (and tasks).

  • How to provide the code for the fragments.

  • How to organize, initialize, and access shared memory.

  • How to cause tasks to communicate.

  • How to synchronize among tasks.

This chapter discusses popular software tools that provide answers to these questions. It offers a broad overview of these tools in order to familiarize the reader with their core concepts and relative strengths. This discussion should be supplemented with the detailed documentation and manuals available for these tools before one starts to program.

The minimal requirement of a parallel programming platform is that it supports the creation of multiple tasks or threads and allows data communication and synchronization among them. Modern programming languages, such as Java and Python, usually have these facilities, either as part of the language constructs or through standard library functions. We start with OpenMP, which is designed for parallel programming on a single computing system with memory shared across the threads of a process. It is supported by many C/C++ and Fortran compilers. We will use the C style.

OpenMP

Language-based support for parallel programming is popular, especially for single-node computing systems. Compiling such a program produces a single executable, which can be loaded into a process for execution, just like a sequential program. The process then spawns multiple threads for parallel execution. OpenMP is a compiler-directive-based shared-memory programming model, which allows sequential programmers to graduate quickly to parallel programming. In fact, an OpenMP program stripped of its directives is nothing but a sequential program, and a compiler that does not support the directives can simply ignore them. (For some features, OpenMP provides library functions; these are not ignored by the compiler.) Some compilers that support OpenMP pragmas still require a compile-time flag to enable that support.
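
As a minimal sketch (not a listing from the book), the following program illustrates both points: the directive can be ignored by a compiler without OpenMP support, whereas the call to the library function omp_get_thread_num() cannot, and OpenMP support typically must be enabled with a compile-time flag.

#include <stdio.h>
#include <omp.h>   /* needed for the OpenMP library function, not for the directive */

int main(void) {
    /* A compiler without OpenMP support ignores this directive, and the
       block below simply runs once, sequentially. */
    #pragma omp parallel
    {
        /* omp_get_thread_num() is a library function, not a directive,
           so the program will not build without OpenMP support. */
        printf("Hello from thread %d\n", omp_get_thread_num());
    }
    return 0;
}

With GCC, for example, the flag is -fopenmp (as in gcc -fopenmp hello.c -o hello); other compilers use their own flags.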

Preliminaries

C/C++ employs #pragma directives to provide instructions to the compiler. OpenMP directives are all prefixed with #pragma omp, followed by the name of the directive and possibly further options for the directive as a sequence of clauses, as shown in Listing 6.1.
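
Listing 6.1 itself is not included in this extract. As an illustrative sketch of the general shape, the following example (not the book's listing) uses the parallel directive with two clauses, num_threads and private:

#include <stdio.h>
#include <omp.h>

int main(void) {
    int x = 0;
    /* Directive name: parallel; clauses: num_threads(4) and private(x). */
    #pragma omp parallel num_threads(4) private(x)
    {
        x = omp_get_thread_num();  /* each thread assigns its own private copy of x */
        printf("Thread %d sees x = %d\n", omp_get_thread_num(), x);
    }
    /* The original shared x is unaffected by the threads' private copies. */
    printf("After the parallel region, x = %d\n", x);
    return 0;
}

Here the num_threads(4) clause requests four threads for the region, and private(x) gives each thread its own uninitialized copy of x for the duration of the region.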
