Message-Passing Programming

The message-passing programming model is based on the abstraction of a parallel computer with a distributed address space where each processor has a local memory to which it has exclusive access, see Sect. 2.3.1. There is no global memory. Data exchange must be performed by message passing: to transfer data from the local memory of one processor A to the local memory of another processor B, A must send a message containing the data to B, and B must receive the data in a buffer in its local memory. To guarantee portability of programs, no assumptions on the topology of the interconnection network are made. Instead, it is assumed that each processor can send a message to any other processor.

A message-passing program is executed by a set of processes, each with its own local data. Usually, one process is executed on one processor or core of the execution platform, and the number of processes is often fixed when the program is started. Each process can access its local data and can exchange information and data with other processes by sending and receiving messages. In principle, each process could execute a different program (MPMD, multiple program multiple data). But to make program design easier, it is usually assumed that all processes execute the same program (SPMD, single program, multiple data), see also Sect. 2.2. In practice, this is not really a restriction, since each process can still execute different parts of the program, selected, for example, by its process rank.

The processes executing a message-passing program can exchange local data by using communication operations, which are typically provided by a communication library. To activate a specific communication operation, the participating processes call the corresponding communication function provided by the library. In the simplest case, this is a point-to-point transfer of data from a process A to a process B.
In this case, A calls a send operation, and B calls a corresponding receive operation. Communication libraries often provide a large set of communication functions to support different point-to-point transfers and also global communication operations like broadcast, in which more than two processes are involved; see Sect. 3.6.2 for a typical set of global communication operations.

T. Rauber and G. Rünger, Parallel Programming, DOI: 10.1007/978-3-642-37801-0_5, © Springer-Verlag Berlin Heidelberg 2013


A communication library could be vendor- or hardware-specific, but in most cases portable libraries are used, which define the syntax and semantics of communication functions and which are supported for a large class of parallel computers. By far the most popular portable communication library is MPI (Message-Passing Interface), but PVM (Parallel Virtual Machine) is also often used, see [69]. In this chapter, we give an introduction to MPI and show how parallel programs with MPI can be developed. The description includes point-to-point and global communication operations.