PACKING PROBLEM The integer-programming problem defined as follows: Maximize
ex
subject to Ex ≤ e
where the components of E are either 1 or 0, the components of the column vector e are all ones, and the variables are restricted to be either 0 or 1. The idea of the problem is to choose among items or combinations of items that can be packed into a container and to do so in the most effective way. See Bin packing; Set covering problem; Set partitioning problem.
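To make the formulation concrete, the following is a minimal sketch (not part of the original entry) that finds a maximum packing for a tiny hypothetical instance by brute-force enumeration; the matrix E, the items, and all names are illustrative assumptions.

```python
from itertools import combinations

# Hypothetical instance: each row of E is a resource/element of the container,
# each column is a candidate item; E[i][j] = 1 if item j uses element i.
E = [
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
]
n_items = len(E[0])

def feasible(chosen):
    """Check Ex <= e: each element is used by at most one chosen item."""
    for row in E:
        if sum(row[j] for j in chosen) > 1:
            return False
    return True

best = ()
for k in range(n_items, 0, -1):            # try larger packings first
    for chosen in combinations(range(n_items), k):
        if feasible(chosen):
            best = chosen
            break
    if best:
        break

print("Maximum packing:", best)            # here: items 0 and 2
```

Enumeration of this kind is only workable for toy instances; in practice the problem is posed as the integer program above and handled with integer-programming methods such as branch and bound.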
PALM MEASURE See Markovian arrival process.
PARALLEL COMPUTING

Jonathan Eckstein
Rutgers University, Piscataway, New Jersey

To the applications-oriented user, parallel computing is the use of a computer system that contains multiple, replicated arithmetic-logical units (ALUs), programmable to cooperate concurrently on a single task. This definition excludes kinds of concurrency that are not typically visible to the applications programmer, such as the overlapping of floating-point and integer operations, or the launching of multiple concurrent instructions on "superscalar" microprocessor chips. Parallel computing exists because, despite quick and steady advances in computing technology, there always exist problems that a single processor cannot solve in an acceptable amount of time. Thus, parallel computing is necessarily high-performance computing, and computational efficiency tends to be a prime concern in developing parallel applications.

KINDS OF PARALLEL COMPUTERS: The taxonomy of Flynn (1972) classifies parallel computers as either "SIMD" or "MIMD." In a SIMD (Single Instruction, Multiple Data) architecture, a single instruction stream controls all the ALUs in a synchronous manner. In MIMD (Multiple Instruction, Multiple Data) architectures, each ALU has its own instruction stream; such systems are typically built out of standard microprocessor chips. While there was active competition between SIMD and MIMD through the 1980s, MIMD has emerged as the clear winner: MIMD machines are cheaper to build and are adaptable to a much wider variety of programming styles.

Another distinction is between local and shared memory. In pure local-memory architectures, each processor has its own memory bank, and information can be moved between processors only by messages passed through a communication network. At the other end of the spectrum are pure shared-memory machines, also called symmetric multiprocessors or "SMPs," in which a single global memory bank is equally accessible to all processors. Such designs provide good performance and ease of programming for small numbers of processors, but are difficult to scale to large numbers of processors because of memory contention. The majority of parallel computers now in existence use this architecture - most computer vendors' high-performance server systems are SMPs, although most users do not bother to coordinate the processors, instead using them simultaneously for separate tasks. In a large-scale system, it is not generally practical to provide a dedicated connection between every pair of processors.
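As a rough illustration of the message-passing style described above, the following sketch, assuming Python's standard multiprocessing module, runs several worker processes that hold no shared state and return their partial results as explicit messages; the summation task and all names are illustrative only, not drawn from the original entry.

```python
from multiprocessing import Process, Queue

def worker(chunk, results):
    # Each process computes on its own private data and sends back one message.
    results.put(sum(chunk))

if __name__ == "__main__":
    data = list(range(1000))
    n_workers = 4
    chunks = [data[i::n_workers] for i in range(n_workers)]   # partition the data

    results = Queue()
    procs = [Process(target=worker, args=(c, results)) for c in chunks]
    for p in procs:
        p.start()
    total = sum(results.get() for _ in procs)   # collect one message per worker
    for p in procs:
        p.join()

    print(total)   # 499500, the same as sum(data)
```

On an SMP the same computation could instead be written with threads reading and writing a common array, which corresponds to the shared-memory style of programming.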