

P4

Partitioned preassigned pivot procedure. A procedure for arranging the basis matrix of a linear-programming problem into as near a lower-triangular form as possible. Such an arrangement helps in maintaining a sparse inverse, given that the original data set for the associated linear-programming problem is sparse.

See
▶ Linear Programming
▶ Revised Simplex Method
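As a rough illustration (not the P4 procedure itself), the following hypothetical Python sketch greedily orders the rows and columns of a basis matrix's nonzero pattern, at each step selecting a row with the fewest nonzeros among the still-unassigned columns, so that the permuted matrix comes out as close to lower triangular as the pattern allows. The function name, data layout, and tie-breaking rules are invented for this example.

def near_lower_triangular(pattern):
    """pattern: list of sets; pattern[i] holds the column indices of the
    nonzeros in row i of a square basis matrix.
    Returns (row_order, col_order), a row and column permutation that is
    intended to be nearly lower triangular."""
    m = len(pattern)
    remaining_rows = set(range(m))
    remaining_cols = set(range(m))
    row_order, col_order = [], []

    while remaining_rows:
        # Prefer the row with the fewest nonzeros among unassigned columns;
        # a count of one gives a perfect lower-triangular pivot, anything
        # larger creates above-diagonal "spikes".
        row = min(remaining_rows,
                  key=lambda r: len(pattern[r] & remaining_cols))
        active = pattern[row] & remaining_cols
        # Pivot on one nonzero column of that row (a real implementation
        # would also weigh numerical stability and column sparsity).
        col = min(active) if active else min(remaining_cols)
        row_order.append(row)
        col_order.append(col)
        remaining_rows.remove(row)
        remaining_cols.remove(col)

    return row_order, col_order


if __name__ == "__main__":
    # Nonzero pattern of a small 4 x 4 basis matrix (invented data).
    B = [{0, 2}, {1}, {0, 1, 2, 3}, {2, 3}]
    print(near_lower_triangular(B))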

Packing Problem

The integer-programming problem defined as follows:

Maximize c^T x subject to Ex ≤ e,

where the components of E are either 1 or 0, the components of the column vector e are all ones, and the variables are restricted to be either 0 or 1. The idea of the problem is to choose among items or combinations of items that can be packed into a container and to do so in the most effective way.

See
▶ Bin-Packing
▶ Set-covering Problem
▶ Set-partitioning Problem
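As a concrete illustration of the formulation above, the following Python sketch solves a tiny, invented set-packing instance by brute-force enumeration of the 0-1 vectors x; in practice such problems are handed to an integer-programming solver rather than enumerated.

from itertools import product

# Tiny illustrative set-packing instance (data invented for this sketch).
# E[i][j] = 1 means item j uses resource i, and each resource may be used
# by at most one chosen item (Ex <= e).
c = [4, 3, 5]                      # value of selecting each item
E = [[1, 1, 0],                    # resource 0 shared by items 0 and 1
     [0, 1, 1]]                    # resource 1 shared by items 1 and 2

best_value, best_x = None, None
for x in product((0, 1), repeat=len(c)):          # all 0-1 vectors
    feasible = all(sum(E[i][j] * x[j] for j in range(len(c))) <= 1
                   for i in range(len(E)))
    if feasible:
        value = sum(c[j] * x[j] for j in range(len(c)))
        if best_value is None or value > best_value:
            best_value, best_x = value, x

print(best_x, best_value)   # (1, 0, 1) with value 9: items 0 and 2 do not conflict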

Palm Measure

▶ Markovian Arrival Process (MAP)

Parallel Computing

Jonathan Eckstein
Rutgers, The State University of New Jersey, Livingston Campus, New Brunswick, NJ, USA

Introduction

Parallel computing is the use of a computer system that contains multiple, replicated arithmetic-logical units (ALUs), programmable to cooperate concurrently on a single task. Between 2000 and 2010, parallel computing underwent a sea change. Prior to this decade, the speed of single-processor computers advanced steadily, and parallel computing was generally employed only for applications requiring more computing power than a standard PC processor chip could deliver. Taking advantage of Moore's Law (Moore 1965), which predicts the steady increase in the number of transistors that can be packed into a given chip area, microprocessor manufacturers built processors that could execute a single stream of calculations at steadily increasing speeds. In the 2000–2010 decade, Moore's Law continued to hold, but the way that chip builders used the ever-increasing number of transistors began to change. Applying ever-larger numbers of transistors to a single sequential stream of instructions began to encounter diminishing returns, and while smaller transistors enabled increasing clock speeds, clock speeds are limited by energy consumption and heat dissipation issues. To use the ever-increasing number of available transistors, processor designers began placing multiple processor cores, essentially multiple processors, on each CPU chip. In the laptop and desktop markets, processors with four cores are now common, and CPU chips with only a single processing core are now rare. Thus, parallel processing is no longer only an effort to advance beyond the power available from mainstream computing platforms such as desktop and laptop computers; it has now become an integral part of such mainstream platforms.
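As a simple illustration of several cores cooperating on one task (this example is not part of the entry, and the workload is invented), the following Python sketch splits a summation across worker processes using the standard multiprocessing module.

from multiprocessing import Pool, cpu_count

def partial_sum(bounds):
    # Each worker process computes a partial sum of squares over its range.
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, cpu_count()
    step = n // workers
    chunks = [(k * step, n if k == workers - 1 else (k + 1) * step)
              for k in range(workers)]
    with Pool(workers) as pool:
        # The partial results are combined into the single overall answer.
        total = sum(pool.map(partial_sum, chunks))
    print(total)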

Kinds of Parallel Computers

The taxonomy of Flynn (1972) classifies parallel computers as either SIMD (Single Instruction, Multiple Data) or MIMD (Multiple Instruction, Multiple Data).