Concurrent Nomadic and Bundle Search: A Class of Parallel Algorithms for Local Optimization
Nodalpoint Systems LTD, Athens, Greece ([email protected])
Department of Computer Science, University of Ioannina, 1186, 45110 Ioannina, Greece
Department of Materials Science and Engineering, University of Ioannina, 1186, 45110 Ioannina, Greece
Abstract. We present a family of algorithms for local optimization that exploit the parallel architectures of contemporary computing systems to accomplish significant performance enhancements. This capability is important for demanding real-time applications, as well as for problems with time-consuming objective functions. The proposed concurrent schemes, namely nomadic and bundle search, are based upon well-established techniques such as quasi-Newton updates and line searches. The parallelization strategy consists of (a) distributed computation of an approximation to the Hessian matrix and (b) parallel deployment of line searches along different directions (bundles) and from different starting points (nomads). Preliminary results showed that the new parallel algorithms can solve problems in fewer iterations than their serial rivals.

Keywords: Parallel local search · Nomadic search · Concurrent search · Nested parallelism · Line search · Quasi-Newton
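The parallel deployment of line searches described in point (b) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the objective `f`, the fixed set of coordinate directions, and the crude backtracking step rule are all hypothetical stand-ins for the paper's quasi-Newton machinery.

```python
import concurrent.futures

def f(x):
    """Toy smooth objective (assumption: any expensive objective fits here)."""
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

def line_search(x, d, steps=(1.0, 0.5, 0.25, 0.125)):
    """Crude backtracking line search along direction d starting from x."""
    best_val, best_step = f(x), 0.0
    for a in steps:
        y = [xi + a * di for xi, di in zip(x, d)]
        fy = f(y)
        if fy < best_val:
            best_val, best_step = fy, a
    return best_val, best_step

def bundle_iteration(x, directions):
    """Run one line search per direction concurrently; keep the best point."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda d: (line_search(x, d), d), directions))
    (fy, a), d = min(results, key=lambda r: r[0][0])
    return [xi + a * di for xi, di in zip(x, d)], fy

# A 'bundle' of four coordinate directions from a single starting point.
bundle = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
x, fx = [0.0, 0.0], f([0.0, 0.0])
for _ in range(3):
    x, fx = bundle_iteration(x, bundle)
```

Threads are used here only for brevity; with genuinely expensive objectives a process pool or MPI ranks would take their place. The nomadic variant would additionally launch such bundles from several distinct starting points in parallel.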
1 Introduction
Optimization has broad practical applications in engineering, science, and management. Many of these applications either involve expensive function evaluations or require real-time response; examples include aircraft design, molecular modeling, space trajectory planning, and optimal sea routing. High-performance parallel computing can provide powerful tools for solving such optimization problems. Nowadays the computing power of a single-core CPU has reached a plateau and is improving at a very slow pace. The direction for performance enhancement has therefore turned from pushing the CPU clock higher towards creating multicore parallel architectures. However, these hardware developments alone cannot change the picture overnight: suitable software must be developed, based on algorithms that can exploit the parallel features of the hardware, in order to reach the desired performance levels. In critical real-time applications, an untimely (delayed) result is at best useless, if not costly or catastrophic. In addition, large-scale, hard problems with expensive objectives, previously set aside or abandoned as hopeless due to the extremely long computational times they required, are now being reconsidered in the light of parallel processing. Parallelizing a sequential algorithm usually yields only minor performance gains, and often the new algorithm is not equivalent to the original. However, an algorithm designed afresh to exploit parallelism can naturally be expected to attain high levels of performance. Monte Carlo methods are inherently parallel and their implementation on parallel systems is quite straightforward. Global optimization methods are also p

C. Voglis et al. In: R. Wyrzykowski et al. (Eds.): PPAM 2013, Part II, LNCS 8385, pp. 343–353, 2014. © Springer-Verlag Berlin Heidelberg 2014. DOI: 10.1007/978-3-642-55195-6_32
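As an illustration of that inherent parallelism, a Monte Carlo computation can be split into independent streams, one per worker, with the partial results merged only at the end. The sketch below is a hypothetical example (not from the paper) that estimates π; the function names and worker counts are assumptions.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def hits_in_circle(seed, n):
    """One independent Monte Carlo stream with its own seeded generator."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def parallel_pi(workers=4, n_per_worker=50_000):
    """Run the streams concurrently and merge the counts. No communication
    is needed until the final reduction, which is why Monte Carlo methods
    parallelize so easily."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(lambda s: hits_in_circle(s, n_per_worker),
                             range(workers)))
    return 4.0 * total / (workers * n_per_worker)

estimate = parallel_pi()
```

Again, threads are used for brevity; CPU-bound streams like these would use a process pool (or separate nodes) in practice, since each stream is pure computation with no shared state.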