Convergence Analysis of Parametric Optimization Methods
1. Chapter Overview
This chapter deals with some simple convergence results related to the parametric optimization methods discussed in Chap. 5. The main idea underlying convergence analysis of an algorithm is to identify (mathematically) the solution to which the algorithm converges. Hence to prove that an algorithm works, one must show that the algorithm converges to the optimal solution. In this chapter, this is precisely what we will attempt to do with some algorithms of Chap. 5. The convergence of simulated annealing requires some understanding of Markov chains and transition probabilities. To this end, it is sufficient to read all sections of Chap. 6 up to and including Sect. 3.1.3. It is also necessary to read about convergent sequences. For this purpose, it is sufficient to read all the material in Chap. 9 up to and including Theorem 9.2. Otherwise, all that is needed to read this chapter is a basic familiarity with the material of Chap. 5. Our discussion on the analysis of the steepest-descent rule begins in Sect. 3. Before discussing the mathematical details, we review definitions of some elementary ideas from calculus and a simple theorem.
2. Preliminaries
In this section, we define some basic concepts needed for understanding convergence of continuous parametric optimization. The material in this section should serve as a refresher. All of these concepts will be required in this chapter. Readers familiar with them can skip this section without loss of continuity.
2.1. Continuous Functions
Definition 10.1 A function $f: D \to \mathbb{R}$ is said to be continuous on the set $D$ if and only if $\lim_{x \to c} f(x) = f(c)$ for every $c \in D$.
We now illustrate this idea with the following example function from $\mathbb{R}$ to $\mathbb{R}$: $f(x) = 63x^2 + 5x$. Note that the function must be continuous since for any $c \in \mathbb{R}$,
$$\lim_{x \to c} f(x) = \lim_{x \to c} \left(63x^2 + 5x\right) = 63c^2 + 5c = f(c).$$
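As a quick numerical check of this convergence, consider the following minimal Python sketch (the sample point $c = 1$ and the shrinking step sizes are arbitrary illustrative choices, not from the text); it evaluates $f$ on both sides of $c$ and compares with $f(c)$:

```python
# Numerical illustration of continuity: f(x) -> f(c) as x -> c.
def f(x):
    return 63 * x ** 2 + 5 * x

c = 1.0                                  # f(c) = 68
for h in [1e-1, 1e-3, 1e-5]:
    # Approach c from the left and the right; both values tend to f(c).
    print(h, f(c - h), f(c + h), f(c))
```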
Now consider the following example. A function $f: \mathbb{R} \to \mathbb{R}$ is defined as:
$$f(x) = \frac{x^2 - 5x + 6}{x^2 - 6x + 8} \ \text{ when } x \neq 2; \qquad f(x) = 90 \ \text{ when } x = 2.$$
Now, at $x = 2$, we have:
$$\lim_{x \to 2} f(x) = \lim_{x \to 2} \frac{x^2 - 5x + 6}{x^2 - 6x + 8} = \lim_{x \to 2} \frac{(x-2)(x-3)}{(x-2)(x-4)} = \lim_{x \to 2} \frac{x-3}{x-4} = \frac{1}{2} \neq 90.$$
This implies, from the definition above, that the function is not continuous at $x = 2$, and hence the function is not continuous on $\mathbb{R}$.
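The mismatch between the limit and the defined value can likewise be seen numerically. In the following minimal sketch (the sample points near $x = 2$ are arbitrary illustrative choices), the rational expression approaches $1/2$ from both sides while the defined value at $x = 2$ remains $90$:

```python
# f is discontinuous at x = 2: the limit there is 1/2,
# but the function value is defined to be 90.
def f(x):
    if x == 2:
        return 90.0
    return (x ** 2 - 5 * x + 6) / (x ** 2 - 6 * x + 8)

for x in [1.9, 1.99, 1.999, 2.001, 2.01, 2.1]:
    print(x, f(x))                       # values tend toward 0.5
print(2, f(2))                           # but f(2) = 90
```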
2.2. Partial Derivatives
Definition 10.2 The partial derivative of a function $f$ of multiple variables $(x(1), x(2), \ldots, x(k))$ with respect to the $i$th variable, $x(i)$, is defined as
$$\frac{\partial f(x(1), x(2), \ldots, x(k))}{\partial x(i)} \equiv \lim_{h \to 0} \frac{f(x(1), x(2), \ldots, x(i) + h, \ldots, x(k)) - f(x(1), x(2), \ldots, x(k))}{h}.$$
The partial derivative defined here is actually the first partial derivative.
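The limit in Definition 10.2 suggests a simple numerical approximation: perturb the $i$th coordinate by a small $h$ and form the difference quotient. The following is a minimal forward-difference sketch (the helper `partial_derivative`, the test function, and the step size $h$ are illustrative assumptions, not from the text):

```python
# Forward-difference approximation of the ith partial derivative,
# i.e., Definition 10.2 with a small fixed h in place of the limit h -> 0.
def partial_derivative(f, x, i, h=1e-6):
    x_perturbed = list(x)
    x_perturbed[i] += h                  # perturb only the ith coordinate
    return (f(x_perturbed) - f(x)) / h

# Example: f(x(1), x(2)) = x(1)^2 * x(2); the partial derivative with
# respect to x(1) at (3, 4) is 2 * 3 * 4 = 24.
f = lambda x: x[0] ** 2 * x[1]
print(partial_derivative(f, [3.0, 4.0], 0))   # approximately 24.0
```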
2.3. A Continuously Differentiable Function
Definition 10.3 A function $f: D \to \mathbb{R}$ is said to be continuously differentiable if each of its partial derivatives exists and is continuous on $D$.