Complexity of stochastic dual dynamic programming



Series A

Complexity of stochastic dual dynamic programming

Guanghui Lan

Received: 22 December 2019 / Accepted: 8 September 2020
© Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society 2020

Abstract
Stochastic dual dynamic programming is a cutting-plane type algorithm for multistage stochastic optimization that originated about 30 years ago. In spite of its popularity in practice, no analysis of the convergence rate of this method has been available. In this paper, we first establish the number of iterations, i.e., the iteration complexity, required by a basic dual dynamic programming method for solving single-scenario multi-stage optimization problems, by introducing novel mathematical tools including the saturation of search points. We then refine these basic tools and establish the iteration complexity for an explorative dual dynamic programming method proposed herein, as well as for the classic stochastic dual dynamic programming method, for solving more general multi-stage stochastic optimization problems under the standard stage-wise independence assumption. Our results indicate that the complexity of these methods increases only mildly with the number of stages T; in fact, it depends linearly on T for discounted problems. Therefore, these methods are efficient for strategic decision making, which involves a large number of stages but a relatively small number of decision variables in each stage. Without explicitly discretizing the state and action spaces, they may also be pertinent to the related areas of reinforcement learning and stochastic control.

Mathematics Subject Classification 90C25 · 90C06 · 90C22 · 49M37 · 93A14 · 90C15

Dedicated to Professor Alexander Shapiro on the occasion of his 70th birthday for his profound contributions to stochastic optimization. This research was partially supported by the NSF grant 1953199 and NIFA Grant 2020-67021-31526.

Guanghui Lan
[email protected]
H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA



1 Introduction

In this paper, we are interested in solving the following stochastic dynamic optimization problem
\[
\min_{x_1 \in X_1} \Big\{ H_1(x_1, c_1) + \lambda \mathbb{E}\Big[ \min_{x_2 \in X_2(x_1)} H_2(x_2, c_2) + \lambda \mathbb{E}\big[ \cdots + \lambda \mathbb{E}\big[ \min_{x_T \in X_T(x_{T-1})} H_T(x_T, c_T) \big] \big] \Big] \Big\},
\tag{1.1}
\]

with feasible sets $X_t$ given by
\[
X_1 := \big\{ x_1 \in \bar X_1 \subseteq \mathbb{R}^{n_1} : A_1 x_1 = b_1,\ \Phi_1(x_1, p_1) \le 0 \big\},
\tag{1.2}
\]
\[
X_t(x_{t-1}) \equiv X_t(x_{t-1}, \xi_t) := \big\{ x \in \bar X_t \subseteq \mathbb{R}^{n_t} : A_t x = B_t x_{t-1} + b_t,\ \Phi_t(x, p_t) \le Q_t x_{t-1} \big\}.
\tag{1.3}
\]

Here $T$ denotes the number of stages, $H_t(\cdot, c_t)$ are closed convex objective functions, $\bar X_t \subset \mathbb{R}^{n_t}$ are closed convex sets, $\lambda \in (0, 1]$ denotes the discounting factor, $A_t : \mathbb{R}^{n_t} \to \mathbb{R}^{m_t}$, $B_t : \mathbb{R}^{n_{t-1}} \to \mathbb{R}^{m_t}$, and $Q_t : \mathbb{R}^{n_{t-1}} \to \mathbb{R}^{p_t}$ are linear mappings, and $\Phi_{t,i}(\cdot, p_t) : \mathbb{R}^{n_t} \to \mathbb{R}$, $i = 1, \ldots, p_t$, are closed convex constraint functions. Moreover, $\xi_1 := (A_1, b_1, B_1, p_1, c_1)$ is a given deterministic vector, and $\xi_t := (A_t, b_t, B_t, Q_t, p_t, c_t)$, $t = 2, \ldots, T$, are the random vectors
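To make the nested min/expectation structure of (1.1) concrete, the following self-contained Python sketch evaluates such an objective by brute-force backward induction on a small hypothetical instance. The cost function, decision grid, and scenario values are made up for illustration, and the state coupling $X_t(x_{t-1})$ of (1.3) is dropped for simplicity, so this illustrates only the structure of the objective, not the dual dynamic programming algorithm analyzed in the paper.

```python
# Toy illustration (hypothetical data, not the paper's algorithm):
# evaluate a nested objective of the form (1.1) by backward induction.

T = 3
lam = 0.9                    # discount factor lambda in (0, 1]
grid = [0.0, 0.5, 1.0]       # discretized decision grid, same at every stage
scenarios = [0.5, 1.5]       # equally likely realizations of c_t for t >= 2

def stage_cost(x, c):
    # H_t(x_t, c_t): a simple closed convex stage cost (made up)
    return c * x * x + (1.0 - x)

def value(t, c):
    # V_t(c) = min_x { H_t(x, c) + lam * E[V_{t+1}(c_{t+1})] };
    # stage-wise independence makes the expectation independent of x here
    if t == T:
        return min(stage_cost(x, c) for x in grid)
    tail = sum(value(t + 1, cn) for cn in scenarios) / len(scenarios)
    return min(stage_cost(x, c) + lam * tail for x in grid)

print(value(1, 1.0))  # optimal nested objective value of the toy instance
```

For a discounted problem like this one, the backward recursion visits each stage once, which mirrors the mild (linear in $T$) growth in complexity discussed in the abstract.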