Measuring and optimizing system reliability: a stochastic programming approach
Joshua L. Pulsipher · Victor M. Zavala

Received: 4 January 2020 / Accepted: 13 February 2020
© Sociedad de Estadística e Investigación Operativa 2020
Abstract

We propose a computational framework to quantify (measure) and to optimize the reliability of complex systems. The approach uses a graph representation of the system, which is subject to random failures of its components (nodes and edges). Under this setting, reliability is defined as the probability of finding a path between source and sink nodes under random component failures, and we show that this measure can be computed by solving a stochastic mixed-integer program. The stochastic programming setting allows us to account for system constraints and for general probability distributions that characterize failures, and it allows us to derive optimization formulations that identify designs of maximum reliability. We also propose a strategy to solve these problems approximately and in a scalable manner by using purely continuous formulations.

Keywords Reliability · Design · Network · Topology

Mathematics Subject Classification 68M15 · 90B15 · 90C15 · 68R10
1 Introduction

In this work, we investigate the problem of quantifying the reliability of complex systems and of designing systems of maximum reliability. Such problems have a wide range of applications, including supply chains, transportation networks, energy networks, process networks, sensor networks, and control networks (Kim and Kang 2013). In these applications, it is vital to design systems that maintain functionality in the face of natural and man-made events (e.g., mechanical failures, power outages, weather, and cyber-attacks) (Yan et al. 2012). Despite its practical importance, quantifying the reliability of complex systems remains a technical challenge.

* Victor M. Zavala, [email protected], Department of Chemical and Biological Engineering, University of Wisconsin-Madison, 1415 Engineering Dr, Madison, WI 53706, USA
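The path-based notion of reliability used throughout this work can be illustrated with a small Monte Carlo sketch. This is not the paper's stochastic-programming formulation; the graph, the failure probabilities, and the sample size below are illustrative assumptions chosen for the example.

```python
import random
from collections import defaultdict

def path_exists(edges, source, sink):
    """Depth-first search for a directed path from source to sink."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    stack, seen = [source], {source}
    while stack:
        node = stack.pop()
        if node == sink:
            return True
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

def estimate_reliability(edges, fail_prob, source, sink,
                         n_samples=20000, seed=0):
    """Fraction of sampled failure scenarios in which at least one
    source-to-sink path survives (edges fail independently)."""
    rng = random.Random(seed)
    alive = 0
    for _ in range(n_samples):
        surviving = [e for e in edges if rng.random() >= fail_prob[e]]
        alive += path_exists(surviving, source, sink)
    return alive / n_samples

# Two disjoint paths s->a->t and s->b->t; each edge fails with
# probability 0.1, so each path survives with probability 0.81 and
# the exact system reliability is 1 - (1 - 0.81)**2 = 0.9639.
edges = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")]
fail_prob = {e: 0.1 for e in edges}
est = estimate_reliability(edges, fail_prob, "s", "t")
```

Sampling scales poorly when failures are rare, which is one motivation for the optimization-based measure developed in the paper.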
Reliability has traditionally been defined as the probability that a system remains functional under component failures (Ogunnaike 2009). The most prominent model used in industry to quantify reliability is based on so-called reliability block diagrams (RBDs). Here, the system is modeled as a network (a directed graph) of series/parallel paths in which each path has a single source and a single sink node. The system is said to function under a given failure if there exists at least one path between the source and the sink node. The RBD approach exploits the simple topology of series/parallel systems to analytically compute the reliability of the overall system from the reliabilities of its individual components (Thomaidis and Pistikopoulos 1994). Here, it is also implicitly assumed that the probability of failure of every component can be characterized using the same probability distribution. The availability of an analytical measure facilitates the design of systems.
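The analytical RBD computation mentioned above can be sketched with the classic series/parallel recursion, assuming independent component failures. The component names and reliability values below are made up for the example and are not taken from the paper.

```python
def series(*reliabilities):
    """A series block works only if every component works:
    R = r1 * r2 * ... * rn."""
    r = 1.0
    for p in reliabilities:
        r *= p
    return r

def parallel(*reliabilities):
    """A parallel block fails only if every component fails:
    R = 1 - (1 - r1) * (1 - r2) * ... * (1 - rn)."""
    q = 1.0
    for p in reliabilities:
        q *= 1.0 - p
    return 1.0 - q

# A pump (reliability 0.95) in series with two redundant valves
# (reliability 0.90 each): 0.95 * (1 - 0.1 * 0.1) = 0.9405.
system = series(0.95, parallel(0.90, 0.90))
```

Nested calls of `series` and `parallel` handle any pure series/parallel topology, but this recursion breaks down for general graphs with shared components, which is the gap the stochastic-programming formulation addresses.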