Delay performance of data-center queue with setup policy and abandonment
Tuan Phung-Duc¹ · Ken’ichi Kawanishi²
© Springer Science+Business Media, LLC, part of Springer Nature 2019
Abstract
This paper considers a finite-capacity multiserver queue for performance modeling of data centers. One of the most important issues in data centers is saving server energy. To this end, a natural policy is to turn off a server immediately once it has no job to process. Under this policy, called the ON–OFF policy, a server must be set up upon the arrival of a new job. During the setup time, the server cannot process jobs but still consumes energy. To mitigate this drawback, this paper considers a setup policy in which the number of servers in setup at any time is limited. We also consider an extension of the setup policy in which some of the servers, but not all of them, are allowed to remain idle for a random amount of time. The main purpose of this paper is to analyze the delay distribution of the multiserver queue under these setup policies. We assume that jobs may abandon the queue without receiving service. We formulate the queue length process as a two-dimensional continuous-time Markov chain. It can be shown that the distributional Little’s law cannot be applied to obtain the waiting time distribution from the queue length distribution. Therefore, we construct a three-dimensional absorbing Markov chain that describes the virtual waiting time process. We then obtain a phase-type expression for the stationary waiting time distribution. We evaluate the performance of the multiserver queue with the setup policy, and then discuss the optimal control parameters of the policy based on the delay performance as well as the mean power consumption.

Keywords Data center · Setup time · Staggered setup policy · Delayed-off policy · Abandonment
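The model described above (a finite-capacity M/M/c-type queue with ON–OFF servers, a limit on the number of servers in setup, and exponential abandonment) can be illustrated with a small simulation. The sketch below is not the paper's analytical CTMC/phase-type method; it is a hedged Gillespie-style simulation of the same Markovian dynamics, with all function and parameter names (`simulate_staggered_setup`, `lam`, `alpha`, `theta`, `setup_limit`) chosen here for illustration. It assumes exponential interarrival, service, setup, and patience times, and does not model the delayed-off extension.

```python
import random

def simulate_staggered_setup(lam, mu, alpha, theta, c, K, setup_limit,
                             horizon=50_000.0, seed=1):
    """Gillespie-style simulation of an M/M/c/K queue with ON-OFF servers,
    a staggered setup limit, and exponential abandonment (a sketch, not the
    paper's analysis).

    State: n = jobs in system, k = servers that are ON (each serving a job).
    At most `setup_limit` servers may be in (exponential) setup at once.
    """
    rng = random.Random(seed)
    t = 0.0
    n = k = 0
    arrivals = abandoned = 0
    area = 0.0  # time-integral of n, for the time-average number in system
    while t < horizon:
        waiting = n - k
        in_setup = min(setup_limit, c - k, waiting)
        rates = (
            lam if n < K else 0.0,  # arrival (blocked when the system is full)
            k * mu,                 # service completion at an ON server
            in_setup * alpha,       # a setup completes: one more server ON
            waiting * theta,        # a waiting job abandons
        )
        total = sum(rates)
        if total == 0.0:
            break  # only possible for degenerate parameter choices
        dt = rng.expovariate(total)
        area += n * dt
        t += dt
        u = rng.random() * total
        if u < rates[0]:
            n += 1; arrivals += 1
        elif u < rates[0] + rates[1]:
            n -= 1
            if n < k:      # no waiting job to take over, so the server turns OFF
                k -= 1
        elif u < rates[0] + rates[1] + rates[2]:
            k += 1         # freshly set-up server starts serving a waiting job
        else:
            n -= 1; abandoned += 1
    mean_jobs = area / t
    p_abandon = abandoned / max(arrivals, 1)
    return mean_jobs, p_abandon
```

Varying `setup_limit` in such a simulation gives a quick, approximate feel for the trade-off the paper studies analytically: a smaller limit saves setup energy but lengthens delays and raises the abandonment probability.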
Ken’ichi Kawanishi [email protected]
Tuan Phung-Duc [email protected]

¹ Department of Policy and Planning Sciences, University of Tsukuba, Tsukuba, Ibaraki 305-8577, Japan
² Division of Electronics and Informatics, Gunma University, Kiryu, Gunma 376-8515, Japan
Annals of Operations Research
1 Introduction

Data centers are the core of cloud computing, which is nowadays one of the most important platforms of the information society. Data centers usually consist of a large number of servers, consuming a huge amount of energy. While data centers are widely used as basic infrastructure, they are still designed for peak hours, and thus a large portion of servers are idle at off-peak times (Maccio and Down 2015). It is reported that the power consumption of an idle server, not serving a job and just waiting for a new one, is about 60% of its peak (Barroso and Hölzle 2007). Therefore, from a management point of view, power saving of servers has a great impact on the cost reduction of data centers. To this end, one simple approach is to turn off physical servers once they become idle. Such a policy is called the ON–OFF policy (Gandhi et al. 2010). The ON–OFF policy is simple but has some