VIOS: A Variation-Aware I/O Scheduler for Flash-Based Storage Systems
1 School of Electronic and Information Engineering, Xi’an Jiaotong University, Shaanxi 710049, China
[email protected], [email protected]
2 Department of Software Engineering, ShenZhen Institute of Information Technology, Guangdong 518172, China
Abstract. NAND flash memory has gained widespread acceptance in storage systems because of its superior write/read performance, shock resistance and low-power consumption. I/O scheduling for Solid State Drives (SSDs) has received much attention in recent years for its ability to take advantage of the rich parallelism within SSDs. However, most state-of-the-art I/O scheduling algorithms are oblivious to the increasingly significant inter-block variation introduced by advanced technology scaling. This paper proposes VIOS, a variation-aware I/O scheduler that exploits the speed variation among blocks to minimize the access conflict latency of I/O requests. VIOS schedules I/O requests into a hierarchical-batch structured queue to preferentially exploit channel-level parallelism, followed by chip-level parallelism. Moreover, conflicting write requests are allocated to faster blocks to reduce the access conflicts experienced by waiting requests. Experimental results show that VIOS reduces write latency significantly compared to state-of-the-art I/O schedulers while attaining high read efficiency.

Keywords: Process variation · Solid state drive · I/O scheduling · Flash memory · Parallelism
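As a rough illustration of the hierarchical-batch structured queue described in the abstract, the sketch below groups pending requests into batches that first avoid channel conflicts and then fill in chip-level parallelism behind shared channels. The Request model, its field names, and the two-pass batching loop are illustrative assumptions, not the paper's actual data structures.

```python
from collections import namedtuple

# Illustrative request model; the field names (op, channel, chip) are assumptions.
Request = namedtuple("Request", ["req_id", "op", "channel", "chip"])

def build_hierarchical_batches(pending):
    """Split a FIFO list of requests into batches that can be issued together.

    Pass 1 (channel level): each batch first takes at most one request per
    channel, so those requests move data over different channels in parallel.
    Pass 2 (chip level): remaining requests that share a channel but target a
    still-idle chip are appended, exploiting chip-level parallelism behind the
    shared channel. Requests conflicting at the chip level wait for a later batch.
    """
    batches, remaining = [], list(pending)
    while remaining:
        used_channels, used_chips, batch, deferred = set(), set(), [], []
        for r in remaining:                      # channel-level pass
            if r.channel not in used_channels:
                used_channels.add(r.channel)
                used_chips.add((r.channel, r.chip))
                batch.append(r)
            else:
                deferred.append(r)
        remaining = []
        for r in deferred:                       # chip-level pass
            if (r.channel, r.chip) not in used_chips:
                used_chips.add((r.channel, r.chip))
                batch.append(r)
            else:
                remaining.append(r)
        batches.append(batch)
    return batches
```

In this reading, a batch never issues two requests to the same chip, so forming batches this way bounds the access conflicts any single request can suffer; keeping reads and writes in separate queues is a further refinement omitted from the sketch.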
1 Introduction
As NAND flash storage becomes cheaper, NAND flash-based SSDs are being regarded as powerful alternatives to traditional Hard Disk Drives (HDDs) in a wide variety of storage systems [1]. However, SSDs introduce an out-of-place update mechanism and exhibit asymmetric I/O properties. In addition, a typical SSD offers rich internal parallelism: it consists of a number of channels, with each channel connecting to multiple NAND flash chips [2,3]. Most conventional I/O schedulers, including NOOP, CFQ and Anticipatory, are designed to mitigate the high seek and rotational costs of mechanical disks, which raises many barriers to taking full advantage of SSDs. Thus, I/O scheduling for SSDs has received much
attention for its ability to take advantage of the unique properties of SSDs to maximize read and write performance. Most existing I/O scheduling algorithms for SSDs, such as PAQ [4], PIQ [5] and AOS [6], focus on avoiding contention for shared SSD resources, while others take special consideration of the Flash Translation Layer (FTL) [7] and garbage collection [8]. These works have demonstrated the importance of I/O scheduling for SSDs in reducing the number of read and write requests involved in conflicts, which are the major contributors to access latency. However, little attention has been paid to dynamically optimizing data accesses by exploiting the speed variation among blocks.
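The block-level speed variation mentioned above can be made concrete with a small allocation sketch: when a write request targets a chip that already has requests waiting, a variation-aware allocator can steer the write to the fastest free block so the chip is released sooner. The data structures, the latency table, and the conflict test below are illustrative assumptions rather than VIOS's actual implementation.

```python
def pick_block_for_write(free_blocks, program_latency_us, chip_queue_depth):
    """Choose a free block for an incoming write on one chip.

    free_blocks:        candidate free block IDs on the target chip
    program_latency_us: estimated per-block program latency (it differs across
                        blocks because of process variation and wear)
    chip_queue_depth:   number of requests already waiting on this chip
    """
    if not free_blocks:
        raise RuntimeError("no free block available; garbage collection needed")
    if chip_queue_depth > 0:
        # Conflicting write: minimise how long the chip stays busy so that the
        # requests waiting behind it are delayed as little as possible.
        return min(free_blocks, key=lambda b: program_latency_us[b])
    # No conflict: keep the allocator's default policy (here, simple FIFO).
    return free_blocks[0]

# Example: blocks 3 and 9 are free, block 9 programs faster, two requests wait.
assert pick_block_for_write([3, 9], {3: 900, 9: 600}, chip_queue_depth=2) == 9
```

Such a policy would have to coexist with wear leveling and garbage collection, concerns the sketch deliberately ignores.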