Medusa: A Novel Stream-Scheduling Scheme for Parallel Video Servers

Hai Jin, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China. Email: [email protected]

Dafu Deng, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China. Email: [email protected]

Liping Pang, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China. Email: [email protected]

Received 6 December 2002; Revised 15 July 2003

Parallel video servers provide highly scalable video-on-demand service for a huge number of clients. The conventional stream-scheduling scheme does not use I/O and network bandwidth efficiently. Other schemes, such as batching and stream merging, can effectively improve server I/O and network bandwidth efficiency. However, the batching scheme results in long startup latency and high reneging probability, and the traditional stream-merging scheme does not work well at high client-request rates because it retransmits large amounts of the same video data. In this paper, a novel stream-scheduling scheme, called Medusa, is developed to minimize server bandwidth requirements over a wide range of client-request rates. Furthermore, the startup latency introduced by the Medusa scheme is far less than that of the batching scheme.

Keywords and phrases: video-on-demand, stream batching, stream merging, multicast, unicast.

1. INTRODUCTION

In recent years, many cities around the world already have, or are deploying, fibre-to-the-building (FTTB) networks, on which users access the optical-fibre metropolitan area network (MAN) via the fast LAN in the building. This kind of large-scale network raises the end-user bandwidth to 100 Mb per second and has enabled the increasing deployment of larger-scale video-on-demand (VOD) systems. Due to their high scalability, parallel video servers are often used as the service providers in these VOD systems. Figure 1 shows a diagram of the large-scale VOD system. On the client side, users request video objects via their PCs or dedicated set-top boxes connected to the fast LAN in the building. Because 100 Mb/s Ethernet is widely used as the in-building network owing to its excellent cost-effectiveness, in this paper we focus only on clients with this bandwidth capacity and consider VOD systems with a homogeneous client network architecture.

On the server side, the parallel video servers [1, 2, 3] have two logical layers. Layer 1 is an RTSP server, which is responsible for exchanging RTSP messages with clients and for scheduling the different RTP servers that transport video data to clients. Layer 2 consists of several RTP servers that concurrently transmit video data according to RTP/RTCP. In addition, video objects are usually striped into many small segments that are uniformly distributed among the RTP server nodes (a simple layout is sketched below), so that the high scalability of the parallel video servers can be guaranteed [2, 3]. Obviously, the key bottleneck of this kind of VOD system is the I/O and network bandwidth on the server side.
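To make the segment layout concrete, the following minimal Python sketch (not taken from the paper) shows one way to stripe a video object into fixed-size segments round-robin over the RTP server nodes and to locate the node holding a given segment. The segment size, node count, and names such as stripe and node_of are illustrative assumptions, not part of the Medusa design.

    # Illustrative sketch: round-robin striping of a video object into
    # fixed-size segments across the RTP server nodes, plus the reverse
    # lookup an RTSP scheduler could use to find a segment's node.
    # Segment size, node count, and all names are assumed for illustration.

    SEGMENT_SIZE = 256 * 1024      # bytes per segment (assumed)
    NUM_RTP_NODES = 8              # number of RTP server nodes (assumed)

    def stripe(video_length: int):
        """Return {node_id: [segment indices]} for a video of
        video_length bytes, striped round-robin over the RTP nodes."""
        num_segments = (video_length + SEGMENT_SIZE - 1) // SEGMENT_SIZE
        placement = {node: [] for node in range(NUM_RTP_NODES)}
        for seg in range(num_segments):
            placement[seg % NUM_RTP_NODES].append(seg)
        return placement

    def node_of(segment_index: int) -> int:
        """Node holding a given segment under round-robin striping."""
        return segment_index % NUM_RTP_NODES

    if __name__ == "__main__":
        # A 700 MB video is spread evenly: each node stores roughly 1/8
        # of the segments, so a single stream draws data from every node.
        layout = stripe(700 * 1024 * 1024)
        print({node: len(segs) for node, segs in layout.items()})
        print(node_of(42))   # -> 2

Because every stream fetches its segments from all RTP nodes in turn, the load is balanced across the server, and the aggregate server I/O and network bandwidth becomes the shared resource that the scheduling schemes discussed in this paper aim to conserve.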