In-Memory Databases

What the reader will learn:
• The origins of in-memory databases
• The advantages and disadvantages of in-memory databases
• Different implementations of in-memory databases
• The type of applications suited to in-memory databases
• The use of personal computers with in-memory databases

8.1 Introduction

Disk-based database technology has influenced database design since the inception of electronic databases. One of the issues with disk-based media is that the physical design of systems tries to speed up processing by reducing disk access; in other words, disk input/output (I/O) is a limiting factor which needs to be optimised. In-memory databases have been described as a disruptive technology, or a disruptive tipping point, because they provide a significant improvement in performance and use of system resources. In-memory database systems are database management systems where the data is stored entirely in main memory. There are several competing technologies that implement this. For example, Oracle's TimesTen system is effectively a relational system loaded into memory. Another big player in the field is SAP with its HANA database, which offers column-based storage. In contrast, Starcounter is an OLTP (On-Line Transaction Processing) database that uses its own proprietary object-oriented data manipulation language and is based around NewSQL. A comparison between these technologies will be made later in the chapter.
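The difference between the row-oriented storage of a relational engine such as TimesTen and the column-based storage offered by HANA can be illustrated with a short sketch. The Python example below is purely illustrative and does not use either vendor's actual API; the table, its columns and its data are invented for the example.

    # Purely illustrative sketch (not TimesTen's or HANA's actual API):
    # the same small table held in a row-oriented and a column-oriented
    # in-memory layout. Table contents are invented for the example.

    # Row-based layout: each record is stored together, which suits
    # OLTP-style access such as "fetch the whole row for one key".
    row_store = [
        {"id": 1, "name": "Ann",  "salary": 42000},
        {"id": 2, "name": "Bob",  "salary": 38000},
        {"id": 3, "name": "Cara", "salary": 51000},
    ]

    # Column-based layout: each attribute is stored as a contiguous
    # array, which suits analytic queries that scan a single column.
    column_store = {
        "id":     [1, 2, 3],
        "name":   ["Ann", "Bob", "Cara"],
        "salary": [42000, 38000, 51000],
    }

    # OLTP-style access: read one complete record.
    print(row_store[1])

    # Analytic-style access: aggregate one column across all rows.
    print(sum(column_store["salary"]) / len(column_store["salary"]))

Either layout can be held entirely in main memory; the choice mainly affects which access patterns can avoid touching data they do not need.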

8.2 Origins

In early computers, memory was always the most expensive hardware component. Until the mid-1970s magnetic core memory was the dominant memory technology. It was made of magnetised rings (cores) that could be magnetised in one of two directions by four wires that passed through the centre of them, forming a grid. Two wires controlled the polarity and the others were sensors. This allowed binary representation of data. Its one advantage was that it was non-volatile: when the power went off the contents of memory were not lost. Core memory was, however, expensive and bulky. It also meant that programs were written in such a way as to optimise memory usage. One way of doing this was to use virtual memory, where data was swapped in and out of memory and onto disk storage. This optimised memory usage but degraded overall performance.

From the mid-1970s memory started to get cheaper and faster. For example, in 2001 the maximum capacity of memory was 256 megabytes; by 2012 that had risen to 16 gigabytes, a 64-fold increase. Cost, on the other hand, dropped from 0.2 US dollars a megabyte to just 0.009 US dollars per megabyte. But by far the biggest change was in speed: in 2002 the response time of hard disk drives was 5 milliseconds, whereas by 2012, using in-memory technology, response time had fallen to 100 nanoseconds, a 50,000-fold improvement. The last hurdle to overcome was the D, the durability of