Dynamic Testing Via Automata Learning

University of Dortmund, Chair of Programming Systems, Otto-Hahn-Str. 14, 44227 Dortmund, Germany, [email protected], Tel.: +49-231-755-7759, Fax: +49-231-755-5802
University of Dortmund, Chair of Programming Systems, Otto-Hahn-Str. 14, 44227 Dortmund, Germany, [email protected]
Chair of Services and Software Engineering, Universität Potsdam, August-Bebel-Str. 89, 14482 Potsdam, Germany, [email protected]

Abstract. This paper presents dynamic testing, a method that exploits automata learning to systematically test (black box) systems almost without prerequisites. Based on interface descriptions, our method successively explores the system under test (SUT), while at the same time extrapolating a behavioral model, which is in turn used to steer the further exploration process. Due to the applied learning technique, our method is optimal in the sense that the extrapolated models are most concise in consistently representing all the information gathered during the exploration. Using the LearnLib, our framework for automata learning, our method can elegantly be combined with numerous optimizations of the learning procedure, various choices of model structure, and, last but not least, the option to dynamically/interactively enlarge the alphabet underlying the learning process. All these features are illustrated using the web application Mantis, a bug-tracking system widely used in practice, as a case study. We show how the dynamic testing procedure proceeds and how the behavioral models arise that concisely summarize the current testing effort. It has turned out that these models, besides steering the automatic exploration process, are ideal for user guidance and for supporting analyses that improve system understanding.

This work has been partially supported by the European Union Specific Targeted Research Project SHADOWS (IST-2006-35157), exploring a Self-Healing Approach to Designing cOmplex softWare Systems. The project's web page is at https://sysrun.haifa.ibm.com/shadows
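To make the loop described in the abstract concrete, the following is a minimal, self-contained sketch of active automata learning in the style of Angluin's L* algorithm, with counterexample handling in the style of Maler and Pnueli. It is emphatically not LearnLib's API: the simulated SUT, both oracles, and all identifiers are illustrative stand-ins for a real test harness, and the idealized equivalence oracle is approximated by bounded testing.

# Illustrative sketch only -- not LearnLib's API. Membership queries are
# test runs on a (here: simulated) black-box SUT; the equivalence oracle
# is approximated by bounded testing; counterexamples are handled by
# adding all their suffixes as experiments (Maler/Pnueli), which keeps
# the rows of S pairwise distinct and avoids the consistency check.
from itertools import product

ALPHABET = "ab"

def sut_accepts(word):
    # Membership query: one test execution on the black-box SUT.
    # Secret behavior of this toy SUT: accept words ending in "ab".
    return word.endswith("ab")

def row(prefix, E):
    # One observation-table row: the SUT's verdicts on prefix + suffix.
    return tuple(sut_accepts(prefix + e) for e in E)

def hypothesis(S, E):
    # Close the table (every one-letter extension must hit a known row),
    # then read off a DFA whose states are the distinct rows.
    while True:
        rows = {row(s, E) for s in S}
        missing = next((s + a for s, a in product(S, ALPHABET)
                        if row(s + a, E) not in rows), None)
        if missing is None:
            break
        S.append(missing)
    delta = {(row(s, E), a): row(s + a, E)
             for s, a in product(S, ALPHABET)}
    return row("", E), delta          # initial state and transitions

def hyp_accepts(word, init, delta):
    state = init
    for a in word:
        state = delta[(state, a)]
    return state[0]                   # E[0] == "", so r[0] is the verdict

def find_counterexample(init, delta, max_len=6):
    # Idealized equivalence oracle, approximated by comparing SUT and
    # hypothesis on every word up to a fixed length.
    for n in range(max_len + 1):
        for w in map("".join, product(ALPHABET, repeat=n)):
            if sut_accepts(w) != hyp_accepts(w, init, delta):
                return w
    return None

S, E = [""], [""]      # access prefixes / distinguishing suffixes
while True:
    init, delta = hypothesis(S, E)
    cex = find_counterexample(init, delta)
    if cex is None:
        break          # SUT and model agree on all words up to the bound
    E += [cex[i:] for i in range(len(cex)) if cex[i:] not in E]

print(f"learned a {len({row(s, E) for s in S})}-state model "
      f"using {len(E)} distinguishing suffixes")

On this toy SUT a single counterexample ("ab") suffices: the refined observation table yields the correct three-state model, which the bounded equivalence check then accepts.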

1 Motivation

Testing was, is, and will remain an inevitable part of system development. No formal verification methodology can change that, because verification alone cannot fully capture the actual execution platform. Formal methods are, however, also very valuable for testing: model-based testing, for example, has led to a qualitative change in testing technology by providing means to measure the quality of a testing effort and to generate test suites according to a given notion of coverage or a specifically defined goal. One particularly interesting technique here is conformance testing, which generates test suites that guarantee a notion of equivalence between a model and an implementation under certain additional assumptions. In fact, there is a wealth of powerful techniques for the case where a system comes with a formal model. However, what can be done if there is no, or only a partial, formal model?
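To make the equivalence guarantee mentioned above concrete, here is a small sketch in the spirit of Chow's classical W-method for conformance testing, under the standard additional assumption that the implementation has no more states than the model. The model DFA (the three-state "ends with ab" automaton from the earlier sketch) and all names are illustrative, not taken from the paper.

# Illustrative sketch of W-method-style conformance-test generation.
# Assumption: the implementation has at most as many states as the model;
# then executing every test below and comparing accept/reject verdicts
# against the model establishes equivalence.
from collections import deque
from itertools import product

ALPHABET = "ab"
# Toy model DFA: accepts words ending in "ab" (state 0 is initial).
DELTA = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 2,
         (2, "a"): 1, (2, "b"): 0}
ACCEPT = {2}
STATES = {0, 1, 2}

def state_cover():
    # Shortest access word for every reachable state (BFS from 0).
    cover, queue = {0: ""}, deque([0])
    while queue:
        q = queue.popleft()
        for a in ALPHABET:
            if DELTA[(q, a)] not in cover:
                cover[DELTA[(q, a)]] = cover[q] + a
                queue.append(DELTA[(q, a)])
    return cover

def distinguishing_word(p, q):
    # BFS on state pairs for a suffix with differing accept/reject verdicts.
    seen, queue = {(p, q)}, deque([(p, q, "")])
    while queue:
        p, q, w = queue.popleft()
        if (p in ACCEPT) != (q in ACCEPT):
            return w
        for a in ALPHABET:
            succ = (DELTA[(p, a)], DELTA[(q, a)])
            if succ not in seen:
                seen.add(succ)
                queue.append((*succ, w + a))
    return None  # unreachable for a minimal DFA

cover = state_cover()
# P: transition cover; W: characterization set distinguishing all states.
P = set(cover.values()) | {v + a for v in cover.values() for a in ALPHABET}
W = {distinguishing_word(p, q) for p, q in product(STATES, STATES) if p < q}
suite = sorted({p + w for p, w in product(P, W)})
print(f"{len(suite)} conformance tests, e.g. {suite[:5]}")

The suite concatenates the transition cover P with the characterization set W: P drives the implementation across every transition of the model, and W then distinguishes every pair of model states, which is what yields the equivalence guarantee under the state-bound assumption.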