
Colin, S., Mariani, L. (2005). Run-Time Verification. In M. Broy, B. Jonsson, J.P. Katoen, M. Leucker, A. Pretschner (Eds.), Model-based Testing of Reactive Systems (pp. 525-555). Springer Verlag [10.1007/11498490_24].

Run-Time Verification

MARIANI, LEONARDO
2005

Abstract

For our present computational infrastructures to become a computer utility (see [Gru68]), the system elements must be reliable. The move towards open systems, such as ubiquitous computing, peer-to-peer computing, and autonomic computing, means there will be many systems producing interactions that are not predictable at design time. This trend also implies the incipient rise of specialization and standardization of algorithms, software, and capabilities, and the decline of the historical concentration on general-purpose computing. Building reliability into systems of this kind will require considerable intellectual and practical effort. Shipping systems and waiting for user feedback to correct bugs is simply no longer plausible. In run-time verification, one expects that, given a specification, an execution of an already tested and/or debugged program is verified to conform to that specification while the program runs, and that, if it does not, the program is automatically corrected while running. Starting from industrial and exemplar systems, we need to develop theories and use them to build tools that system builders can use to specify, design, build, test, audit, monitor, and verify programs. Although these are very early days, some of the projects already carried out in run-time verification produce tools that, looked at properly, are simply debuggers with fancy features and/or good tracing mechanisms. Examples include, inter alia, Jass [BFMW01], Opium [Duc90], Morphine [DJ01], Coca [Duc99], DynaMICs [GRMD01], Daikon [ECGN01], and ESC/Java [RLNS00]. Some approaches to run-time verification collect statistics at run time in order to perform some form of debugging later on. What is encouraging in the majority of these projects and tools, however, is the use of linear time logic (or extensions of it) to describe the monitor that observes the program behavior [Gei01], or as the basis of a specification language for the properties to monitor [DGJV01]. In particular, Hakansson et al. [HJL03] proposed the generation of on-line test oracles from a rich logic that contains past operators and metric time and that can handle data values by means of a quantification construct. What is needed is a tool (or set of tools) that software professionals can use to specify requirements, design the system, code, test, deploy, and monitor the software while it runs, something like an integrated development environment (IDE) that combines those attributes. JPAX [HR01a] is going in the right direction: it is an IDE that tries to combine the positive attributes of testing, i.e., its ability to scale up, with those of formal methods, i.e., temporal formulas for specification. It falls short of being a run-time verification tool, however, as it only monitors a program and reports its results; it does not automatically correct the monitored program. MaCS [KLS+02] comes closer to being a run-time verification tool: it combines monitoring a program with steering that program if, while running, it fails to conform to its specification. There are some obvious problems to tackle in this area, such as how far ahead in a running program a tool should look in order to correct it, and the computational complexity inherent in doing so. © Springer-Verlag Berlin Heidelberg 2005.
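To make the notion of run-time verification sketched above concrete, the following is a minimal illustrative sketch in Java, not taken from the chapter or from any of the cited tools (JPAX, MaCS, etc.): a hand-written monitor for the safety property "a resource is never used after it has been closed" (roughly G(closed -> G !use) in linear time logic), combined with a hypothetical steering hook that reacts to a violation while the trace is being processed. All names (RuntimeMonitorSketch, Monitor, Verdict, steer) are assumptions introduced for illustration only.

// Illustrative sketch only; the monitor and steering names are hypothetical.
import java.util.List;

public class RuntimeMonitorSketch {

    enum Verdict { OK, VIOLATION }

    // Monitor for the safety property G(closed -> G !use):
    // once a "close" event has been seen, any later "use" event is a violation.
    static class Monitor {
        private boolean closed = false;

        Verdict step(String event) {
            if (event.equals("close")) {
                closed = true;
            } else if (event.equals("use") && closed) {
                return Verdict.VIOLATION;
            }
            return Verdict.OK;
        }
    }

    // Steering hook: in a MaCS-like setting this would feed a corrective action
    // back into the running program; here it only reports and pretends to reopen.
    static void steer(String offendingEvent) {
        System.out.println("Violation on '" + offendingEvent + "': steering -> reopen resource");
    }

    public static void main(String[] args) {
        Monitor monitor = new Monitor();
        // Events would normally be emitted by instrumentation woven into the
        // program under scrutiny; a fixed trace stands in for it here.
        List<String> trace = List.of("use", "close", "use");
        for (String event : trace) {
            if (monitor.step(event) == Verdict.VIOLATION) {
                steer(event);
            }
        }
    }
}

In the tools surveyed in the chapter, such a monitor would typically be synthesized automatically from a temporal-logic formula rather than written by hand, and the steering action would actually alter the state of the running program instead of merely reporting the violation.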
Type: Book chapter
Title: Run-time verification
Language: English
Book: Model-based Testing of Reactive Systems
Editors: Broy, M; Jonsson, B; Katoen, JP; Leucker, M; Pretschner, A
Year: 2005
ISBN: 9783540262787
Volume: 3472
Publisher: Springer Verlag
Pages: 525-555
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/14786
Citations
  • Scopus 131
  • ISI Web of Science 29