Abstract
Software testing is typically an ad hoc process where
human testers manually write test inputs and
descriptions of expected test results, perhaps
automating their execution in a regression suite. This
process is cumbersome and costly. This paper reports
results on a framework to further automate this
process. The framework combines automated test case generation, based on systematically exploring the input domain of the program, with runtime verification, in which execution traces are monitored and checked against properties expressed in temporal logic. Capabilities also exist for analyzing traces for
concurrency errors, such as deadlocks and data races.
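To make the trace-analysis idea concrete, the following is a minimal sketch, not the paper's actual framework: it checks a recorded finite trace against a simple "response" temporal property (every request is eventually acknowledged) and scans lock events for locks acquired in opposite orders, a basic indicator of deadlock potential. All event names and the trace format are illustrative assumptions.

```python
"""Hypothetical sketch: monitor a finite execution trace against a simple
temporal 'response' property and look for a basic lock-order violation.
Event names and trace formats are illustrative, not from the paper."""

def holds_response(trace, trigger, response):
    """True iff every `trigger` event is eventually followed by a `response`."""
    pending = 0
    for event in trace:
        if event == trigger:
            pending += 1
        elif event == response and pending > 0:
            pending -= 1
    return pending == 0

def lock_order_cycles(lock_events):
    """Return lock pairs acquired in both orders by different threads,
    a classic sign of potential deadlock (simplified lock-graph check)."""
    held = {}      # thread id -> set of locks currently held
    order = set()  # observed (outer, inner) acquisition pairs
    for thread, action, lock in lock_events:
        if action == "acquire":
            for outer in held.get(thread, set()):
                order.add((outer, lock))
            held.setdefault(thread, set()).add(lock)
        elif action == "release":
            held.get(thread, set()).discard(lock)
    return {(a, b) for (a, b) in order if (b, a) in order}

if __name__ == "__main__":
    print(holds_response(["req", "work", "resp", "req"], "req", "resp"))  # False
    print(lock_order_cycles([(1, "acquire", "A"), (1, "acquire", "B"),
                             (1, "release", "B"), (1, "release", "A"),
                             (2, "acquire", "B"), (2, "acquire", "A"),
                             (2, "release", "A"), (2, "release", "B")]))
```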
The input domain of the program is explored using a
model checker extended with symbolic execution.
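As a rough illustration of exploring the input domain by path, here is a toy sketch under stated assumptions: the branch conditions of a small program under test are listed explicitly, every combination of branch outcomes is treated as a path constraint, and a concrete input satisfying it is found by bounded search. A real tool would derive the constraints by symbolic execution and discharge them with a constraint solver; this sketch only mimics the enumeration of feasible paths.

```python
"""Hypothetical sketch of input-domain exploration: enumerate path constraints
of a toy program and find one concrete input per feasible path by bounded
search (a stand-in for symbolic execution plus a constraint solver)."""

from itertools import product

# Branch conditions of the toy program under test, in evaluation order.
CONDITIONS = [
    lambda x, y: x > y,        # branch 1
    lambda x, y: x + y == 10,  # branch 2
]

def generate_tests(domain=range(-5, 11)):
    """Return one concrete (x, y) input per feasible combination of branch outcomes."""
    tests = {}
    for outcomes in product([True, False], repeat=len(CONDITIONS)):
        # Path constraint: each condition must evaluate to its chosen outcome.
        for x, y in product(domain, repeat=2):
            if all(cond(x, y) == want for cond, want in zip(CONDITIONS, outcomes)):
                tests[outcomes] = (x, y)
                break
    return tests

if __name__ == "__main__":
    for path, inp in generate_tests().items():
        print(path, "->", inp)
```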
Properties are formulated in an expressive temporal
logic. A methodology is advocated that automatically
generates properties specific to each input rather than
formulating properties uniformly true for all inputs.
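The sketch below illustrates that idea under assumed names and data formats: for a generated input (here, a plan listing task names in order), a property specific to that plan is constructed automatically, requiring that exactly the plan's tasks start, in plan order, and that each started task eventually ends, and the property is then checked against an execution trace of (event, task) pairs.

```python
"""Hypothetical sketch of per-input property generation: derive a checker
from each generated plan and apply it to an execution trace. The plan and
event formats are illustrative assumptions, not the paper's notation."""

def property_for_plan(plan):
    """Build a checker for the ordering/completion property induced by `plan`:
    exactly the plan's tasks start, in plan order, and each one eventually ends."""
    def check(trace):
        starts = [task for event, task in trace if event == "start"]
        ends = {task for event, task in trace if event == "end"}
        return starts == list(plan) and all(task in ends for task in plan)
    return check

if __name__ == "__main__":
    plan = ["drive", "take_image", "transmit"]
    check = property_for_plan(plan)
    good = [("start", "drive"), ("end", "drive"),
            ("start", "take_image"), ("end", "take_image"),
            ("start", "transmit"), ("end", "transmit")]
    bad = [("start", "take_image"), ("start", "drive")]
    print(check(good), check(bad))  # True False
```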
The paper describes an application of the technology to
a NASA rover controller.