Risk-based testing: Method for identifying the right test cases
There is no such thing as bug-free software! Nevertheless, software is successfully used even in very critical systems. Software development processes have become so mature that the number of errors in the software can be reduced reliably, to the point where system failures caused by software are rare enough to be accepted by society. In such safety-relevant projects, mainly specification-based, experience-based and structure-based test design methods are used; risk-based testing has not played a significant role in this area so far.

At the same time, the complexity and scope of software are increasing strongly in the safety-relevant area as well, a development driven by trends such as Industry 4.0 and autonomous driving. Under what conditions could risk-based testing take on a more important role in the safety-relevant area in the future? How can this test design technique be improved so that it becomes more widely adopted? The following blog post discusses the strengths and weaknesses of risk-based testing and suggests ways to improve the technique itself. An overview of common test design methods can be found in the blog post Comparison and evaluation of different test design techniques.
Risk-based testing – What is it?
The ISTQB Glossary defines risk-based testing as follows: "Testing in which the management, selection, prioritization and application of test activities and resources are based on corresponding risk types and risk levels."
Stephan Grünfelder describes risk-based testing as follows in his book Software Test for Embedded Systems:
The aim is to direct the test effort so that most of it is invested where errors are most likely and where errors hurt the most, while the test effort in other areas is reduced. This assessment requires, on the one hand, precise knowledge of the criticality of the requirements and, on the other hand, data that allows conclusions to be drawn about the expected errors. Such data includes, for example:
- Comparative values from similar projects
- A high number of serious errors found during code reviews of certain modules
- A high error density for certain functions in system tests already performed
For Stephan Grünfelder, this description refers to software testing. However, it can easily be transferred to all other test levels, e.g. system testing.
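To make this concrete, the following minimal sketch (not taken from Grünfelder's book; all feature names, scales and numbers are purely hypothetical) shows how criticality and error expectation could be combined into a simple risk ranking:

```python
# Minimal sketch: ranking features by risk = likelihood * impact.
# All feature names and numbers are hypothetical illustration data.

features = {
    # feature: (error likelihood 1-5, impact/criticality 1-5)
    "user_login":        (2, 5),
    "report_generation": (4, 2),
    "payment_handling":  (4, 5),
    "ui_theming":        (3, 1),
}

# The higher the risk score, the earlier and more thoroughly the feature is tested.
ranked = sorted(features.items(),
                key=lambda item: item[1][0] * item[1][1],
                reverse=True)

for name, (likelihood, impact) in ranked:
    print(f"{name:20s} risk = {likelihood * impact}")
```

In this sketch the likelihood values would be fed by the data sources listed above, while the impact values come from the criticality of the requirements.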
Strengths of risk-based testing
The strength of the risk-based test design technique is that it focuses the search for errors on the system or software parts with the greatest risk. Test efforts such as
- creating the test specification,
- creating the test script for automated tests,
- creating the test sequence for manual tests,
- executing the tests, and
- evaluating the results
can be controlled efficiently with this approach. Before the test specification is created, the identified risks determine which tests will be created and performed.
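As a rough illustration of how such a decision could look in practice (this is an assumption, not a prescribed procedure; the thresholds and activity lists are hypothetical), a risk score could be mapped to the test activities that are planned for a feature:

```python
# Minimal sketch: deriving the planned test activities from a risk score.
# Thresholds and activity lists are hypothetical assumptions.

def planned_activities(risk_score: int) -> list[str]:
    """Map a risk score (e.g. likelihood * impact, 1..25) to planned test activities."""
    if risk_score >= 15:   # high risk: full treatment
        return ["test specification", "automated test script",
                "manual test sequence", "result evaluation"]
    if risk_score >= 6:    # medium risk: specify and automate only
        return ["test specification", "automated test script"]
    return ["exploratory smoke test"]  # low risk: minimal effort

print(planned_activities(20))  # full set of activities
print(planned_activities(4))   # ['exploratory smoke test']
```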
Excellent results can be achieved with risk-based testing if the process of identifying and assessing the greatest risks of a project, and the test priorities derived from them, is communicated transparently and the following persons are involved:
- the project manager,
- the customer,
- the system/software architect and
- other responsible persons, if applicable
Weaknesses of risk-based testing
Conversely, the greatest weakness of this test design technique lies precisely in the identification and selection of the tests to be created and performed, because there is no sufficiently systematic approach for finding the "right" tests. In practice, the quality of the results therefore often depends heavily on the person who selects the tests. The transparent communication mentioned above is only one means of avoiding major mistakes in test selection; it cannot replace a truly systematic approach.
This means, however, that the results of risk-based testing are often only of limited value. This is particularly problematic if the test results are predominantly positive, because passing tests say little about whether the right tests were selected in the first place. If the results are predominantly negative, it is at least possible to deduce from them whether the failures relate to the identified risks or not; however, doing so involves considerable effort, especially when testing large systems.
Systematizing the identification of tests
The accurate and efficient detection of failures is the great strength of risk-based testing. To exploit this strength fully, reliable and comprehensible data must be available that can be used to systematize the selection of tests. New, tool-supported approaches are now attempting to implement exactly this.
1. It is known from system and software engineering that software parts which are changed frequently also have an increased probability of errors. Data on the frequency of changes is recorded in the following tools:
   a. The versioning tool (e.g. Git) contains data about how frequently software modules are changed (a minimal example script is sketched after this list).
   b. The bug reporting tool contains the changed functionalities and, depending on the tool, possibly also the associated changed software modules.
2. It is also known that functionalities which are never tested have a high probability of errors. The coverage of the software by tests is recorded by:
   a. Tools that measure structural source code coverage (e.g. VectorCast, Tessy, Parasoft).
   b. The tool for non-intrusive system observation from the company Accemic, which records the tested data and control flows.
3. The tool Teamscale offers the possibility to merge and display the data from 1.a, 2.a and 2.b.
4. Evaluating the data from the bug reporting tool (1.b) usually requires writing small custom scripts.
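As an example of the kind of small script mentioned above, the following sketch determines the change frequency per file from the Git history (item 1.a). It assumes it is run inside a local Git working copy and is only meant as a starting point:

```python
# Minimal sketch of a small script: counting how often each file was changed
# in a Git repository, as a rough proxy for error-proneness (item 1.a).
# Assumes it is executed inside a local Git working copy.
import subprocess
from collections import Counter

# List every file path touched by every commit, one path per line.
log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

changes = Counter(path for path in log.splitlines() if path.strip())

# The most frequently changed modules are candidates for additional tests.
for path, count in changes.most_common(10):
    print(f"{count:4d}  {path}")
```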
For the risk-based test design technique, this means that the tests with the highest priority are created and executed first. After the first execution, the data on change frequency, bugs, code coverage, and data and control flow coverage can be evaluated. Based on the results, further tests can then be added systematically and in a targeted way.
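The following sketch illustrates how change frequency and coverage data could be combined to decide where tests should be added first; the module names, numbers and the scoring formula are hypothetical assumptions, not output from any of the tools mentioned above:

```python
# Minimal sketch: combining change frequency and coverage data to decide where
# additional tests are needed most urgently. All module names, numbers and the
# scoring formula are hypothetical; real data would come from the tools above.

changes_per_module = {        # e.g. derived from the Git history
    "engine_control.c": 42,
    "diagnostics.c":    17,
    "hmi_menu.c":        8,
}

coverage_per_module = {       # e.g. exported from a coverage tool, in percent
    "engine_control.c": 55.0,
    "diagnostics.c":    90.0,
    "hmi_menu.c":       30.0,
}

def test_gap_score(module: str) -> float:
    """High score = frequently changed but poorly covered -> add tests here first."""
    uncovered_fraction = 1.0 - coverage_per_module.get(module, 0.0) / 100.0
    return changes_per_module.get(module, 0) * uncovered_fraction

for module in sorted(changes_per_module, key=test_gap_score, reverse=True):
    print(f"{module:20s} gap score = {test_gap_score(module):.1f}")
```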
Conclusion
Risk-based test design techniques are recommended for the following application scenarios, among others:
- Testing of very large, complex, non-safety-relevant software systems
- Testing of non-safety-relevant software systems in which business-critical aspects cannot be efficiently captured with requirements (e.g. software with complex HMIs)
- Situations in which quick feedback on software changes is needed, but executing all tests takes longer than one night
- Testing of safety-relevant software systems that are still in early phases of development
The complexity of technical systems has been increasing for years, and there is no end in sight. The key driver of this innovation is software. Very powerful hardware in combination with complex software is the basis for trends such as the Internet of Things (IoT), autonomous driving, smart homes and human-robot collaboration, to name but a few. Today, software enables functionalities of a complexity that would never have been conceivable with electronics or mechanics alone. Risk-based testing will probably play an increasingly important role in the long term, especially if the approach shown above can be implemented successfully in practice.
Related HEICON Blog posts
- The non-intrusive measurement of structural coverage!
- Requirement completeness using data- and control flow analysis
- How many levels of software requirements are necessary and useful?
- Good safety development process – What is it?
- Management aspects of testing
- Structural Source Code Coverage – Cost without benefit?
Are you ready for a status workshop to analyse the improvement potential in your test engineering? Then send an e-mail to info[at]heicon-ulm.de or call +49 (0) 7353 981 781.