Comparison and evaluation of different test design techniques.
The complexity of technical systems has been increasing for years, and there is no end to this in sight. The crucial driver of this innovation is software. Very powerful hardware in combination with complex software forms the basis for trends such as IoT, autonomous driving, smart homes and human-robot collaboration, to name just a few.
Today, software enables the realization of functionalities with a complexity that was never imaginable with electronics or mechanics alone.
However, this also increases the potential for errors in a system. There is no such thing as guaranteed error-free software! A major reason for this is that it is not possible to test an industry-relevant software system completely, because the number of data and control flow paths through such a system tends towards infinity. Against this background, the following article introduces common test design techniques and evaluates their strengths and weaknesses.
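To illustrate the problem, consider the following small, hypothetical C fragment: every independent decision doubles the number of control flow paths, and a loop whose bound depends on the input makes the number of paths practically unbounded.

```c
#include <stdbool.h>

/* Hypothetical example: every independent decision doubles the number of
 * control flow paths. With n such decisions there are 2^n paths; a loop
 * whose bound depends on the input makes the number practically unbounded. */
int process(const bool flags[], int n)
{
    int result = 0;
    for (int i = 0; i < n; i++) {   /* input-dependent loop bound         */
        if (flags[i]) {             /* each flag is an independent branch */
            result += i;
        } else {
            result -= i;
        }
    }
    return result;                  /* 2^n distinct paths for a fixed n   */
}
```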
Overview of the most important test design techniques
All test design techniques used in industry aim to find the “right” test cases from the potentially infinite number of test cases. As already mentioned, a complete test is only possible for trivial software.
The test design techniques can be distinguished as follows:
- Specification-based
- Structure-based
- Efficiency-based
- Experience-based
- Risk-based
Each of the mentioned test design techniques already carries the criterion for the selection of the “right” test cases in its name. To increase the test coverage, it is often useful to combine the design techniques.
Specification-based test design technique
Specification-based test design techniques are also known as black-box testing. They use the requirements or the architecture as the basis for creating tests. Experience from practical application over the last 20 years shows that the specification-based design technique is the most important of all test design methods. The reasons for this are:
- Requirements enable a systematic and traceable approach to testing
- The documentation of the architecture makes it possible to systematically localize weak points of the system through tests and reviews
- Tests can only prove individual aspects. Atomic requirements are the ideal prerequisite for this
- The test always proves the functionality of the system. The description of the functionality is the great strength of requirements.
- The systematic identification of the interfaces of a system to be tested is essential for the test. The documentation of the architecture defines exactly these interfaces.
- Important questions for later testing can already be considered while the requirements are being written. Requirements thus enable maximum parallelization of the development and test process.
The greatest weakness of requirements is their completeness: there is no measurable criterion for when a set of requirements is complete. The weakness of the architecture lies in its level of detail. Which level of detail of the architecture is the right one has to be reassessed in every project; a generally valid statement is hardly possible.
However, a requirement or an architectural aspect that does not exist cannot be tested with specification-based testing, so test gaps arise. Furthermore, it is not possible to prove all aspects of all requirements completely by testing, and there is no clear criterion for determining when an individual requirement has been tested completely. This, too, automatically results in test gaps.
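As a minimal sketch of the specification-based approach, assume a hypothetical requirement REQ-42: "The function shall limit the commanded speed to the range 0..150 km/h." The test cases below are derived purely from this requirement text, for example via equivalence classes and boundary values, without any knowledge of the implementation (a dummy implementation is included only to make the sketch runnable).

```c
#include <assert.h>

/* Possible implementation -- in a real black-box test this is unknown to
 * the tester; it is included here only so that the sketch compiles. */
int limit_speed(int requested_kmh)
{
    if (requested_kmh < 0)   return 0;
    if (requested_kmh > 150) return 150;
    return requested_kmh;
}

/* Test cases derived purely from the hypothetical requirement REQ-42:
 * "The function shall limit the commanded speed to the range 0..150 km/h."
 * Chosen via equivalence classes and boundary values. */
int main(void)
{
    assert(limit_speed(-1)  == 0);    /* below lower boundary    */
    assert(limit_speed(0)   == 0);    /* lower boundary          */
    assert(limit_speed(75)  == 75);   /* valid equivalence class */
    assert(limit_speed(150) == 150);  /* upper boundary          */
    assert(limit_speed(151) == 150);  /* above upper boundary    */
    return 0;
}
```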
Structure-based test design technique
Structure-based test techniques are also known as white-box test methods. The focus of the test is the verification of the source code structure. In practice, the following three categories of structural tests can be found (the sketch after the list illustrates the difference):
- Statement coverage
- Branch coverage
- MC/DC (Modified Condition/Decision Coverage)
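The difference between the three coverage levels can be sketched with a small, hypothetical C function that contains one decision with two conditions:

```c
/* Hypothetical function with one decision and two conditions
 * A: (a > 0) and B: (b > 0). */
int release_brake(int a, int b)
{
    int release = 0;
    if (a > 0 && b > 0) {
        release = 1;
    }
    return release;
}

/* Statement coverage: a single test case (a=1, b=1) executes every statement.
 * Branch coverage:    additionally requires the false outcome of the decision,
 *                     e.g. (a=0, b=1).
 * MC/DC:              each condition must be shown to independently change the
 *                     decision, e.g. (1,1), (0,1) and (1,0) -- three test cases. */
```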
The strength of this test design technique is that completeness can be clearly determined for these three categories. In this sense, the structure-based test design method is complementary to the specification-based one.
The main disadvantages of the method are in the following areas:
- The critical errors in software are usually found in the functionality and not in the structure of the code. Purely structure-based testing can hardly find functional errors
- In order to prove structural coverage of the source code, the code must be instrumented, i.e. modified, for the test. This modification has a negative effect on the performance of the system. In most cases, the consequence is that structural coverage can only be proven at unit level.
- Even structure-based tests, using the coverage measures mentioned above, only exercise a fraction of all possible software paths. A complete test of all possible software paths is not possible.
In the future, non-intrusive system observation will offer the possibility of measuring structural source code coverage without influencing the performance of the system. This opens up new possibilities for the use of this test method. More information can be found in the following blog posts:
- The non-intrusive measurement of structural coverage!
- Requirement completeness using data- and control flow analysis
Efficiency-based test design techniques
The most prominent representatives are stress and load tests. Efficiency-based test design techniques are an important subset of specification-based testing, and their advantages and disadvantages are similar.
The efficiency-based test techniques, however, pose a special challenge for the test environment. To prove a system response time of e.g. 10 ms, the test system itself must be able to work in a time frame of < 10 ms. Another example is the proof of the maximum currents that can flow simultaneously over certain interfaces: the electronics of the test system must be designed for this maximum value.
As can be seen from these examples, the efficiency-based test design techniques are strong cost drivers for the test environment.
Another special feature of these test methods is that many tests are only meaningful if a well-founded analysis is created alongside the test. In most cases, the test alone does not show that the worst possible case for the system was exercised; the analysis must provide exactly this justification.
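A minimal sketch of such an efficiency-based test, assuming a hypothetical 10 ms response-time requirement and a hypothetical interface of the system under test; the assertion alone does not show that the worst case was exercised, which is exactly why the accompanying analysis is needed.

```c
#include <assert.h>
#include <time.h>

/* Hypothetical interface of the system under test. */
void send_request(void);
void wait_for_response(void);

/* Sketch of an efficiency-based test for an assumed 10 ms response-time
 * requirement. The measurement is only meaningful if the test harness
 * itself adds far less than 10 ms of overhead and jitter. */
void test_response_time_10ms(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    send_request();
    wait_for_response();
    clock_gettime(CLOCK_MONOTONIC, &end);

    long long elapsed_ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                         + (end.tv_nsec - start.tv_nsec);

    assert(elapsed_ns <= 10LL * 1000000LL);   /* 10 ms requirement */
}
```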
Experience-based test design techniques
The best-known experience-based test design technique is certainly exploratory testing. In my view, usability testing is also one of the experience-based test design techniques. Their advantage lies in giving the tester as much room for creativity as possible: the tester should bring in his or her whole system experience and create creative test scenarios. Experience-based tests usually represent an optimal supplement to specification-based tests, because the many test scenarios they produce often help to reveal gaps in the requirements.
The disadvantage is that the quality of the tests correlates directly with the experience of the tester. A further challenge of experience-based tests is that it is often not possible to decide quickly and clearly whether a test result is correct or incorrect, and it is often difficult to create a clearly reproducible test sequence. Handling tests that have found an error but are not reproducible is one of the biggest challenges in test engineering.
Risk-based test design techniques
The advantage of risk-based test design is that the focus is on finding errors in the software parts with the greatest risk. From this perspective, it is therefore a very efficient test technique.
On the other hand, the methodology for identifying the tests that optimally cover the considered risk is not yet fully elaborated. In practice, the quality of the results depends strongly on the individual tester. In this respect, experience-based and risk-based test design procedures are similar.
New approaches to risk-based testing involve collecting data on the frequency of changes, the error history and the test coverage of software functions. These data are used to prioritize tests for regression testing. It is known from software engineering that all three aspects mentioned above significantly influence the probability of new errors in the software. It is therefore logical to focus the test effort on software parts that are very error-prone, are changed frequently or have never been tested before.
This approach is mainly used for extensive software products with a large number of existing tests, especially if the execution of all tests takes several days.
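A minimal sketch of such a prioritization, assuming a simple weighted risk score per software module; the metrics, the weights and the module names are hypothetical and would have to be calibrated for each project.

```c
#include <stdio.h>

/* Hypothetical per-module metrics, e.g. collected from the version control
 * system, the bug tracker and the coverage reports. */
typedef struct {
    const char *module;
    int changes_last_release;   /* frequency of changes           */
    int errors_found;           /* error history                  */
    double test_coverage;       /* 0.0 .. 1.0, existing coverage  */
} module_metrics_t;

/* Simple weighted risk score: frequently changed, error-prone and poorly
 * covered modules get the highest regression-test priority. The weights
 * are an assumption, not a fixed rule. */
double risk_score(const module_metrics_t *m)
{
    return 0.4 * m->changes_last_release
         + 0.4 * m->errors_found
         + 0.2 * (1.0 - m->test_coverage) * 10.0;
}

int main(void)
{
    module_metrics_t modules[] = {
        { "signal_processing", 12, 5, 0.35 },
        { "hmi_menu",           3, 1, 0.80 },
    };

    for (int i = 0; i < 2; i++) {
        printf("%s: risk score %.1f\n",
               modules[i].module, risk_score(&modules[i]));
    }
    return 0;
}
```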
Conclusion
There is no complete test of software! Therefore, different test design techniques have been developed in test engineering, each of which sets different priorities when selecting test cases. Many years of experience have shown that the specification-based and, as a subset thereof, the efficiency-based test design techniques are the most important if the product is to be tested as completely as possible. These methods are designed to systematically consider all test aspects from the beginning of the project.
However, since specification-based test procedures also have the disadvantage that completeness cannot be achieved, it often makes sense to use structure-based or experience-based test design procedures in addition.
In this way, important software releases can be tested as completely as possible. The test execution time plays a subordinate role in such situations.
Risk-based test design procedures are recommended when rapid feedback on software changes is required and the execution of all tests would take longer than one night. A further application scenario is non-safety-relevant software projects in which success-critical aspects cannot be efficiently defined with requirements (e.g. software with complex HMIs). In these projects, the advantages of risk-based test design methods are particularly effective.
Related HEICON Blog posts
- Requirement and Test Traceability – Any added value?
- How many level of Software requirements are necessary and useful?
- Good safety development process – What is it?
- Management aspects of testing
- Structural Source Code Coverage – Cost without benefit?
Are you ready for a status workshop to analyse improvement potential in your test engineering? Then send an email to: info[at]heicon-ulm.de or call +49 (0) 7353 981 781.