Static analysis and dynamic testing: Even after several decades of software engineering, we are still far away from guaranteed error-free software. Even for software developed to the highest safety standards, no one can guarantee absolute freedom from errors. All functional safety standards recognize that error-free software cannot be guaranteed with the current state of the art. The measures in the standards aim to minimize the number of errors and to detect errors that do occur in time to prevent serious consequences.
Static analysis and dynamic testing are two such measures. Especially in safety-relevant systems, both are required, even though in some cases they find the same errors.
Extensive validation, verification and testing comes at a high cost. In most projects this creates strong pressure to develop an optimized test strategy.
A prerequisite for a successful test optimization strategy is a thorough understanding of the strengths and weaknesses of static analysis and dynamic testing. The following article provides a basic overview of the topic.
Static testing or static analysis
Often the term static analysis is used for static testing. In its glossary, the ISTQB defines static analysis as follows:
The process of evaluating a component or system without executing it, based on its form, structure, content, or documentation. [After ISO 24765]
In contrast, static testing is defined as follows:
Testing a work product without code being executed.
However, it is not clear to me how a workable distinction between the two definitions can be made in practice, and I am not aware of any distinction that is practicable. Consequently, I will use both terms interchangeably in the following.
The ISTQB glossary defines dynamic testing as follows:
Testing that involves the execution of the software of a component or system.
According to ISTQB, dynamic analysis differs somewhat from dynamic testing. The ISTQB glossary provides the following definition:
The process of evaluating behavior, e.g., memory performance, CPU usage, of a system or component during execution. [After IEEE 610]
The distinction between dynamic analysis and dynamic testing makes sense in practice. Meaningful proof of memory efficiency and CPU usage can only be provided if dynamic tests are combined with a mix of different analyses.
Overview of static analyses or static testing
Which activities in a development project fall under the terms static analysis and static testing in practice? The following diagram provides an overview:
Reviews of requirements, architecture, design, source code and traceability
These reviews have in common that they must largely be carried out manually. There is still no automated way to answer the question of whether a requirement is correctly implemented in an architecture, design or source code. Only for traceability is there some tool support. However, no tool is able to check the content of links, i.e. only a human can decide whether the linked source really matches the requirement. Tools can only provide support through rules such as "every requirement must be assigned to at least one architecture element".
Checking the coding guidelines
Coding guidelines should cover two main aspects. On the one hand, they exclude programming language constructs that have proven particularly error-prone in the past. On the other hand, they make the source code more readable and easier to understand.
In contrast to the reviews above, checking compliance with coding guidelines can nowadays be largely automated. There are now other coding standards besides MISRA, so every project should be able to find the right subset for itself, and for the vast majority of these rules there are tools that can automatically check their implementation in the code.
With regard to the source code, experience shows that there are usually only a few readability rules so specific that no tool can check them. But before you define such a rule for your project, you should be sure that it really is important. Otherwise, it is not worth the effort of checking compliance with it.
Checking source code metrics
Just like coding guidelines, many source code metrics can be checked automatically. The best-known metric for software is certainly cyclomatic complexity (McCabe). However, there are many other measures that can be used to evaluate software. Examples are:
- Number of lines per function
- Number of variables in a function
- Number of exit points
- Ratio of comment lines to code lines
After a temporary hype around using these metrics to measure software quality, it is now widely accepted that their validity is quite limited. They are no more than a first indicator of quality; further aspects must be considered in order to assess quality reliably.
Analysis to detect runtime errors
An often underestimated static analysis is runtime error analysis. The best-known types of runtime errors are:
- Division by zero
- Array accesses outside the valid array bounds
- Invalid pointer manipulations and dereferences
- Integer and floating-point arithmetic overflows
In the past, such checks were part of manual code review, especially in aerospace. Of course, the goal of dynamic testing was and is to find such errors. Nowadays, however, there are tools that can detect these errors by static analysis; depending on the tool, this can even be based on mathematically sound methods. In most cases this analysis is clearly preferable to dynamic testing for finding runtime errors. Several well-known tools exist for this analysis.
What characterizes dynamic testing?
In contrast to the various static analyses discussed, dynamic testing is characterized by the fact that the test object is actively executed.
The greatest strength of dynamic testing is the systematic proof of the functionality of the developed system or software. No static method comes even close to dynamic testing in its effectiveness at proving the correctness of the implemented functionality.
The experience of the last 30 years in industry shows that good dynamic testing results are achieved when the system under test is broken down into 3–4 levels and appropriate tests are performed at each level. These are usually software unit tests at the lowest level, then SW/SW or HW/SW integration tests, and system tests at the top level.
At each level, a mix of different test design techniques is then applied. The diagram shows the most important rough categories of test design techniques. Details are described in the blog post Test design techniques.
The cost of good dynamic testing is usually many times higher than that of static analysis and testing. However, since the two cannot completely replace each other, because each finds different errors, a smart verification and validation strategy is crucial to keeping costs under control. A good strategy uses the methods mentioned above in a targeted and complementary way for the respective project. It makes sense to document this strategy in order to make the procedure transparent for all project participants; this is also the best way to adapt and optimize the strategy step by step.
Dynamic analysis combines dynamic testing with static analyses
In embedded systems there are always questions that can only be answered adequately with a combination of static analysis and dynamic testing. These include worst-case considerations in particular. Nowadays, the worst case of a system, both in terms of memory usage and timing behavior, can only be determined meaningfully with dynamic analysis, and in most cases this only approximates the actual worst case. The exact value can often no longer be determined due to the complexity of the hardware and software used. To ensure that this approximation is sufficiently reliable and correct, static analyses of the memory behavior or the timing behavior of the system are carried out. Based on these analysis results, dynamic tests are then created which confirm the static analysis.
A similar procedure is often necessary to prove system performance. Here it may be necessary to supplement a dynamic test with a static analysis, since the physically available test environments are not sufficient to cover certain scenarios.
Long-term and endurance tests are often supplemented by static analysis to obtain a valid overall result.
This article has shown the scope of dynamic and static analyses and tests, and that the measures cannot replace each other. A system which has only been statically analyzed and tested has no proof of functional correctness.
A system which is only dynamically tested often contains errors which, due to the limited scope of functional tests, are not visible in the laboratory but can occur in the field. In addition, dynamic testing is many times more expensive and complex than static analysis/testing.
Accordingly, static analysis and dynamic testing should complement each other wisely. This can be achieved by creating an appropriate validation and verification strategy and writing it down in a project planning document. This also enables systematic improvement of the strategy.
Related HEICON Blog posts
- Risk-based testing: Method for identifying the right test cases
- Implicit Testing – A good idea (Part 1)?
- Implicit Testing – A good idea (Part 2)?
- Structural source code coverage and Requirements – Is there any dependency?
- Psychology of Testers
- Requirement Engineering – Aspects which are not even considered in theory!
- Requirement Engineering Embedded versus IT systems
- Requirement and Test Traceability – Any added value?
- Requirement/Code Reviews – The better TDD?
- Economical considerations on requirement reviews!
Are you ready for a status workshop to analyze improvement potential in your test engineering? Then send an email to: info[at]heicon-ulm.de or call +49 (0) 7353 981 781.