Testing needs to be appropriate, effective, and evolve with the project.
That means it needs to verify the appropriate things, meeting the business goals and needs through a multifaceted testing strategy. It needs to be effective, minimizing the likelihood of bugs while minimizing the time spent on testing. It needs to evolve as the project changes and not grow uncontrollably.
The costs of poor testing can be significant. At a minimum, they can mean doing poorly in the marketplace. More significantly, if embedded system failures cause death, injury, or damage, the harm extends both inside and outside the company, which can mean lawsuits, fines, and penalties.
The news is full of incidents of failures that could have been prevented with better testing. That is not how you want the public to learn your company's name.
Pillar 4 of the Dojo Five Modern Embedded development practices is Effective Testing: Testing that is appropriate, effective, and evolves with the project.
What does this mean?
Appropriate: testing that verifies the appropriate things to ensure project success, using a multifaceted testing strategy.
Effective: testing that effectively roots out problems and verifies correct behavior and system attributes while spending a minimal amount of time on testing.
Evolves: testing that changes as the project changes to ensure that it remains appropriate and effective.
In contrast, the industry is full of examples of inappropriate testing (that focused on the wrong things), ineffective testing (that failed to catch problems or verify correct operation), and testing that didn’t evolve (that didn’t change as the project added features and complexity).
Testing is a complex subject. There are many ways to do it, and it’s not a panacea. Edsger Dijkstra famously said, “Program testing can be used to show the presence of bugs, but never to show their absence!” But failure to do appropriate, effective testing can have severe consequences when it allows bugs to escape into the world. [Read our blog on “Debugging”]
Testing has costs and consumes valuable resources. Making sure it’s appropriate, effective, and evolving maximizes the return on that investment.
The Cost of Poor Testing
Embedded systems range from consumer products to commercial, industrial, and military systems. Functionality ranges from passive monitoring and data collection to active automated control of those systems.
The costs of poor testing range from relatively minor to catastrophic. At one extreme, a buggy consumer product may do poorly in the marketplace, resulting in lost revenue and unrecovered development expenses, all the way up to failure of the company.
At the other extreme, buggy control systems can cause real physical harm, extending the costs beyond the company to the world at large. They can cause injury, death, and physical damage at both individual and mass levels.
These can have severe direct financial costs as well as subsequent costs due to liability and penalties. Consider, for example, a satellite launch vehicle and its payload, each costing hundreds of millions of dollars, being lost to bugs due to poor testing. Or an automotive control system that causes random accidents due to bugs. Even more catastrophic: a control system in a chemical plant that causes an explosion releasing a cloud of toxic vapor over a populated area, all because of bugs that were not found and addressed during testing.
Appropriate Testing
Appropriate testing focuses resources on testing things that represent actual risks. It depends on the business goals and needs.
It ensures that things are tested from various perspectives. No single type of test regime can uncover all possible bugs. Therefore, it's appropriate to test with a focus on different characteristics. This requires a hybrid, multifaceted test strategy.
- Functional: does processing perform the right operations and do them correctly?
- Interactions: do components interact with each other correctly?
- Performance and timeliness: do things happen at the correct time, and for hard real-time systems, within the correct deadlines?
- Security: do things operate in a secure manner to protect data and systems from breaches and hijacks?
It’s appropriate to test things at various levels, from the smallest unit of behavior in isolation to the full system in its operational context. [Read more about “Unit Testing“]
It’s appropriate to test things at all times, from the moment development begins to the point where a complete product is ready for release. This is especially important because the sooner a bug is identified and corrected, the lower the cost to deal with it. A TDD unit test that takes $10 of developer time (or a hundred of those) could save the company $500 million in lost launch vehicles, lawsuits, or environmental disasters. “For want of a nail, the kingdom was lost.”
Effective Testing
Effective testing ensures that the resources invested in testing are actually accomplishing something useful. The testing is adequately exercising the system to expose issues and verify correct behavior.
You can never test all the states of a sufficiently complex system. You can, however, build test suites that minimize the likelihood of bugs while also minimizing the time spent on testing. These suites then become part of the automated pipeline that forms Pillar 3: Automated. [Read Pillar 3]
Effective tests focus on areas of high risk or complexity, and things that have a history of causing problems. By testing behavior rather than the structure of the code, they remain effective as the underlying code changes, with minimal maintenance burden.
Effective testing also evolves as new things are added to the code. New behavior requires new tests in order to keep the test suite effective.
Effective testing consists of multiple types of tests:
- Unit tests: these test small units of behavior and are themselves standalone units, decoupled from other parts of the system and external dependencies. They demonstrate that things behave as expected, so they can be relied upon as known-good building blocks.
- Integration tests: these test the combination of the small known-good units into larger units. They demonstrate that the units interact with each other as expected, creating known-good subsystems. Those can be further tested at larger integration scope.
- End-to-end tests: these test full systems. They not only demonstrate system-wide behavior, they also provide a platform for measuring characteristics and verifying that they are acceptable.
Test suites are themselves maintainable code. They should be refactored over the development of the project to ensure that they remain appropriate and effective.
Tests that are no longer appropriate should be removed in order to minimize the time spent on testing while still being effective. Remember: tests act as executable documentation of how to use the code. Stale tests can cause confusion about the code.
The challenge here is that no one wants to go back and work on existing tests. There's little motivation to change them unless they're directly interfering with progress. But just as the production code needs to be kept clean and not accumulate stale old code, the tests need the same care. That's what keeps them lean. Left alone, a test suite just accumulates tests over time, getting bigger and taking longer to run. Careful pruning can trim it back while keeping it effective. This requires a good understanding of the code and the tests.
Some tests can be removed because the cases are covered by other tests once the code itself has evolved further. Tests that were helpful during development may have been superseded by newer, better ones.
The test suite should still provide coverage, regression protection, and executable documentation of how to exercise the code. A well-maintained test suite is informative, as interesting to read as the code itself, and clearly spells out what the code should accomplish.
Inappropriate, Ineffective, Stale Testing
In contrast, inappropriate, ineffective, or stale testing is both a waste of resources and an opportunity cost. It squanders the opportunity to find the things that actually matter.
For example, a smoke test that must pass before a commit is allowed is a waste of time if it only exercises a part of the system that doesn’t include the area affected by the commit. That test might have been useful early in the project, but if it hasn’t evolved as the project has grown in scope, it’s no longer appropriate or effective for evaluating commits.
Similarly, unit tests that don’t exercise the way the code is actually used are ineffective. And tests that are based on the structure of the underlying code are brittle, requiring maintenance when that code changes.
For some motivational reading, two sources have been highlighting system failures for years, with extensive archives. The failures are not always the result of poor testing, but testing is often a factor. Effective testing will help keep your system from appearing in them.
- Jack Ganssle’s newsletter, The Embedded Muse, at https://www.ganssle.com/. “Inadequate testing” is number 3 on Jack’s Top Ten.
- The ACM Committee on Computers and Public Policy Forum on Risks to the Public in Computers and Related Systems, moderated by Peter G. Neumann, at https://catless.ncl.ac.uk/risks/. This covers other types of systems in addition to embedded, but the themes are often common across all types.
Need help with testing for your embedded product? We can help your team develop and implement testing that is appropriate, effective, and evolves with your project, as well as incorporate it into your automated pipelines.
Contact us with your questions, projects, and unique problems that need solving.
– Joe Schneider, Founder of Dojo Five –
Wanna stay in touch?
Subscribe to our newsletter! We'll keep you apprised of the latest news at Dojo Five as well as interesting stories relevant to the embedded firmware industry.