Some people claimed that these systems are so different in scale that they are essentially incomparable. In particular, I heard the claim that hardware verification (where I spent much of my adult life) is easier because all the interfaces are well-defined.
Clearly, there is a lot of truth to that. And I know from personal experience that even the step from (1) to (2) is hard: For instance, in “hardware module” verification, the primary focus is to try to reach all corner cases. It took us a while to realize that in “SoC + low-level-SW” verification, the primary focus should be on reaching corner cases only within some specified “use cases”. Many such realizations probably await on the road to (3), (4) and (5).
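To make that shift in focus concrete, here is a toy Python sketch (the stimulus space, use-case names and their constraints are entirely made up): "hardware module" verification tries to reach every combination, while "SoC + low-level-SW" verification only cares about the combinations that can occur within a specified use case.

```python
import itertools

# Hypothetical stimulus space for an SoC + low-level-SW setup:
# every combination of power state, bus load and interrupt timing.
POWER_STATES = ["off", "sleep", "active", "boost"]
BUS_LOADS = ["idle", "light", "saturated"]
IRQ_TIMINGS = ["early", "mid", "late"]

# "Hardware module" style: try to reach *all* corner cases.
all_corners = list(itertools.product(POWER_STATES, BUS_LOADS, IRQ_TIMINGS))

# "SoC + low-level-SW" style: only corner cases that can occur
# within some specified use cases (an invented example list).
USE_CASES = {
    "camera_capture": {"power": {"active", "boost"}, "bus": {"light", "saturated"}},
    "standby_wakeup": {"power": {"sleep", "active"}, "bus": {"idle"}},
}

def corners_for(use_case):
    """Corner cases that are actually reachable within one use case."""
    uc = USE_CASES[use_case]
    return [c for c in all_corners if c[0] in uc["power"] and c[1] in uc["bus"]]

print(len(all_corners))                    # 36 combinations in the raw space
print(len(corners_for("standby_wakeup")))  # only 6 are legal in this use case
```

Even in this tiny example, the use-case constraint throws away most of the raw space, which is exactly why chasing every raw combination is the wrong goal at that level.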
On the other hand, there are clearly many things which apply to all examples in the range, e.g.:
- They all have more combinations than you could really test in a lifetime
- Interactions of basic inputs could result in unexpected consequences (and if you assume that this does not happen in hardware because the interfaces are well-defined, then I can only smile condescendingly).
- So you need a test / coverage plan, and some tradeoff analysis
- And as part of that you need to clearly define your Device Under Test (DUT) and the Verification Environment (VE) to be modeled / simulated around it. And you may have several of those in a single verification project.
- And experience from your initial testing is bound to make you modify your test / coverage plan, usually causing your current test / coverage score to actually go down
And so on.
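That last point about the score going down can be made concrete with a minimal sketch: treat the coverage plan as a set of items and the score as the fraction hit so far. The item names below are invented for illustration.

```python
# A coverage plan as a set of items we intend to hit, plus the set
# of items we have actually hit so far (names are illustrative only).
plan = {"reset_during_write", "back_to_back_reads", "fifo_full"}
hit = {"reset_during_write", "fifo_full"}

def score(plan, hit):
    """Fraction of planned coverage items hit so far."""
    return len(plan & hit) / len(plan)

print(f"{score(plan, hit):.0%}")  # 67%

# Initial testing teaches us about corners we had not considered,
# so we grow the plan -- and the score goes *down*.
plan |= {"fifo_full_during_reset", "ecc_error_on_read"}
print(f"{score(plan, hit):.0%}")  # 40%
```

Nothing regressed here; the denominator simply grew because we got smarter about what needs covering.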
So anyway, putting the differences and commonalities in this “verification range” in some reasonable conceptual framework is an interesting exercise. Here are some further thoughts on that, just to get your creative juices flowing.
A few important (and related) questions are:
- What is the DUT here?
- Is the interface between the DUT and VE well-defined?
- To what “radius” should we extend the VE?
For instance, in (1) the interface to the module (inputs and outputs) is normally completely specified, and the expectation is that this interface is all you need to know.
In (2), the interface (often in C code) is well-specified for some areas and not for others (and part of the verification process is understanding the boundaries).
In (3) and above, the input is the whole crazy world, including “goat-on-the-highway”. The expected output is also sometimes not completely determined: Somebody has to think about the goat-on-the-highway and then supply a generic answer for all animals. And then you need to ask him about highway-bridge-destroyed, and so on.
In (4) and especially (5), one first has to think hard about “what is the DUT here”. You may decide that the DUT is the change you are planning to engineer (e.g. “build a bridge to Malmo”). And then, the rest of the world is the VE.
 When I say “some”, don’t assume I mean “in the high thousands”. Reminds me of when I asked my cousin Doron about the popularity of his local choir, and he said “we were inundated with a postcard”.
 I know, you call it SUT, or DUV, or some other name. And you are of course right. Let’s call it DUT for now and get back to the topic at hand. I owe you one.