Checking probabilistic components

Summary: In this post I discuss the problem (common in autonomous systems verification) of checking things which don’t have “sharp” Boolean rules. I also suggest some (imperfect) solutions.

Say we want to create a growing catalog of Autonomous Vehicle (AV) scenarios, and use it to test our AV, as I suggested here. We are talking system-level tests here – testing the whole AV interacting with (mostly-simulated) roads, vehicles, people and so on. The AV is a complex beast, made of many types of components: mechanics, electronics, machine learning, tons of SW and more, so it can have many interesting kinds of bugs. (Note that the discussion below also applies to sub-system testing, but we’ll stick to whole-AV testing for now.)

There are lots of challenges there. Perhaps the biggest are (a) how to define, execute and mix scenarios and (b) how to check them.

This post is about the checking problem. There are two related (but distinct) issues which make checking AVs (and other intelligent autonomous systems) a real pain:

  1. Almost every rule (assertion) has exceptions: The AV should never go on a sidewalk (except if it must, so as to avoid hitting somebody). The AV should never go over the speed limit (except if all the traffic around it goes much faster). The AV should never hit a pedestrian (except if a previously-hidden person jumps right in front of the AV). And so on.
  2. The components / algorithms implementing the AV are often probabilistic in nature, and thus are not supposed to be “correct” 100% of the time.

Issue (1) is mainly about (verification) code complexity and clarity: We would like to be able to express all those exceptions in some convenient, aspect-oriented way and not “all in one place”. But once we have them all coded, checking (though complex) works fine.

Issue (2) presents a bigger problem for checking, as we’ll see in the following chapters.

About bugs vs. areas of concern

Suppose our AV crosses the dividing line separating same-direction parallel lanes. This act (called a “lane departure”) is not a bug – you should just not do it too often. There may be many reasons for lane departure: Bad road markings, sensor errors, the probabilistic nature of a machine learning algorithm, or even a simple, old-fashioned SW bug (e.g. under some conditions the process watching for lane departures gets delayed).

Note that there are also exceptions to the “no lane departure” rule, as in (1) above, but we’ll ignore that for the current discussion, and treat each lane departure as a small, non-terrible mistake.

Say it is OK to have a lane departure every 100 miles (on average). If it turns out that under some circumstances (say on a hilly terrain with the sun ahead) we get 10x that rate, we would like to know that. Ideally, we would like the verification system to automatically mark this as an “Area of Concern” (AOC), so an engineer will then inspect it and decide whether this is still OK, or should be considered as a bug and fixed (by specifying a better sensor, re-training the ML algorithm, fixing the SW bug, etc.).

This sounds straightforward enough. As I’ll describe below, when Boolean checks (“X should never happen”) fail, our verification automation infrastructure should produce “failure clusters” (sets of runs for which this check failed). So ideally, when probabilistic (statistical) checks (“The average frequency of X should not be higher than F”) fail, it should produce AOCs, which we should then inspect and decide what to do with them.

However, it turns out that doing that for probabilistic checks is pretty hard. To understand why, let me first walk you through how this is done for Boolean checks in a typical CDV (Coverage Driven Verification) environment.

How we use Boolean checks in classical CDV

If you are coming from HW verification, you know all about this:

Suppose we are trying to verify a router, which receives packets via input channels and sends each of them to one of several output channels according to the address field in the packet. How do we find really-hard-to-find router bugs?

“Classical” CDV works as follows:

  • Create a Verification Environment and run it with (the simulation of) the DUT many times
  • Each run will create many random packets (using some smart-random scheme) and send them to the DUT
  • It will also check for errors (see below). If any occurs, it will halt the run and log the error, plus the run parameters, in some database
  • Say we ran 20,000 such runs over the weekend, and 100 of them failed. In the morning, the engineers will come, use some heuristic to cluster those 100 failures into, say, 10 clusters (hopefully representing 10 bugs), and then set to work on debugging a representative run from each of the clusters (this clustering step is sketched below).
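
As a small illustration of that last clustering step, here is a minimal Python sketch that groups failed runs by a crude signature. The field names and the signature heuristic are assumptions for illustration, not any particular tool’s API:

    from collections import defaultdict

    def cluster_failures(failed_runs):
        # Group failed runs by a crude signature (which check fired, plus the
        # first part of the error message), hoping each cluster maps to one bug.
        clusters = defaultdict(list)
        for run in failed_runs:
            signature = (run["check_name"], run["error_message"].split(":")[0])
            clusters[signature].append(run)
        return clusters

In practice the clustering heuristics are smarter (stack traces, failure times, coverage signatures and so on), but the idea is the same: debug one representative run per cluster.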

How is checking for errors done? Well, there are various ways: Reference models, simplified reference models (e.g. scoreboards), temporal assertions and so on. But in any case, eventually there is some Boolean check there, i.e. a statement saying “if <some error condition> then dut_error()”.

For instance, say we have a check saying that our router does not corrupt data (“if in_packet.data != out_packet.data then dut_error()”). At the later stages of verification, this check will fail only when we reach some rare, corner-case bug. For instance, there could be a bug which happens when two long packets enter the router at the same time, and the internal buffer is full. Still, because the check is Boolean, it will trigger every time the bug occurs. We just need to have a good mechanism for progressively reaching more and more corners, and we’ll eventually hit it.
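
To make the flavor of such checks concrete, here is a minimal Python sketch of a scoreboard-style Boolean check for the router example above. The packet fields, the in-order assumption and the dut_error() helper are all illustrative:

    from collections import deque

    def dut_error(msg):
        # Stand-in for the real mechanism that halts the run and logs the failure
        raise AssertionError(msg)

    class RouterScoreboard:
        def __init__(self):
            self.expected = deque()          # packets that entered the DUT, in order

        def on_input(self, packet):
            self.expected.append(packet)

        def on_output(self, packet):
            ref = self.expected.popleft()    # simplification: assumes in-order delivery
            if packet.data != ref.data:      # the Boolean check itself
                dut_error("data corrupted: sent %r, got %r" % (ref.data, packet.data))

However rare the triggering conditions are, the check itself is a plain Boolean comparison, and fires every time the bug occurs.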

But will that flow work for probabilistic checks?

Back to probabilistic checks

Suppose our simulated AV indeed has the problem that in one “driving region” (a hilly terrain with the sun ahead), lane departures are 10x more frequent. But we don’t yet know it, or even suspect that we should look there (much like we did not suspect the aforementioned router bug). How should we structure our test generation and checking so we’ll stand a reasonable chance of finding it?

Generation-wise, it seems intuitive that we should mix all kinds of scenarios, with all kinds of conditions and corner cases (more on this in a separate post). But how should we write that check?

One way is to write the probabilistic check as a Boolean check: On every lane departure, compute the lane departure frequency over a rolling window of K seconds, and if it is bigger than MAX_FREQUENCY then issue a DUT error.
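
Here is a minimal sketch of that rolling-window check in Python. K_SECONDS and MAX_FREQUENCY are made-up values, and dut_error() is the same illustrative stand-in as in the router sketch:

    from collections import deque

    K_SECONDS = 600.0            # length of the rolling window (assumed)
    MAX_FREQUENCY = 3 / 600.0    # max allowed departures per second (assumed)

    def dut_error(msg):          # same illustrative stand-in as before
        raise AssertionError(msg)

    class LaneDepartureFrequencyCheck:
        def __init__(self):
            self.events = deque()    # timestamps of recent lane departures

        def on_lane_departure(self, now):
            self.events.append(now)
            # Drop departures that fell out of the rolling window
            while self.events and self.events[0] < now - K_SECONDS:
                self.events.popleft()
            freq = len(self.events) / K_SECONDS
            if freq > MAX_FREQUENCY:
                dut_error("lane departure frequency %.4f/sec exceeds the limit" % freq)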

We are probably not going to stay in any driving region long enough to fill the required rolling window, but this is still not a problem, as long as we have a fixed set of regions. For instance, we can split the world into a finite matrix of driving regions: (hilly, flat) X (sun ahead, no sun ahead) X (low, medium, high speed) etc., and maintain a separate lane departure frequency for each cell of the matrix. This is a reasonable solution, and is often done.
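
A sketch of that per-region bookkeeping, again with made-up thresholds and a made-up region key:

    from collections import defaultdict

    ALLOWED_PER_MILE = 1.0 / 100.0   # the "one departure per 100 miles" budget
    MARGIN = 3.0                     # flag a cell only when it is well above budget
    MIN_MILES = 50.0                 # don't judge a cell on too little data

    miles = defaultdict(float)       # region -> miles driven in that region
    departures = defaultdict(int)    # region -> lane departures seen in that region

    def region_of(state):
        # Illustrative region key: one cell of the (terrain) x (sun) x (speed) matrix
        return (state.terrain, state.sun_ahead, state.speed_band)

    def on_mile_driven(state, distance):
        miles[region_of(state)] += distance

    def on_lane_departure(state):
        r = region_of(state)
        departures[r] += 1
        rate = departures[r] / miles[r] if miles[r] else 0.0
        if miles[r] >= MIN_MILES and rate > MARGIN * ALLOWED_PER_MILE:
            print("Area of Concern: region %s has %.3f departures/mile" % (r, rate))

The key point is that each cell of the matrix accumulates its own statistics across many runs, so no single run has to stay in a region long enough to fill the window.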

But it only works for regions we can think of ahead of time. It does not work for “regions” / combinations of conditions we did not think of. Consider the (Boolean) router bug above: It is not that we set out to check “what happens when two long packets enter at the same time when the internal buffer is full”. We just lucked out, as often happens in (well-mixed) CDV, and passing even once through the bug-causing condition triggered the error message. But that would not work for probabilistic checks – a single lane departure is not a bug.

One option is to log all lane departures (and other info), and use some multi-run correlation / machine learning technique to find what, if anything, correlates with high frequencies. This is not easy, and you need lots of samples (because we are not looking for departures but rather for departure frequencies). If such tricks do work, then we can perhaps automatically steer simulations towards suspected areas so as to increase our confidence that there is indeed a problem there.
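
For what it’s worth, here is one naive sketch of that multi-run correlation idea: aggregate logged segments of driving, each tagged with the conditions that held, and flag condition combinations whose departure rate is far above the overall rate. A real implementation would use something smarter (decision trees, subgroup discovery and the like); everything here is an assumption:

    from collections import defaultdict
    from itertools import combinations

    def suspicious_condition_combos(segments, overall_rate, factor=5.0, min_miles=100.0):
        # segments: list of (conditions_dict, miles, departures) collected from many runs
        stats = defaultdict(lambda: [0.0, 0])        # condition combo -> [miles, departures]
        for conditions, seg_miles, seg_departures in segments:
            items = sorted(conditions.items())
            for k in (1, 2):                         # single conditions and pairs of conditions
                for combo in combinations(items, k):
                    stats[combo][0] += seg_miles
                    stats[combo][1] += seg_departures
        flagged = []
        for combo, (combo_miles, combo_departures) in stats.items():
            if combo_miles >= min_miles and combo_departures / combo_miles > factor * overall_rate:
                flagged.append((combo, combo_departures / combo_miles))
        return sorted(flagged, key=lambda item: -item[1])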

And note that we need to repeat this whole process for every release – a brand-new probabilistic bug could have sneaked in.

Finding such probabilistic bugs is somewhat similar to finding HW or SW performance bugs. You can often find those using a profiler (looking for function-sized “regions”), and there are more sophisticated region-based techniques. Still, performance degradation under rare conditions is usually less important (as long as you can be sure those conditions are rare). Safety is different: If in some regions some safety metric degrades significantly, this is a serious matter.

For instance, we may care about lane departures because somewhere in the overall car safety fault tree there is an accident node like “(driver in right lane departs left) and (driver in left lane departs right) and (they don’t react quickly enough) => accident”. With the expected lane departure frequency, this kind of accident is rare enough, but in that “danger zone” (hilly area with sun ahead) the danger is 10*10=100x higher, which may be too much. I’ll come back to fault trees in a subsequent post.
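
As a back-of-the-envelope check on that 100x number (with made-up probabilities): if the two departures are independent and each becomes 10x more likely in the danger zone, their conjunction becomes 10*10 = 100x more likely:

    p_departure = 1.0 / 100.0    # assumed departure probability per mile, normal regions
    danger_factor = 10.0         # measured excess rate in the danger zone

    p_both_normal = p_departure ** 2                     # both cars depart toward each other
    p_both_danger = (danger_factor * p_departure) ** 2
    print(p_both_danger / p_both_normal)                 # -> 100 (up to float rounding)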

Of course, one other thing we could do is not to have any lane departure check – just have an “actual accident occurred” check and then trace back for the reason, but this is sub-optimal for many reasons (e.g. excessive simulation time until we get to that rare event).

To summarize this part:

  • Finding probabilistic bugs over a fixed matrix of predefined regions is non-trivial but should work
  • Finding them for previously-unthought-of regions is an open research problem – perhaps this can be done with machine learning and some other tricks. This is unlike Boolean bugs, which CDV lets us find without specifying exactly where we are looking.

Assessing the frequency of Boolean bugs

Here is a related problem: Even assessing the frequency of Boolean bugs is pretty hard. Suppose we find some Boolean bug (i.e. a plain old bug, caught by a simple Boolean check). Is there a way to estimate how often it will happen in real life?

Of course, there is the manual way: Debug it, then try to assess how often each of the component conditions causing the bug (e.g. our router bug above) will happen. This is what people often do when trying to assess how urgent it is to fix a bug. But can it be done automatically? Such a capability would be very useful:

  • It would help us prioritize debugging, delaying looking at failures if we know that in any case the resulting bug will be very rare
  • If this bug only causes some small degradation of behavior (like our canonical lane departure), then, if it is very rare, we would again just ignore it, without ever debugging it

Alas, automatically estimating the frequency of Boolean bugs seems very hard (in fact, I don’t know how to do it). If we were running our simulations using the Expected Distributions (ED), i.e. randomizing everything according to the distribution we expect in real life, then it would be easy, assuming we ran enough simulations to make the bug appear a statistically-significant number of times.

But one does not do CDV this way. To realistically get to corner-case bugs, one uses a variety of Bug Finding Distributions (BFDs) designed to reach corner cases with high frequency. Given a bug which was encountered a few times using some BFD, I know of no way to “normalize” it back to get the bug frequency under ED. Just saying “oh, the BFD has increased the sharp-turn frequency by 10x, so just divide the bug-frequency-under-that-BFD by 10” is hopelessly naïve – the effects mostly don’t scale linearly.

Other verification challenges of probabilistic components

In addition to the aforementioned checking problems, probabilistic components present other verification challenges. Koopman and Wagner cover those well in Challenges in Autonomous Vehicle Testing and Validation (pdf), which I have mentioned favorably here.

Here is my version of that list, and my (not always exciting) suggested solutions:

  • Not only are checks hard to specify, but (for very high-reliability components) getting to the region where they might fail may take a long time. Partial solutions:
    • Try to move the simulation to regions where the frequency of failure is higher
    • Inject the failure to see how the rest of the system works (high-level fault injection).
  • Some systems achieve very high reliability by using two or more probabilistic components, each of which is just highly reliable. This assumes the components are independent, but independence is a very hard thing to verify. Partial solutions:
    • Run the system for a long period, and check for correlation of some (user-specified?) events between the two components.
    • Inject some noise and see if it somehow appears in both
  • Causing a probabilistic component to reach some state (so you can test what happens next) is hard, since you are never sure what it will do. Partial solution:
    • Once it does get there, save that test and seed, and then restore from that point but change the random seed.
  • Unless special care is taken, probabilistic components are not repeatable, which is bad for debugging, regressions etc. (read my rants about repeatability here). Partial solutions:
    • Create a repeatable infrastructure (e.g. fake all parallelism within a single process)
    • For algorithms which use random numbers, create a hook where the seed can be forced from the verification environment (see the sketch after this list)
    • Add random stability to your infrastructure (see here).
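
As an example of the last two bullets, here is a tiny sketch of a seed hook with per-component random streams (a simple form of random stability). The environment variable name and the whole structure are assumptions, not any specific tool’s mechanism:

    import os
    import random

    def make_component_rng(component_name, default_seed=0):
        # Each component gets its own RNG stream, derived from one global seed,
        # so adding or removing a component does not perturb the random numbers
        # drawn by the others (a simple form of random stability).
        forced = os.environ.get("VERIF_SEED")        # hypothetical override hook
        base = int(forced) if forced is not None else default_seed
        return random.Random("%d:%s" % (base, component_name))   # deterministic string seeding

    planner_rng = make_component_rng("path_planner", default_seed=42)
    print(planner_rng.random())    # repeatable for a given VERIF_SEED and component name

The important property is that the whole run becomes a function of the seed(s), so a failing run can be reproduced exactly and then perturbed by changing only the seed.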

New systems like AVs will contain more and more probabilistic components. We should be able to test them, but it is not going to be easy.

Notes

I’d like to thank Thomas (Blake) French, Sandeep Desai, Kerstin Eder, Amiram Yehudai and Mugur Tatar for commenting on previous versions of this post.

I had long discussions with Blake, who (like myself) comes from the hardware-verification side of the fence, but who (unlike myself) has very extensive experience in performance verification of multi-core systems. Performance verification has some commonalities with probabilistic checking, and I am hoping Blake will comment on this.

I also exchanged views with Mugur, who knows a lot about the automotive-verification side of things (I mentioned him previously at the end of this post). I hope he’ll also comment on this post.

Finally, this whole topic is related to fault trees – expect a post about that soon.

[Edit 29-July-2016: Here is that fault tree post]


2 thoughts on “Checking probabilistic components”

  1. Pouncing on ->
    “Performance verification has some commonalities with probabilistic checking, and I am hoping Blake will comment on this”

    There are a bunch of possible commonalities bobbing around in my head. Below is the one swimming at the top.
    I consider it a variant of “Finding probabilistic bugs over a fixed matrix of predefined regions is non-trivial but should work”

    Enable highly repeatable scenarios.
    Define clear/measurable set of metrics.
    Define a fixed set of ecosystems.
    Define a fixed set of parameters applicable to all ecosystems.
    Create a database of metric results that sweeps across scenarios crossed with parameters.
    In traditional PerfV, these scenarios are often carefully crafted sequences that approximate real world workloads.
    In traditional PerfV, these metrics could be things like max bandwidth, max latency under load, etc..
    In traditional PerfV, these ecosystems could be things like topologies, memory hierarchies/technologies, etc..
    In traditional PerfV, these parameters could be default credit settings, a microcode patch, etc..
    For AV, these scenarios could be morning/evening rush hour traffic, traversing a mountain pass, etc..
    For AV, these metrics could be the number of fender benders, and road rage incidents.
    For AV, these ecosystems could be, Sunny in San Francisco, Typhoon in Japan
    For AV, these parameters could be sensor upgrades, a microcode patch, etc..
    Iterate until results meet expectations/requirements.
    Update benchmark database

    Now repeat with
    Scenario noise
    Measure and analyze deltas against baseline
    Change Ecosystems Stressor (Raining in San Francisco)
    Measure and analyze deltas against baseline
    Parameter changes
    Measure and analyze deltas against baseline

    Next steps depend on the results.
    I’ll see how folks respond to the above, before prattling on.

  2. When we check safety/quality requirements that involve systems with physical components, the boundaries between good and bad are quite often difficult to set. The physical world is not binary or discrete (unless you look at quantum physics), and it is not that easy to mold the design wishes into a physical implementation that contains “strange” physics inside – such as different kinds of friction, turbulent flows, temperature effects, etc.

    To give an example from a “classical” power-train: the transmission control software drives the current of magnetic valves, which switch flows and regulate the hydraulic pressures applied to clutch plates (a kind of mechanical switch) that can open or close mechanical connections, which transmit rotational energy at different torque/speed ratios, according to the selected gear. Conflicting design goals are: fast gear switching, comfort and durability of the clutches. These can translate into a requirement on the maximum power loss due to friction per clutch, or the maximum energy loss (turned into heat) per clutch per shift, and the maximum duration of a shift. Long story short: there are many conditions to test, and sometimes we find scenarios where a power loss is higher than desired or a shift is longer than desired. This does not make the transmission “failed”. It is a problem only if it happens too often. Then the transmission will have a short life.

    With our kind of CDV we indeed find such scenarios. Because we generate many corner cases, we cannot really say anything about the “probability”. No absolute measures. We try to derive “relative measures”, i.e. things that may show a trend from one release to the next (when we suddenly find many more violations than before). Another place where these quality measures are used is real driving scenarios measured on the street (highway, town, country road). These are defined as benchmarks, analyzed, and usually for these scenarios no quality violations are accepted.

    “Finding probabilistic bugs over a fixed matrix of predefined regions is non-trivial but should work”
    – maybe and hopefully. But I am afraid of how this can scale for a large number of parameters.
