Where Machine Learning meets rule-based verification

Summary: This post addresses some high-level questions like: Longer term, how much of the verification of Intelligent Autonomous Systems can be done with just Machine Learning (ML)? Should most requirements remain rule-based, and if so – how does that connect to the ML part? And how will the uneasy interface between ML and rules influence general ML-based systems?

ML keeps growing by leaps and bounds (<Insert here last week’s amazing-but-already-old ML news>). Meanwhile, Intelligent Autonomous Systems (IAS), especially Autonomous Vehicles, are advancing fast, and thus the question of how to best verify them is becoming more urgent. It makes sense (for various reasons described below) that a growing chunk of IAS verification will be ML-based.

So, can we assume that as ML improves, all verification will be done via ML, or is there some unavoidable part which will forever have to be specified in a rule-based, “logical” way (using data structures, assertions and so on)? The history of ML reads like a sequence of battles where the rule-based alternative fought ML and lost, so it is tempting to claim that an all-ML solution will eventually win.

Let’s try to speculate what an all-ML solution will look like. As discussed here, dynamic verification has various pieces: specification, test definition and generation, monitoring/recognition, checking, coverage collection and so on. Potentially, each of these could be (fully or mostly) ML-based.

For instance, consider verifying the parametrized scenario “Our AV overtakes another car”: Recognition of “an overtake” is fuzzy, so it should perhaps be ML-based. Checking for errors (“Did it take undue risks? Was it too timid?”) is also fuzzy and thus hard to express in rules, so perhaps we just need to train some “overtake-grading-and-checking network” (via labeled examples). And so on – let’s forget about rule-based requirements, and just do this all-ML.
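To make the thought experiment a bit more concrete, here is a minimal sketch of what such a trained grading-and-checking network might look like (the features, labels and numbers are all made up):

```python
# Minimal sketch (made-up features, labels and numbers) of an
# "overtake-grading-and-checking network" trained from labeled runs.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row summarizes one logged overtake run: [min_gap_m, max_rel_speed_mps, duration_s]
X = np.array([[2.1, 4.0, 6.5], [0.4, 9.0, 3.0], [3.5, 1.0, 18.0], [1.8, 5.0, 7.0]] * 50)
y = np.array(["ok", "too_risky", "too_timid", "ok"] * 50)  # human-supplied labels

grader = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
grader.fit(X, y)

# "Checking" a new run is then just classification of its summary features
print(grader.predict([[0.6, 8.5, 3.2]]))  # e.g. ['too_risky']
```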

But this feels wrong. There should be some “logical”, rule-based list of requirements for at least the black-and-white cases (“Don’t collide with the car you are overtaking, except in very weird cases”).

So what is the right answer? The rest of this post will try to explore these issues, but to relieve suspense, here is my top-level summary:

  • Regulators (and common sense) will insist that verification be based on human-supplied requirements, consisting of scenarios, coverage definitions, checks and so on – let’s call them all “rules” for short. So some rule-based specifications will always be here.
  • On the other hand, the role of ML in IAS (and other) verification will continue to grow.
  • Just how to connect the unavoidable, rule-based piece with the ever-growing ML piece is going to be an important question, attracting a lot of research.
  • A similar issue regarding “how to connect ML with rules” exists in the bigger arena of systems constructed via ML.

This whole topic – where (and how) should ML and rule-based verification meet – has been on my mind for a while, but I still don’t have good answers. I do think it deserves significant attention from researchers and practitioners.

The next three chapters will discuss why I expect ML to keep growing in dynamic verification, why there will always be an unavoidable, irreducible non-ML part, and some ideas about connecting the two.

Finally, the last chapter will talk about rules in ML-based systems, explainable AI and all that. If you are not into verification, just go directly there.

Why ML will play an increasing role in IAS verification

 Please take a quick look at my Dynamic verification in one picture post. Here is that picture again, annotated with the possible roles ML could play:

[Fig. 1: Dynamic verification in one picture, annotated (in yellow callouts) with the possible roles ML could play]

All these yellow callouts suggest that ML can indeed be very useful in IAS verification (and in the verification of other systems, such as chips – more on this in a subsequent post).

Let me expand on just some of these cryptic callouts, quoting from here and elsewhere:

Generator:

  • Much of the behavior is not completely predictable (due to the huge complexity of HW, SW, physics, human behavior, the environment etc.), so you need a mechanism which “executes” your constrained-random abstract scenarios using some sort of probabilistic planning, and ML techniques excel at that (see the sketch after this list)
  • Lower-level fine movement control (“accelerate this car such that it almost, but not quite, bumps into our AV”) is something ML should be good at
  • As I mentioned here, Generative Adversarial Networks (GANs – a rising force in ML) may become quite useful in the generator. There is even a variant called Creative Adversarial Networks, which tries to make the generated objects as “novel” as possible – a promising direction for verification.
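To make the first bullet a bit more concrete, here is a minimal sketch (all parameters and constraints are made up) of constrained-random filling of an abstract “overtake” scenario; the probabilistic-planning and fine-control parts, where ML would actually come in, are only hinted at in the comments:

```python
# Minimal sketch (made-up parameters and constraints): constrained-random filling
# of an abstract "overtake" scenario, before handing it to an (ML-based) planner.
import random

def sample_overtake_scenario(max_tries=1000):
    for _ in range(max_tries):
        params = {
            "other_car_speed_kph": random.uniform(40, 120),
            "av_speed_kph": random.uniform(40, 120),
            "initial_gap_m": random.uniform(5, 80),
            "lane_width_m": random.uniform(2.8, 3.7),
        }
        # Constraints: the AV must be meaningfully faster, and start far enough back
        if (params["av_speed_kph"] > params["other_car_speed_kph"] + 5
                and params["initial_gap_m"] > 10):
            return params
    raise RuntimeError("could not satisfy the constraints")

# An ML-based planner/controller would then turn these abstract parameters into
# concrete trajectories ("accelerate such that it almost, but not quite, bumps into our AV")
print(sample_overtake_scenario())
```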

Monitor:

  • Even monitoring (for checking and coverage collection) may need some ML. For instance, in a test-track setting, you probably need an ML-based sensor fusion module to tell the verification environment what happened (based on test track cameras/sensors)

Maximizer:

BTW, one reason to expect an increasing role for ML in verification is that AV/IAS engineers are already immersed in ML, so they are liable to grab that hammer.

Why we still need rules

To clarify why rules will be necessary, let’s consider a hypothetical all-ML checker. This is not a completely silly idea, because writing IAS checkers is really hard, as I discussed here:

There are two related (but distinct) issues which make checking AVs (and other intelligent autonomous systems) a real pain:

  1. Almost every rule (assertion) has exceptions: The AV should never go on a sidewalk (except if it must, so as to avoid hitting somebody). The AV should never go over the speed limit (except if all the traffic around it goes much faster). The AV should never hit a pedestrian (except if a previously-hidden person jumps right in front of the AV). And so on.
  2. The components / algorithms implementing the AV are often probabilistic in nature, and thus are not supposed to be “correct” 100% of the time. For instance, to avoid an accident, the AV may (correctly) decide to turn left, reducing the chance of an accident to just 10%. Still, in 10% of the simulation runs this will result in an accident, which should not be considered a bug. (A minimal checker sketch illustrating both issues appears right after this list.)
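Here is a minimal sketch (the run-record fields are made up) of what such hand-written checks might look like, one per issue above:

```python
# Minimal sketch (made-up run-record fields): a rule with an explicit exception
# clause (issue 1), and a statistical check for probabilistic behavior (issue 2).
def check_no_collision(run):
    if run["collided_with_overtaken_car"]:
        # The exception clause: the "very weird cases" have to be enumerated by humans
        if run["overtaken_car_swerved_suddenly"]:
            return "waived (weird case)"
        return "FAIL"
    return "pass"

def check_accident_rate(runs, max_rate=0.12):
    # A single accident along a low-probability branch is not a bug;
    # only an accident *rate* above the expected bound is.
    rate = sum(r["accident"] for r in runs) / len(runs)
    return "pass" if rate <= max_rate else "FAIL"
```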

So how about throwing all rules away and just creating an ML-based “total-cost checker”? It will somehow check that in all cases the car behaves in a way which minimizes (in expectation) some weighted total-cost formula, taking into account accidents, delays and so on. Note that this implies that not just the checker, but also the specification, will be ML-based: Just the formula will be hand-written.
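As a minimal sketch (the weights and cost terms are made up), the hand-written part could be as small as this; everything else – estimating the terms from a run, and judging whether the AV’s behavior actually minimized the expected cost – would be ML-based:

```python
# Minimal sketch (made-up weights and terms): the one hand-written piece of an
# all-ML "total-cost checker" - the weighted total-cost formula itself.
def total_cost(p_accident, expected_delay_s, passenger_discomfort):
    W_ACCIDENT, W_DELAY, W_DISCOMFORT = 1_000_000.0, 1.0, 10.0
    return (W_ACCIDENT * p_accident +
            W_DELAY * expected_delay_s +
            W_DISCOMFORT * passenger_discomfort)

print(total_cost(p_accident=0.1, expected_delay_s=2.0, passenger_discomfort=0.3))  # 100005.0
```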

Suppose we manage to create such a checker, and then suggest it to regulators as the one-and-only way to verify our AVs. I think it is safe to say the regulators will consider that absurd:

  • At a minimum, they will want to see detailed reports clarifying that the manufacturer tested all required scenarios with all required parameter values, while performing all required checks (e.g. “no accident except in very unusual cases”)
  • They (or somebody else, say legal people) will also want to examine the detailed process by which the manufacturer looked for “unexpected bugs” (see this post).

In other words, while much of the verification machinery will be ML-based, the verification will essentially be guided by human-written definitions of environment models, scenarios, coverage and checks – i.e. rules. For “best” verification (and certification) you want to do as much as possible via rules: They are easier to inspect, understand, debug, create corner cases for, perhaps formally verify, and so on. However, this is very hard work, and ML (which keeps improving) will offer “shortcuts”, so we’ll move towards a mix.

Connecting ML and rules

So we’ll have to connect ML and rules, but that is not easy. The main reason is that ML (unlike “normal” SW) is trained by examples, not programmed, so there is no easy way to tell it “absolute” rules. And most ML systems deal in approximations – again not “absolute” things.

Consider another domain where people needed to connect ML and rules: ML safety. Unlike verification, this domain is about how to construct ML systems such that they also obey safety rules (more on that in the next chapter).

Interestingly, I know of at least four approaches for connecting ML and safety rules (perhaps suggesting that there is no clear winner). Fig. 2 depicts those approaches (the ML blocks are in green). Note that the drawings are simplified – for further details and references see “RL and safety” in this post:

[Fig. 2: Approaches for connecting ML and safety rules (ML blocks in green)]

As I described in that post, the “obvious” scheme of simply training an ML system to obey the rules does not work well (e.g. giving a Reinforcement Learning system a positive reward signal whenever it does something good, and a big negative one when it disobeys the rules). One reason (among several) is that putting too many edge cases in the training set skews the normal-case behavior.
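For concreteness, here is a minimal sketch (all names and numbers are made up) of that “obvious” reward-shaping scheme:

```python
# Minimal sketch (made-up names and numbers): naive reward shaping for an RL agent.
# As noted above, this "obvious" scheme tends to work poorly in practice.
def shaped_reward(progress_m, violated_safety_rule):
    reward = 0.1 * progress_m        # small positive reward for doing something good
    if violated_safety_rule:
        reward -= 1000.0             # big negative reward for disobeying the rules
    return reward
```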

Intuitively, I like the idea of multiple, alternating ML/rule layers, but that’s still pretty vague. The general problem (connecting ML with rules in a seamless way) seems like an open problem right now. I hope ML researchers will look into it (or maybe they already do and I just don’t know about it).

There has been work on making ML handle more structured information like relations and rules – see for instance the recent DeepMind paper about relational reasoning. But this is about efficient ways to learn relations from data, not about efficient ways to take into account relations / rules fed from the outside. There were also attempts to formally prove logical assertions about ML systems, which I found promising but limited (see discussion of Reluplex here).

BTW, there is also a way to involve ML and rules sequentially during a verification project (in which case the “how to connect” problem is irrelevant). Consider again our “overtake” scenario: Let’s say that initially we just have the black-and-white rule “Don’t collide with the car you are overtaking, except in very weird cases” and everything else is considered “gray”. Then we take the results of many random overtake runs, and use ML to extract sub-scenarios, areas of concern and parameters. Then, with some human help, we write new rules for these sub-scenarios, thus reducing the no-rule “gray area”, and so on.
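A minimal sketch of that extraction step (the run features are made up, and real runs would be described much more richly): cluster the logged overtake runs, then hand each cluster to a human as a candidate sub-scenario:

```python
# Minimal sketch (made-up features): cluster logged overtake runs so a human can
# turn each cluster into an explicit, rule-based sub-scenario.
import numpy as np
from sklearn.cluster import KMeans

# Each row summarizes one run: [min_gap_m, max_rel_speed_mps, overtake_duration_s]
runs = np.random.default_rng(0).uniform(
    low=[0.3, 1.0, 3.0], high=[4.0, 10.0, 20.0], size=(500, 3))

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit(runs)
for i, center in enumerate(clusters.cluster_centers_):
    print(f"candidate sub-scenario {i}: typical run looks like {np.round(center, 2)}")
```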

Rules in ML-based systems, explainable AI and all that

So far I talked about using ML to verify complex systems (made of HW, SW, ML, mechanics, human procedures and so on). Let me now talk about the bigger picture of ML-and-rules:

Consider systems which are (exclusively or mostly) constructed using ML: Clearly such systems will become increasingly important (in IAS and elsewhere). And the fact that “ML systems don’t play nice with rules” (discussed above) may cause even bigger issues there:

  • It will be hard to make ML systems work according to rules
    • e.g. see the approaches for ML safety above
  • It will be hard to verify ML systems according to rules
  • It will be hard to understand why ML systems do things
    • e.g. see discussion of explainable AI here

That last issue (“explainable AI”) has been getting the most press: See e.g. If a driverless car goes bad we may never know why. But these are really three related-but-distinct issues.

As ML-based systems (whether embodied or not) become more important, these issues are bound to keep whole categories of people busy and off the street: Legal people, public policy people, ethicists, military planners, ML people, verification people and more.

Notes

This was more speculative than my usual posts – comments are very welcome. Expect the second part of my Stuttgart report Real Soon Now.

I’d like to thank Ziv Binyamini, Yael Feldman, Shai Fuss, Ariel Melchior, Shlomi Uziel, Yael Kinderman, Gil Amid, Thomas (Blake) French, Eli Sennesh, Lior Wolf, Amiram Yehudai, Vadim Kosoy, Sandeep Desai, Ian Goodfellow and Nicolas Papernot for discussions and reviews of this post.

[Added 7-July-2017: Hacker News discussion is here. Also, following some comments there, let me clarify: Some ML techniques (random forests, inductive logic programming) may work better with rules, but most of the current action in ML seems to be in Neural Networks.]

[Added 15-Oct-2017: This post talks about Program Induction as a possible (future) solution to the problems described here.]

 

 

