I travelled to Stuttgart to attend this first-of-its-kind conference. It was interesting for me because at Foretellix I am trying to investigate how dynamic, constrained-random, coverage-driven verification (call it “CDV” – the bread-and-butter of hardware verification) can be extended for verifying complex socio-technical systems. This long(ish) post summarizes my impressions.
I did not know much about autonomous vehicle (AV) testing, so I figured this 3-day immersion in the topic would be a good start. Presumably I would be learning from the very best – Southern Germany is perhaps the biggest automotive center in Europe, and the lecturers seemed like a fairly international and knowledgeable bunch. And AVs are currently hot (together with UAVs, which I had already investigated somewhat, and robotics).
Below is a summary of what I found out. Please take it with a grain of salt – these are the initial impressions of a complete outsider.
Also:
- A related topic (which is also of interest to Foretellix) came up in conversations: the use of simulation as an instrument for helping public-policy decision-making. This is summarized here
- I was pleasantly surprised by the thoughtful feedback I got – some of it summarized here
Executive summary
So how did it turn out? Not sure yet. Here is the executive summary:
- Just about everybody was represented, including people from Singapore, New Zealand, and even a rumored Apple guy (in disguise)
- I learned more than I ever wanted to know about AVs in general. The main split there is between the Google-style “No steering wheel, no driving license required” camp, and the “automated driving” initially suggested by most European companies, where the car may ask the driver to take back control.
- The lectures were of mixed quality. Some were mainly marketing fluff, and some deeply technical. I also think the unevenness reflected the fact that the field is very young, and even the terminology is still in flux.
- Most of the testing is done with a human in a car (or in a “car simulator”), a.k.a. driver-in-the-loop or DIL. Just a small percentage is done automatically (hardware-in-the-loop, a.k.a. HIL), and a really small percentage is done in SW only (software-in-the-loop, or SIL). This is sort of backwards from what we are used to in electronics verification, and (in my opinion) is bound to change at some point.
And now for the details:
Why are autonomous cars interesting
- This is happening now, and involves technology, human factors, public policy, legislation, insurance, standards etc., and nobody really knows what to do about it – everybody is looking over their shoulders to see what the others are doing.
- In that sense, it may be representative of a few other technology-based revolutions-in-progress, like UAVs, robotics, the “sharing economy” and the CRISPR-related biotech revolution.
Insurance implications
- I have always considered insurance a primary customer of prediction-related SW
- So I listened to “The impact of autonomous vehicles on future vehicle insurance” (Matthew Avery, research director, Thatcham Research, UK). Much of what he said was new to me (though reasonable in hindsight)
- They have thought about what the various Advanced Driver Assistance Systems (ADAS) will do to insurance.
- Claims for whiplash compensation (from rear-end collisions) are a huge part of all claims. Automatic Emergency Braking (AEB) will reduce those by a lot
- Another huge piece is hitting another car while parking. AEB will eliminate most of that as well.
- He felt that in general, most of the claims-of-the-future will disappear, but when accidents do happen, they may be catastrophic (like in airplanes)
- Customers will stop insuring themselves – insurance will be bundled with the car, and will be carried by the car companies (with some re-insurance scheme, except for the biggest car makers).
- A point he did not make, but which was implied, is that the vehicle-insurance market will shrink by a lot. This will be a brave new world for insurers.
Effect on car companies
- Fully autonomous cars may eventually shrink the automotive market as well (though this is hard to predict, considering e.g. what happens in the developing world)
- Most European car companies are going through this incremental, multi-level ADAS thing (for various reasons).
- It is possible that the Google “jump directly to the no-steering-wheel end-goal” will disrupt them all.
- In particular, the highly-automated intermediate level, where the car is autonomous until it needs to interrupt the driver (at which point the driver has 10 seconds to take back control), may never happen, because it will be too stressful for the driver.
The car testing scene
- The symposium was part of a bigger car testing expo. Walking the aisles felt a lot like any other industry expo (e.g. the Design Automation Conference familiar to us chip-guys).
- There are a lot of smallish companies, many of them of the classical German “Mittelstand”, family-owned variety. They are very innovative and listen to their customers’ needs (including the AV needs), but the scene is very fragmented. The kind of integration which happened in electronic system verification is nowhere to be found here – see the next chapter.
AV verification
As I mentioned in the summary, verification (in the sense we are talking about) seems not very advanced. In the following sections I will describe:
- Why I say that
- What already exists in that direction
- What reasons / excuses people gave for that
I say “not very advanced” with some reservations:
- I talked to some people, e.g. from Bosch, who seemed to understand the big picture well
- As I said above, I am an outsider. Quite possibly there are trade-offs I don’t understand.
- The selection process for the lectures was somewhat hurried, and all presentations were short (no time for in-depth discussions) – hopefully this will improve in subsequent years
- There are reasons why some of these things are the way they are – to be discussed below.
But still:
- Most verification is physical, driver-in-the-loop testing. This is of course indispensable as the “final proving ground”, but it cannot really be where all the scenarios are tried out, because there are too many of them.
- One lecturer (from ETAS) said something like “there are so many variables and combinations – more than the number of atoms in the universe – so clearly we can’t test them all”. This is the classical kind of naïve thinking that many of us encountered in the early days of CDV, and the fact that somebody would even say this is an indication of how immature the field is. Note: The classical CDV answer is “even testing all combinations of two-integer addition will never end in a lifetime, and yet there is a straightforward CDV methodology for testing that”.
- Even the people who were discussing random, model-based testing talked about “we make sure that the random distribution matches the ‘real’ distribution”. And when asked “but this way you will never find corner cases, like e.g. the problem of railway tracks being reflected by your AV’s radar”, they did not quite realize the error of their ways. This is another classical “CDV 101” error: the “correct” rule is of course “your testing should reflect expected usage, as well as your fears” (see the sketch after this list).
- Some (otherwise pretty smart) people said “if we ever did SW-only verification, clearly this has to be formal verification, because in simulation you can clearly not think of all the cases” – again a well-known fallacy (i.e. you can’t think of all cases in simulation, but neither can you in formal – it just has different failure modes).
- Fairly basic ideas like illegal/ignore bins in functional coverage (e.g. “no need to test battery recharging when the battery is full”) were discussed as if they were lovely, time-saving innovations.
- And so on.
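To make those “CDV 101” points concrete, here is a minimal sketch in Python (every name here is invented for illustration; this is not any particular tool’s API). It draws scenarios mostly from a realistic usage distribution but deliberately boosts a “fears” list, ignores a legal-but-uninteresting bucket, and tracks functional coverage over buckets rather than trying to enumerate raw input combinations, which is also the standard answer to the two-integer-addition example above:

```python
import random
from collections import Counter

# Hypothetical scenario attributes (all names invented for illustration)
WEATHERS = ["clear", "rain", "fog", "snow"]
OBSTACLES = ["car", "pedestrian", "railway_track_reflection", "none"]

# "Expected usage": mostly clear weather, the way real traffic looks
USAGE_WEIGHTS = {"clear": 80, "rain": 12, "fog": 5, "snow": 3}

# "Your fears": rare combinations we deliberately over-sample,
# e.g. the radar-reflection corner case mentioned above
FEARED = [("fog", "railway_track_reflection"), ("snow", "pedestrian")]

def pick_scenario(rng):
    if rng.random() < 0.2:                        # 20% of runs chase the fears...
        return rng.choice(FEARED)
    weather = rng.choices(list(USAGE_WEIGHTS),    # ...the rest follow real usage
                          weights=list(USAGE_WEIGHTS.values()))[0]
    return weather, rng.choice(OBSTACLES)

rng = random.Random(42)                           # seeded, hence repeatable
coverage = Counter()
for _ in range(10_000):                           # many cheap runs, not "all combinations"
    weather, obstacle = pick_scenario(rng)
    if obstacle == "none":
        continue                                  # "ignore" bucket: legal but uninteresting
    coverage[(weather, obstacle)] += 1            # functional coverage over buckets

print(f"hit {len(coverage)} of {len(WEATHERS) * (len(OBSTACLES) - 1)} scenario buckets")
```

The point is not the thirty lines of Python; it is that once the distribution, the ignore buckets and the coverage model are explicit objects, “more combinations than atoms in the universe” stops being an argument.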
Perhaps the biggest missing thing was “SW-only testing”:
- Note that in this industry they seem to call any “automatic” testing (with no human in the loop) “virtual testing”. This divides mainly into HIL and SIL, with SIL (SW-only, or SW-with-models) practically non-existent.
- A lot of people talked about “simulations”, but (just like in aerospace) they mainly mean “simulation” as in “flight simulator” (i.e. something to put a human in, for the purpose of training or driver-in-the-loop testing), and not the kind of CDV-style many-simulation-runs-through-the-night-with-automatic-checking-and-coverage that we all know and love (a minimal sketch of that loop follows below).
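For readers outside the hardware world, here is roughly what that loop looks like: a deliberately tiny, self-contained Python sketch, where the “simulator” and the “checker” are trivial stand-ins for the real thing:

```python
import random

# --- trivial stand-ins; a real setup would plug in an actual simulator ---
def simulate(rng):
    """Fake SW-only run: returns (speed bucket, braking distance in meters)."""
    speed = rng.choice(["low", "medium", "high"])
    dist = rng.uniform(5, 60) * (1 if speed == "low" else 2)
    return speed, dist

def check(speed, dist):
    """Automatic checker: a deliberately silly 'spec' so that some runs fail."""
    return dist > 100  # pretend the spec says braking distance stays under 100m

failing_seeds, coverage = [], set()
for seed in range(100_000):          # "many runs through the night"
    rng = random.Random(seed)        # seeded => every run is exactly repeatable
    speed, dist = simulate(rng)
    coverage.add(speed)              # trivial functional coverage point
    if check(speed, dist):
        failing_seeds.append(seed)   # rerun with this seed later to reproduce and debug

print(f"{len(failing_seeds)} failures, coverage buckets hit: {sorted(coverage)}")
```

Even this toy shows the two properties that matter: runs are seeded (so any overnight failure can be replayed exactly), and both checking and coverage collection happen with no human watching.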
Many of the pieces for doing CDV-like stuff are already in place:
- Some companies (e.g. UK-based rFpro) have detailed digital models of various kinds of roads, and SW to let you really “feel” the road as you are driving over it. They have an API to change the weather on the map, and an API to do post-processing on the image (e.g. insert random sensor distortion).
- Many vendors have tools which take models in the popular formats (Matlab/Simulink, AutoSIM, FMI, AUTOSAR – see below) and run them, often together. Examples of such vendors are dtrace, RTI (Realtime Technologies Inc), CRI (Concurrent RealTime) etc.
- AUTOSAR is an automotive standard for connecting C/C++-based models of ECUs (Electronic Control Units – basically any piece of car electronics) coming from multiple vendors and running them together. This looks like an excellent starting point for connecting stuff for full CDV simulation, but see below.
- FMI is a standard for connecting continuous and event-based simulators – again a promising start, but see below (a small example of driving an FMI model from a script appears after this list)
- The people in the industry know that driver-in-the-loop tests, while indispensable, are expensive, non-repeatable (though there is effort in that direction too), and mainly cover just a tiny part of the scenario space.
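As a small aside on FMI: consuming an FMU from a script is already easy today, e.g. with the open-source FMPy Python library. In the sketch below only `simulate_fmu` itself is real; the FMU file name and the variable names are invented placeholders:

```python
from fmpy import simulate_fmu   # pip install fmpy

# "brake_ecu.fmu" and the variable names below are hypothetical placeholders
result = simulate_fmu(
    "brake_ecu.fmu",                            # an FMU exported by some vendor tool
    start_time=0.0,
    stop_time=10.0,
    start_values={"initial_speed_kmh": 120.0},  # constrained-random values would go here
    output=["braking_distance_m"],
)
print(result["braking_distance_m"][-1])         # final value of the output signal
```

In other words, the consuming side of the standard is genuinely simple; the problem (see below) is on the producing side.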
So why is CDV-like stuff (almost) not happening?
I asked quite a few people that question (in various guises), and listened attentively. There seem to be multiple reasons, listed below.
My intuition is that this is bound to change as the industry grows up – the current state of things is just too inefficient.
OK, so why?
- The model issues: while AUTOSAR and Matlab allow model exchange, there is no encrypted-model mechanism, and the people developing the subsystems (e.g. Bosch) are very hesitant to hand over their crown jewels to a competitor.
- The mechanics of transfer: FMI exists, and many vendors are ready to accept FMI models, but almost nobody produces them. Similarly, AUTOSAR exists, but interoperability is promised, not completely hammered out.
- Tradition: These guys (and certainly their bosses) all come from a time when mechanics and car dynamics were king, and SW was secondary. They are just not used to simulations (in our meaning of the word) as a means of verification. This is similar to the electronics world many years ago, when the old guard assumed that verification mainly meant “checking the parameters of the capacitor”.
- Lack of knowledge of what CDV is: This may be the biggest part. Until you see a good, integrated CDV solution, all you know are a million point solutions, and you assume that your job will forever be just to optimize the set of point solutions you use.
- They find it hard to reason about the tradeoff between “real” and “virtual” testing.
Let me elaborate more on this last point:
- Some things are really hard to model: e.g. even if all sub-systems come with their full, simulation-ready, very accurate models (plus all needed verification artifacts), you still need to run all of this in a real car. There is obviously no substitute for checking this with the car itself (with all its mechanics, electronics etc.), in conjunction with other cars, weather conditions, radar reflections and all that.
- So you must do this. So you spend a lot of money on physical test setups, test tracks, and all that.
- All that energy buys you real, tangible results that anybody can understand. So what’s the value of expending more effort on “virtual” simulation, where much of this is abstracted away?
- The answer is “an awful lot of value”. Even if you replace your car with a Matlab model, bypass all your sensors completely (and just insert random errors into the simulated perception of the objects in front of you) etc., you can still run so many repeatable simulations of all combinations that you can find many bugs which will never be found otherwise (see the sketch after this list).
- Also, this can give you sensitivity analysis which will help you better direct the precious “real life” testing.
- But this is hard to explain, and there is no standard mandating that, and everybody is busy doing what they need to do anyway, so CDV never gets done.
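To illustrate the “bypass the sensors, inject random errors” idea from above: the sketch below hands a toy planner a ground-truth object list with perception errors mixed in. Everything here is invented for illustration, but note what even this toy gives you: each seed is an exactly repeatable run, including ghost objects like that railway-track reflection.

```python
import random

def perceive(ground_truth, rng, drop_rate=0.05, noise_m=0.5, ghost_rate=0.02):
    """Inject perception errors into a ground-truth object list (all made up)."""
    seen = []
    for obj_distance_m in ground_truth:
        if rng.random() < drop_rate:
            continue                                        # missed detection
        seen.append(obj_distance_m + rng.gauss(0, noise_m))  # range noise
    if rng.random() < ghost_rate:
        seen.append(rng.uniform(1, 100))                    # ghost object (e.g. a reflection)
    return seen

def planner_brakes(seen, threshold_m=10.0):
    """Stand-in planner: brake if anything appears closer than the threshold."""
    return any(d < threshold_m for d in seen)

# One repeatable run per seed; sweep thousands of seeds overnight
for seed in range(3):
    rng = random.Random(seed)
    truth = [8.0, 42.0]                                     # a car 8m ahead, another at 42m
    print(seed, planner_brakes(perceive(truth, rng)))
```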
BTW, this is fairly similar to what I saw in avionics. It is only in missile-related stuff (where real-life experiments are really expensive), and only in the most advanced of places (e.g. Rafael in Israel) that serious simulation is beginning to take place.
A related issue is what regulatory bodies have come to expect:
- Currently, this is still very nebulous, with the ISO 26262 standard open to many interpretations (there is a whole cottage industry of consultants whose job is to help you navigate it).
- Europe is full of many AV-related initiatives, projects and ideas, with no strong common theme. This is, quite possibly, the right thing to do at this stage.
- If those regulatory bodies said “you have to test your AV’s behavior against the following 10k or 100k cross coverage points”, then people would find a way to do it, and that way would clearly be virtual (see the sketch after this list for what such a cross coverage model might look like). They would then negotiate with those regulatory bodies regarding which of these can be done in sub-system testing, and which (relatively small) subset must be done in “real life” testing.
- But the regulatory bodies don’t, because they don’t know it is possible (and because the field is very young).
- This, of course, may well change (e.g. Lawyer: “your software killed my client’s child”. Car manufacturer: “Government, we need to organize this”).
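For readers unfamiliar with the term: a “cross coverage point” is just a required combination of attribute values, so 10k–100k of them is a table you can write down, not magic. A hypothetical sketch of what such a mandated cross might look like (all attributes invented):

```python
from itertools import product

# Hypothetical attributes a regulator might mandate crossing (all invented)
speeds   = ["0-30", "30-60", "60-100", "100+"]          # km/h buckets
weather  = ["clear", "rain", "fog", "snow", "night"]
actors   = ["none", "car", "truck", "cyclist", "pedestrian"]
maneuver = ["follow", "cut-in", "overtake", "emergency_brake"]

required = set(product(speeds, weather, actors, maneuver))  # 4*5*5*4 = 400 crosses
# ...a few more attributes and this reaches 10k-100k points very quickly

hit = set()  # each simulation run would add the combination it actually exercised
hit.add(("30-60", "rain", "cyclist", "cut-in"))

print(f"{len(hit)}/{len(required)} cross coverage points hit")
```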
That’s it for now. See some of the feedback I got here. Further comments are very welcome.