Misc. stuff: ASAM, DeepMind, Tesla and more

Summary: This is another one of those “misc. stuff” posts, with no unifying theme other than “Interesting inputs regarding Autonomous Vehicles verification”. It will discuss: What I learned regarding the ASAM OSC standardization effort, DeepMind’s “Rigorous Agent Evaluation” paper, Tesla’s “400,000-car regression farm” idea, some good papers by Philip Koopman, and the upcoming Stuttgart symposium. …

Don’t stay in Monte Carlo (for AV verification)

Summary: This post talks about why Monte Carlo simulations (which use the expected distribution) will most likely not get you to safe Autonomous Vehicles. It also talks about what I learned at the ASAM OpenSCENARIO workshop in Munich. Several people I talked to lately assumed that AV verification should mostly be done using Monte Carlo simulation. …
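The core problem with sampling from the expected distribution is simple arithmetic: if a dangerous scenario occurs with, say, one-in-a-million probability per simulated drive, a large simulation campaign will usually see zero failures and estimate the risk as zero. The sketch below (an illustrative toy, not taken from the post; the probabilities and bias factor are assumptions) contrasts naive Monte Carlo with a biased-sampling approach that deliberately over-samples the dangerous condition and re-weights the result:

```python
import random

random.seed(0)

P_FAIL = 1e-6   # assumed true per-run failure probability (illustrative)
RUNS = 100_000  # a large but finite simulation budget

# Naive Monte Carlo: sample the expected distribution directly.
# With mean 0.1 expected failures in the whole campaign, the
# estimate is almost always exactly 0.
failures = sum(1 for _ in range(RUNS) if random.random() < P_FAIL)
naive_estimate = failures / RUNS

# Biased sampling (importance sampling): make the rare condition
# 1000x more likely during simulation, then weight each observed
# failure by 1/1000 to recover an unbiased risk estimate.
BIAS = 1000
hits = sum(1 for _ in range(RUNS) if random.random() < P_FAIL * BIAS)
weighted_estimate = hits * (1 / BIAS) / RUNS

print(naive_estimate, weighted_estimate)
```

With the same 100,000-run budget, the biased campaign sees on the order of 100 failures, so its re-weighted estimate lands close to the true 1e-6, while the naive estimate is typically zero. This is one reason scenario-driven verification deliberately steers simulation toward interesting and dangerous situations rather than drawing drives at random.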

Bridging AV verification and AV regulation

Summary: In this post I’ll describe my impressions from the ASAM OpenSCENARIO workshop. I’ll then use that as an excuse to discuss a related topic: Many people agree that scenarios are a good way to check Autonomous Vehicles (AVs) for safety. Some of these people have thorough verification in mind, while others have regulation in …

The Uber accident and the bigger picture

Summary: This post discusses the influence of the Uber accident on Autonomous Vehicle (AV) deployment. It claims that AVs should eventually be deployed, and yet that we should expect many fatal AV accidents. It then suggests that a comprehensive, transparent verification system could help solve this inevitable tension. That tragic Uber accident has brought AV …

How to write AV scenarios (and some notes about Pegasus)

Summary: There are several approaches for verifying that Autonomous Vehicles are safe enough. The Pegasus project is one interesting, thoughtful attempt to do that (focusing initially on highly automated driving, not AVs). In this post I’ll summarize a recent Pegasus symposium, and describe what I like about the approach and what is perhaps still missing. …

Using program induction for verification

Summary: I discussed before (e.g. here) how connecting rule-based verification to the rule-less, amorphous Machine Learning world is really hard, and yet necessary. The current post talks about a somewhat-exotic technique called Program Induction (PI), and how it might (eventually) help bridge that gap. What’s program induction? Background: I always liked the idea of synthesizing …

Where Machine Learning meets rule-based verification

Summary: This post addresses some high-level questions like: Longer term, how much of the verification of Intelligent Autonomous Systems can be done with just Machine Learning (ML)? Should most requirements remain rule-based, and if so – how does that connect to the ML part? And how will the uneasy interface between ML and rules influence …

Verification implications of the new US AV policy

Summary: This post will take an initial look at the US Autonomous Vehicles policy announcement, and claim it is a big deal. It will then examine the verification implications, and claim they are mainly positive. Yesterday, the US Department of Transportation (DOT) and the National Highway Traffic Safety Administration (NHTSA) came out with an announcement …