Misc stuff: Mobileye, simulations and test tracks

Summary: This is another “What’s new in verification land” post. It describes a video and a paper from Mobileye, and takes that opportunity to revisit four topics: how Autonomous Vehicles should handle unstructured human interaction, how to balance Reinforcement Learning and safety, why simulation is the main way to validate safety in these unstructured environments, …
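
To make the “RL plus a hard safety layer” idea touched on above concrete, here is a minimal, hypothetical Python sketch: a learned policy proposes an action, and a hand-written safety filter can override it. All names and thresholds here are my own illustration, not anything from the Mobileye paper.

```python
# Hypothetical sketch: a learned driving policy proposes an action, and a
# rule-based safety layer overrides it when a hard constraint is violated.
from dataclasses import dataclass

@dataclass
class State:
    gap_to_lead_m: float   # distance to the vehicle ahead
    own_speed_mps: float   # ego speed

def learned_policy(state: State) -> float:
    """Stand-in for an RL policy: returns a desired acceleration (m/s^2)."""
    # A real policy would be a trained network; here we just accelerate mildly.
    return 1.0

def safety_filter(state: State, proposed_accel: float,
                  min_gap_m: float = 10.0, max_brake: float = -4.0) -> float:
    """Rule-based safety layer: brake hard if the gap drops below a hard threshold."""
    if state.gap_to_lead_m < min_gap_m:
        return max_brake          # override the learned action
    return proposed_accel         # otherwise let the policy act

state = State(gap_to_lead_m=8.0, own_speed_mps=20.0)
action = safety_filter(state, learned_policy(state))
print(action)  # -4.0: the safety layer overrode the policy
```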

Formal verification of really-complex systems

Summary: This post describes some new attempts at formal verification of complex systems (mainly AVs and ML-based systems). In general, I think Formal Verification (FV) will only have a limited role in verifying Intelligent Autonomous Systems (IAS) and ML-based systems, because those systems are so complex and heterogeneous. FV has better PR than CDV …
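
To give a feel for where FV does shine, here is a toy sketch (my own illustrative example, using the z3 SMT solver) of proving a property about a tiny abstract model; scaling this kind of proof to a full AV / ML stack is exactly what is hard.

```python
# Toy formal-verification sketch (illustrative only): prove that a trivial
# abstract controller model never produces a negative speed.
# Requires the z3-solver package (pip install z3-solver).
from z3 import Int, Solver, And, Not, unsat

speed = Int("speed")   # current speed, abstracted to an integer
cmd = Int("cmd")       # commanded change in speed

# Abstract model: speed is non-negative, and the controller never brakes
# by more than the current speed.
model = And(speed >= 0, cmd >= -speed)

# Property: the next speed (speed + cmd) is never negative.
prop = speed + cmd >= 0

s = Solver()
s.add(model)
s.add(Not(prop))  # search for a counterexample

if s.check() == unsat:
    print("Property proved: no counterexample exists under the model.")
else:
    print("Counterexample:", s.model())
```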

Verifying interactions between AVs and people

Summary: The interactions between Autonomous Vehicles and people can be complex, which complicates AV deployment. This post summarizes some recent related publications, and then tries to predict the verification implications of all that. For instance, it suggests that verification teams will try to track total-accidents, AV-specific-accidents and AV-specific-annoyances. There have been several interesting publications lately about “bumps on …
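
For concreteness, here is a small, hypothetical sketch of deriving those three metrics from a list of incident records; the field names and data are my own illustration, not from the post.

```python
# Hypothetical sketch: computing total-accidents, AV-specific-accidents and
# AV-specific-annoyances from per-run incident records.
from dataclasses import dataclass

@dataclass
class Incident:
    kind: str          # "accident" or "annoyance"
    av_specific: bool  # would this plausibly not have happened with a human driver?

incidents = [
    Incident("accident", av_specific=False),
    Incident("accident", av_specific=True),
    Incident("annoyance", av_specific=True),
]

total_accidents = sum(1 for i in incidents if i.kind == "accident")
av_specific_accidents = sum(1 for i in incidents if i.kind == "accident" and i.av_specific)
av_specific_annoyances = sum(1 for i in incidents if i.kind == "annoyance" and i.av_specific)

print(total_accidents, av_specific_accidents, av_specific_annoyances)  # 2 1 1
```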

The “Synthetic Sensor Input” problem in AV verification

Summary: This post discusses the annoying “Synthetic Sensor Input” (SSI) problem, i.e. the fact that it is very hard to synthesize realistic, synchronized streams of sensor inputs (e.g. Video+LiDAR+Radar). It explains why the SSI problem is a pain for Autonomous Vehicle verification (and for other things), and talks about the (imperfect) solutions. Let me start …
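
To show what “synchronized streams of sensor inputs” means in practice, here is a small, hypothetical sketch of per-timestamp frames bundling camera, LiDAR and radar data; actually filling those frames with realistic, mutually-consistent content is the hard part the post is about.

```python
# Hypothetical sketch: the bookkeeping side of a synchronized multi-sensor stream.
# Generating realistic contents for these fields is the hard SSI problem.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SensorFrame:
    timestamp_s: float
    camera_image: bytes                      # e.g. an encoded RGB frame
    lidar_points: List[Tuple[float, float, float, float]]   # (x, y, z, intensity)
    radar_targets: List[Tuple[float, float, float]]         # (range_m, bearing_rad, velocity_mps)

def make_synthetic_frame(t: float) -> SensorFrame:
    """Placeholder synthesizer: in reality all modalities must agree on the
    same simulated world state at time t, which is what makes SSI hard."""
    return SensorFrame(
        timestamp_s=t,
        camera_image=b"",      # would come from a renderer
        lidar_points=[],       # would come from a ray-casting model
        radar_targets=[],      # would come from a radar model
    )

stream = [make_synthetic_frame(i * 0.1) for i in range(5)]  # 10 Hz for 0.5 s
print([f.timestamp_s for f in stream])
```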

Verification, coverage and maximization: The big picture

Summary: This post tries to explain (once and for all?) how the concept of coverage is used to optimize the verification process, what it means to auto-maximize coverage, and how people have tried to do it. I have been spending some time lately on coverage maximization via ML (which I described here). As is often …
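
As a rough, hypothetical illustration of the basic maximization loop (without the ML part): generate a randomized test, record which coverage bins it hits, and bias the next test toward bins that are still empty. The bins and the “simulator” below are made up for illustration.

```python
# Illustrative sketch of a coverage-maximization loop (not a real tool):
# bias random test generation toward coverage bins that are still empty.
import random

# Coverage bins: (speed band, weather) combinations we want to see exercised.
SPEED_BANDS = ["low", "medium", "high"]
WEATHERS = ["clear", "rain", "fog"]
bins_hit = {(s, w): 0 for s in SPEED_BANDS for w in WEATHERS}

def run_test(speed_band: str, weather: str) -> None:
    """Stand-in for running the simulator and collecting coverage."""
    bins_hit[(speed_band, weather)] += 1

for _ in range(50):
    empty = [b for b, count in bins_hit.items() if count == 0]
    if empty:
        # Maximization step: aim the next test at a hole in the coverage space.
        speed_band, weather = random.choice(empty)
    else:
        speed_band, weather = random.choice(list(bins_hit))
    run_test(speed_band, weather)

coverage = sum(1 for c in bins_hit.values() if c > 0) / len(bins_hit)
print(f"coverage: {coverage:.0%}")
```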

Misc stuff: The verification gap, ML training and more

This post covers recent updates in machine learning, autonomous systems and verification. It has four sections: (1) automation / ML keep accelerating, but verification of automation / ML seems to lag behind; (2) HVC is coming, and I plan to attend (and even present); (3) the idea of training an ML-based system using synthetic inputs (which I like) …

Verification implications of the new US AV policy

Summary: This post will take an initial look at the US Autonomous Vehicles policy announcement, and claim it is a big deal. It will then examine the verification implications, and claim they are mainly positive. Yesterday, the US Department of Transportation (DOT) and the National Highway Traffic Safety Administration (NHTSA) came out with an announcement …

Using Machine Learning to verify Machine Learning?

Summary: Can one use ML to verify ML-based systems? This post claims the answer is mostly “no”: You mainly have to use other system verification methodologies. However, some ML-based techniques may still be quite useful. How does one verify ML-based systems? A previous post in this series claimed that the “right” way is CDV: Essentially, …
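
As one hypothetical example of the “ML-based techniques may still be quite useful” point: a simple statistical anomaly score over per-run metrics can flag suspicious simulation runs for human review. The metric, the data and the threshold below are purely illustrative.

```python
# Illustrative sketch: flag anomalous simulation runs for review using a
# simple z-score over a per-run metric (a stand-in for fancier ML techniques).
from statistics import mean, stdev

# Minimum distance (m) to other road users observed in each run (made-up data).
min_distances = [4.2, 3.9, 4.5, 4.1, 0.7, 4.3, 4.0]

mu, sigma = mean(min_distances), stdev(min_distances)

for run_id, d in enumerate(min_distances):
    z = (d - mu) / sigma
    if abs(z) > 2.0:  # unusually small (or large) distance -> worth a look
        print(f"run {run_id}: min distance {d} m looks anomalous (z={z:.1f})")
```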