Summary: This short post does not talk about autonomous vehicles. It talks about a new paper describing new ways in which complex socio-technical systems could go bad, and then discusses the verification implications of that.
Bruce Schneier is a well-known security researcher: I have been following his blog for years. I rarely mention him in my posts because he talks about security, while I talk about safety (and verification in general). However, these two topics sometimes overlap.
The paper: Anyway, he has just published a very readable 50-page paper called “The coming AI hackers”. Here is his summary:
Hacking is generally thought of as something done to computer systems, but this conceptualization can be extended to any system of rules. The tax code, financial markets, and any system of laws can be hacked. This essay considers a world where AIs can be hackers. This is a generalization of specification gaming, where vulnerabilities and exploits of our social, economic, and political systems are discovered and exploited at computer speeds and scale.
This is a really good paper. It is non-technical and yet thought-provoking. I suggest reading the whole 50 pages (you can start by reading his 2-page summary in Wired, but the paper itself is much better).
The verification angle: This paper has several interesting implications for us verification people. It talks about potential issues arising from the design of socio-technical systems:
Because of their complexity, computers are hackable. And today, everything is a computer. Cars, appliances, phones: they’re all computers. All of our social systems—finance, taxation, regulatory compliance, elections—are complex socio-technical systems involving computers and networks. This makes everything more susceptible to hacking.
Importantly, this is mostly not about “bad people”: The term “hacking” is often associated with people knowingly exploiting loopholes to do bad things. But the paper is mainly about engineered systems causing “bad things” as unintended consequences.
As such, it is squarely in our yard: These are essentially complex-system design bugs. However, they are often bugs in what economists call mechanism design: In other words, the fix will often involve changing the incentives so as to encourage better behavior.
Finding bugs in really-complex systems: Indeed, his last (and regrettably short) “solutions” chapter suggests using verification-like techniques for finding and fixing possible problems (or at least making them “public and part of the policy debate”).
I have written a long post about spec bugs in socio-technical systems, where I described why we need to “pre-verify” them before unleashing them on the world:
Some of the technologies we are talking about are extremely powerful, and are bound to change the environment in which they operate. With great power, as they say, comes a great need for an undo button. Except there ain’t any: Some genies will just refuse to return to their respective bottles.
Can we really use coverage-driven, scenario-based verification to find loopholes and dangers in complex systems (like the tax code, the design of social-media systems, proposed legislation, and so on)? And even if we could, would that really influence the design of these systems?
I am not sure, but it is worth a try: The dangers described in the paper are coming fairly soon. (For a concrete, if toy, illustration of what this could mean, see the sketch below.)
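For what it is worth, here is a minimal sketch of the idea in Python. Everything in it is invented for illustration: a toy after_tax_income() rule with a deliberately planted “benefit cliff”, a “no perverse incentive” property, and a few crude coverage bins. A real tax code (let alone a social-media system) would obviously be vastly more complex.

```python
import random

# Toy, entirely hypothetical "tax code": a flat 20% rate, plus a
# 5,000 benefit that is cut off completely once income exceeds
# 50,000. The hard cutoff is a deliberately planted spec bug.
def after_tax_income(income: float) -> float:
    tax = 0.20 * income
    benefit = 5_000 if income <= 50_000 else 0
    return income - tax + benefit

# Property to check (a "no perverse incentive" rule): earning more
# should never leave you with less money overall.
def no_perverse_incentive(income: float, raise_amount: float = 1_000) -> bool:
    return after_tax_income(income + raise_amount) >= after_tax_income(income)

# Crude coverage model: count how many scenarios land in each income
# region, so we know the generator exercised the whole space.
coverage = {"low": 0, "near_cutoff": 0, "high": 0}

def record(income: float) -> None:
    if income < 45_000:
        coverage["low"] += 1
    elif income <= 55_000:
        coverage["near_cutoff"] += 1
    else:
        coverage["high"] += 1

random.seed(0)
failures = []
for _ in range(10_000):  # 10,000 random scenarios
    income = random.uniform(0, 100_000)
    record(income)
    if not no_perverse_incentive(income):
        failures.append(income)

print("coverage:", coverage)
print(len(failures), "perverse-incentive scenarios found")
if failures:
    x = failures[0]
    delta = after_tax_income(x + 1_000) - after_tax_income(x)
    print(f"e.g. income {x:,.0f}: a 1,000 raise changes take-home pay by {delta:+,.0f}")
```

The coverage part is the whole point: Here the cliff is wide enough for purely random generation to stumble into it, but in a real system the interesting corner cases are narrow, and you would use the coverage feedback to bias (or constrain-solve) the scenario generator toward the under-covered regions, much as we do in hardware verification.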
Dear Yoav, I contacted you some years ago when I was writing an article about automotive software bugs. This post drew my attention when I read the lines “The verification angle: This paper has several interesting implications for us verification people. It talks about potential issues arising from the design of socio-technical systems: Because of their complexity, computers are hackable. And today, everything is a computer. Cars…”
I’m just thinking out loud now and would like to know what you think about this: “The electric cars I’m testing nowadays are all connected to the internet. That means that you can break in and do anything you like with that car once you’re in. Of course I’m sure they won’t make it easy for hackers, but that changes drastically once AI hackers are used: they are – or will be – smarter than any human hacker. So, as an enemy of the state, you’re safer in an old-timer than in any modern electric car. But in that case an AI hacker could hack an electric car driving behind or next to you and kill you indirectly.” Let me know what you think about my assumptions, please…