Some of you may be wondering about this extended silence (almost two months!). The obvious suggestions (that I entered a monastery, or that I went for an extended vacation in downtown Pyongyang) are all wrong.
Nope, nothing as relaxing as that. I have been spec’ing and programming like there is no tomorrow. And it is all verification-related. The two main things I did (both still prototypes) are:
- Rosie, a verification system for ROS-based robotic systems
- Rosie BDI, which (duh) adds BDI capabilities to Rosie
To quote from the Rosie BDI manual:
Rosie BDI is a package for modeling intelligent agents, including humans. It is somewhat similar to other BDI packages, such as Jason.
In the context of Rosie, Rosie BDI can be used for:
- Modeling the humans / organizations which are part of the VE (the verification environment)
- Modeling (or even implementing) the logic of the robots themselves
- Modeling, at a high level, other DUT / VE entities (DUT being the device under test)
- Writing “smart sequences”, thus letting the test writer express “test goals” (“Fill this coverage”, “Make the robot cross a street”, …)
- Ideally, feeding MBT (model-based testing) tools with BDI descriptions, so that the MBT tools spit out a multi-entity plan for achieving the test goals.
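To give a flavor of the BDI style, here is a minimal sketch in Python. This is emphatically *not* Rosie BDI code (all class and method names below are my own invention for illustration): an agent holds beliefs, adopts goals (desires), and commits to plans (intentions) whose context conditions match its current beliefs.

```python
# Minimal BDI-style agent loop. Illustrative sketch only; the names
# here are invented and not taken from Rosie BDI or Jason.

class Plan:
    def __init__(self, goal, context, steps):
        self.goal = goal          # the goal this plan can achieve
        self.context = context    # beliefs required for the plan to apply
        self.steps = steps        # actions to execute

class Agent:
    def __init__(self):
        self.beliefs = set()      # what the agent currently believes
        self.desires = []         # goals it wants to achieve
        self.plans = []           # plan library
        self.log = []             # actions executed so far

    def deliberate(self):
        # For each goal, commit to the first plan whose context
        # is a subset of the current beliefs, then execute it.
        for goal in list(self.desires):
            for plan in self.plans:
                if plan.goal == goal and plan.context <= self.beliefs:
                    self.log.extend(plan.steps)
                    self.desires.remove(goal)   # goal achieved
                    break

# Example: a pedestrian agent that crosses a street only when safe.
ped = Agent()
ped.beliefs = {"at_curb", "light_is_green"}
ped.desires = ["cross_street"]
ped.plans = [
    Plan("cross_street", {"at_curb", "light_is_green"},
         ["look_left", "look_right", "walk"]),
]
ped.deliberate()
print(ped.log)   # ['look_left', 'look_right', 'walk']
```

The useful property for verification is that the same pedestrian model works whether the light is green or not: drop `light_is_green` from the beliefs and the agent simply stays put, which is exactly the kind of goal-directed-but-reactive behavior you want from a "smart sequence".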
Currently, my friends at the University of Bristol are just starting to use this stuff – we’ll see how it goes. Their new article about using (Jason) BDI for verification is here (pdf).
My hope is to make both packages (and whatever comes next) available for anybody who wants to use them, and I am in the process of trying to enable that.
I think this whole BDI stuff (i.e. making it relatively easy for people to model human behavior, if somewhat superficially) is pretty important in the context of verification: more and more autonomous systems cannot be correctly verified (or even understood) without modeling the people around them.
It is also fun stuff. Next on my list: modeling beliefs about beliefs (also called, somewhat grandly, “Theory of Mind”), i.e. being able to model the fact that Joe thinks that Jane hopes that Mike thinks that <something>.
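One simple way to represent such nested attitudes (again, a sketch of my own in Python, not Rosie BDI syntax) is a chain of attitude records, where the content of one agent’s attitude can itself be another agent’s attitude:

```python
# Nested-belief ("Theory of Mind") sketch: representing
# "Joe thinks that Jane hopes that Mike thinks the light is green".
# Illustrative only; names and structure are invented for this post.

def nest(*attitudes, fact=None):
    """Build a nested attitude chain from the outside in, e.g.
    nest(("joe", "thinks"), ("jane", "hopes"), fact="light_is_green")."""
    result = fact
    for agent, attitude in reversed(attitudes):
        result = {"agent": agent, "attitude": attitude, "content": result}
    return result

belief = nest(("joe", "thinks"),
              ("jane", "hopes"),
              ("mike", "thinks"),
              fact="light_is_green")

def describe(b):
    # Flatten a nested attitude chain back into readable text.
    if not isinstance(b, dict):
        return b
    return f'{b["agent"]} {b["attitude"]} that {describe(b["content"])}'

print(describe(belief))
# joe thinks that jane hopes that mike thinks that light_is_green
```

Once beliefs can contain beliefs like this, a smart sequence can reason not only about what an agent will do, but about what it would do if it were wrong about another agent, which is where the interesting verification scenarios live.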
Lots more to say about this – stay tuned.
And on other fronts
As you guys all know, this whole machine learning stuff continues to move forward fairly quickly (e.g. AlphaGo just beat the world’s best Go player).
Also, take a look at this new article: Dynamic Memory Networks for Visual and Textual Question Answering (pdf). Look at figure 6 (towards the end), which shows the pictures the algorithm looked at, the questions it was asked, the answers, and (in the black-and-white pictures) what regions of the original picture it paid attention to as it was formulating the answer.
There are various ways to summarize this progress. I thought my daughter Yael did a pretty good job: She took one look at those pictures, and said “holy shit”.
Which brings me back to this issue of designing (and verifying) friendly AI, which I have discussed before (e.g. here and here). The natural reaction of most of us is to say “Nah, we’ll all be long dead anyway before AI gains better-than-human engineering capabilities”, but I think this onslaught of machine learning news may well mean that we’ll all be dead shortly after that event, if we don’t design and verify friendly AI correctly.
Moving back from planet-wide considerations to my own stuff: I am definitely going to the Stuttgart Autonomous Vehicle Test & Development Symposium 2016 – I’ll be there 31-May and 1-June, and if any of you happen to be there I’d be happy to meet.
I’d like to thank Amiram Yehudai and Sandeep Desai for reviewing an earlier version of this post.