Impressions from the robotics world

I have been interacting with a bunch of robotics people lately, so as to learn what’s going on there regarding verification. This post is a (somewhat disorganized) summary of what I learned.

I did this detective work on your behalf, gentle reader, mainly by talking to a bunch of robotics people (academics and industry folks alike) and by listening to some relevant talks.

Here are the things which came up and which I found interesting:

The academic world (mostly) ignores full-system autonomous verification

I have been talking to a bunch of robotics people from academia (mainly from Israel, but also from other countries). There are a lot of people researching various aspects of robotics, but very few people who care about full-system integration, and the related verification issues.

So these people are deep into navigation, or vision, or robot-coordination-and-game-theory, and so on, but very few are into “putting-it-all-together-and-finding-the-bugs-in-that”.

There are probably several reasons for this, all very understandable:

  • The urge to finish inventing it, then worry about verification
  • Academia’s tendency to specialize
  • The feeling that big systems, and especially their verification, are “the messy stuff industry does – there is no research in that”
  • The fear of spending too much time “getting to the starting point” (because there are no readily-available realistic systems to verify)

All reasonable stuff, but still somewhat short-sighted, and thus bound to eventually change. Some of the academics I talked to had a similar opinion.

Perhaps one can hope for the appearance of some new, unified view of the field (“robotic system modeling and verification”?). Once it is firmly in place, academics will be able to specialize again, starting from it.

Interestingly, some exceptions to this lack-of-interest-in-big-picture-verification are the Bristol folks I am working with (Kerstin Eder et al.), Robert Alexander et al. (York U), and Michael Fisher et al. (Liverpool U). Why are these folks ahead of the crowd? Is this a British thing – some obscure consequence of empire loss and too much rain? 😉  Dunno.

“Record-replay-with-variations” is pretty common

For instance, I talked to somebody who is doing localization (indoor and outdoor) for a big company. It turns out that the way they do simulation-based tests is by recording a bunch of situations (a multi-level mall, a stadium, …), each containing a few localization devices (e.g. wifi). Then, for each new version of the SW, they re-simulate those situations with small perturbations on top of the recording (e.g. turning off a few of the recorded devices).

This is similar to what I have described here (under “CDV-style testing of ML systems”) regarding autonomous vehicle testing, and seems to stem from the same cause: Synthetic creation of realistic scenes with realistic inputs is just too hard.
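To make the “replay with variations” idea concrete, here is a minimal Python sketch. The data layout, the scene name and the `replay_in_simulator` hook are all my own inventions (the company's actual setup is surely different); the point is just the mechanism of re-running a recording with some recorded devices knocked out:

```python
import copy
import random

def perturb_recording(recording, drop_fraction=0.2, seed=None):
    """Return a copy of a recorded scene with some of its
    localization devices (e.g. wifi) disabled, in the
    record-replay-with-variations style."""
    rng = random.Random(seed)
    varied = copy.deepcopy(recording)
    devices = varied["devices"]
    n_drop = max(1, int(len(devices) * drop_fraction))
    for dev in rng.sample(devices, n_drop):
        dev["enabled"] = False  # this device's recorded signal is withheld on replay
    return varied

# One recorded situation: say, a multi-level mall with a few wifi devices
recording = {
    "scene": "multi_level_mall",
    "devices": [{"id": "wifi_%d" % i, "enabled": True} for i in range(8)],
}

# Re-run the same recording several times, each with a different perturbation
for run in range(10):
    variant = perturb_recording(recording, seed=run)
    # replay_in_simulator(variant)  # hypothetical hook into the replay engine
```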

ROS stuff

I talked to a bunch of people about testing ROS-based systems (see my previous posts on this). They all agreed this is a real pain because of the repeatability issue: run the same test twice and you will not necessarily get the same behavior.

Specifically, I talked to some researchers at Bar-Ilan U, and to people from Siemens (see below). I think if we summed up the total annoyance this ROS simulation repeatability issue has caused mankind so far, the number would be quite high.
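If you want to see the problem with your own eyes, here is a rough sketch of how one might quantify it: record the “same” simulated run twice (e.g. with rosbag record) and diff the two bags. This assumes ROS1 and its rosbag Python API; the bag file names and the /robot/odom topic are placeholders for whatever your system produces:

```python
# Diff two recordings of a supposedly-identical simulation run.
# Assumes ROS1; bag names and the topic are placeholders.
import rosbag

def topic_trace(path, topic):
    """Collect (timestamp, stringified message) pairs for one topic."""
    with rosbag.Bag(path) as bag:
        return [(t.to_sec(), str(msg))
                for _, msg, t in bag.read_messages(topics=[topic])]

run_a = topic_trace("run_a.bag", "/robot/odom")
run_b = topic_trace("run_b.bag", "/robot/odom")

if len(run_a) != len(run_b):
    print("Non-repeatable: %d vs %d messages" % (len(run_a), len(run_b)))
else:
    diffs = sum(1 for (_, ma), (_, mb) in zip(run_a, run_b) if ma != mb)
    print("%d of %d messages differ between the two runs" % (diffs, len(run_a)))
```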

Also, I ran into some people from Siemens Tecnomatix (Manufacturing Simulation and Validation), one of whom I knew from my previous life. He told me about ROS-Industrial and suggested talking to other ROS-Industrial guys, and arranged a meeting with them – I’ll report on anything interesting.

The military robots are coming

Regardless of where you stand, ethically, on the “drone wars” controversy, the military sees drones as a huge success. And thus land-based military robots are clearly coming.

We had a guy from IAI talking about the challenges of such robots, especially the problem of doing it all at the same time (friend/foe identification, situational awareness, targeting, etc.). Not a lot of discussion about verification, and most simulator-related work is really about “simulators for training personnel”.

I think the military folks are not going for full autonomy (for various reasons, including the campaign to ban “killer robots”). They are, however, very much hoping to minimize the number of operators. For instance, if you have a bunch of small, cheap mine-clearing robots, you don’t want them fully autonomous, but you don’t want to devote a full-time operator to each of them either.

This trend (“mostly-autonomous systems, but with some operators”) appears to be an important one in general. More on this in my next post.

Autonomous robotics verification is just starting

This is sort of a general observation: Autonomous robotics verification is behind autonomous vehicle verification (not surprising, perhaps, considering their respective places along the hype curve). And autonomous vehicle verification (as I have discussed before) is not all that impressive either (though hopefully improving).

In general:

  • This is not yet considered a discipline, and there is a lot of fragmentation
  • The academic world is not all that interested yet (see above), though that is improving
  • Most of it is done manually
  • The simulation / running infrastructure (e.g. ROS+Gazebo) has issues
  • There is no commonly-agreed way to model human-interaction-with-robots. BDI (which I am experimenting with – see the toy sketch after this list) is considered somewhat old, but nobody seems to know what should replace it.
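For the curious, here is a toy Python sketch of the BDI (Belief-Desire-Intention) idea, applied to modeling a human interacting with a robot. Everything in it – the names, the single desire, the one-plan “plan library” – is invented for illustration; real BDI frameworks are far richer:

```python
# Toy BDI-style model of a human near a robot. All names and the single
# plan are illustrative only, not taken from any real framework.

class HumanModel:
    def __init__(self):
        self.beliefs = {"robot_approaching": False}  # what the human believes
        self.desires = ["stay_safe"]                 # what the human wants
        self.intentions = []                         # plans currently committed to

    def perceive(self, robot_distance):
        # Update beliefs from (simulated) sensory input
        self.beliefs["robot_approaching"] = robot_distance < 1.5  # meters

    def deliberate(self):
        # Pick a plan for each desire, given current beliefs
        if "stay_safe" in self.desires and self.beliefs["robot_approaching"]:
            if "step_aside" not in self.intentions:
                self.intentions.append("step_aside")

    def act(self):
        # Execute the next committed intention, if any
        return self.intentions.pop(0) if self.intentions else "continue_walking"

human = HumanModel()
human.perceive(robot_distance=1.0)
human.deliberate()
print(human.act())  # -> "step_aside"
```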

Look at the bright side – this is bound to improve, keeping us (verification people) busy and off the street.

Notes

I’d like to thank Sandeep Desai, Kerstin Eder and Amiram Yehudai for reviewing this post.

