The rise of mostly-autonomous systems

Summary: This post discusses the (possible) rise of mostly-autonomous systems, i.e. systems which are normally autonomous, but still have “operators standing by” for the infrequent-but-crucial moments when they are needed. I’ll discuss this trend, its implications and startup opportunities, and then turn to my favorite topic – the verification implications of all that.

Jobs of the future

When people talk about the upcoming automation revolution (autonomous cars / robots, machine learning everywhere etc.), discussion often turns to the question of employment. Some people assume that employment will decline drastically, as we automate away the jobs of truck drivers, bank tellers, radiologists and so on. Others assume that (as in previous revolutions) new occupations will arise to replace the old ones.

My own intuition (since you asked) is:

  • Yes, there will be new occupations
  • No, they will not compensate for all the jobs lost to automation
  • One of the main new jobs will be “operators of mostly-autonomous systems”

It is the third bullet I want to talk about today. My intuition (and it is just that – your comments are welcome) is that we are going to have lots of autonomous systems, but (for various reasons detailed below) they are often going to be just “mostly autonomous”. And (to paraphrase Miracle Max) “mostly autonomous” is “slightly human-operated”.

The main reason for them being only mostly autonomous is that it is much, much, much easier to automate (and verify) 97% of the required behavior than it is to automate 100%.

Take, for instance, future assistive robots (which assist the disabled / elderly, as I described here). They will be able to take the customer downstairs, take her to the movies, help her with some medical procedures and so on. This covers the vast majority of the time.

However, in some cases (e.g. the customer faints, or some policeman across the street hollers incomprehensible stuff at the robot, etc.) this mostly-autonomous robot should alert an operator. The operator would probably take over remotely, using the robot’s sensors (e.g. vision, hearing) and actuators (e.g. locomotion, grabbing) to understand the situation and “do the right thing”.

Full autonomy is perhaps possible, but is really hard. Some people (e.g. David Mindell in “Our Robots, Ourselves” – here is an interview with him) claim it will never be achieved, precisely because of these rare-but-hard-to-handle cases.

“Never” is a bit strong. But even if it can be achieved eventually, I think economics and common-sense dictate that we’ll first go through this mostly-autonomous stage.

The corresponding regulatory hassles will also (I think) be simpler than in the fully-autonomous case. For instance, the very fact that there is a human to blame will make this simpler for regulators and law makers.

A common pattern

Let’s call this pattern (MOstly Autonomous Systems + their remote operators) MOAS. Once you start thinking about MOAS, many examples come to mind. Here is a short list of MOAS-like things, going from the present to the future:

Airline pilots: Most of the job of flying an airplane (including takeoff and landing) is best handled by automation. Pilots are there because of regulations, to reassure the passengers, and to handle the not-easily-automated cases (like a flock of birds killing both engines, or the guy in 22C deciding to have a mid-flight heart attack). Note that this is MOAS-like, not true MOAS: The operator is not remote.

Automated answering services: The people who pick up the phone when you choose the “I want to talk to a human” option in an automated answering system can also be viewed as last-ditch operators in a mostly-autonomous system.

We currently curse these mostly-autonomous systems – they are usually pretty bad. But they will improve in the future, and the human operator will also have to be more skilled (and higher paid), because the simple stuff will be well-automated.

Chatbots: Everybody and their brother are now creating chatbots based on machine learning (ML), which help in scheduling, pizza ordering and so on. Several of these have taken the (reasonable) decision to actually have humans do the things which the ML algorithm still can’t do, with the hope that there will be fewer of these things as time goes by.

Here is a Bloomberg summary of this, which says:

A handful of companies employ humans pretending to be robots pretending to be humans. In the past two years, companies offering do-anything concierges (Magic, Facebook’s M, GoButler); shopping assistants (Operator, Mezi); and e-mail schedulers (X.ai, Clara) have sprung up. The goal for most of these businesses is to require as few humans as possible. People are expensive. They don’t scale. They need health insurance. But for now, the companies are largely powered by people, clicking behind the curtain and making it look like magic.

Autonomous vehicles: Some companies (e.g. Google) talk about full autonomy, with just a red “OFF” switch. Others (e.g. most European car makers) talk about “automated driving” where the car may ask the driver to take back control in some cases. Note that this “automated driving” scheme is MOAS-like, not MOAS: I doubt remote operators are feasible in this case (mainly because there will not be enough time to acquire situational awareness).

However, I suspect Google-style autonomous cars will, eventually, fit into the MOAS scheme. Recall that one of their attractions is that they will open driving to children and disabled people, which is a great thing. However, consider what happens when such a car encounters a rare situation (like a log across the road in some remote area, or the user pushing an emergency button, etc.). It makes sense that control would then be transferred to some operator.

Military robots: As I said in a previous post, I think the military folks are not going for full autonomy (for various reasons, including the campaign to ban “killer robots”). They are, however, very much hoping to minimize the number of operators. For instance, if you have a bunch of small, cheap mine-clearing robots, you don’t want them fully autonomous, but you don’t want to devote a full-time operator to each of them either.

Everything else: Then there is that infinite queue of things waiting to become autonomous when technology, regulation and business allow: Autonomous drone deliveries, autonomous drone inspection services for offshore oil installations, autonomous ships, autonomous everything-in-agriculture, and so on. I expect practically all of these to be MOAS, at least initially.

Implications of this trend

Assume for a minute that MOAS does become a major trend. What does it imply? Here are some ideas.

As a concrete example, consider the future Assistive-Robots-R-Us corporation (motto: “Making the elderly and the disabled independent again”). They rent their robots for a weekly fee, and their sales guy swears on a stack of bibles that by golly, when an emergency occurs and a remote operator needs to take control, an operator will absolutely be there in A-R-R-U’s headquarters, ready and able to assist. In fact, this is why A-R-R-U is so popular: people trust it.

But how many operators really sit there? Assume emergencies take (on average) 1% of the time: in principle, you could have one operator per 100 robots. But what if occasionally 2% of the users have an emergency? Worse, what if there is a power cut or blizzard, and suddenly 10% of the users have an emergency? Keeping too few operators could ruin A-R-R-U’s stellar reputation and inundate it with lawsuits, but permanently staffing for the 10% case will surely ruin it financially (especially now that those bastards at We-Also-Have-Assistive-Robots are offering their robots at half price – though we all know how much one can trust their operators).
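To see the shape of the staffing problem, here is a minimal back-of-the-envelope sketch (Python; every number is invented for illustration): if each robot independently has an emergency 1% of the time, the binomial tail gives the chance that the on-duty operators are overwhelmed.

```python
from math import comb

def p_overload(robots: int, p_emergency: float, operators: int) -> float:
    """Probability that more robots need help than there are operators,
    assuming independent emergencies (binomial tail). A blizzard is
    exactly the correlated event that breaks this assumption."""
    return sum(
        comb(robots, k) * p_emergency**k * (1 - p_emergency)**(robots - k)
        for k in range(operators + 1, robots + 1)
    )

print(p_overload(100, 0.01, 1))   # ~0.26: one operator per 100 robots fails often
print(p_overload(100, 0.01, 4))   # ~0.003: four operators are nearly always enough
print(p_overload(100, 0.10, 4))   # ~0.98: ...until the 10% blizzard day
```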

So this number-of-operators-on-call / quality-of-service tradeoff could be the most important life-or-death operational issue for a MOAS company. This has a lot of potential implications:

MOAS operators will be paid for their willingness (and ability) to context switch: If you are a certified A-R-R-U operator, A-R-R-U will gladly pay you X $/hour to do whatever you want, as long as you can guarantee that upon emergency you can drop everything and be at your A-R-R-U operator station (at home, on the beach, wherever – all it takes is a smart phone and headphones) within 3 minutes.

Not everybody can have fun with a potential context switch looming at any minute, but those who can will be paid well for (mostly) doing whatever they want. This may actually be a good fit for a world with a lot of educated, out-of-work people (and perhaps universal basic income).

There may even be layers of this (“Joe, please switch now from 30-minutes-backup-mode to 3-minutes-backup-mode – looks like we are starting to get loaded”). And of course the hourly pay will increase as you go from 30-minutes-backup to 3-minutes-backup to emergency mode.
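A toy sketch of what such layered escalation logic might look like (the names, tiers and thresholds are all hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    backup_minutes: int   # promised time-to-station; pay rises as it shrinks

def escalations_needed(projected_load: int, pool: list[Operator],
                       fast_threshold: int = 3) -> list[Operator]:
    """Pick warm-backup operators to promote to 3-minutes-backup-mode
    when the projected load exceeds the fast-responder pool ('Joe,
    please switch now from 30-minutes-backup-mode to 3-minutes-backup-mode')."""
    fast = [op for op in pool if op.backup_minutes <= fast_threshold]
    warm = [op for op in pool if op.backup_minutes > fast_threshold]
    shortfall = max(0, projected_load - len(fast))
    return warm[:shortfall]

pool = [Operator("Ann", 3), Operator("Joe", 30), Operator("Mia", 30)]
print([op.name for op in escalations_needed(2, pool)])  # ['Joe']
```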

MOAS operators will be smart problem solvers: This is probably not going to be a low-paid, simple job – all the simple stuff will be automated away. The typical operator will be a smart, interdisciplinary problem solver – she gets all the odd situations, and is measured on customer satisfaction and avoidance of bad outcomes.

MOAS optimization will become an important discipline: There is already work on this topic. For instance, Sarit Kraus of Bar Ilan U (one of the people I talked to lately as I was researching robotics) has a paper (pdf) about it. They created an “intelligent advising agent” which decides which operator handles which robot-request-for-help, and when. From the abstract:

The number of multi-robot systems deployed in field applications has risen dramatically over the years. Nevertheless, supervising and operating multiple robots at once is a difficult task for a single operator to execute. In this paper we propose a novel approach for utilizing advising automated agents when assisting an operator to better manage a team of multiple robots in complex environments.
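To give the flavor of the underlying problem (this is a toy sketch of my own, not the algorithm from the paper): when there are more help requests than free operators, something has to rank the requests and hand the worst ones to whoever is free.

```python
import heapq

def assign_requests(requests, free_operators):
    """Toy 'advising agent' dispatch: rank pending robot-requests-for-help
    by urgency x estimated harm, hand the worst ones to free operators,
    and queue the rest. requests: (robot_id, urgency, harm) tuples."""
    queue = [(-urgency * harm, robot) for robot, urgency, harm in requests]
    heapq.heapify(queue)  # min-heap, so scores are negated
    assignments = {}
    for op in free_operators:
        if not queue:
            break
        _, robot = heapq.heappop(queue)
        assignments[op] = robot
    return assignments, [robot for _, robot in queue]  # still waiting

assigned, waiting = assign_requests(
    [("R7", 0.9, 5.0), ("R2", 0.4, 1.0), ("R5", 0.8, 3.0)],
    ["alice", "bob"])
print(assigned, waiting)  # {'alice': 'R7', 'bob': 'R5'} ['R2']
```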

MOAS operator interfaces will improve: Right now, being an MOAS operator is no fun. For instance, the people who oversee AI chatbots often find it annoying, and Predator UAVs are notoriously hard to operate. But as MOAS proliferate this is bound to improve.

In fact, there is probably a whole range of MOAS-enablement startup ideas just waiting to be explored (or perhaps being explored as we speak): Make the operator interface more intuitive. Create good, immersive, take-anywhere MOAS command-and-control systems (VR-based?). Simplify the tricky machine-to-human handover so operators can gain situational awareness faster. Simplify remote human-to-human handovers (“Doc – this guy looks very pale – please take over”). Adapt to limited (e.g. cellular) bandwidth when needed.

Bundle in MOAS optimization SW (see above), record keeping and tracking, and simulation / training capabilities, and your unicorn is ready to fly.

Verification implications

If you are one of the (three) regular readers of this blog, then you know that for any X, I don’t really care about X: I care about how to verify X. This topic is no different.

So, if MOAS proliferate, what are the verification implications? Not sure yet, but here are some thoughts:

For one thing, MOAS are still essentially autonomous systems. Those are really hard to verify, as I have discussed here and elsewhere in my blog. Systems based on ML (like those chatbots and much else) are even harder to verify, as I discussed here. And the specs are really complex, and it’s the spec bugs that kill you.

But are MOAS easier or harder to verify than fully autonomous systems? Not sure.

On the one hand, verification of the autonomous system per se may be somewhat easier, because the spec is going to be somewhat simpler (all these rare cases have one solution – transfer control to the operator).

To put it differently, fully-autonomous systems have to be verified for some total functionality F, while in MOAS, the system / SW has to be verified for some subset F1 of F, and the operator has to be individually certified for some subset F2, such that F1 and F2 together cover F.
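In loose notation (mine, not a standard formalism; H stands for the handover behaviors):

```latex
% SW verified for F1, operator certified for F2, together covering F.
% The handover behaviors H necessarily live on both sides of the split,
% and that intersection is where the trouble below begins.
F_1 \subseteq F, \qquad F_2 \subseteq F, \qquad F_1 \cup F_2 = F,
\qquad H \subseteq F_1 \cap F_2
```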

So perhaps verification is easier. However:

  • Handovers (between machines and humans) are notoriously hard-to-spec and hard-to-verify. Take current military UAVs (drones): They are not really MOAS – they are mostly piloted by somebody on the ground, and are only autonomous when communication is lost. So the number of (machine-to-human and human-to-machine) handovers is pretty small, and yet I know of a bunch of military-UAV handover bugs which caused UAV loss.
  • Machine-to-human handovers are hard because the machine has to notice that it cannot handle the situation, alert the operator, explain the nature of the emergency (and the current state), and transfer control to the operator while still maintaining basic safety for the user. This is tricky to spec, and (because there are many possible scenarios) tricky to verify (a toy state-machine sketch follows this list).
  • Explaining the nature of the emergency (and the current state) is especially hard for ML algorithms, which are usually sort of alien and do not produce good explanations.
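Here is a toy state-machine sketch of that machine-to-human handover (heavily simplified, and my own construction; all the genuinely hard parts hide in the inputs it takes for granted):

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    ALERTING = auto()         # operator paged; robot in a safe-hold state
    OPERATOR_CONTROL = auto()

def handover_step(mode, confident, operator_accepted, safe_hold_ok):
    """One step of a toy machine-to-human handover. The genuinely hard
    parts are the inputs: deciding 'confident' (the ML self-assessment),
    summarizing the emergency for the operator, and keeping the user
    safe ('safe_hold_ok') while waiting for the takeover."""
    if mode is Mode.AUTONOMOUS and not confident:
        return Mode.ALERTING               # alert operator + enter safe-hold
    if mode is Mode.ALERTING:
        if not safe_hold_ok:
            raise RuntimeError("safety envelope violated during handover")
        if operator_accepted:
            return Mode.OPERATOR_CONTROL   # transfer control + state summary
    return mode
```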

Here are some further MOAS-specific verification challenges:

  • Security issues and related verification may be bigger in MOAS. BTW, for those of you who feel we don’t have enough invasion of privacy already: MOAS can be a big-brother dream world.
  • We need to verify (statistically) that those 3-minute-backup policies etc. indeed guarantee the promised quality-of-service under various scenarios (see the Monte Carlo sketch after this list).
  • While verifying the fully-autonomous assistive robot we had to simulate / randomize the behaviors of the user and other people around. Now we have to add the behavior of the operator.
  • When the operator takes over, at what level does she operate the robot? Can she ask for both low-level operations (“turn the head to the left”) and high-level operations (“go to the customer and lift him”)? If so, how do those levels interact? All this complexity has to be spec’d and verified.
  • A MOAS verification system should ideally double as a MOAS-operator-training-and-certification system: Some of those complex scenarios (that the verification system will dream up so as to trip up the SW) can be used to train/certify the operator. I say “ideally” because this (intuitively obvious) idea has rarely been executed in real life – usually companies spend millions on a training system, and then spend money on a separate, less capable verification system.
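For the statistical quality-of-service point above, a Monte Carlo check might look like the following sketch (every parameter is invented, and drawing emergencies independently is optimistic, since the blizzard scenario is precisely a correlated burst):

```python
import random

def sla_met_fraction(trials=100_000, robots=100, p_emergency=0.01,
                     hot_ops=2, warm_ops=5):
    """Fraction of trials in which every simultaneous emergency gets an
    operator: hot operators respond near-instantly, warm-backup ones
    within the promised 3 minutes, and any demand beyond hot + warm
    misses the SLA. Emergencies are drawn independently (optimistic)."""
    met = 0
    for _ in range(trials):
        demand = sum(random.random() < p_emergency for _ in range(robots))
        met += demand <= hot_ops + warm_ops
    return met / trials

print(sla_met_fraction())                  # ~1.0 on a normal day
print(sla_met_fraction(p_emergency=0.10))  # ~0.2 on the blizzard day
```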

That’s all I have so far. Any comments?

Notes

I’d like to thank Kerstin Eder, Amiram Yehudai and Sandeep Desai for reviewing a previous version of this post.

[Added 25-Apr-2016: Marginal Revolution thread is here, though most comments are about other links]

[Added 26-Apr-2016: Hacker News thread is here]

[Added 30-Apr-2016: Soylentnews thread is here]

5 thoughts on “The rise of mostly-autonomous systems”

  1. I would like to announce myself as one of those three (or more!) readers.

    This is a very interesting topic. In the consumer space, they’re only starting to take this subject seriously, and then only within a very small and limited scope – primarily functional safety and its role in advanced driver-assistance systems (ADAS).

    I also don’t think we should underestimate the role of VR in MOAS based systems. Already, some of the most successful VR “games” are ones where you do mundane work-like activities (like filing papers), but the novelty is in how well these systems actually simulate the real world. If I’m wearing VR goggles and virtually place an item on a high shelf, I can then put those same goggles on my daughter and she can’t reach the item because the system has automatically adjusted for our differing heights.

    I don’t think these kinds of games will be fun for long, but it does highlight VR’s already exceptional ability to replicate the real world and suggests a future where MOAS operators are practically trained from birth and little – if any – advanced training is in fact necessary.

    1. Hi Silas

      You bring up an interesting point in that last section, which I did not think about: Much of the technical infrastructure (and conventions/culture) needed by MOAS operators is very similar to that used in multi-player video games: distributed updating of shared context and maps, integrated chat and voice, conventions and roles for working towards common goals, etc. So yes, we do have a generation trained for it from birth.

      VR is indeed a natural enhancer, being more immersive and (importantly) more portable.

  2. Many educators will also become MOAS operators. Students will interface with an AI for most of their learning, and the AI will respond appropriately to most of their predictable actions, but real professors will still need to step in occasionally. Of course, everything they do is recorded with an effort to train the AI to handle similar situations in the future. In fact, I expect that MOAS operators will be 60% occupied with the people they assist, and 40% occupied with creating triggers and routines so that the AI no longer needs them.

    I’m not so optimistic that MOAS operators can hang out on beaches when things are calm. What will always be needed is someone looking over the shoulder of AIs to make sure they’re doing their job optimally. Yes, there will be an “I’m at a loss, humans please take over” mode for automated systems, but one level below that will be an “I’m in a tricky/delicate/uncertain situation, please verify that I’m responding appropriately” mode. Let’s say you run a support center for 10,000 machines, and you have 200 people. That means the 200 least confident machines will have human oversight and help. This is also how the AI gets trained. When it is found to consistently handle the tricky situations correctly, its confidence functions in similar situations will increase, making future human intervention less likely. Basically, a big part of a MOAS operator’s job will be: trying to put MOAS operators like yourself out of work.

    1. Excellent points.

      Regarding the 10,000 machines and 200 people watching the most tricky / uncertain of them: You are probably right, but you may still want hundreds more operators (in reserve) in case they are needed – hence the beach scenario. Also, I assume (as in the Sarit Kraus paper) that there will be some “intelligent advising agent” (probably ML-based) suggesting which are the tricky/uncertain users, and doing initial prioritization.

  3. MOAS operators sound a lot like video game players. Quickly assess a wide variety of situations and come up with good solutions. The ability to assimilate and adjust to the world-state quickly is probably the most important skill in an RTS game. That skill appears to be the one that drops off with age, so maybe the quickest-reacting MOAS operators will be young, but perhaps with more limited skillsets due to less experience.

    RTS citation: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0094215
