From now on this blog will focus on AI safety / AI alignment (and how they connect to V&V).
Posts about autonomy V&V – my day job – will go directly to the Foretellix company blog (e.g. see this post). The company blog already has quite a few excellent autonomy / AV-related posts – please take a look.
Why AI safety: Like many, I started worrying about AI alignment more than 10 years ago. I wrote several posts about it, including an introductory post describing the dangers of misaligned AGI (Artificial General Intelligence). However, like many, I assumed we had several decades before this got serious.
Yet here we are, with many experts predicting vastly shorter timelines. Hence the change in focus of this blog: a first post connecting V&V to safer AGI is here.
Note that I am trying to make this readable for both V&V and AI-alignment people. This involves some compromises, including an informal style and the need to explain things that may be obvious to you.