I'm going to talk about a failure of intuition
A failure to detect a certain kind of danger
I'm going to describe a scenario
That I think is both terrifying and likely to occur
A scenario in which artificial intelligence could ultimately destroy us
At a certain point, we will build machines that are smarter than we are
They will begin to improve themselves
And then we risk what the mathematician I. J. Good called
An intelligence explosion
Let's take a step back
There has long been this notion of a cybernetic revolt
The machines would rise up against us
But for decades, we've had the whole concept backwards
It's not that machines first become intelligent
And then try to take over the world, it's quite the opposite
The drive to take control of all possible futures
Is a fundamental property of intelligence itself
This machine would be capable of waging war
With unprecedented power
We are in the process of building some sort of god