New Claims: Artificial Intelligence Might be Deadly

First, there are two ‘existential’ threats.

AI can automate a lot of mundane jobs. This can put many people out of work. While human society is exceptionally good at coming up with new jobs and things to do, it is far from clear that the job creation process will outpace the job destruction process.

Job destruction per se is not necessarily bad – the original purpose of all the technological progress we so cherish has been to relieve humans of hard work; think of the invention of the wheel. However, job destruction can have bad consequences within the particular socio-politico-economic setup we live in, where if you are out of a job and society has no need for you, you are in serious trouble.

In particular, it is not impossible that we will end up in a situation where the labor of a portion of the population, assisted by AI, is enough to provide all the goods and services required by the whole population. Under current political regimes this could be catastrophic and lead to tensions between the few owners of the means of production and the non-owners.

It is in response to these concerns that ideas such as basic income have been proposed and experimented with [Hawaii just became the first US state to pass a bill supporting basic income — here’s the man behind it; Finland is testing universal basic income – and found it has had an unexpected side effect; Switzerland’s voters reject basic income plan – BBC News].

The gravity of this concern, and its growing resonance among younger people, will also be a driving force behind the growing strength of socialist political movements in Western societies.

AI can become sentient (conscious, possessing subjective perception and free will – or whatever definition you prefer), and then turn against people.

This may seem far-fetched, as the current state of machine learning technology does not offer a clear path to building a sentient machine from the ground up.

In fact, people cannot even agree on what consciousness is, so it is quite hard to study, let alone engineer. See, for example, these wonderful videos with Prof. David Chalmers.

However, while science might be quite clueless at this point about how to build a conscious brain (in part because it has a hard time finding instruments to objectively observe the subjective experience inside a person’s head), there has been definite progress in integrating existing animal or human brains (or parts of them) with machines [“Brain” In A Dish Acts As Autopilot Living Computer; Brain Implant Gives Paralyzed Man Functional Control of Arm – Neuroscience News; Brain–computer interface – Wikipedia].

In other words, it is not infeasible that at some point we will be able to integrate a living mouse with a military robot (one of these: Military robot – Wikipedia), and then suddenly have that robot go out of control trying to kill people. What about integrating a mouse brain with an interface to the Internet? Could it learn to hack your bank account? We have yet to find out.

(To be fair, a brain–computer interface may not exactly fit the definition of ‘artificial’ intelligence, but it feels like an appropriate part of this answer.)

Second, there are two more threats which are more mundane but, I would argue, more likely to materialize in the near future.

(a) Current machine learning technology cannot yet create a conscious brain. However, it is already very good at extracting rules from large amounts of data and using them to choose actions that optimize some mathematically defined objective.

For example, recent experiments by Google DeepMind have shown how a computer can learn to play Atari games using visual information from the game screen, sometimes achieving higher scores than a human player [http://www.nature.com/nature/jou…]. We have also become quite good at building autonomous driving vehicles [Waymo – Wikipedia].

Now, taking these technologies and putting them on top of a military robot is already entirely feasible, with only minor technological challenges remaining. To a computer, there is really little fundamental difference between looking at an Atari game’s pixel screen and picking actions to optimally shoot down spaceships, and looking at the pixel feed of a real-world camera and picking actions to optimally shoot down people.
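To make that point concrete, here is a deliberately toy sketch of the “pixels in, actions out” loop such systems run on. It is not DeepMind’s actual method – just plain NumPy, a randomly generated “screen”, a made-up reward function, and a crude bandit-style update; the names get_screen, get_reward and all the constants are invented for illustration. The thing to notice is that the agent only ever sees an array of numbers and a scalar score, so nothing in the loop changes if the numbers come from a camera instead of a game emulator.

```python
# Toy sketch: an agent that sees only pixels and a scalar objective.
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 4              # e.g. turn left/right, fire, do nothing
SCREEN_SHAPE = (84, 84)    # a grey-scale "screen", loosely following the Atari setup

def get_screen():
    """Stand-in for any observation source: an Atari frame or a camera frame."""
    return rng.random(SCREEN_SHAPE, dtype=np.float32)

def get_reward(action):
    """Stand-in for the objective the designer picks (game score, or anything else)."""
    return float(action == 2)   # pretend action 2 is the 'good' one in this toy world

# Linear value estimator: one weight vector per action over the flattened pixels.
weights = np.zeros((N_ACTIONS, SCREEN_SHAPE[0] * SCREEN_SHAPE[1]), dtype=np.float32)

EPSILON = 0.1          # exploration rate
LEARNING_RATE = 1e-4

for step in range(1000):
    pixels = get_screen().ravel()            # observe: just an array of numbers
    values = weights @ pixels                # estimate the value of each action
    if rng.random() < EPSILON:
        action = int(rng.integers(N_ACTIONS))    # occasionally explore
    else:
        action = int(np.argmax(values))          # otherwise act greedily
    reward = get_reward(action)              # feedback from the chosen objective
    # Crude update: nudge the chosen action's estimate towards the observed reward.
    weights[action] += LEARNING_RATE * (reward - values[action]) * pixels

print("preferred action after training:", int(np.argmax(weights @ get_screen().ravel())))
```

The agent ends up preferring whichever action the reward function favors, and it is entirely indifferent to what the pixels and the reward actually represent in the world.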

With only a nascent international legal framework to control the use of such autonomous machines, we may soon end up in a world where machines programmed to kill with mathematical precision outmatch ordinary human soldiers in both deadliness and numbers. What if such technology is brought to fruition by a country or group of people with no respect for human life and freedom? It remains to be seen.

(b) Finally, even if AI does not gain sentience and is programmed only with good intentions in mind, there still remains the possibility of error. Whether the error occurs in the control of a nuclear power plant, your shiny new Tesla, a missile launch, a stock exchange, or an automated dispute resolution system, its consequences can be far graver than any naughtiness intentionally programmed in.

Whereas the previous issues are more societal in scope and are for higher powers to adjudicate, this issue of errors is something AI developers bear the most immediate responsibility for, and they must take great care to anticipate and prevent such errors.


There may be other threats that AI poses to humans, but I feel the above four are the most critical ones and deserve careful thought from politicians, business leaders, and engineers alike.