Stephen Hawking: Dismissing artificial intelligence would be a mistake

Scientists say not enough research being done on effects of artificial intelligence.

By Danielle Haynes
Astrophysicist Stephen Hawking sits in a garden inspired by his book "A Brief History of Time" at the 2010 Chelsea Flower Show in London. The flower show is one of the hottest tickets in the London summer season. UPI/Hugo Philpott

LONDON, May 3 (UPI) -- Stephen Hawking, in an article inspired by the new Johnny Depp flick Transcendence, said it would be the "worst mistake in history" to dismiss the threat of artificial intelligence.

In an article he co-wrote with University of California, Berkeley computer science professor Stuart Russell, and Massachusetts Institute of Technology physics professors Max Tegmark and Frank Wilczek, Hawking cited several achievements in the field of artificial intelligence, including self-driving cars, Siri and the computer that won Jeopardy!

"Such achievements will probably pale against what the coming decades will bring," the article in Britain's Independent said.

"Success in creating AI would be the biggest event in human history," the article continued. "Unfortunately, it might also be the last, unless we learn how to avoid the risks."

The professors wrote that in the future there may be nothing to prevent machines with superhuman intelligence from self-improving, triggering a so-called "singularity."

"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all," the article said.

"Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks."
