What happened?
Some of the world’s most powerful people have just signed an open letter addressing the dangers of artificial intelligence.
The letter – published by the MIT-affiliated Future of Life Institute (FLI) – urges “important and timely research on how to make AI systems robust and beneficial”. It was signed by leading figures in technology and academia, including theoretical physicist Stephen Hawking and entrepreneur Elon Musk, founder of SpaceX and Tesla Motors.
“As capabilities cross the threshold from laboratory research to economically valuable technologies”, the letter noted, “even small improvements in performance are worth large sums of money, prompting greater investments in research.
“Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”
So what’s at stake?
Only the future of humanity, according to Stephen Hawking. Last month, Hawking told the BBC that “the development of full artificial intelligence could spell the end of the human race.”
Co-signatory Musk seems to be of the same view. Yesterday he retweeted this rather ominous message:
“First question asked of AI: ‘Is there a god?’ First AI answer: ‘There is now’.”
So, are we doomed?
The creation of a self-aware super-intelligence is a real worry for this letter’s signatories, but some thinkers are less spooked.
For Ray Kurzweil, director of engineering at Google, the benefits of AI outweigh its dangers:
“AI today is advancing the diagnosis of disease, finding cures, developing renewable clean energy, helping the disabled,” he wrote in an essay for Time. “We have the opportunity in the decades ahead to make major strides in addressing the grand challenges of humanity.”
“We have a moral imperative to realize this promise while controlling the peril. It won’t be the first time we’ve succeeded in doing this.”