A British computer scientist known as the “Godfather of AI” has quit Google and expressed regret over parts of his life’s work as he fears that the growth of artificial intelligence could lead to killer robots that are smarter than humans.
Geoffrey Hinton, 75, a vice-president and engineering fellow at Google and the recipient of a Turing Award, revealed his concern over the possible ultimate outcomes of his pioneering work.
He said: “It is hard to see how you can prevent the bad actors from using it for bad things.
“The idea that this stuff could actually get smarter than people – a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
It was the latest in a series of high-profile warnings about the dangers of AI, including its potential to take over jobs and the prospect of disinformation being spread by chatbots such as ChatGPT.
Last month, Sir Jeremy Fleming, the director of GCHQ, the UK’s listening post, privately warned the Cabinet about the damaging potential of chatbots.
Sundar Pichai, the chief executive of Google, recently admitted that the rapid pace of AI development keeps him up at night, and said it could be “very harmful if deployed wrongly”.
In an open letter in March, Elon Musk and other technology leaders called for a six-month pause in AI research.
Mr Musk warned that it was getting “out of control” and that “AI stresses me out”.
A total of 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, an academic society, have also warned of the dangers.
Mr Hinton studied experimental psychology at King’s College, Cambridge, and was awarded a PhD in AI from the University of Edinburgh in 1978. He joined Google in March 2013.
In 2018, he and two colleagues received the Turing Award, known as the “Nobel Prize of computing”, for their research on neural networks.
Recent controversy around ChatGPT, developed by the US company OpenAI, has encouraged other technology giants, including Microsoft, Google and Baidu, to develop their own versions.
Mr Hinton said he feared people would “not be able to know what is true anymore” as they are deluged with fake videos and photographs on the internet.
He said chatbots, which were intended to help with “drudge work”, could take away whole jobs.
The scientist has long been against the use of AI for military purposes, including the creation of “robot soldiers”.
He told The New York Times: “Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.
“Maybe what is going on in these systems is actually a lot better than what is going on in the brain.”
He added: “I don’t think they should scale this up more until they have understood whether they can control it.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”
In 2017, Mr Hinton told The Telegraph he had signed a petition warning about the danger of lethal autonomous weapons – so-called “killer robots”.
He also wrote to the Ministry of Defence to express his concerns.
He told The Telegraph: “The reply said there is no need to do anything about this now because the technology is a long way away, and anyway, it might be quite useful. But they certainly have the capacity to do this.”
In a statement Jeff Dean, Google’s chief scientist, said: “We remain committed to a responsible approach to AI.
“We’re continually learning to understand emerging risks while also innovating boldly.”