The ‘godfather of AI’ quits Google and warns about “existential risk” posed by technology


Geoffrey Hinton, the so-called ‘godfather of AI’ who won the ‘Nobel Prize of Computing’ for his trailblazing work on neural networks, has quit Google and expressed regrets and fears about his life’s work.

The lifelong academic joined Google after the company acquired a business he had started alongside two of his students, one of whom went on to become chief scientist at OpenAI. Hinton’s company had developed a neural network that taught itself to identify common objects such as dogs, cats, and flowers after analysing thousands of photos. It is this work that ultimately led to the creation of ChatGPT and Google Bard.

Hinton fears AI is spreading misinformation

In an interview with the New York Times, he said, “I console myself with the normal excuse: If I hadn’t done it, somebody else would have… It is hard to see how you can prevent the bad actors from using it for bad things.”

Hinton said that he was happy with Google’s stewardship of the technology until Microsoft launched the new OpenAI-infused Bing, challenging Google’s core business. He said that this sparked a “code red” response from Google. He added that such fierce competition within the AI space may result in a world with so much fake imagery and text that nobody will be able to tell “what is true anymore.”

Hinton, who has worked for Google for more than a decade, notified the company of his resignation last month and spoke with CEO Sundar Pichai directly. He has since clarified that he is not criticising Google and that the tech giant has “acted very responsibly.”

For its part, Google has confirmed that it remains committed to a responsible approach to AI and will continue to learn about emerging risks while also innovating boldly.

Future fears over chatbots, existential risks and the jobs market

In a later interview with the BBC about AI more broadly, Hinton said that AI chatbots are “quite scary”. He warned that they could become more intelligent than humans and could be exploited by “bad actors”.

Although Hinton is primarily concerned about AI spreading misinformation, he also worries that the technology will eliminate jobs, and possibly threaten humanity itself as AI begins to write and run its own code.

He informed the BBC that he was concerned about the “existential risk of what happens when these things get more intelligent than us.”

He added: “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have… So it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”

Why does this matter?

It’s hugely significant that the man often touted as the ‘godfather of AI’ is concerned about misinformation and the possibility that the technology may upend the job market. It’s also worrying that someone of this standing is concerned about the “existential risk” posed by the creation of a true digital intelligence, which is where Google and Microsoft’s projects appear to be heading.

We should add that Hinton’s thoughts are also echoed by others in the space. For example, Elon Musk said he had fallen out with the Google co-founder Larry Page because Page was “not taking AI safety seriously enough”. Similarly, Valérie Pisano, the chief executive of Mila – the Quebec Artificial Intelligence Institute – said the slapdash approach to safety in AI systems would not be tolerated in any other field.

In the short term, Hinton’s concern is that people will not be able to discern what is and isn’t true anymore. This is something we’ve already seen in action, as an image of Pope Francis in a Balenciaga puffer coat went viral in March, with many users unaware it had been generated by AI.

However, despite concerns from Hinton and others, both Google and Microsoft are likely to continue with their AI projects as planned. Whether governments step in or greater regulation/oversight is introduced remains to be seen. 

Author: spike.digital