A man credited with being one of the “Godfathers of AI” has resigned from his position at Google and given an interview where he expresses regret over his life’s work.
Geoffrey Hinton, 75, has spent his career working on neural networks, the mathematical systems that underpin generative artificial intelligence (AI) models such as text-to-image generators like Midjourney and chatbots like ChatGPT.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton tells The New York Times in an interview published today.
Dr. Hinton gave a media interview only after he resigned from his position at Google so that he “could talk about the dangers of AI without considering how this impacts Google.”
Dr. Hinton began exploring AI systems back in 1972 when, as a student, he embraced the idea of neural networks at a time few other researchers believed the idea would work.
Hinton moved from his native Britain to the United States and later settled in Canada, taking a post at the University of Toronto where, along with two students, he built a neural network that could analyze thousands of photos and identify common objects, such as cats, cars, and flowers.
Google acquired the company started by Dr. Hinton and his two students, one of whom, Ilya Sutskever, became chief scientist at OpenAI — the company that makes ChatGPT and DALL-E.
As the technology has improved and companies have built systems trained on much larger datasets, Dr. Hinton has apparently grown concerned about its capabilities.
“Look at how it was five years ago and how it is now,” he says of AI. “Take the difference and propagate it forwards. That’s scary.”
Dr. Hinton tells The Times that he sees great danger in the synthetic photos, videos, and text that are now flooding the internet, adding that the average person will "not be able to know what is true anymore."
“The idea that this stuff could actually get smarter than people — a few people believed that,” he says. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Dr. Hinton even fears the day that "killer robots" become a reality, because AI can learn unexpected behavior from the huge swathes of data it analyzes. In particular, he fears autonomous weapons and AI systems that run their own computer code.
“I don’t think they should scale this up more until they have understood whether they can control it,” he adds.
Source: PetaPixel