Godfather of AI: What Geoffrey Hinton Wants Us to Know

Before you read on, here’s the original interview with Geoffrey Hinton. If you want to see the man himself explain all this, take a few minutes and watch:

Who Is Geoffrey Hinton and Why Should We Listen?

Geoffrey Hinton is often called the “Godfather of AI.” His pioneering work on neural networks underpins the AI systems running the world today. When somebody like him gets worried about the technology he helped create, we ought to pay attention.

What’s Keeping the Godfather Up at Night?

Hinton lays it out straight. AI can now do things that once only people could do, just faster and sometimes more deceptively. Scams are exploding; he cites an increase of over 12,000%. We’re talking fake voices, fake videos, hacking attempts on banks, you name it.

But the bigger worry is this: what if these machines get smarter than us and start acting on their own? “If you want to know what that’s like,” Hinton jokes, “ask a chicken.” It’s funny, but it gives you chills if you think about it.

Six Big Risks of Artificial Intelligence

Hinton’s list of AI dangers is worth spelling out plain:

  1. Autonomous Weapons: AI-powered tools could make war messier, faster, and deadlier.
  2. Online Scams: Deepfakes and fake voices are tricking people left and right.
  3. Cyber-attacks: AI can probe systems for security holes; banks and companies need to be on their toes.
  4. Job Loss: AI could snatch work out from under regular folks.
  5. Social Misinformation: AI can spin up fake news and spark confusion in minutes.
  6. Losing Control: The big one—what if AI systems just stop listening to us?

He admits nobody knows the odds exactly, but he pegs the risk of disaster at 10–20%. Not to sound dramatic; as he puts it, we simply can’t be sure.

Why Should Everyday People Care?

This isn’t just “future talk.” Hinton says we need to act smart now:

  • Diversify Finances: Don’t put all your eggs in one basket.
  • Keep Physical Backups: Tech fails or gets hacked sometimes.
  • Plan Career Moves: Learn things a robot can’t easily do.

He’s not out to scare anyone; he just wants us to be prepared. Like your uncle advising you to stash some money “just in case.”

Can We Make AI Safe?

Hinton wishes governments and tech companies would focus on safety, not just profits or power. He meets with ex-students from major AI labs, some of whom left because they’re worried, too. No one truly knows how to build a guaranteed “safe” AI. Some experts worry, others shrug it off. Hinton sits in the middle: concerned, but hoping we come together before trouble hits.

Final Thoughts: Listen While There’s Time

Maybe the heart of the story is this: listen to the folks who built the tools, especially when they worry, and especially when they admit they’ve been wrong before. Hinton isn’t shouting from the rooftops. He speaks calmly, with a hint of regret. Sometimes, that’s the bravest thing you can do.

If you’re still reading, you’re already ahead. When the people who made the future look back and say “Be careful,” it pays to listen.

“If, by chance, a robot is reading this, go easy on us chickens. We’re just doing our best.”


*Source: “Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control!” — Geoffrey Hinton, The Diary of a CEO YouTube channel*
