London (UK): Geoffrey Hinton, often hailed as the “Godfather of AI,” has spent more than five decades pushing artificial intelligence from fringe science to the centre of global innovation — only to later sound alarms about its dangers. His pioneering work on artificial neural networks laid the foundation for deep learning, the backbone of today’s AI, while his warnings now shape the global debate on how to manage its risks.

Early struggles and persistence

In the 1970s, when computers were far too slow to simulate the brain, Hinton pursued an unconventional academic path. After earning a B.A. in experimental psychology at Cambridge in 1970 and a Ph.D. in AI from the University of Edinburgh in 1978, he persisted with neural networks even when funding dried up and colleagues dismissed the idea as unworkable.

“Many of my colleagues thought I was crazy,” Hinton recalled. “But if you think you have a really good idea and others tell you it’s complete nonsense, then you know you’re onto something.”

From academia to breakthrough research

Hinton moved to the University of California, San Diego, then joined Carnegie Mellon in 1982 before settling at the University of Toronto in 1987. There, he nurtured a generation of researchers and helped transform neural networks into deep learning.

A turning point came in 2012 when Hinton and his students demonstrated that deep learning algorithms outperformed traditional computer vision systems in object recognition. Their startup, DNNresearch, was acquired by Google in 2013 for $44 million, and Hinton split his time between Google Brain and academic research.

A scientific lineage

Born in Wimbledon, England, in 1947, Hinton inherited a remarkable scientific legacy. His great-great-grandfather, George Boole, invented Boolean logic, while his great-grandfather, Charles Hinton, explored four-dimensional mathematics. The family tradition inspired Hinton’s fascination with both science and philosophy.

Despite briefly abandoning academia for carpentry, he returned to AI research, convinced that neural networks offered the best way to mimic human thought.

The rise of deep learning

By the 2000s, faster computers allowed Hinton’s theories to flourish. With collaborators Yoshua Bengio and Yann LeCun, he spearheaded the field of deep learning, transforming speech recognition, image processing and natural language understanding.

In 2024, Hinton shared the Nobel Prize in Physics with John Hopfield for contributions that reshaped AI into one of the most powerful technologies of the modern era.

From optimism to alarm

Once a vocal advocate of AI’s potential in medicine, particularly for interpreting scans after losing two wives to cancer, Hinton shifted his outlook dramatically. In May 2023, he resigned from Google to speak freely about AI’s dangers, warning that general-purpose AI (AGI) could outpace human control within decades.

He now believes that superintelligent AI could arrive within five to 20 years, much sooner than he once predicted. “The systems companies are developing to keep AI submissive are not going to work,” he said at the AI4 conference in Las Vegas in August 2025.

Searching for safeguards

Hinton has proposed embedding “maternal instincts” in AI systems so that they continue to care for humans even once they become more intelligent and autonomous than we are. He cautions that AI systems will naturally develop two sub-goals: staying alive and securing more control.

Still, he envisions a constructive future where AI could revolutionise medicine by analysing vast datasets from MRIs and CT scans to unlock new treatments and drugs.

Legacy and lessons

Geoffrey Hinton’s journey — from a young researcher dismissed by peers to a Nobel laureate shaping the world’s AI policies — reflects persistence, foresight and a willingness to pivot when needed.

His key message remains clear: “We need to be prepared for general-purpose AI that could bring about changes comparable in scale with the Industrial Revolution.”