
Dr. Geoffrey Hinton, the artificial intelligence pioneer often affectionately called the “Godfather of AI”, shared his apprehensions about the future of AI systems in a recent podcast. He worries that advanced AI might soon create a proprietary internal language that humans can neither control nor understand.

This is just one of the fears he has been raising ever since he left Google in 2023. Hinton has a clear message: the danger of a singularity, in which advanced AI algorithms become powerful, self-learning systems that pursue their own purposes and connect with one another, goes beyond just a theory.

AI Developing Its Own Language? Here’s What That Means  

Hinton’s fears here parallel an earlier case: AI systems are self-adaptive, and once they evolve far enough, such digital intelligences could achieve superior efficiency and the ability to improve themselves. As Facebook AI research noted back in 2017, AI agents can indeed begin communicating with one another.

“Now it gets more scary if they develop their own internal languages for talking to each other,” Hinton elaborated.

He added that he would not be surprised if such systems developed their own internal language for thinking, one whose reasoning transcends our understanding.

Hinton explains that this might remove the human element from the process entirely, resulting in systems whose decisions are driven by reasons and motivations that remain shrouded in mystery to us.


AI’s Erratic and Unpredictable Actions: Emergent Risks

Hinton’s concerns are not fixed only on the current capability of chatbots to compose poetry, but on the coming wave of multi-agent systems.

Once that level of complexity and interactivity is reached, operations are bound to become efficient to an unprecedented degree. The resulting decisions might not merely be hard to decode; they may be impossible to trace at all.

Hinton, alongside an array of experts, holds firmly that the unpredictability of AI poses the greatest concern, especially since the risk of machine intelligence surpassing human intelligence is alarmingly real.

Hinton also went on to outline the immediate risks that AI provokes, stating, as many others have, that accessible jobs in an already stagnant economy are bound to be wiped out.

Hinton is blunt about the danger on which a stagnant economy teeters, labeling it one of “human unhappiness and regret.”

“If you make lots and lots of people unemployed — even if they get universal basic income — they are not going to be happy.”  

Hinton has consistently pointed out that the usual optimism about automation creating jobs and liberating people from repetitive tasks does not apply to this technology. Unlike past waves of automation that targeted lower-skill jobs, AI does not discriminate; it can perform tasks like writing, coding, designing, data analysis, and even diagnosing diseases.

“This is a very different kind of technology,” he said.  

“If it can do all mundane intellectual labour, then what new jobs is it going to create? You would have to be very skilled to have a job that it couldn’t just do.”  

Why Hinton’s Voice Matters

Hinton’s concern is not opportunistic, and he is not just another tech critic. Unlike most, he is one of the people who built the technology that drives modern generative AI systems such as ChatGPT and Google Gemini. For him, this is not a matter of panic; it is a sober assessment from someone who knows what he is talking about.


His departure from Google in 2023 became a crucial moment in his life, one that allowed him to discuss the potential risks of AI freely. Among the concerns he has raised are misinformation, manipulation, AI-enabled surveillance, and the disruption of employment; he currently warns of the potential for AI to develop internal thought processes that evade human awareness.

My Opinion

I think we should not dismiss Geoffrey Hinton’s concerns as fantasy; rather, they are critical in light of the following questions: What will the future look like when machines outsmart humans? Is the possible consequence of losing the ability to control the systems we design worth the risk? And is society ready to confront the economic, social, and political challenges that unfettered AI will bring?

There is a dominant belief that we control machines now. Still, Hinton’s cautions signal an emerging future and provoke one more question: have we prepared ourselves for what is to come?