Centaur AI: Imagine a computer that doesn’t just follow your commands but actually starts to understand you — not in a futuristic, mind-reading kind of way but by learning how humans really think and behave. That’s exactly what Centaur, a new artificial intelligence model, is beginning to do. And its accuracy is already turning heads in the scientific community.

Centaur Isn’t Guessing — It’s Learning From Us

Centaur wasn’t designed to chat or generate text like other AI models. Instead, it was trained to predict human behaviour — and not based on theory or guesswork, but on real decisions made by more than 60,000 people across over 10 million scenarios. From memory tests to moral dilemmas, researchers fed the AI data on how people actually behave in a wide range of situations.

The team started with Meta’s Llama 3.1 language model and fine-tuned it using a specialised process that focused only on the parts relevant to human decision-making. The training took less than a week, but the result? A model that consistently outperformed the traditional psychology models experts have relied on for years.

Outperforming Psychology's Best Models

When tested, Centaur didn’t just get lucky. It consistently made smarter predictions than longstanding psychology models — even when the experiments changed the rules or introduced brand-new situations. And it wasn’t just repeating patterns it had seen before; it adapted, sometimes even responding like a real human would, complete with occasional errors that felt surprisingly familiar.

Thinking Like Us — Without Being Told To

Here’s where things get even more fascinating: as Centaur learnt more about human decisions, its internal workings began to resemble patterns seen in actual brain scans. No one programmed it to mimic the brain — but somehow, its "thought process" began to align with how our minds work. In fact, the model even helped researchers uncover a previously unnoticed decision-making pattern in humans.

Where It Could Go Next

The potential applications are vast. Centaur could shape educational tools that adapt to the way you learn or medical software that spots early signs of mental health struggles. But the technology also raises tough questions — if an AI can predict your choices before you make them, what happens to privacy? And who decides how far it should be allowed to go?

The researchers aren’t ignoring those concerns. They’ve opened up their methods to the broader scientific community and are now working to make Centaur more inclusive — expanding its training to cover more cultures, viewpoints, and decision types.

The study behind Centaur was published on July 2, 2025, in the journal Nature and led by a team at the Institute for Human-Centred AI at Helmholtz Munich.

This isn’t about AI replacing people. It’s about building systems that understand us better — and maybe help us understand ourselves a little more along the way.