
New MIT study shows what you already knew about AI: it doesn't actually understand anything

Reading Time: Approx. 8 mins

The latest generative AI models are capable of astonishing, almost magical, human-like output. But do they actually understand anything? That'll be a big, fat no, according to the latest study from MIT (via Techspot).

More specifically, the key question is whether the LLMs, or large language models, at the core of the most powerful chatbots are capable of constructing accurate internal models of the world. And the answer the MIT researchers largely came up with is no, they can't.

“I was surprised by how quickly the performance deteriorated as soon as we added a detour. If we close just 1 percent of the possible streets, accuracy immediately plummets from nearly 100 percent to just 67 percent,” says the research paper's lead author, Keyon Vafa.

The core lesson here is that the remarkable accuracy of LLMs in certain contexts can be misleading. "Often, we see these models do impressive things and think they must have understood something about the world. I hope we can convince people that this is a question to think very carefully about, and we don’t have to rely on our own intuitions to answer it," says senior paper author Ashesh Rambachan.

What this new MIT research showed is that LLMs can do remarkably well without actually understanding any rules. At the same time, that accuracy can break down rapidly in the face of real-world variables.

Of course, this won't entirely come as news to anyone familiar with using chatbots. We've all experienced how quickly a cogent interaction with a chatbot can degrade into hallucination or just borderline gibberish following a certain kind of interrogative prodding.

But this MIT study is useful for crystallizing that anecdotal experience into a more formal explanation. We all knew that chatbots just predict words. But the incredible accuracy of some of the responses can sometimes begin to convince you that something magical might just be happening.

This latest study is a reminder that it's almost certainly not. Well, not unless incredibly accurate but ultimately mindless word prediction is your idea of magic.
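
If you've never seen what "just predicting words" actually looks like under the hood, here's a toy sketch. This is a hypothetical bigram model, nothing like the transformer architecture powering real chatbots, and the corpus and names are purely illustrative, but the basic job is the same: pick the next word from counts, with no model of the world anywhere in sight.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; real LLMs train on vastly more text,
# but the core task -- predict the next word from context -- is the same.
corpus = "the cat sat on the mat . the cat saw the dog .".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # chosen purely by frequency, not understanding
```

The output looks locally sensible, yet the "model" has no idea what a cat or a street is. Scale that principle up by many orders of magnitude and you get something closer to the systems the MIT team probed, which is exactly why their accuracy can collapse the moment the world stops matching the training data.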
