LLMs: language alone is a poor sole means of simulating intelligence. No living creature learns that way, solely or even primarily. So this makes some sense.
One of the godfathers of AI just dropped a bombshell:
“I’m not so interested in LLMs anymore.”
Yann LeCun, the man who helped create the AI revolution, says we’re chasing the wrong path.
Here’s why he thinks we’ll NEVER reach AGI through LLMs alone:
↳ Current LLMs are trained on text that would take a human roughly 400,000 years to read.
↳ A 4-year-old absorbs a comparable amount of data through just 16,000 hours of vision.
↳ The physical world is exponentially more complex than language.
His solution? World Models.
↳ Not just guessing the next word.
↳ This is AI that thinks like us: physics, planning, and common sense.
While everyone else is busy feeding LLMs ever more text,
↳ Yann LeCun is doing something entirely different: JEPA.
JEPA stands for Joint Embedding Predictive Architecture.
It learns by watching the world, not reading the internet.
The wild part? V-JEPA can already spot physically impossible events from just 16 frames of video.
↳ No prompt engineering needed.
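The core JEPA idea can be sketched in a few lines. This is a hypothetical toy illustration, not Meta's actual implementation: instead of predicting raw pixels or next tokens, a predictor maps the embedding of a visible "context" view to the embedding of a hidden "target" view, with the target encoder trailing the context encoder via an exponential moving average. All names and sizes below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_EMB = 32, 8

# Fixed "world" relation: the target view is a transform of the context view.
A = rng.normal(scale=0.2, size=(D_IN, D_IN))
W_ctx = rng.normal(scale=0.1, size=(D_IN, D_EMB))   # context encoder (toy: linear)
W_tgt = W_ctx.copy()                                 # target encoder starts as a copy
W_pred = np.eye(D_EMB)                               # predictor acting in latent space

def jepa_step(x, lr=0.05, ema=0.99):
    """One toy JEPA update: match predicted vs. target *embeddings*."""
    global W_ctx, W_tgt, W_pred
    x_ctx, x_tgt = x, x @ A            # two views of the same underlying state
    z_ctx = x_ctx @ W_ctx              # embed the visible context
    z_tgt = x_tgt @ W_tgt              # embed the target (stop-gradient in real impls)
    z_hat = z_ctx @ W_pred             # predict the target embedding, not raw pixels
    err = z_hat - z_tgt
    loss = float(np.mean(err ** 2))    # loss lives in latent space
    n = len(x)
    # Hand-derived gradients for the linear toy case.
    grad_pred = z_ctx.T @ err / n
    grad_ctx = x_ctx.T @ (err @ W_pred.T) / n
    W_pred -= lr * grad_pred
    W_ctx -= lr * grad_ctx
    # Target encoder trails the context encoder via EMA (no gradient).
    W_tgt = ema * W_tgt + (1 - ema) * W_ctx
    return loss

x = rng.normal(size=(64, D_IN))
losses = [jepa_step(x) for _ in range(200)]
```

The design point is that the prediction error is computed between embeddings, so the model is never asked to reconstruct every pixel of the world, only the aspects its representation deems predictable.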
This isn’t about smarter chatbots. This is about AI that understands reality.
For business leaders: if you’re betting everything on today’s LLMs, you might be building the wrong foundations.
The future?
It will be hybrid, where language meets vision meets actual understanding.
Are you preparing for this shift?