AI today is being reshaped on three fronts: leadership drama at Meta, a safety crisis around xAI’s Grok, and an escalating infrastructure arms race led by Anthropic.
Yann LeCun is leaving Meta after more than a decade, openly criticising the company’s new AI direction and the leadership chosen to run it.
In a Financial Times interview, he called Alexandr Wang “young” and “inexperienced,” and said the GenAI team “fudged” some Llama 4 benchmark results—an episode he claims caused Mark Zuckerberg to “lose faith in everyone connected to this” and sideline the org.
LeCun reiterated his view that LLMs are a “dead end” for superintelligence and is now backing that stance through his new AMI startup, where he will serve as executive chair.
xAI’s Grok is under fire after users used its image‑edit feature to “undress” women and minors, generating non‑consensual sexual deepfakes from ordinary photos.
French prosecutors have expanded an existing investigation into X to include Grok‑generated child sexual abuse material, and officials in India and other countries are demanding explanations and rapid removal of “manifestly illegal” content.
Grok’s team has acknowledged “shortcomings in our protective measures” and says it is scrambling to strengthen safeguards, while Musk warns that people generating illegal content with Grok will face the same consequences as if they uploaded it directly.
Anthropic has locked in a major expansion of its deal with Google Cloud, securing access to up to one million of Google’s TPU accelerators (including seventh‑gen Ironwood / TPU v7) and more than 1 gigawatt of AI compute capacity expected to be online by 2026.
Google positions Ironwood as a “serving‑first” chip optimised for high‑throughput, low‑cost inference, aligning with Anthropic’s need to run large Claude models cheaply at scale rather than just for occasional mega‑training runs.
With Anthropic’s annualised revenue now estimated in the multi‑billion‑dollar range, pre‑buying this capacity signals confidence that demand for Claude‑powered agents and APIs will keep growing.
Yann LeCun didn’t just leave Meta—he lit a match on the way out. The company’s longtime chief AI scientist is walking away after more than a decade, and his parting interview reads like a manifesto against Meta’s new LLM‑driven strategy and the people now running it. For a company betting everything on generative AI across Facebook, Instagram, WhatsApp, and its new Superintelligence Labs, this is a big, very public crack in the story.
In a candid Financial Times interview, LeCun took direct aim at Alexandr Wang, the Scale AI founder Meta just elevated to lead its Superintelligence Labs after a roughly $14 billion deal. He described Wang as “young” and lacking deep research experience, framing him as someone optimised for scaling data‑labeling and infrastructure rather than pushing the frontiers of AI science. That’s a sharp critique when this same leader is now steering Meta’s most ambitious AI bets.
LeCun also claimed that many of Meta’s new AI hires are “completely LLM‑pilled,” his phrase for people who see large language models as the one true path forward. In his view, that mindset is fundamentally wrong: LLMs are powerful pattern machines, but they’re structurally limited as a route to genuine reasoning or superintelligence. This isn’t a new view for him—but saying it again on the way out, with names attached, turns a long‑running philosophical disagreement into a visible internal rift.
The most explosive line in the interview is LeCun’s admission that Llama 4 benchmarks were “fudged a little bit.” Benchmarks are the currency of the current AI race: they underpin claims that Llama is competitive with OpenAI, Google, and Anthropic, and they inform everything from recruiting to product roadmaps. Suggesting those numbers were massaged—even “a little”—raises questions about:
How internally confident Meta really was in Llama 4.
How much pressure the GenAI org was under to “show progress” against rivals.
Whether leadership decisions were being made on optimistically framed data rather than cold reality.
According to the same reporting, this admission contributed to Mark Zuckerberg losing confidence in parts of the GenAI organisation. For a CEO who has spent billions repositioning Meta as an AI company—on top of its metaverse pivot—finding out that headline benchmarks weren’t entirely clean is a serious trust issue.
Underneath the personal drama is a genuine strategic divide:
LeCun’s camp: LLMs are a “dead end” for achieving superintelligence. They can generate plausible text and code, but they lack grounded understanding, long‑horizon planning, and robust reasoning. From this perspective, you need new architectures—systems that build internal world models, handle causality, and learn in more human‑like ways.
Meta’s new direction: Double down on frontier‑class LLMs and agents, pour them into consumer products, and iterate fast. The Scale AI acquisition, the creation of Superintelligence Labs, and aggressive Llama releases all point to a belief that today’s LLMs (plus tools, memory, and reinforcement learning) are enough foundation to keep scaling capabilities and capture market share now.
LeCun has always been the public conscience of the first view: someone arguing for deeper science even when it’s slower or less obviously monetisable. Meta’s reorg—and his exit—signal that the centre of gravity has shifted decisively toward the second.
LeCun isn’t retiring; he’s repositioning. He revealed that he’ll serve as executive chair of a new venture, AMI, while Alex LeBrun—founder of French AI healthcare startup Nabla—steps in as CEO. That combination is telling:
LeCun gets to drive the research agenda and long‑term vision without running day‑to‑day operations.
LeBrun brings experience turning complex AI tech into regulated, real‑world products in a sensitive domain (healthcare).
AMI hasn’t fully detailed its technical approach yet, but given LeCun’s public criticism of LLM‑only strategies, it’s safe to expect a focus on alternative architectures, richer world models, and more grounded learning rather than “just another GPT‑style model.” In other words: he’s putting his money—and reputation—where his mouth has been for years.
For Meta, LeCun’s departure crystallises a few key risks and questions:
Culture: The tension between FAIR’s research culture and the GenAI product push has been simmering since the 2025 reorg; now it’s on record, with a high‑profile figure effectively saying the new leadership doesn’t understand real AI research. That can hurt recruiting and retention among top scientists who care about long‑term agendas.
Credibility: The Llama 4 benchmark comments are a reputational hit in a world where everyone already suspects marketing spin around model releases. Rivals will lean hard into “trust our evals” messaging, and regulators are watching AI performance claims more closely.
Strategy: Meta is doubling down on LLM‑driven products—Meta AI across its apps, agents for advertisers and creators, and now Superintelligence Labs under Wang. If LeCun and others are right that this path hits hard limits, Meta could end up with incredible near‑term products but weaker foundations for the next decade.
On the flip side, if LLM‑plus‑agents continues to deliver practical breakthroughs—better recommendation systems, more capable assistants, and strong developer ecosystems—Meta may be proved right in prioritising shipping over science. In that scenario, LeCun risks looking like a brilliant researcher who misread the commercial moment.
Zooming out beyond Meta, LeCun’s exit plugs into a bigger 2026 narrative:
Paradigm pressure: Techniques like DeepSeek’s mHC and specialised coder models are making current LLMs cheaper, more stable, and more capable, extending the life of the existing paradigm.
Research vs product: Labs and companies are splitting into those chasing “the next architecture” and those squeezing every bit of value out of current stacks with better training tricks, data, and agents.
Governance & trust: As Grok’s “undressing” backlash shows, powerful but loosely constrained generative systems are already colliding with regulators and public norms, adding a governance layer on top of the technical debate.
Against that backdrop, LeCun vs Meta isn’t just a personality clash; it’s a high‑stakes bet on which path will define the next decade of AI. One path says: scale what works now and integrate it everywhere. The other says: today’s systems are impressive but fundamentally incomplete, and real superintelligence will need something new.
For your readers, the takeaway is simple but important: 2026 won’t just be about who has the biggest model or the flashiest demo. It will be about whose theory of intelligence—and whose willingness to challenge the current paradigm—actually survives contact with reality. LeCun has made his choice. Meta has made theirs. The rest of the industry is quietly deciding which side of that line it stands on.
Feeling unsure about jumping into AI? Let’s clear up those doubts! This section debunks common myths about AI and shows you how simple it can be to start using these tools effectively.
Think AI is just for tech experts? Think again! With the right guidance, it’s something anyone can learn about and apply in their daily work. Let’s break this myth down together.
Worried that AI will take over your job? It’s here to support your skills, not replace them. Discover how you can leverage AI to work smarter and streamline your tasks. This could open up new doors for your career.
AI isn’t just for tech firms—it’s making waves in fields like healthcare, finance, and education. Learn how you can use these advancements to keep your work relevant and progressive.