
The Anthropic Origin Story: Inside the OpenAI Split That Changed AI

By PickThatAI Team · April 23, 2026 · 9 min read
Tags: anthropic, openai, history, ai-safety, drama, 2026

In late 2020, Dario Amodei walked away from his role as VP of Research at OpenAI. His sister Daniela, who served as VP of Operations, left with him. Seven other senior researchers followed. Within months, they founded Anthropic — a company built on the explicit premise that OpenAI was moving too fast in the wrong direction.

This is the story of how that departure created the most important rivalry in artificial intelligence. Not because of corporate competition, but because of a fundamental disagreement about what AI should be. For how this rivalry plays out today, see our Claude vs ChatGPT comparison.

Who This Is For

  • Anyone curious about the human drama behind the AI industry
  • People wondering why Anthropic and OpenAI seem to dislike each other
  • Those interested in how philosophy shapes technology companies

The Timeline

2015 — OpenAI founded. Sam Altman, Elon Musk, and others create OpenAI as a nonprofit AI research lab. The mission: ensure artificial general intelligence benefits all of humanity.

2016 — Dario Amodei joins OpenAI. A Princeton-trained physicist with stints at Baidu and Google Brain, Amodei becomes one of the leading researchers on large language models. He eventually rises to VP of Research.

2018 — Elon Musk departs. Musk leaves OpenAI's board after reported disagreements about direction and control. The organization begins shifting from pure research toward commercial viability.

2019 — The Microsoft deal. OpenAI announces a $1 billion partnership with Microsoft. The deal transforms OpenAI from a research nonprofit into a commercial entity with a profit-seeking arm. This is the inflection point.

Late 2020 — The departure. Dario and Daniela Amodei leave OpenAI. Publicly, the reasons are vague — "different visions." Privately, the split centers on whether OpenAI's accelerating commercialization is compatible with responsible AI development. The Microsoft deal is the catalyst.

Early 2021 — Anthropic founded. The Amodeis, along with seven other former OpenAI researchers, incorporate Anthropic. The stated mission: AI safety research first, commercial products second.

2021-2023 — Quiet building. Anthropic publishes research on Constitutional AI, develops Claude, and raises funding from Google and Spark Capital. OpenAI, meanwhile, launches ChatGPT and becomes the fastest-growing consumer product in history.

November 2023 — OpenAI's board crisis. Sam Altman is briefly fired by OpenAI's board, then reinstated days later after employee revolt and Microsoft pressure. The drama reinforces Anthropic's argument about governance concerns at OpenAI.

2024-2025 — The gap closes. Anthropic releases Claude 3.5 Sonnet and Claude 4, establishing clear leads in coding and writing benchmarks. Revenue grows roughly 10x year over year. The company becomes OpenAI's most serious competitor.

February 2026 — The handshake that never happened. At an AI summit in India, Altman and Amodei are placed next to each other for a group photo. Both visibly refuse to hold hands. The image circulates globally. Fortune calls it "an uncomfortable photo op [that] exposed how a one-time partnership has transformed into one of Silicon Valley's most consequential AI feuds."

March 2026 — Pentagon contract controversy. The New York Times reports on a Defense Department AI contract dispute involving both companies, describing the competition as "deeply personal."

What the Split Was Actually About

The popular narrative is "safety vs speed." That is partially true but incomplete. The real disagreement had three dimensions:

1. Governance structure. OpenAI's transition from nonprofit to capped-profit entity raised questions about who ultimately controls the company's AI research. The Amodeis wanted a governance model where safety researchers had structural authority, not advisory influence.

2. Commercial incentives. The Microsoft deal meant OpenAI now had a $1 billion incentive to ship products fast. The Amodeis believed this incentive structure would inevitably pressure researchers to cut safety corners — not because anyone was malicious, but because incentive structures shape behavior.

3. Research philosophy. Anthropic's founding research focused on Constitutional AI — a framework where models are trained to follow explicit principles rather than just learning from human feedback. This was a fundamentally different approach to alignment than what OpenAI was pursuing.
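In broad strokes, Constitutional AI works by having the model critique and revise its own outputs against a written list of principles. The toy sketch below illustrates only the shape of that critique-and-revise loop; the `draft`, `critique`, and `revise` functions are hypothetical stand-ins for real model calls, and the keyword check is a placeholder for a model's own judgment.

```python
# Toy sketch of a Constitutional AI-style critique-and-revise loop.
# `draft`, `critique`, and `revise` are stand-ins for actual model calls.

CONSTITUTION = [
    "Do not provide instructions for causing harm.",
    "Be honest about uncertainty.",
]

def draft(prompt: str) -> str:
    # Placeholder for an initial model completion.
    return f"Draft answer to: {prompt}"

def critique(response: str, principle: str) -> bool:
    # Placeholder: a real system would ask the model itself whether
    # `response` violates `principle`; here we just flag a keyword.
    return "harm" in response.lower()

def revise(response: str, principle: str) -> str:
    # Placeholder for a model-written revision of the response.
    return response + f" [revised to satisfy: {principle}]"

def constitutional_pass(prompt: str) -> str:
    # Draft once, then check the draft against each principle,
    # revising whenever a critique flags a violation.
    response = draft(prompt)
    for principle in CONSTITUTION:
        if critique(response, principle):
            response = revise(response, principle)
    return response
```

The key contrast with pure human-feedback training is that the principles are explicit and inspectable: anyone can read the constitution the model is revised against, rather than inferring values from thousands of individual human ratings.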

The Key People

Dario Amodei — CEO, Anthropic. Princeton physics PhD. Formerly VP of Research at OpenAI. Known for being measured, analytical, and deeply concerned about AI's societal impact. In 2025, he publicly warned that AI could displace half of all entry-level white-collar jobs.

Daniela Amodei — President and Co-founder, Anthropic. Formerly VP of Operations at OpenAI. The operational counterpart to Dario's research focus. She has spoken about building Anthropic as a "high-density talent" organization — hiring fewer but stronger people rather than scaling headcount.

Sam Altman — CEO, OpenAI. The charismatic counterpoint. Where Dario is measured, Altman is bold. He has publicly stated ambitions for AGI within years, not decades. The philosophical contrast between the two CEOs mirrors the contrast between their companies.

The Cultural Differences

Anthropic and OpenAI operate differently, and it shows:

Hiring. Anthropic has roughly 1,100 employees. OpenAI has several times that. Anthropic intentionally stays small, using its own AI tools to amplify productivity. OpenAI scales more traditionally.

Research to product. OpenAI ships fast and iterates publicly. ChatGPT launched as a research preview and became a product by accident. Anthropic takes longer to release but ships more polished products — Claude's safety features are designed before launch, not patched in after.

Revenue model. OpenAI's revenue is consumer-first (ChatGPT subscriptions). Anthropic's revenue is developer-first (API usage). This shapes everything from product roadmaps to hiring priorities.

Why This Story Matters

The Anthropic-OpenAI split is not corporate drama for its own sake. It represents the central tension in AI development: speed vs safety, commercialization vs caution, bold bets vs careful progress.

The decisions both companies make in the next two years will shape how AI is deployed globally. If OpenAI's approach wins, AI development will be fast, consumer-driven, and shaped by market incentives. If Anthropic's approach wins, AI development will be more deliberate, safety-governed, and shaped by research priorities.

Most likely, both approaches coexist — and the tension between them produces better outcomes than either would alone.

Editorial Opinion

The most telling detail is not the India photo op or the Pentagon contract dispute. It is that every single Anthropic co-founder chose to leave one of the most valuable companies in tech to build something from scratch. These were senior researchers with equity in OpenAI. They left not because of a better offer, but because they genuinely believed OpenAI was heading in a dangerous direction. That conviction — right or wrong — is what makes this rivalry different from any other tech competition.
The irony is that both companies may be right. OpenAI's speed brought AI to billions of people. Anthropic's caution may prevent the worst outcomes of that speed. The industry needs both impulses.

FAQ

Why did Dario Amodei leave OpenAI?

The primary stated reason was disagreement over OpenAI's direction after the Microsoft partnership. The Amodeis believed the shift toward commercialization would compromise AI safety research. Dario has discussed this publicly on the Lex Fridman Podcast.

Is there bad blood between Altman and Amodei?

Multiple reports from NYT, WSJ, Fortune, and CNBC describe the rivalry as "deeply personal." The India AI Summit incident — where both refused to hold hands during a group photo — was widely covered. Neither CEO has publicly disparaged the other, but the tension is visible.

How many people left OpenAI to join Anthropic?

Dario and Daniela Amodei plus seven other senior OpenAI researchers — a total of nine co-founders. It was one of the largest brain-drains in AI research history.

Is Anthropic really safer than OpenAI?

Anthropic's entire research program is built around AI safety — Constitutional AI, responsible scaling policies, and safety evaluations before model release. Whether this makes Claude "safer" in practice is debated. What is not debated is that Anthropic prioritizes safety as a company-level value more visibly than OpenAI.

Who is winning?

By brand awareness: OpenAI. By revenue growth rate: Anthropic. By model benchmarks: it depends on the task. See our best AI chatbots page for a full comparison.
