Dario Amodei, Daniela Amodei, Tom Brown, Jack Clark, Jared Kaplan, and Sam McCandlish, cofounders of Anthropic
Total Funding: $7.25 billion
Number of Employees: 300, according to PitchBook
Notable Investors: Google, Amazon, Menlo Ventures
Anthropic was started in 2021 by a group of researchers within OpenAI who bonded over their shared belief in AI’s potential for both good and ill. Since then, the company has received billions in funding from both Google and Amazon in what some have termed an “AI arms race.”
From its inception, the company has been sold as the LLM company with safety in its DNA. CEO Dario Amodei, a former Google Brain researcher with a Ph.D. in computational neuroscience, has been writing about the cataclysmic potential of AI since 2016. He, along with Anthropic’s other cofounders, including former Bloomberg technology reporter Jack Clark, could see that AI was going to progress exponentially, and they believed that AI companies needed to start formulating a set of values to constrain these powerful programs.
“We really trusted each other and wanted to work together,” said Amodei of himself and his cofounders at a Fortune conference last year, “and so we went off and started our own company with that idea in mind.”
Anthropic is incorporated as a public benefit corporation and features an independent board of trustees that, over time, will come to control the hiring and firing of company leadership.
Amodei’s sister, Daniela Amodei, who oversaw OpenAI’s policy and safety team, is the company’s president and has said that Anthropic’s safety-first policy is one of its main differentiators.
Last year, Anthropic published a 22-page document laying out what it calls its “responsible scaling policy,” or its plan to prevent its technology from hastening the end of the human race. This policy is reportedly overseen by Anthropic cofounder and theoretical physicist Sam McCandlish, who, while at OpenAI, built out the team that looked at machine learning scaling laws and paved the way for GPT-3.
At the heart of Anthropic’s pitch to enterprise clients is what it calls “constitutional AI,” by which the language model is imbued by its creator with a sort of conscience: a set of principles designed to prevent misuse of the technology. Constitutional AI is partly the brainchild of two other OpenAI alums and Anthropic cofounders, Tom Brown and Jared Kaplan. Brown is a former Google Brain researcher, and Kaplan is a former physics professor at Johns Hopkins who consulted for OpenAI before leaving to start Anthropic.
Both Kaplan and Brown have worked on Anthropic’s efforts to “red team” the company’s flagship language model, Claude, probing for misuse possibilities. That included efforts last year to create a version of Claude that can lie. Kaplan, speaking at a Bloomberg conference last October, said that he thinks AGI, a version of AI powerful enough to potentially upend society, could be as little as five to 10 years away.
“I’m concerned, and I think regulators should be as well,” Kaplan said at the conference.