Anthropic CEO Dario Amodei issued a statement Tuesday to “set the record straight” on the company’s alignment with the Trump administration’s AI policy, responding to what he called a “recent surge in misstatements about Anthropic’s policy position.”
“Anthropic is built on a simple principle: AI should be a force for human betterment, not a threat,” Amodei wrote. “That means creating products that are truly useful, talking honestly about the risks and benefits, and working with anyone who is serious about getting this right.”
Amodei’s response comes after last week’s dogpiling on Anthropic from AI leaders and top members of the Trump administration, including AI czar David Sacks and White House senior policy advisor for AI Sriram Krishnan, all accusing the AI giant of fear-mongering that harms the industry.
Sacks first hit out after Anthropic co-founder Jack Clark shared his hopes and “justifiable fears” about AI, describing it as a powerful, mysterious, “somewhat unpredictable” creature, not a reliable machine that can be easily mastered and operated.
Sacks responded: “Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering. It is largely responsible for the state regulatory frenzy that is plaguing the startup ecosystem.”
Senator Scott Wiener of California, author of the AI safety bill SB 53, defended Anthropic, calling out President Trump’s efforts to prohibit states from acting on AI without advancing federal safeguards. Sacks then doubled down, claiming that Anthropic was working with Wiener to “impose a leftist view of AI control.”
More comments echoed anti-regulation advocates like Groq COO Sunny Madra, who said that Anthropic was “causing chaos for the entire industry” by advocating for AI safety regulation rather than uninterrupted innovation.
In his statement, Amodei said that managing the societal impacts of AI should be a matter of policy, not politics, and that he believes everyone wants to ensure that America maintains its leadership in AI development while creating technology that benefits the American people. He defended Anthropic’s alignment with the Trump administration on key areas of AI policy and pointed to examples of common ground with the president.
As an example, Amodei pointed to Anthropic’s work with the federal government, including the company’s offer of Claude to federal agencies and Anthropic’s $200 million contract with the Department of Defense (which Amodei calls the “War Department,” echoing Trump’s preferred terminology, though the name change requires congressional approval). He also noted that Anthropic has publicly praised Trump’s AI Action Plan and supported Trump’s efforts to expand the energy system to “win the AI race.”
Despite this show of cooperation, Anthropic has taken heat from industry peers for stepping outside the Silicon Valley consensus on some policy issues.
The company first drew ire from Silicon Valley-connected executives when it opposed a proposed 10-year ban on state-level AI regulation, a provision that faced widespread bipartisan pushback.
Many in Silicon Valley, including leaders of OpenAI, claim that state AI regulation will slow down the industry and let China take the lead. Amodei countered that the real risk is the U.S. continuing to flood China’s data centers with powerful AI chips from Nvidia, adding that Anthropic restricts sales of its AI services to Chinese-controlled companies despite the revenue hit.
“There are products we won’t build and risks we won’t take, even if they make money,” Amodei said.
Anthropic also defied some power players last fall when it supported California’s SB 53, a light-touch safety bill that requires the largest AI developers to make their frontier model safety protocols public. Amodei noted that the bill includes a carve-out for companies with annual gross revenues below $500 million, which would exempt most startups from any undue burden.
“Some have suggested we’re interested in harming the startup ecosystem,” Amodei wrote, referring to Sacks’ post. “Startups are among our most important customers. We work with tens of thousands of startups and partner with hundreds of accelerators and VCs. Claude is powering a whole new generation of AI-native companies. Harming that ecosystem doesn’t make sense for us.”
In his statement, Amodei said Anthropic has grown from a $1 billion to a $7 billion run-rate in the past nine months while deploying AI “thoughtfully and responsibly.”
“Anthropic is committed to constructive engagement on public policy issues. When we agree, we say so. When we don’t, we offer an alternative for consideration,” Amodei wrote. “We’re going to be honest and straightforward, and stand up for the principles we believe are right. The stakes in this technology are too high for us to do otherwise.”