Anthropic CEO Dario Amodei published a statement Tuesday to “set the record straight” on the company’s alignment with Trump administration AI policy, responding to what he called “a recent uptick in inaccurate claims about Anthropic’s policy stances.”
“Anthropic is built on a simple principle: AI should be a force for human progress, not peril,” Amodei wrote. “That means making products that are genuinely useful, speaking honestly about risks and benefits, and working with anyone serious about getting this right.”
Amodei’s response comes after last week’s dogpiling on Anthropic from AI leaders and top members of the Trump administration, including AI czar David Sacks and White House senior policy advisor for AI Sriram Krishnan — all accusing the AI giant of stoking fears to damage the industry.
The first hit came from Sacks after Anthropic co-founder Jack Clark shared his hopes and “appropriate fears” about AI, including that AI is a powerful, mysterious, “somewhat unpredictable” creature, not a dependable machine that’s easily mastered and put to work.
Sacks’s response: “Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering. It is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem.”
California Senator Scott Wiener, author of AI safety bill SB 53, defended Anthropic, calling out President Trump’s “effort to ban states from acting on AI w/o advancing federal protections.” Sacks then doubled down, claiming Anthropic was working with Wiener to “impose the Left’s vision of AI regulation.”
Further commentary ensued, with anti-regulation advocates like Groq COO Sunny Madra saying that Anthropic was “causing chaos for the entire industry” by advocating for a modicum of AI safety measures instead of unfettered innovation.
In his statement, Amodei said managing the societal impacts of AI should be a matter of “policy over politics,” and that he believes everyone wants to ensure America secures its lead in AI development while also building tech that benefits the American people. He defended Anthropic’s alignment with the Trump administration in key areas of AI policy and cited examples of times he personally cooperated with the president.
For example, Amodei pointed to Anthropic’s work with the federal government, including the firm’s offering of Claude to the federal government and Anthropic’s $200 million agreement with the Department of Defense (which Amodei called “the Department of War,” echoing Trump’s preferred terminology, though the name change requires congressional approval). He also noted that Anthropic publicly praised Trump’s AI Action Plan and has been supportive of Trump’s efforts to expand energy provision to “win the AI race.”
Despite these shows of cooperation, Anthropic has caught heat from industry peers for stepping outside the Silicon Valley consensus on certain policy issues.
The company first drew ire from Silicon Valley-linked officials when it opposed a proposed 10-year ban on state-level AI regulation, a provision that faced widespread bipartisan pushback.
Many in Silicon Valley, including leaders at OpenAI, have claimed that state AI regulation would slow down the industry and hand China the lead. Amodei countered that the real risk is that the U.S. continues to fill China’s data centers with powerful AI chips from Nvidia, adding that Anthropic restricts the sale of its AI services to China-controlled companies despite revenue hits.
“There are products we will not build and risks we will not take, even if they would make money,” Amodei said.
Anthropic also fell out of favor with certain power players when it supported California’s SB 53, a light-touch safety bill that requires the largest AI developers to make frontier model safety protocols public. Amodei noted that the bill has a carve-out for companies with annual gross revenue below $500 million, which would exempt most startups from any undue burdens.
“Some have suggested that we are somehow interested in harming the startup ecosystem,” Amodei wrote, referring to Sacks’s post. “Startups are among our most important customers. We work with tens of thousands of startups and partner with hundreds of accelerators and VCs. Claude is powering an entirely new generation of AI-native companies. Damaging that ecosystem makes no sense for us.”
In his statement, Amodei said Anthropic has grown from a $1 billion to a $7 billion run-rate over the last nine months while managing to deploy “AI thoughtfully and responsibly.”
“Anthropic is committed to constructive engagement on matters of public policy. When we agree, we say so. When we don’t, we propose an alternative for consideration,” Amodei wrote. “We are going to keep being honest and straightforward, and will stand up for the policies we believe are right. The stakes of this technology are too great for us to do otherwise.”