
Anthropic endorses California’s AI safety bill, SB 53

On Monday, Anthropic announced an official endorsement of SB 53, a California bill from state senator Scott Wiener that would impose first-in-the-nation transparency requirements on the world’s largest AI model developers. Anthropic’s endorsement marks a rare and major win for SB 53, at a time when major tech groups like the Consumer Technology Association (CTA) and the Chamber of Progress are lobbying against the bill.

“While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington,” said Anthropic in a blog post. “The question isn’t whether we need AI governance — it’s whether we’ll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former.”

If passed, SB 53 would require frontier AI model developers like OpenAI, Anthropic, Google, and xAI to develop safety frameworks, as well as release public safety and security reports before deploying powerful AI models. The bill would also establish whistleblower protections for employees who come forward with safety concerns.

Senator Wiener’s bill specifically focuses on limiting AI models from contributing to “catastrophic risks,” which the bill defines as the death of at least 50 people or more than $1 billion in damages. SB 53 focuses on the extreme end of AI risk, limiting AI models from providing expert-level assistance in the creation of biological weapons or from being used in cyberattacks, rather than on nearer-term concerns like AI deepfakes or sycophancy.

California’s Senate approved a prior version of SB 53 but still needs to hold a final vote before the bill can advance to the governor’s desk. Governor Gavin Newsom has stayed silent on the bill so far, although he vetoed Senator Wiener’s last AI safety bill, SB 1047.

Bills regulating frontier AI model developers have faced significant pushback from both Silicon Valley and the Trump administration, both of which argue that such efforts could limit America’s innovation in the race against China. Investors like Andreessen Horowitz and Y Combinator led some of the pushback against SB 1047, and in recent months, the Trump administration has repeatedly threatened to block states from passing AI regulation altogether.

One of the most common arguments against AI safety bills is that states should leave the matter to the federal government. Andreessen Horowitz’s head of AI policy, Matt Perault, and chief legal officer, Jai Ramaswamy, published a blog post last week arguing that many of today’s state AI bills risk violating the Constitution’s Commerce Clause, which limits state governments from passing laws that reach beyond their borders and impair interstate commerce.


However, Anthropic co-founder Jack Clark argued in a post on X that the tech industry will build powerful AI systems in the coming years and can’t wait for the federal government to act.

“We have long said we would prefer a federal standard,” said Clark. “But in the absence of that this creates a solid blueprint for AI governance that cannot be ignored.”

OpenAI’s chief global affairs officer, Chris Lehane, sent a letter to Governor Newsom in August urging him not to sign AI regulation that would push startups out of California, although the letter did not mention SB 53 by name.

OpenAI’s former head of policy research, Miles Brundage, said in a post on X that Lehane’s letter was “filled with misleading garbage about SB 53 and AI policy generally.” Notably, SB 53 is designed to regulate only the world’s largest AI companies, specifically those that have generated more than $500 million in gross revenue.

Despite the criticism, policy experts say SB 53 takes a more modest approach than previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and a former White House AI policy adviser, said in an August blog post that he believes SB 53 now has a good chance of becoming law. Ball, who criticized SB 1047, said SB 53’s drafters have “shown respect for technical reality,” as well as a “measure of legislative restraint.”

Senator Wiener previously said that SB 53 was heavily influenced by an expert policy panel Governor Newsom convened — co-led by leading Stanford researcher and co-founder of World Labs, Fei-Fei Li — to advise California on how to regulate AI.

Most AI labs already have some version of the internal safety policy that SB 53 requires. OpenAI, Google DeepMind, and Anthropic regularly publish safety reports for their models. However, these companies are bound only by their own voluntary commitments, and they sometimes fall behind them. SB 53 would codify these requirements in state law, with financial penalties for AI labs that fail to comply.

Earlier in September, California lawmakers amended SB 53 to remove a section of the bill that would have required AI model developers to be audited by third parties. Tech companies have previously fought these types of third-party audits in other AI policy battles, arguing that they’re overly burdensome.

Source: techcrunch.com
