
Newsom signs AI safety law: What’s in California’s new "Transparency in Frontier Artificial Intelligence Act"

Governor Gavin Newsom has signed SB 53, the Transparency in Frontier Artificial Intelligence Act, making California the first state to set rules for frontier AI developers. The law requires transparency frameworks, safety reporting, and whistleblower protections.

by Mac Douglass
California Becomes First State to Pass Frontier AI Safety Law
Governor Gavin Newsom signed SB 53 in Sacramento on September 29, 2025, establishing new safety and transparency rules for frontier artificial intelligence and creating the CalCompute consortium to support safe innovation.

California has become the first state in the nation to enact sweeping regulations on frontier artificial intelligence, as Governor Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), into law on Monday. The legislation, authored by Senator Scott Wiener (D-San Francisco), establishes new transparency, safety, and accountability requirements for developers of large-scale AI models, while also creating a state-backed computing consortium to support safe innovation.


Balancing Innovation and Guardrails

Governor Newsom framed SB 53 as a landmark step in both protecting public safety and cementing California’s global leadership in AI.

“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive.
“AI is the new frontier in innovation, and California is not only here for it — but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves.”

– Governor Gavin Newsom

Senator Wiener emphasized that the law was carefully crafted in consultation with academics, industry leaders, and the Governor’s Joint AI Policy Working Group. “With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and reduce risk,” Wiener said.

The legislation builds on California’s first-in-the-nation AI policy report, released earlier this year, which recommended scientific and transparent frameworks for managing the risks of frontier models. It also comes amid stalled efforts in Congress to pass comprehensive federal AI rules.


Key Provisions of SB 53

The new law applies to “frontier developers” — companies training massive foundation models using more than 10^26 computational operations. For these firms, SB 53 mandates a series of first-of-its-kind requirements:

  • Transparency: Large developers must publish a “frontier AI framework” on their websites, outlining how they align with national and international standards, mitigate catastrophic risks, and govern deployment decisions.
  • Risk Assessments: Before releasing new or substantially modified models, companies must disclose catastrophic risk evaluations, including whether third-party audits were used.
  • Safety Reporting: Developers must notify California’s Office of Emergency Services (Cal OES) within 15 days of a critical safety incident, or within 24 hours if the event poses imminent risk of death or serious injury. Cal OES will publish anonymized annual reports starting in 2027.
  • Whistleblower Protections: Employees responsible for assessing safety risks are shielded from retaliation and may use anonymous internal reporting channels. Successful retaliation claims entitle whistleblowers to attorney’s fees.
  • Accountability: The state Attorney General can enforce compliance, with penalties of up to $1 million per violation.
  • CalCompute Consortium: A new public cloud infrastructure initiative, CalCompute, will be developed under the Government Operations Agency, ideally within the University of California system. Its goal is to provide safe, ethical, and equitable AI research capacity, with a framework report due to lawmakers by Jan. 1, 2027.
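To put the 10^26-operation threshold in perspective, the sketch below estimates a training run's total compute using the common heuristic from the scaling-law literature that training FLOPs are roughly 6 × parameters × tokens. The heuristic, the function names, and the example model sizes are illustrative assumptions for this article, not part of the statute:

```python
# Back-of-the-envelope check against SB 53's "frontier developer" compute
# threshold. The 6 * parameters * tokens estimate is a widely used heuristic
# for dense transformer training compute, not a definition from the law.

SB53_THRESHOLD_FLOPS = 1e26  # 10^26 computational operations, per the bill


def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute for a dense model (heuristic)."""
    return 6.0 * num_parameters * num_tokens


def crosses_sb53_threshold(num_parameters: float, num_tokens: float) -> bool:
    """True if the estimated compute meets or exceeds 10^26 operations."""
    return estimated_training_flops(num_parameters, num_tokens) >= SB53_THRESHOLD_FLOPS


# A hypothetical 70B-parameter model trained on 15T tokens stays well
# under the threshold (~6.3e24 FLOPs) ...
print(crosses_sb53_threshold(70e9, 15e12))   # False

# ... while a hypothetical 1.8T-parameter model on 13T tokens crosses it
# (~1.4e26 FLOPs).
print(crosses_sb53_threshold(1.8e12, 13e12))  # True
```

Under this rough estimate, only the very largest training runs today approach the statute's threshold, which is consistent with the law targeting a small set of frontier labs.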

The law also explicitly preempts new local ordinances on catastrophic AI risk passed after Jan. 1, 2025, to avoid regulatory fragmentation.


California’s Global AI Footprint

Newsom’s office underscored California’s unmatched dominance in the AI sector as justification for taking the lead on regulation. According to the 2025 Stanford AI Index:

  • California hosted 15.7% of all U.S. AI job postings in 2024, far ahead of Texas (8.8%) and New York (5.8%).
  • More than half of global venture funding in AI and machine learning startups went to Bay Area companies in 2024.
  • 32 of the world’s top 50 AI companies are based in the state.
  • Google, Apple, and Nvidia — all headquartered in California — are among only four companies worldwide to surpass the $3 trillion market valuation mark, each heavily invested in AI development.

“California is stepping up, once again, as a global leader on both technology innovation and safety,” Wiener said.


Expert Backing

Several leading voices in the AI field endorsed SB 53 as consistent with the expert report commissioned by Governor Newsom earlier this year.

  • Tino Cuéllar, former California Supreme Court Justice and member of the National Academy of Sciences, called the law a step toward a “trust but verify” approach.
  • Dr. Fei-Fei Li, Co-Director of Stanford’s Institute for Human-Centered AI, and Jennifer Tour Chayes, Dean at UC Berkeley’s College of Computing, Data Science, and Society, also supported the framework as advancing transparency and accountability.

Looking Ahead

The law requires the California Department of Technology to issue annual recommendations for updating thresholds and definitions as AI systems evolve. Both Cal OES and the Attorney General must also provide annual anonymized reports on safety incidents and whistleblower activity.

In his signing message, Newsom highlighted SB 53 as a template for national action: “This legislation fills the federal gap and presents a model for the nation to follow.”

With SB 53, California aims to set global standards for how governments can both embrace and regulate the rapidly evolving world of artificial intelligence.
