California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act into law on Monday afternoon. The bill, also known as SB 53, establishes the first AI-specific regulations in the United States aimed at leading companies in the industry.
The new law mandates that top AI firms meet specific transparency requirements and report safety incidents related to their technology. While several states have passed laws that regulate certain aspects of AI, SB 53 is the first to explicitly concentrate on the safety of powerful, cutting-edge AI models.
In a statement regarding the bill, Governor Newsom said, “California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance.”
Under the law, prominent AI companies are required to publish public documents that detail their adherence to best practices for creating safe AI systems. The legislation also introduces a new channel for these companies to report severe AI-related incidents to the state’s Office of Emergency Services. Additionally, SB 53 strengthens protections for whistleblowers who report health and safety risks associated with AI development.
Non-compliance with the act will result in civil penalties, which are to be enforced by the office of the California Attorney General.
The bill received a mixed response from the technology sector. Industry groups including the Chamber of Progress and the Consumer Technology Association voiced intense criticism of SB 53. In contrast, the AI company Anthropic endorsed the bill, and Meta described it as “a step in the right direction.”
Even supporters, however, signaled a clear preference for federal legislation over a “patchwork of state-by-state laws.” Chris Lehane, OpenAI’s chief global affairs officer, articulated this sentiment in a LinkedIn post several weeks ago. He wrote, “America leads best with clear, nationwide rules, not a patchwork of state or local regulations. Fragmented state-by-state approaches create friction, duplication, and missed opportunities.”
Coinciding with the California law’s signing, a new federal bill was proposed on Monday morning by U.S. Senators Josh Hawley and Richard Blumenthal. This federal proposal would mandate that leading AI developers evaluate their advanced systems and collect data on the probability of adverse AI incidents.
The proposed federal bill, as it is currently written, would establish an Advanced Artificial Intelligence Evaluation Program housed within the Department of Energy. Similar to the requirements of California’s SB 53, participation in this evaluation program would be mandatory for designated companies.
The passage of SB 53 in California and the introduction of the federal bill from Senators Hawley and Blumenthal occur as world leaders have increasingly called for AI regulation due to growing risks from advanced AI systems.
During remarks at the United Nations General Assembly last week, President Donald Trump commented on the technology, saying AI “could be one of the great things ever, but it also can be dangerous, but it can be put to tremendous use and tremendous good.”
One day after President Trump’s address to the U.N., President Volodymyr Zelensky of Ukraine also commented on the subject, stating, “We are now living through the most destructive arms race in human history because this time, it includes artificial intelligence.”