California Governor Gavin Newsom has signed into law SB 53, the Transparency in Frontier Artificial Intelligence Act, authored by State Senator Scott Wiener. It is the first law in the United States focused on the safety of the most advanced and powerful artificial intelligence models. The new law will have a global impact, since 32 of the world’s 50 leading artificial intelligence companies are headquartered in California.
Leading AI companies such as Anthropic and OpenAI expressed their support, seeing the law as a sound way to combine safety and innovation. On the other hand, Meta, the Chamber of Progress, and the Consumer Technology Association criticized the measure, saying they would prefer federal legislation. Meanwhile, Senators Josh Hawley and Richard Blumenthal are working on a federal bill that would create an Advanced Artificial Intelligence Assessment Program within the Department of Energy, with mandatory participation for developers.
Transparency in Frontier Artificial Intelligence Act, SB 53
The state of California has just enacted the Transparency in Frontier Artificial Intelligence Act (SB 53), one of the most consequential technology laws in recent years. The law imposes specific artificial intelligence regulations on the industry’s leading companies, requiring them to meet a series of transparency requirements and to report any AI-related safety incidents. Several AI regulations have been approved recently, but this is the first to explicitly demand transparency around the most advanced and powerful artificial intelligence models.
In signing the bill, Governor Gavin Newsom stated, “California has shown that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation sets that balance.” He added, “California’s status as a global technology leader provides us with a unique opportunity to offer a model for well-balanced AI policies beyond our borders—especially in the absence of a comprehensive federal AI policy framework,” referring to the impact the law will have on companies headquartered in the state.
Sanctions for non-compliance
Companies that fail to meet the law’s requirements face civil penalties, which will be enforced by the State Attorney General’s office. According to Senator Scott Wiener, the bill’s author, “With technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and mitigate risk.”
Referring to SB 1047, a broader bill that Newsom vetoed a year earlier, he stated, “While SB 1047 was more of a liability-focused bill, SB 53 is more focused on transparency.” Under the new law, companies will have to publicly disclose documents detailing how they are improving their practices to build safe artificial intelligence systems.
For and against SB 53
Unsurprisingly, companies have been quick to weigh in. On one hand, SB 53 has not been well received by the Chamber of Progress or the Consumer Technology Association. Meta has not welcomed it with open arms either: its Vice President of Public Policy, Brian Rice, stated, “The regulatory environment in Sacramento could stifle innovation, block AI progress, and jeopardize California’s technological leadership.” Meanwhile, companies like OpenAI and Anthropic have celebrated the new law. Jack Clark, head of policy at Anthropic, stated, “Governor Newsom’s signing of SB 53 sets significant transparency requirements for AI companies without imposing prescriptive technical mandates.”
Clark added that although federal regulation remains essential to avoid a patchwork of state rules, California has created a solid framework that balances public safety with ongoing innovation. For its part, OpenAI, through spokesperson Jamie Radice, said, “We are pleased to see that California has created a critical path toward harmonization with the federal government—the most effective approach for AI safety. If implemented properly, this will allow federal and state governments to cooperate in the safe deployment of AI technology.”
Even Volodymyr Zelensky and Donald Trump weighed in on artificial intelligence at the United Nations General Assembly. Trump said AI “could be one of the great things in the world, but it can also be dangerous, yet it can be put to tremendous use and tremendous good,” while Zelensky warned, “We are now living through the most destructive arms race in human history, because this time it includes artificial intelligence.”
