California's governor has signed the AI Safety and Accountability Act, the most comprehensive state-level AI regulation in the United States. The law requires developers of large AI models to conduct and publish safety assessments before deployment.
The legislation applies to AI models trained with more than 10^26 floating-point operations (FLOPs) of total compute, effectively covering frontier models from companies like OpenAI, Anthropic, Google, and Meta. Requirements include red-teaming, bias testing, capability evaluations, and ongoing monitoring for harmful outputs.
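To get a rough sense of the scale that threshold implies, here is a minimal sketch using the widely cited approximation that dense transformer training costs about 6 FLOPs per parameter per token. The 10^26 figure is from the law as reported; the model and dataset sizes in the example are hypothetical illustrations, not figures from the statute.

```python
# Rough check of whether a training run would cross the act's
# 10^26 FLOPs threshold, using the common ~6 * params * tokens
# approximation for dense transformer training compute.

THRESHOLD_FLOPS = 1e26  # statutory coverage trigger (per the article)

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

# Hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens.
flops = estimated_training_flops(1e12, 20e12)
print(f"Estimated compute: {flops:.2e} FLOPs")   # ~1.2e26
print("Covered by the act?", flops > THRESHOLD_FLOPS)  # True
```

By this heuristic, a trillion-parameter model trained on tens of trillions of tokens lands just above the line, which is consistent with the article's claim that the law targets frontier models specifically.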
Developers must also implement "kill switch" capabilities allowing models to be taken offline if they exhibit dangerous behavior, and must maintain logs of model behavior that can be audited by regulators. The state attorney general has enforcement authority with fines up to $10 million per violation.
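The article does not specify how developers must implement these controls; the sketch below is one plausible shape for them, assuming a hypothetical serving wrapper with a shutdown flag and an append-only JSON-lines behavior log. The `GatedModel` class and its methods are illustrative, not anything prescribed by the law.

```python
import json
import time

class GatedModel:
    """Illustrative serving wrapper: a kill switch plus an auditable log."""

    def __init__(self, model, audit_path: str):
        self.model = model          # any callable taking a prompt string
        self.enabled = True         # the "kill switch": False halts serving
        self.audit_path = audit_path

    def shutdown(self, reason: str) -> None:
        """Take the model offline and record why."""
        self.enabled = False
        self._log({"event": "shutdown", "reason": reason})

    def generate(self, prompt: str) -> str:
        if not self.enabled:
            raise RuntimeError("model is offline (kill switch engaged)")
        output = self.model(prompt)
        # Append-only behavior record a regulator could audit later.
        self._log({"event": "generate", "prompt": prompt, "output": output})
        return output

    def _log(self, record: dict) -> None:
        record["ts"] = time.time()
        with open(self.audit_path, "a") as f:
            f.write(json.dumps(record) + "\n")
```

In practice the shutdown signal would likely live outside the serving process (a feature flag or infrastructure control) so it cannot be bypassed by the model host, but the basic pairing of a hard off-switch with tamper-evident logging is what the law's language describes.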
The tech industry is divided. Anthropic and Microsoft have expressed support, arguing that responsible regulation builds public trust. Meta and several smaller AI companies oppose the law, claiming it will stifle innovation and push AI development to other states or countries.
The law's impact may extend nationally, as companies developing models in California will likely apply its requirements globally rather than maintaining separate practices for different states. This mirrors California's influence on auto emissions and privacy standards.