On September 29, 2025, California's governor signed a new law, SB 53, meant to stop people from using powerful AI for things that could go horribly wrong, like making a bioweapon or taking down a bank. Last year the same governor vetoed a broader version of this idea (SB 1047), but this time lawmakers listened to AI experts and industry voices and narrowed the bill.
The new law says that companies building the most advanced AI have to put safety rules in place and tell the public what those rules are. These rules only apply to frontier models trained with a huge amount of computing power, roughly 10^26 operations, the kind of model that could do real damage if misused.
A 'catastrophic risk' here means something that could cause more than a billion dollars in damage, or seriously hurt or kill more than 50 people, like someone hacking the power grid. If a serious safety incident happens, companies have 15 days to report it to state officials. The law also protects workers who speak up about safety problems, and companies can be fined up to a million dollars per violation. Smaller developers get a bit of a break and aren't held to the same strict reporting and disclosure rules.
Why does this matter? The federal government has been trying to loosen up AI rules, arguing that too many regulations slow down progress. Some in Congress even tried to stop states from making their own AI laws, but that didn’t work. So, with no strong national rules, states like California are stepping in to set their own limits.
If other states follow California's lead, or use its approach as a blueprint, companies everywhere might have to get used to million-dollar fines and quick reporting deadlines. If you work with AI, now's the time to figure out which of your models are large enough to fall under these new rules, and what you'd have to share with the public.
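If you want a quick first-pass filter, here is a minimal sketch in Python. It assumes the 10^26 training-operations threshold mentioned above and the common "6 × parameters × tokens" rule of thumb for estimating training compute; both numbers and all the names here are illustrative assumptions, not a legal determination, so check the statute's actual definitions before relying on anything like this.

```python
# Rough first-pass check: does a model's estimated training compute cross a
# frontier-scale threshold? The 6 * params * tokens heuristic and the 1e26
# cutoff are assumptions for illustration, not the law's authoritative test.

FRONTIER_FLOP_THRESHOLD = 1e26  # assumed cutoff; verify against the statute's text


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute with the common 6 * N * D rule of thumb."""
    return 6.0 * parameters * training_tokens


def may_be_covered(parameters: float, training_tokens: float) -> bool:
    """Flag models whose estimated training compute crosses the assumed threshold."""
    return estimated_training_flops(parameters, training_tokens) >= FRONTIER_FLOP_THRESHOLD


if __name__ == "__main__":
    # Hypothetical example: a 70B-parameter model trained on 15T tokens.
    params, tokens = 70e9, 15e12
    flops = estimated_training_flops(params, tokens)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Potentially in scope:", may_be_covered(params, tokens))
```

A check like this only tells you which models deserve a closer look; the real compliance questions are about what your published safety framework says and how you'd report an incident within the 15-day window.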