Frontier AI companies have spent the last decade operating at full speed with very few guardrails. Regulations have long been rumored but slow to arrive, creating an atmosphere in which compliance is a promise developers have never needed to keep.
But a new California law signed by Governor Gavin Newsom on September 29, 2025 is putting rumors to rest and showing frontier developers the rules they’ll need to play by moving forward. The Transparency in Frontier Artificial Intelligence Act (TFAIA) maps out a regulatory framework that the Governor’s office says will “build public trust while also continuing to spur innovation in these new technologies.” Even companies that never train models themselves will feel the impact, because the frontier providers behind the tools they rely on will now have to meet these standards.
While Newsom’s expectations may sound familiar — lawmakers have been espousing a mix of creativity and responsibility in AI development for years — frontier companies may be surprised at the law’s plan for bringing them to life. TFAIA makes AI ethics the real compliance problem, forcing developers to look beyond what their models are accomplishing and consider what they could achieve if they went off the rails.
A Change in Course for Frontier Developers
Past AI compliance concerns mainly centered on use cases. Lawmakers asked how AI would be applied and where it had the potential to cause harm, and when senators made speeches, they warned of AI models injecting inequity into loan decisions and other instances of misuse.
TFAIA changes tack, aiming controls at capability instead of use case. The law’s chief concern is not where AI will be used but how much power it brings to an application. Its main trigger is a compute threshold: a measure of the computational power used to train a model.
Think of it as policing drivers based on how much horsepower they have under the hood instead of the way they behave in traffic. In the regulatory landscape now taking shape, developers operating at frontier-scale compute will face regulatory oversight before their models leave the garage. Even companies nowhere near the compute threshold will feel the effects, because the frontier models behind the SaaS, automation, analytics and infrastructure tools they depend on will now be governed by this framework.
Frontier developers whose models exceed the stated compute power threshold — which under TFAIA is set at greater than 10²⁶ integer or floating-point operations (FLOPs), a level far beyond the models most businesses interact with today — must embrace frameworks designed to guard against systems that could cause catastrophic damage. The frameworks must be developed, put in place and made public, revealing the steps companies are taking to manage and mitigate the risks their models present.
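For a sense of scale, here is a minimal sketch of how a team might estimate whether a planned training run approaches that threshold. It assumes the widely cited ~6 × parameters × tokens rule of thumb for dense-transformer training compute, which is an industry heuristic, not anything the statute prescribes, and the model and dataset sizes below are hypothetical:

```python
# Minimal sketch: estimating training compute against TFAIA's threshold.
# The 6 * params * tokens heuristic is an industry rule of thumb for dense
# transformers, not part of the law; the figures below are hypothetical.

TFAIA_THRESHOLD_OPS = 1e26  # "greater than 10^26 integer or floating-point operations"

def estimate_training_ops(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 operations per parameter per token."""
    return 6.0 * n_parameters * n_tokens

# Hypothetical frontier-scale run: 1 trillion parameters, 15 trillion tokens.
ops = estimate_training_ops(n_parameters=1e12, n_tokens=15e12)
print(f"Estimated training compute: {ops:.2e} ops")
print("Covered by TFAIA" if ops > TFAIA_THRESHOLD_OPS else "Below the TFAIA threshold")
```

Under this rough math, even a trillion-parameter model trained on fifteen trillion tokens lands just under the line, at roughly 9 × 10²⁵ operations, which underscores how far the threshold sits beyond the systems most businesses touch.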
A Call to Operationalize AI Ethics
A decade ago, when artificial intelligence was a powerful idea with a chance to one day go mainstream, thought leaders made a lot of noise about AI ethics. But as AI became a reality, at work in everything from social media to self-driving cars, talk about ethics was drowned out by talk about safety.
Safety is easy. It’s as easy as putting airbags in the Hellcat. It’s something engineered with efficiency and affordability in mind.
Safety is also typically reactive rather than proactive. Frontier companies pushing boundaries would build power that attracted funding, then set about engineering features to keep that power on the road.
Ethics is harder. It’s a debate without simple answers that often involves complex trade-offs. Operationally, it involves a lot of cooperation, which means the kind of longer development timelines that can cause investors to lose interest and look elsewhere.
TFAIA signals that it’s time to shift the discussion back to ethics, but it calls for more than just talk. Embracing the requirements of the law ultimately requires operationalizing AI ethics.
Crafting a Compliant-by-Design Culture
The expectations for frontier developers to put ethics above efficiency are high. Effective governance under the new law requires more than policy memos. TFAIA says each California frontier developer must “write, implement, comply with, and clearly and conspicuously publish on its website a frontier AI framework” that explains how the company assesses the threat of catastrophic risk and guards against it.
When problems arise despite proactive measures to prevent them, TFAIA requires developers to report them. It says those who experience a “critical safety incident,” which the law defines to include real-world harm triggered by unauthorized access to AI models, must alert California’s Office of Emergency Services. Failure to report carries civil penalties, and whistleblowers are protected under the law.
The surest way for developers to stay compliant is to hardwire AI ethics into their workflows. Crafting a compliant-by-design culture starts with treating TFAIA’s requirements as an operating manual for AI development that clearly communicates to all teams the role they play. Companies that make accountability part of their DNA will find it much easier to comply with the standards the new law establishes.
Rejecting a reactive approach to safety and security is also critical. Frontier companies must commit to a slower pace, testing and retesting, and proving that power can be contained before letting it loose in the wild.
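As one illustration of what hardwiring compliance into a workflow can look like, here is a minimal sketch of a pre-release gate that refuses to ship a model until its governance artifacts exist. The artifact names and gate logic are hypothetical, not anything TFAIA prescribes:

```python
# Minimal sketch of a compliance gate in a release pipeline. Artifact names
# and gate logic are hypothetical illustrations, not TFAIA requirements.
from pathlib import Path

REQUIRED_ARTIFACTS = {
    "frontier_ai_framework.md": "published frontier AI framework",
    "catastrophic_risk_assessment.md": "catastrophic-risk assessment",
    "red_team_report.md": "latest red-team / safety testing report",
}

def release_gate(artifact_dir: str) -> bool:
    """Block a model release unless every governance artifact is present."""
    missing = [desc for name, desc in REQUIRED_ARTIFACTS.items()
               if not (Path(artifact_dir) / name).exists()]
    for desc in missing:
        print(f"BLOCKED: missing {desc}")
    return not missing

if __name__ == "__main__":
    if not release_gate("governance/"):
        raise SystemExit(1)  # fail the CI job; the release cannot proceed
```

The point of a gate like this is cultural as much as technical: when the pipeline itself refuses to move without the paperwork, accountability stops depending on any one person remembering to ask.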
Finally, compliant-by-design means transparent-at-all-costs. Intended use, known limitations and assessed risks can’t be kept in the shadows or downplayed. Every employee must be empowered and encouraged to look for problems, report them and keep making noise until they are addressed.
The threat of catastrophic risk posed by AI is not a common topic of discussion in boardrooms. TFAIA changes that. It drops AI ethics into the middle of the marketplace, requiring frontier developers to take responsibility for the power they conjure by creating cultures that are as committed to accountability as they are to innovation.
Jared Navarre is the founder and CEO of Keyni Consulting, CEO of Onnix and Chairman of the humanitarian NGOs IN-Fire and Project AK-47. He is a systems strategist and operational architect known for solving complex, high-stakes problems across technology, healthcare, infrastructure and public-sector operations. With verified top-0.001% WAIS-IV intelligence scores — a clinical measure placing him among the highest-scoring adults ever evaluated — Navarre blends high-cognition modeling with pragmatic execution in his advisory work for Fortune-level enterprises and global institutions. He has designed resilient frameworks for humanitarian networks and guided over 250 organizations through moments of rapid change.