Columns
Lessons from California
Senate Bill 1047 treats AI as powerful, fallible infrastructure that needs serious oversight.
Bimal Pratap Shah
California, the birthplace of Google, Facebook and OpenAI, has finally confronted the reality of regulating artificial intelligence (AI). The US, which has always been at the forefront of innovation, is now stepping up with a legislative approach as sharp as the minds that built Silicon Valley.
Senate Bill 1047, formally known as the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” is a moral statement disguised as law. It acknowledges a fundamental shift: AI is no longer a futuristic dream but a powerful social force that can be unruly, seductive and potentially catastrophic.
This law mainly targets “frontier models,” the general-purpose AI systems that operate on massive scales of data and computation. These models, like GPT-4.5 or Claude 3, can code, reason, translate and persuade with a fluency that seems almost human. But they’re also the models that, with the right prompt or malicious intent, can generate blueprints for bioweapons, spread viral disinformation, or even hijack critical infrastructure. California was quick to grasp that the stakes are real and immediate.
Under SB 1047, companies training AI systems above a certain computational threshold must implement “reasonable safety precautions.” This includes extensive pre-deployment testing, operational safeguards and the ability to shut down the system if it misbehaves. If a model crosses critical lines, such as enabling the design of biological weapons or conducting autonomous cyberattacks, developers must report it to the California Attorney General.
But the law goes beyond just setting boundaries. It also protects whistleblowers, mandates third-party audits and gives both regulators and citizens the right to question what these systems are doing. This isn’t just regulatory theatre. It is a serious attempt to bring accountability to a domain that has, until now, operated with a sense of libertarian impunity.
For decades, California has fostered a culture where disruption was seen as a virtue and regulation as a hindrance. Innovation was sacred, and markets were believed to self-correct. But now, with AI’s potential to erode truth, amplify inequality and operate beyond human comprehension, California is reimagining its role.
California’s pivot is not isolated. In January 2025, a consortium of 96 international experts led by AI pioneer Yoshua Bengio published the International Scientific Report on the Safety of Advanced AI. Commissioned by 30 governments and global institutions, the report is a stark warning. It outlines the risks posed by increasingly autonomous AI systems: manipulation of public discourse, cyberwarfare, bioterrorism, economic destabilisation, and the real possibility of losing control entirely. The report doesn’t indulge in hysteria; instead, it firmly states that humanity is unprepared.
SB 1047 is California’s response to that call. It’s the first legislative attempt in the US to put into practice what Bengio’s report demands: Not a halt to innovation, but a pause for reflection; not fear of the future, but the tools to meet it. However, while California confronts the dilemmas of abundance, countries like Nepal are struggling to find ways to enter the AI age.
Earlier this year, Nepal’s Ministry of Communication and Information Technology released a draft National AI Policy. It’s earnest, idealistic and deeply aspirational. The document envisions using AI to modernise agriculture, digitise education, improve public health and promote innovation-led entrepreneurship. It hints at a future where Nepali society could leapfrog development hurdles through the transformative promise of artificial intelligence.
And yet, the policy reads more like a dream than a plan. There’s no roadmap for implementation, no inter-agency coordination and no mention of budget allocation. Notably, there is no provision for an AI task force, no national centre for AI safety and no strategy for data governance. The policy is silent on risks like deepfakes, cybercrime, algorithmic discrimination, or the existential threats outlined in the global report. Simply put, it uses the language of innovation but sidesteps the architecture of accountability.
This isn’t a criticism of intent. Nepal, like many countries in the Global South, is trying to harness AI as a tool for progress but lacks the infrastructure, institutional capacity and global leverage to shape its trajectory. And intent alone won’t save Nepal from the consequences. The international AI safety report makes this divide explicit. It warns of a growing “AI R&D chasm” between nations building the most powerful models and those merely consuming them. Without meaningful regulation, low- and middle-income countries may find themselves in a digitally colonised future: dependent on foreign AI systems, vulnerable to their failures and voiceless in their design.
In that context, SB 1047 is not just a Californian law; it’s a prototype for the world. Its critics are vocal, of course. Some argue it will stifle innovation or drive startups to more permissive jurisdictions. But the law is narrowly tailored. It exempts small-scale developers, focuses only on high-risk frontier models and offers liability protections for companies that comply.
What is revolutionary is not the regulation itself, but the recognition that regulation is necessary, that AI doesn’t just happen. This shift in tone is long overdue. For too long, discussions around AI governance have swung between naive techno-utopianism and doom-laden fatalism. What SB 1047 proposes is something more mature: Civic responsibility. It treats AI neither as magic nor as menace, but as infrastructure that is powerful, fallible and in need of oversight.
The law affirms that public trust is not a byproduct of innovation but a prerequisite for it. For countries like Nepal, the lesson is not to mimic California but to act with urgency, move from rhetoric to readiness and invest not only in AI talent and startups but also in the legal, ethical and civic frameworks needed to govern them.
Ultimately, SB 1047 is just a beginning—the first real attempt to tame the frontier. It doesn’t presume to control the future of AI, but it insists that we must participate in shaping it. Progress without guardrails isn’t progress. It’s peril.
For Nepal, the path forward is clear but challenging. The country must develop a comprehensive and actionable AI policy that goes beyond aspirations. This policy should include a clear roadmap for implementation, inter-agency coordination and budget allocation. Establishing an AI task force and a national centre for AI safety would be crucial steps. The policy should also address the risks associated with AI, such as deepfakes, cybercrime and algorithmic discrimination, and outline strategies for data governance.