Can one state save us from AI disaster? Inside California's new legislative crackdown

The California state capitol building in Sacramento.
iStock / Getty Images Plus / Getty Images



ZDNET's key takeaways

  • California's new AI safety law goes into effect Jan. 1.
  • It centers on transparency and whistleblower protections.
  • Some AI safety experts say the tech is evolving too quickly.

A new California law going into effect Thursday, Jan. 1, aims to add a measure of transparency and accountability to the AI industry at a time when some experts are warning that the technology could potentially escape human control and cause catastrophe.

Originally authored by state Democrat Scott Wiener, the law requires companies developing frontier AI models to publish information on their websites detailing their plans and policies for responding to "catastrophic risk," and to notify state authorities about any "critical safety incident" within 15 days. Fines for failing to meet these terms can range up to $1 million per violation.

Also: Why complex reasoning models could make misbehaving AI easier to catch

The new law also provides whistleblower protections to employees of companies developing AI models.

The legislation defines catastrophic risk as a scenario in which an advanced AI model kills or injures more than 50 people or causes material damages exceeding $1 billion, for example by providing instructions on how to create chemical, biological, or nuclear weapons.

"Unless they are developed with cautious diligence and tenable precaution, determination is interest that precocious artificial quality systems could person capabilities that airs catastrophic risks from some malicious uses and malfunctions, including artificial intelligence-enabled hacking, biologic attacks, and nonaccomplishment of control," wrote the authors of the caller law.

Safety concerns

California's new law underscores -- and aims to mitigate -- some of the fears that have been weighing on the minds of AI safety experts as the technology rapidly proliferates and evolves.

Canadian computer scientist and Turing Award winner Yoshua Bengio recently told The Guardian that the AI industry had a duty to apply a kill switch to its powerful models in the event that they escape human control, citing research showing that such systems can occasionally hide their objectives and mislead human researchers.

Last month, a paper published by Anthropic claimed some versions of Claude were showing signs of "introspective awareness."

Also: Claude wins high praise from a Supreme Court justice - is AI's legal losing streak over?

Meanwhile, others have been making the case that advancements in AI are moving dangerously quickly -- too quickly for developers and lawmakers to be able to implement effective guardrails.

A statement published online in October by the nonprofit organization the Future of Life Institute argued that unconstrained developments in AI could lead to "human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even possible human extinction," and called for a pause on the development of advanced models until rigorous safety protocols could be established.

The FLI followed up with a study showing that eight leading developers were falling short on safety-related criteria, including "governance & accountability" and "existential risk."

Federal, state, and private sector

California's new law also stands in stark contrast to the Trump administration's approach to AI, which has thus far been, essentially, "Go forth and multiply."

President Donald Trump has scrapped Biden-era regulation of the technology and has given the industry a wide amount of leeway to push ahead with the development and deployment of new models, eager to maintain a competitive edge over China's own AI efforts.

Also: China's open AI models are in a dead heat with the West - here's what happens next

The job of protecting the public from the potential harms of AI has thus largely been handed over to state lawmakers, such as Wiener, and to tech developers themselves. On Saturday, OpenAI announced that its Safety Systems team was hiring for a new "Head of Preparedness" role, which will be responsible for building frameworks to test for model safety and offers a $555,000 salary, plus equity.

"This is simply a captious relation astatine an important time," institution CEO Sam Altman wrote successful a X post astir the caller position, "models are improving rapidly and are present susceptible of galore large things, but they are besides starting to contiguous immoderate existent challenges."

(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
