Guide to California Senate Bill 1047, the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act".

"If you do not train either a model that requires $100 million or more in compute, or fine tune such an expensive model using $10 million or more in your own additional compute (or operate and rent out a very large computer cluster)?"

"Then this law does not apply to you, at all."

"This cannot later be changed without passing another law."

"(There is a tiny exception: Some whistleblower protections still apply. That's it.)"

"Also the standard required is now reasonable care, the default standard in common law. No one ever has to 'prove' anything, nor need they fully prevent all harms."

"With that out of the way, here is what the bill does in practical terms."

"You must create a reasonable safety and security plan (SSP) such that your model does not pose an unreasonable risk of causing or materially enabling critical harm: mass casualties or incidents causing $500 million or more in damages."

"That SSP must explain what you will do, how you will do it, and why. It must have objective evaluation criteria for determining compliance. It must include cybersecurity protocols to prevent the model from being unintentionally stolen."

"You must publish a redacted copy of your SSP, an assessment of the risk of catastrophic harms from your model, and get a yearly audit."

"You must adhere to your own SSP and publish the results of your safety tests."

"You must be able to shut down all copies under your control, if necessary."

"The quality of your SSP and whether you followed it will be considered in whether you used reasonable care."

"If you violate these rules, you do not use reasonable care and harm results, the Attorney General can fine you in proportion to training costs, plus damages for the actual harm."

"If you fail to take reasonable care, injunctive relief can be sought. The quality of your SSP, and whether or not you complied with it, shall be considered when asking whether you acted reasonably."

"Fine-tunes that spend $10 million or more are the responsibility of the fine-tuner."

"Fine-tunes spending less than that are the responsibility of the original developer."

"Compute clusters need to do standard KYC when renting out tons of compute."

"Whistleblowers get protections."

So, for example, if your model enables the creation or use of a chemical, biological, radiological, or nuclear weapon, that would qualify as "causing or materially enabling critical harm".

"Open model advocates claim that open models cannot comply with this, and thus this law would destroy open source. They have that backwards. Copies outside developer control need not be shut down. Under the law, that is."

The author of the "Guide" (Zvi Mowshowitz) talks at some length about the recurrent term "reasonable" throughout the law. What is reasonable? How do you define reasonable? Reasonable people may disagree.

What struck me was the arbitrariness of the $100 million threshold, and of the $10 million fine-tuning threshold. And the fact that they're fixed: as time goes on, computing power gets cheaper, so the models you can train just under those price points will keep getting more powerful. Inflation pulls the other way, shrinking what a fixed nominal $100 million buys, so over time less powerful models cross the threshold. Either way, a fixed dollar figure drifts away from whatever capability level it was originally meant to mark.
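A toy calculation makes the drift concrete. Assume, purely for illustration, some price per FLOP today and a steady annual decline (both numbers below are made up); the compute you can buy while staying just under the fixed $100 million line grows every year:

```python
# Toy illustration: a fixed $100M threshold buys more compute as prices fall.
# Both the starting price and the decline rate are made-up assumptions.

threshold_usd = 100e6
price_per_flop = 1e-17      # assumed $/FLOP today (hypothetical)
annual_decline = 0.5        # assume cost halves every year (hypothetical)

for year in (0, 2, 4, 6):
    price = price_per_flop * annual_decline ** year
    flop_under_threshold = threshold_usd / price
    print(f"year {year}: ${threshold_usd:,.0f} buys ~{flop_under_threshold:.0e} FLOP")
```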

But there's also a FLOPS threshold.

"To be covered models must also hit a FLOPS threshold, initially 10^26. This could make some otherwise covered models not be covered, but not the reverse."

"Fine-tunes must also hit a flops threshold, initially 3*(10^25) FLOPS, to become non-derivative."

FLOPS is usually expanded as "floating point operations per second", and that "per second" part is what struck me at first: couldn't you dodge the law by just training your models more slowly? But no: the bill's threshold is a cumulative count of floating point operations used in training (often written FLOP to distinguish it from the rate), not a rate. Train slower and you simply take longer to reach the same total.
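A two-line check of that arithmetic, with made-up figures: halving the cluster's speed doubles the wall-clock time but leaves the operation count the bill measures unchanged.

```python
# Total training compute is rate × time. Slowing down changes the rate and
# the duration, not the total. All figures below are made up.

rate = 1e18        # FLOP/s (hypothetical cluster)
seconds = 1e8      # training duration in seconds (hypothetical)

print(rate * seconds)              # 1e+26 total FLOP
print((rate / 2) * (seconds * 2))  # still 1e+26 total FLOP
```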

And unlike the $100 million and $10 million dollar amounts, the FLOP thresholds are not fixed. That's why the word 'initially' is there.

"There is a Frontier Model Board, appointed by the Governor, Senate and Assembly, that will issue regulations on audits and guidance on risk prevention. However, the guidance is not mandatory, and There is no Frontier Model Division. They can also adjust the flops thresholds."

What do you all think? Are all the AI companies going to move out of California, or is this just fine?

Guide to SB 1047 - Zvi Mowshowitz

#solidstatelife #ai #genai #llms #aiethics
