
New California AI bill threatens to cripple U.S. innovation

Senate President Pro Tempore Mike McGuire, right, talks to State Sen. Scott Wiener, chairman of the Senate Budget and Fiscal Review Committee, left, before Wiener presents legislation to reduce the state budget deficit at the Capitol in Sacramento, Calif., Thursday, April 11, 2024. (AP Photo/Rich Pedroncelli)

California legislators are alarmingly close to passing a bill that would strangle artificial intelligence (AI) innovation in its crib. Currently under consideration, Senate Bill 1047 – called the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” – would stifle innovation and competition in a set of technologies with global significance at a time when China and other countries are looking to race ahead.

The bill’s authors briefly acknowledge the “wide range of benefits” that AI might provide, and that much of that innovation is driven by California companies. Unfortunately, the bill sets forth a regulatory regime based on requiring permission from the state at every turn, operating under a philosophy that treats new AI models as dangerous until certified otherwise.

Forcing technology to advance at the pace of bureaucracy is exactly the prescription for squashing innovation.

Enforcement under the bill would be overseen by a new regulatory agency, the Frontier Model Division (FMD). This new agency would be granted sweeping authority to promulgate rules and guidance, and to define its own scope regarding which AI models qualify as “reasonably able to cause or enable a critical harm.” The FMD would also be given authority to fund itself via fees levied on companies seeking its approval, effectively a tax on new AI models.

Although the bill is touted as regulating only the largest, most powerful computational models on the “frontier” of AI development, it unwisely defines a “covered model” by an arbitrary threshold of computing power and of the cost used to adapt and fine-tune the model. Indeed, as AI computation becomes more powerful and affordable, an increasing number of smaller companies are sure to exceed this threshold.

While the FMD would have the power to adjust this threshold, it risks falling victim to what is commonly known as the “pacing problem,” wherein a technology – in this case, the speed of AI computation – advances faster than regulators can adapt. Covered model developers also would be required to build in a “kill switch” that the FMD, in its sole discretion, could use to shut down their model and any derivatives.

Worse, SB 1047 would cripple the development and dissemination of new open-source AI models by requiring AI developers to certify to the FMD that not only their base model but any spin-offs created by others would be incapable of causing serious harm, defined as causing $500 million or more in damages. 

That’s obviously impossible to predict in advance, yet model developers who guess incorrectly could be tried for perjury and face crippling fines. For example, if a bad actor were to figure out how to take an otherwise innocuous open-source AI model and train it to spread malware, the creators of the original model could be held liable and fined up to 30 percent of the model’s development cost. 

Such a risk would likely lead emerging developers to pay the higher costs of licensing closed AI models, the best of which are controlled by Big Tech – in effect, creating AI monopolies. Large, comparatively closed models like Gemini and GPT-4 are useful, but the open-source ecosystem can serve as an important competitive check and provide a greater level of transparency.

While AI systems could pose some risks, a heavy-handed regulatory regime like the one SB 1047 proposes ignores the revolutionary benefits that AI may provide in fields like health care, agriculture, transportation and many more. If the United States is to realize these benefits and remain competitive with the rest of the world, it is important to regulate AI applications based on a rational assessment of risk rather than fear of the technology itself.

Most individualized risks of AI can be addressed by enforcing existing laws rather than setting up a new AI czar. With so much of U.S. AI development taking place in California, the effects of such an innovation-stifling, overly cautious regulatory framework would reverberate well beyond the state’s borders. 

If the Legislature continues down this path, it may watch as the best fruits of the AI revolution are harvested elsewhere, especially as China and other nations invest heavily to catch up with innovation that would be happening here if policymakers let it.

Josh Withrow is a technology and innovation fellow at the R Street Institute.

Source link: https://www.ocregister.com/2024/07/14/new-california-ai-bill-threatens-to-cripple-u-s-innovation/amp/


Publish date: 2024-07-14 08:55:09

