In the News: These near-impossible standards would make California lose its edge in AI

Opinion: Will this California bill to regulate AI protect consumers or gut tech?

Source: Los Angeles Times

By Paul Lekas

California is the cradle of American innovation. Over the years, many information and tech businesses, including ones my association represents, have delivered for Californians by creating new products for consumers, improving public services and powering the economy. Unfortunately, legislation making its way through the California Legislature threatens to undermine the state's brightest innovators by targeting frontier, or highly advanced, AI models.

The bill goes well beyond its stated focus of addressing real concerns about the safety of these models while ensuring that California reaps the benefits of the technology. Rather than targeting foreseeable harms, such as the use of AI for predictive policing based on biased historical data, or holding accountable those who use AI for nefarious purposes, SB 1047 would ultimately prohibit developers from releasing AI models that can be adapted to address the needs of California consumers and businesses.

SB 1047 would do this by, in effect, forcing those at the forefront of new AI technologies to anticipate every possible way their models might be misused and to prevent that misuse. This is simply not possible, particularly because there are no universally accepted technical standards for measuring and mitigating frontier model risk.

Were SB 1047 to become law, California consumers would lose access to AI tools they find useful, as well as to AI tools designed to protect them from malicious activity enabled by other AI. That's like halting production of a prescription medication because someone took it illegally or overdosed.

To be clear, concerns with SB 1047 do not reflect a belief that AI should proliferate without meaningful oversight. There is bipartisan consensus that we need guardrails around AI to reduce the risk of misuse and address foreseeable harms to public health and safety, civil rights and other areas. States have led the way in enacting laws to disincentivize the use of AI for ill. Indiana, Minnesota, Texas, Washington and California, for example, have enacted laws to prohibit the creation of deepfakes depicting intimate images of identifiable individuals and to restrict the use of AI in election advertising.

Congress is also considering guardrails to protect elections, privacy, national security and other concerns while maintaining America’s technological advantage. Indeed, oversight would be best handled in a coordinated manner at the federal level, as is being pursued through the AI Safety Institute launched at the National Institute of Standards and Technology, without the specter of civil and criminal liability. This approach recognizes that frontier model safety requires massive resources that no state, even California, can muster.

So although it is essential for elected leaders to take steps to protect consumers, SB 1047 goes too far. It would force emerging and established companies to weigh near-impossible standards for compliance against the value of doing business elsewhere. California could lose its edge in AI innovation. And AI developers outside the U.S. not subject to the same transparency and accountability principles would see their position strengthened, inevitably putting American consumers’ privacy and security at risk.

Paul Lekas is the head of global public policy and government affairs for the Software & Information Industry Assn. in Washington.
