
The Next Administration Should Harness the Power of American Tech

It’s August of an election year, which means the presidential campaign is in full swing. Right now, both candidates, former President Donald Trump and Vice President Kamala Harris, along with their running mates, are building out their policy platforms and providing additional details about their priorities for the next administration.

Many are tuning into the respective party conventions to learn more about these policy platforms. Joining in on the national conversation, SIIA has helped host the Innovation Forum at both party conventions, an event that brings industry leaders, experts, and policymakers together to discuss the most pressing issues facing the tech industry and offer perspectives that can help inform policy for the next administration.

For both the Republican and Democratic tickets, there is still a lot to learn about their policy preferences for the expansive tech industry. Former President Trump and his vice presidential pick JD Vance did not spend much of their time at the convention outlining their plans for American tech. However, at the Republican convention Innovation Forum, Republican lawmakers joined industry experts and leaders for a robust discussion that dove into advancing American innovation, AI and the workforce, and views from Congress.

Just as in Milwaukee for the Republican convention, we are looking forward to hearing more in Chicago this week for the Democratic convention. We can glean perspective on the policy priorities of a potential future administration by examining the candidates’ records. Looking at the Harris-Walz ticket, much attention has been paid to the Vice President’s former roles as a district attorney and as Attorney General of California, our nation’s largest tech hub, as well as her record as a Senator from that same state. For his part, Governor Walz has taken a commonsense approach to cybersecurity, privacy, and AI. As co-chair of the National Governors Association, he advocated to improve the nation’s cyber resilience, and he has signed legislation that prohibits the nonconsensual dissemination of deepfake intimate images and the use of deepfakes to influence an election. We will be tuning into the Democratic National Convention this week to learn more about these particular policy platforms and any additional details the candidates can provide.

The Innovation Forum aims to inform the larger conversation about the tech industry and show how American innovation is helping to drive economic growth, give America a competitive edge across a multitude of industries, and foster a better, brighter world. We hope that both presidential contenders take note and lay out an agenda that harnesses the power of America’s technology industry for the benefit of the American people.


SIIA Commends NIST for Finalizing Post-Quantum Cryptography Standards

The following statement can be attributed to Paul Lekas, Senior Vice President, Global Public Policy for SIIA.

The Software & Information Industry Association (SIIA) applauds the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) for the finalization of its post-quantum cryptography (PQC) standards – a set of encryption algorithms designed to withstand cyberattacks from a quantum computer. As the digital landscape rapidly evolves, these new encryption algorithms represent a crucial step forward in safeguarding our information infrastructure against the emerging threats posed by quantum computing.

SIIA recognizes the importance of proactive measures to secure digital information and protect privacy. Encryption has become the backbone of protecting both consumer and national security information, but quantum computing is slated to change this landscape over the next decade. NIST’s leadership in developing PQC standards reflects a forward-thinking approach and provides an effective framework for organizations to begin integrating these algorithms into their systems to protect against potential quantum-based cyber threats.

We commend NIST for its rigorous and inclusive multi-year process in development of these standards, a testament to America’s role as a global technology leader. SIIA looks forward to continued collaboration with NIST and other key stakeholders to support the development and implementation of vital standards across the entire technology ecosystem.
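To make that integration concrete, below is a minimal sketch of what early adoption of one finalized algorithm, ML-KEM (FIPS 203), can look like in practice. It assumes the open-source liboqs-python bindings are installed; the `oqs` module and its `KeyEncapsulation` API belong to that particular library and are not part of the NIST standard itself.

```python
# A minimal key-establishment sketch using liboqs-python (an assumed
# dependency), which implements ML-KEM-768, one of the algorithms NIST
# finalized in FIPS 203. Illustrative only, not a production design.
import oqs

ALG = "ML-KEM-768"  # algorithm name as exposed by recent liboqs versions

# The receiver generates a key pair and publishes the public key.
with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()

    # The sender encapsulates a fresh shared secret against that public key,
    # producing a ciphertext that is safe to send over an untrusted network.
    with oqs.KeyEncapsulation(ALG) as sender:
        ciphertext, sender_secret = sender.encap_secret(public_key)

    # The receiver recovers the same shared secret from the ciphertext.
    receiver_secret = receiver.decap_secret(ciphertext)

assert sender_secret == receiver_secret  # both sides now share a symmetric key
```

Swapping a classical key exchange for a construction like this, often in a hybrid mode alongside existing algorithms, is the kind of migration organizations can begin planning now.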


Illinois Approves Amendment Lowering BIPA Damages

On August 2, Illinois formally amended its Biometric Information Privacy Act (BIPA) to limit the financial penalties assessed against companies operating in the state, specifically when these entities illegally obtain or sell biometric identifiers. In the wake of ballooning penalties inflicted on companies operating in the state, SIIA supports the amendment as a first step toward limiting the runaway “sue and settle” strategies employed by the plaintiffs’ bar under BIPA. We are particularly encouraged to see Illinois take this important step because BIPA’s pre-amendment language not only mandated a statutory damages multiplier, but did so without requiring injury to consumers – leading to sizable and frequent suits. Other states accordingly avoided this language when drafting their own biometrics bills, and we are pleased to see Illinois walking away from this unworkable framework.

The amendment, which was approved by the Illinois Legislature in May and signed by Governor Pritzker, provides “that a private entity that more than once collects or discloses a person’s biometric identifier or biometric information from the same person in violation of the Act has committed a single violation for which the aggrieved person is entitled to, at most, one recovery.” Going forward, multiple violations involving an individual’s biometric data will be counted as a single violation.

Prior to the change, BIPA required entities operating in the state to obtain express written consent for the collection or use of biometric data. The law permitted alleged victims of companies illegally obtaining and selling their identifiers to sue for damages of up to $1,000 for each negligent violation and $5,000 for each intentional or reckless violation.

The amendment effectively overturns the Illinois Supreme Court’s 2023 ruling in Cothron v. White Castle, which held that companies could be held liable each time they misused a person’s private information – not only the first time. The change is almost certain to significantly reduce potential damages and settlement values associated with BIPA claims.
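A back-of-the-envelope illustration makes the scale of the change clear; all figures below are hypothetical assumptions chosen only to show the mechanics, not drawn from any actual case.

```python
# Hypothetical BIPA exposure before and after the amendment.
# All numbers are illustrative assumptions, not from any actual case.

employees = 1_000          # size of a hypothetical class
scans_per_person = 500     # e.g., roughly two fingerprint scans per shift for a year
per_violation = 1_000      # statutory damages for a negligent violation ($)

# Pre-amendment, under the 2023 ruling: every scan is a separate violation.
pre_amendment = employees * scans_per_person * per_violation

# Post-amendment: repeated collection from the same person is one violation.
post_amendment = employees * 1 * per_violation

print(f"Pre-amendment exposure:  ${pre_amendment:,}")   # $500,000,000
print(f"Post-amendment exposure: ${post_amendment:,}")  # $1,000,000
```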

Notably, BIPA is unique among biometrics laws on the books in that it permits individuals to sue for damages in the first place. Although other states, such as Colorado, have passed laws substantively similar to BIPA, none include BIPA’s private right of action (PRA). The PRA has received widespread criticism for the uncapped penalties it attaches to foot-fault violations – and even to “white hat” uses of biometrics, such as anti-fraud and consumer identification technologies.

In addition to lowering damages, the amendment’s recognition of the excesses of BIPA’s PRA is likely to limit BIPA copycats in other states – particularly bills that contain rights of action with similar language.


Both Congress and Private Sector Innovators Have Important Roles in Combating Deep Fake Intimate Imagery

Over the years, leading U.S.-based tech companies have developed technologies that have provided new opportunities for Americans. However, while these innovations have benefited many consumers and businesses, they have also introduced new risks and potential for misuse.

As generative imagery technology has improved in recent years, there has been a concerning increase in artificially generated images – known as “deep fakes” – that portray people in sexually explicit contexts without their consent. These non-consensual intimate images (NCII) cause serious and lasting distress for victims, and young Americans are particularly at risk.

Understanding the horrific effects of these images, American companies are often the first line of defense against NCII and are taking significant measures to combat them. Many tech platforms, for example, have simplified the process for victims of NCII to request the removal of deep fake explicit imagery from search engines at scale and are continuing to refine their detection and removal capabilities. This includes leveraging AI technology to find and remove explicit images and their duplicates. Additionally, companies have taken steps to train their algorithms to proactively identify and flag search terms that present a higher risk of explicit deep fake content appearing. As companies continue to innovate, this technology will only improve, reducing the risk, scale and impact of NCII.
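As one concrete illustration of what duplicate detection can involve: platforms can compare perceptual hashes of new uploads against hashes of previously reported images, so a known NCII image is flagged even after resizing or re-encoding. The sketch below uses the open-source Pillow and imagehash packages as stand-ins; the blog does not name the specific (and far more robust) proprietary systems platforms actually run, and the file names here are hypothetical.

```python
# A minimal sketch of perceptual-hash duplicate detection, assuming the
# open-source Pillow and imagehash packages are installed. Illustrative
# only; real platform systems are far more robust.
from PIL import Image
import imagehash

# Hashes of images already confirmed as NCII (hypothetical examples).
known_hashes = [imagehash.phash(Image.open("reported_image.png"))]

def is_likely_duplicate(path: str, max_distance: int = 8) -> bool:
    """Flag an upload whose perceptual hash is near any known NCII hash.

    Unlike a cryptographic hash, a perceptual hash changes only slightly
    when an image is resized, re-compressed, or lightly edited, so a small
    Hamming distance suggests the same underlying picture.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known < max_distance for known in known_hashes)

if is_likely_duplicate("new_upload.jpg"):  # hypothetical upload
    print("Match found: route to review and removal pipeline")
```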

SIIA welcomes the proactive approach the private sector has taken in tackling this issue, but it’s also important that lawmakers use the tools at their disposal to prevent these abuses from gaining traction. Concerningly, despite bills passed at the state level and executive actions taken by the White House, federal law does not currently provide adequate protections for victims of NCII, especially AI-generated NCII. However, leaders in Washington – including Senators Ted Cruz, Amy Klobuchar, Josh Hawley, Dick Durbin and Lindsey Graham – are leading the charge on legislative solutions designed to address the issue by creating criminal and civil penalties for bad actors. Taking this important step would close gaps in current law and provide the first federal legal framework to hold perpetrators accountable for creating and distributing real or deep fake NCII without consent. Additionally, by establishing clear legal consequences, including fines, damages, and other penalties, these solutions would serve as a deterrent to potential offenders and offer victims a way to seek justice.

Further, as Congress considers legislation such as the TAKE IT DOWN Act and the DEFIANCE Act to hold bad actors accountable, it’s also essential that lawmakers provide legal clarity that allows private actors to continue effectively combating NCII.

Congress can and should act urgently to establish a legal framework to combat the production and distribution of NCII. It is equally important for lawmakers to work with American companies, which can continue to play a critical role in defending against these abuses and can often anticipate future risks and respond to them in a timely manner. This collaborative approach is essential to create a comprehensive federal standard that both punishes offenders and empowers victims. By combining legislative action with technological innovation, elected officials can embrace new solutions to better protect individuals from the harmful effects of NCII.


In the News: These near-impossible standards would make California lose its edge in AI

Opinion: Will this California bill to regulate AI protect consumers or gut tech?

Source: LA Times

By Paul Lekas

California is the cradle of American innovation. Over the years, many information and tech businesses, including ones my association represents, have delivered for Californians by creating new products for consumers, improving public services and powering the economy. Unfortunately, legislation making its way through the California Legislature is threatening to undermine the brightest innovators by targeting frontier — or highly advanced — AI models.

The bill goes well beyond the stated focus of addressing real concerns about the safety of these models while ensuring that California reaps the benefits of this technology. Rather than targeting foreseeable harms, such as using AI for predictive policing based on biased historical data, or holding accountable those who use AI for nefarious purposes, SB 1047 would ultimately prohibit developers from releasing AI models that can be adapted to address needs of California consumers and businesses.

SB 1047 would do this by in effect forcing those at the forefront of new AI technologies to anticipate and mitigate every possible way that their models might be misused and to prevent that misuse. This is simply not possible, particularly since there are no universally accepted technical standards for measuring and mitigating frontier model risk.

Were SB 1047 to become law, California consumers would lose access to AI tools they find useful. That’s like stopping production of a prescription medication because someone took it illegally or overdosed. Consumers would also lose access to AI tools designed to protect Californians from malicious activity enabled by other AI.

To be clear, concerns with SB 1047 do not reflect a belief that AI should proliferate without meaningful oversight. There is bipartisan consensus that we need guardrails around AI to reduce the risk of misuse and address foreseeable harms to public health and safety, civil rights and other areas. States have led the way in enacting laws to disincentivize the use of AI for ill. Indiana, Minnesota, Texas, Washington and California, for example, have enacted laws to prohibit the creation of deepfakes depicting intimate images of identifiable individuals and to restrict the use of AI in election advertising.

Congress is also considering guardrails to protect elections, privacy, national security and other concerns while maintaining America’s technological advantage. Indeed, oversight would be best handled in a coordinated manner at the federal level, as is being pursued through the AI Safety Institute launched at the National Institute of Standards and Technology, without the specter of civil and criminal liability. This approach recognizes that frontier model safety requires massive resources that no state, even California, can muster.

So although it is essential for elected leaders to take steps to protect consumers, SB 1047 goes too far. It would force emerging and established companies to weigh near-impossible standards for compliance against the value of doing business elsewhere. California could lose its edge in AI innovation. And AI developers outside the U.S. not subject to the same transparency and accountability principles would see their position strengthened, inevitably putting American consumers’ privacy and security at risk.

Paul Lekas is the head of global public policy and government affairs for the Software & Information Industry Assn. in Washington.