SIIA Supports the Invest in Child Safety Act to Combat Online Child Exploitation

The Software and Information Industry Association (SIIA) has joined a letter to Congress advocating for adequate funding to address child safety online. The letter emphasizes the importance of adequate resources for prosecutors and law enforcement to effectively investigate and prosecute child predators involved in online child sexual abuse material (CSAM). The letter cites a 2023 report by Stop Child Predators revealing that, due to limited capacity and resources, law enforcement in 2022 was able to investigate less than 1% of reported CSAM cases. SIIA urges policymakers to support the Invest in Child Safety Act of 2024 (S. 3689 | H.R. 7134), a bipartisan bill aimed at providing $5 billion in mandatory funding to investigate and target predators, along with additional support for community-based efforts and mental health professionals. SIIA remains committed to collaborating with Congress to ensure the safety of children online and to support the prosecution of individuals involved in child exploitation.

This statement can be attributed to Sara Kloek, Vice President, Education and Children’s Policy, Software & Information Industry Association.

The Software & Information Industry Association is one of the signatories on a letter sent today in support of the Invest in Child Safety Act of 2024. The bipartisan legislation includes crucial funding for law enforcement and prosecutors to effectively investigate and prosecute the child predators and bad actors reported to them – a critical component of combating child sexual abuse material (CSAM) online. Indeed, a recent study found that law enforcement investigates less than 1% of all CSAM reports, largely due to a lack of resources. The bill also recognizes the mental health needs of victims and their families by providing funding and support for mental health professionals.

We encourage the full Congress to enact this important bill, which aligns with the mission of SIIA’s Keep Kids Safe and Connected campaign to protect the privacy, safety, and security of children and students online.

Cracking the Code: What are the implications for tech and information policy?

By Paul Lekas, Senior Vice President, Global Public Policy and Morten Skroejer, Senior Director for Technology Competition Policy

A few weeks ago, SIIA released Cracking the Code. As the trade association for companies in the business of information, we intend Cracking the Code to be the first of what we hope will be a recurring deep dive into how people interact with and use information.

Cracking the Code focuses on how people find information online in their personal and professional lives. The findings, captured here, here, and here, are striking – not only because they are interesting in their own right, but also because they have direct implications for policymakers in the United States and abroad.

What We Found

For this survey, we asked 2,000 U.S.-based internet users aged 18-70 how they search for information online. The goal was to map the digital landscape and to gain an empirical, evidence-based understanding of how people use various platforms to access information in their personal lives and professional capacities.

What we found was that there has been a gradual but significant change in people’s online search habits. A decade or two ago, traditional search engines, such as Yahoo! and Google, were the primary means for finding information online. And while those types of traditional platforms are obviously still around and widely used, thinking of internet search as the domain of traditional engines is out of touch with people’s actual behavior today.

For many users, there is a clear distinction between how they use the internet to search for information related to their work, their educational pursuits, or their personal interests.

Searches that are precipitated or inspired by personal interests often involve extensive explorations that can take unexpected turns and lead in surprising directions. These types of exploratory searches often involve a combination of social media and content-specific websites, at least as much as traditional search engines. In fact, our research shows that younger generations, especially those in the Millennial and Gen Z cohorts, are more hesitant to use traditional search engines when they look for information online, and are much more likely to rely on social media platforms and niche websites instead.

Work-related information searches, on the other hand, tend to be more focused and less open-ended. For that reason, traditional search engines are still the preferred means of conducting those types of inquiries. Even so, there is a growing awareness among users of the immense potential benefits of platforms that incorporate generative AI and, as a result, of how these types of tools are likely to reshape traditional search methods in the near-to-medium term.

Traditional internet search engines that were in vogue 10-15 years ago are no longer as relevant because newer, more refined search platforms are better calibrated to the needs and interests of younger internet users. But that is the nature of competition and exactly what you would expect from the continued, dynamic transformation of the digital frontier.

What This Means For Policy

Cracking the Code suggests that policies centered on traditional search engines as the gatekeepers to the internet reflect the wrong way to think about how the internet actually works today. Social media platforms, news websites, niche websites, and, increasingly, GenAI tools should be considered entry points to the internet on par with traditional search engines. And within each of these categories there is a range of differences that are too often lumped together.

Empirical data demonstrating how people find information online in their personal and professional lives has a range of implications for policymakers.

First, policymakers should draw on research like this in developing bills and policies that are future-proof. The general lesson from this research is that policymakers should focus not on particular technologies but on the actions and harms that warrant regulation. This approach will help cultivate rules of the road that are technology-neutral, future-proof, and capable of adapting as technology develops. Adopting a risk-based approach to technology regulation is one way to achieve these goals.

Second, in the context of the internet search ecosystem, this research is directly relevant to debates on competition. Antitrust policy and enforcement are designed to ensure that we protect and promote competition, not to insulate specific companies from the effects of competition. Fundamentally, this is about market definition. While consumers recognize the difference between traditional search engines, social media platforms, and GenAI apps, when it comes to looking for information online, they are using these tools – and more – in a similar way. The upshot is that when lawmakers focus on the alleged dominance of search engine platforms, they are looking at a snapshot of internet behavior that could be obsolete – perhaps consistent with conventional wisdom, but inconsistent with reality.

The debate in the last Congress over whether to pass the American Innovation and Choice Online Act (AICOA) is a case in point. AICOA sought to create a set of special rules that would apply to just a handful of companies in the tech industry, based largely on a range of assumptions (and conventional wisdom) that, as Cracking the Code and other studies have shown, do not match the reality of user behavior on the internet.

Third, future-proofing becomes harder as new technology develops rapidly and younger generations lead adoption and drive change. Indeed, it is possible that we are looking at information and tech policy in the wrong way. Technology is changing rapidly. GenAI is a new concept for most internet users; tools like ChatGPT have been available to the public for less than 18 months. More empirical research is critical to furthering policy debates.

Closing

One of SIIA’s core objectives is to foster public policy solutions that support a healthy digital ecosystem and are sufficiently robust to withstand changes driven by technology and politics. We do not know what technological innovation will bring or how user behavior will respond. Yet it is all but certain that the quest for knowledge and exploration will continue to evolve rapidly and to profoundly shape our digital interactions. As policymakers debate legislative and regulatory changes in this area, and discuss how to future-proof any such endeavors, they will need to pay close attention to these developments.

SIIA Statement on Supreme Court Arguments on Texas and Florida Content Moderation Laws

The following statement can be attributed to Chris Mohr, President, Software & Information Industry Association.

The ability of private entities to decide what to publish, and what not to publish, is a fundamental linchpin of First Amendment freedoms. Today’s arguments before the Supreme Court make it clear that these two laws from Florida and Texas abridge those rights and would rework years of constitutional law. Parts of the Florida and Texas laws would sharply restrict the editorial discretion of social media platforms, including SIIA members Meta and Google.

These laws unconstitutionally restrict platforms’ ability to engage in content moderation by restricting their choices about whether and how to present user-generated content to the public, and they impermissibly burden constitutionally protected editorial decisions by requiring an explanation every time a platform decides to remove user content. Platforms should have the right to decide what content is appropriate and to remove content that they believe is harmful to themselves or the public.

Platforms are not like common carriers, which exercise no editorial judgment, and giving the government control over the content as if the platforms were state actors would have extraordinary implications for the internet, the entire U.S. economy, and the Constitution.

The Year of Deepfakes

Written by: Suher Adi

Once an afterthought in the broader policy debates around AI, deepfakes have taken center stage. SIIA has been leading industry on the topic for the last few years, working with partners to ensure deepfakes are appropriately managed to minimize harm to society.

Industry has been working to meet the challenges of deepfake content and to move policies ahead of current standards, protecting users and the integrity of their platforms. Recently, Meta developed a policy to detect and label AI-generated images posted by users across all its platforms. It will also incorporate visible and invisible watermarks in images generated with Meta’s AI tools. Additionally, Google has announced that its Gemini AI models will incorporate digital watermarks and restrict the ability to generate images of named people. Google has also joined the Coalition for Content Provenance and Authenticity (C2PA), serving on its steering committee to enhance industry collaboration on this issue.

The most recent industry collaboration on deepfakes was announced a few days ago, when companies including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, Snap, TikTok, and X signed a pledge to combat the use of deceptive artificial intelligence (AI) in this year’s elections. In all, about 20 companies have joined the pledge, taking important steps to ensure that technology is not used by bad actors to undermine democracy. The pledge extends beyond the United States, focusing on securing elections in the more than 40 countries set to hold elections throughout the year.

These steps are all necessary to ensure that companies do everything within their power to minimize the consequences of deepfake content on their platforms. However, stopping the harms of deepfakes will require agility and collaboration beyond the private sector.

Multiple pieces of legislation addressing the impacts of deepfakes have been introduced in Congress. The Preventing Deep Fake Scams Act (H.R. 5808), a bipartisan bill, focuses on preventing AI deepfake content from being used to scam consumers and those vulnerable to identity theft and fraud in the financial sector. The DEEPFAKES Accountability Act (H.R. 5586) would require any deepfake content depicting a person to include a disclaimer using a digital watermark.

Two other bills have been linked to AI’s use in pop culture. The No AI FRAUD Act (H.R. 6943), introduced in January of this year, would protect individuals’ voices and likenesses in the AI age, effectively making AI-generated voice content illegal if created without consent. The Preventing Deepfakes of Intimate Images Act (H.R. 3106) regained traction after explicit AI-generated images of Taylor Swift surfaced online. The bill would provide an avenue for civil suits against individuals who post deepfake images of others without their consent, including images of children.

In addition to congressional action, the White House and federal agencies have developed guidance to deal with deepfake content. The White House Executive Order on Artificial Intelligence, issued in October 2023, directed the Department of Commerce to develop guidance on watermarking and labeling AI-generated content. The National Security Agency (NSA) and other U.S. federal agencies have issued guidance on “synthetic media content known as deepfakes.” The Federal Election Commission (FEC) has also issued a notice of its intent to develop rules around “deliberately deceptive artificial intelligence campaign advertisements.”

Industry, Congress, and federal agencies can work together to develop policies and strategies that protect consumers and users of online platforms from fraud. Our democracy and society at large are better off for these collaborative policy efforts.

To watch the event on this topic held with Adobe and the Center for Democracy and Technology, click here: https://www.youtube.com/watch?v=D1Y_5JhUhCs.

SIIA Advocates for Secure-by-Design Software Principles in Response to CISA’s Request for Information

The Software & Information Industry Association (SIIA) has responded to the Cybersecurity and Infrastructure Security Agency’s (CISA) request for information on the white paper, “Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Secure by Design Software.” SIIA advocates for alignment with the NIST Secure Software Development Framework, emphasizing stakeholder engagement and encouraging feedback from industry-led organizations. SIIA recommends adopting flexible principles over rigid controls for secure-by-design software, promoting government support for standardization, and incorporating vulnerability testing in procurement processes. Overall, SIIA strongly supports the objectives of the white paper while urging a collaborative approach that draws on industry expertise and adheres to established frameworks.