
‘An Opportunity to Be the Glue’; Wearing Your Values on Your Sleeve Can Be Rewarding to You and Your Community

“At Brief, we are emphatic about placing the right people in the right jobs,” Elizabeth Green, CEO of Tulsa-based Brief Media, once told my colleague Matt Kinsman. “We do an extensive amount of personality profiling, including looking at motivating factors. My personal number one motivating factor is altruism.”

Log in here to read how flashing your values can be valuable to your organization.


‘The Stakes Are Just Simply Too High’; Diverse Voices and Attention to the Inherent Risks Are Key to Developing Responsible AI Use, Panel Says

The “Building Responsible U.S. Leadership in AI” webinar, presented by SIIA on March 28, featured government officials and industry experts discussing responsible AI, pending legislation, and US/EU harmonization.

SIIA President Jeff Joseph introduced the event as the second in the 2022 Tech & series.

He discussed the organization’s support for consumer data privacy protection, saying that it will “enable continued innovation and the responsible use of emerging technologies such as AI. Advancing a model for the responsible development and use of artificial intelligence is among the most important components of a U.S. approach to fostering economic competitiveness.” Joseph underscored SIIA’s celebration of Women’s History Month, focused on highlighting leaders who are pushing the envelope on AI policy and advancing opportunities for women in the fields of AI and STEM.

In keynote remarks, Rep. Jan Schakowsky (D-Ill.), head of the House Energy and Commerce Consumer Protection Subcommittee, set the table for a vigorous discussion on the fairness-to-all risks involved in the development of artificial intelligence—and the steps being taken to overcome those risks.

“The future of responsible artificial intelligence is in the diverse voices that we need to hear from,” she said. “Over the last few years we have seen that AI is susceptible to bias, because the people who created it are susceptible to bias, especially against women and people of color.

“AI is being used to determine whether or not some people get parole, get homes, get into schools and even get a job. These are not trivial decisions; they’ve formed the foundation of our lives, and we have to take seriously the dangers that accompany the promise of AI. I call on technology companies using artificial intelligence to diversify their workforce… The stakes are just simply too high.”

Mitigating risks was a constant theme of the day and arose during the fireside chat between Dr. Divya Sridhar, SIIA’s Senior Director, Data Protection, and Dr. Sorelle Friedler, Assistant Director for Data and Democracy at the White House Office of Science and Technology Policy. Friedler shared the OSTP’s plans and opportunities for the U.S. to lead by building a rights-preserving approach to AI that would support marginalized communities with better tech and data design.

“As we’re moving out of this public engagement phase, what we’re trying to do is think concretely about what guardrails can be built into automated systems,” said Friedler. “I know from my time as a software engineer that many of the guardrails that we need are already best practices within software development. But, the harms of these technologies are real. Instead of thinking about a risk-based approach, I find it useful to think about ways… of protecting people’s fundamental rights, their access to opportunities and their access to vital needs.”

The importance of balancing rapid AI development with strong consumer privacy protections was raised by Rep. Zoe Lofgren (D-Calif.), chairperson of the Committee on House Administration. “What we mean by AI can be so blurry… It’s constantly changing and can be hard to scope policy in that direction, and so I think it can be more useful to think about scoping policy based on its potential impact.

“It’s so important that we continue to foster opportunities for women in AI and other STEM fields, where representation has long been a problem.” She described “the central challenges of our age”—especially digital privacy—and the “profound risks that we’re only beginning to understand and [that] we’re doing virtually nothing to regulate.

“Now artificial intelligence is at the center of this,” she continued, “and, while we know that AI has tremendous potential to benefit humanity, it also comes with risk. Ultimately, we have to ensure that AI is not used to constrain or manipulate Americans, which means that regulating AI comes down to regulating privacy and digital data.”

Issie Lapowsky, chief correspondent with Protocol, moderated an engaging panel discussion, noting, “When we talk about responsible use of AI by government bodies or by military and law enforcement, it’s not just how it’s developed, but who gets to use it.”

Elham Tabassi, chief of staff, IT Lab, NIST, spoke about finding the “truth” or middle ground, so to speak, between those who believe that AI is going to improve our lives and those warning about its risks and possible harm.

“And that’s exactly what we’re trying to do—define what the risks are so we can get the attention of the technology developers and designers to understand those terms. We want to advance the measurement science for being able to not only understand what those requirements and practices are but be able to measure them in quantitative or qualitative work.”

Alice Hau, tech fellow, U.S. Senate Committee on Commerce, Science and Transportation, sensed a growing urgency. “A number of lawmakers are now looking to pay attention to the field,” she said. “And, as it matures, think about different ways that agencies like the Federal Trade Commission can step in and actually hold companies more accountable to implementing responsible AI practices.

“At the same time, some of these proposals still feel a little bit young, and [there are] suggestions to establish a new bureau that would be responsible for implementing that law. That might warrant greater scrutiny in terms of questions of where the appropriate place within government might be to house something that would enforce this kind of transparency, oversight and accountability in companies that implement AI.”

Julie Babayan, senior manager for government relations and public policy at Adobe, spoke about the practicalities, giving an example from Photoshop of how diverse representation led to spotting a bias in the development of its new Neural Filters, allowing engineering teams to make improvements and update the AI data set before the feature was released.

“We spent a year-long process to make sure that we were developing AI technologies in an ethical and responsible and inclusive way for our customers and for our communities,” she said. “When I read the NIST framework, [I saw] a desire to take best practices, not just from all of the many stakeholders that fed into this process but also take themes from the EU—and make sense of them and put them into perspective so that organizations that are developing AI are doing it in a responsible fashion.”

The entire webinar can be viewed here.



American Innovation and Choice Online Act – A Solution in Search of a Problem

A statement from Paul Lekas, SIIA SVP for Global Public Policy:

The American Innovation and Choice Online Act (S.2992) and its companion bill in the House (H.R.3816) are a solution in search of a problem. The sweeping pronouncements made yesterday by the acting head of the Department of Justice’s Office of Legislative Affairs in support of those bills don’t change that. Contrary to what the DOJ letter asserts, it is not at all clear that there is a need, let alone a pressing one, for Congress to augment or clarify existing antitrust law or for the government to try to influence how digital markets work in the manner these bills propose. Our current antitrust laws have served us well for over a century and continue to do so.

Beyond being unnecessary, however, there is every reason to believe that the bills present serious risks to our national security and, if enacted, would dangerously weaken our cybersecurity defenses at a time when both are under threat. The DOJ letter glosses over these concerns. Striking the right balance between the many and varied interests that these bills seek to address requires difficult tradeoffs and careful deliberation. Given what is at stake, we cannot afford to get this wrong.


For more on SIIA’s concerns with S.2992 and H.R.3816, see here.


“Unleashing” PETs to increase the potential of our digital economy

SIIA Senior Director for Data Protection Policy Divya Sridhar published a piece in GRC Outlook on privacy-enhancing technologies (PETs). PETs are techniques that use cryptography and statistics to minimize the amount of identifiable personal data processed between entities, while still ensuring accuracy in the measurement and analysis of the data. Common examples of PETs include homomorphic encryption, anonymization, and differential privacy, techniques that provide additional protection by never revealing personal data in plain text.
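Of the techniques named above, differential privacy is the simplest to sketch in a few lines. The snippet below is a minimal, illustrative implementation of the classic Laplace mechanism for a counting query; it is not drawn from Dr. Sridhar's article, and the dataset, function names, and epsilon value are all hypothetical:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a Laplace(0, scale) distribution
    via inverse-CDF (inverse transform) sampling."""
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count. A counting query has sensitivity 1,
    so adding Laplace(1/epsilon) noise satisfies epsilon-DP."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical data: how many members are 40 or older?
ages = [34, 29, 41, 52, 38, 27, 45]
noisy_count = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

A smaller epsilon adds more noise and gives stronger privacy; production systems rely on vetted libraries rather than hand-rolled samplers like this one.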

Read her article here.


Event Recap: Building Responsible U.S. Leadership in AI

SIIA hosted a public event on Monday, March 28, from 2:00 – 3:30 pm ET to spotlight women’s leadership in AI policy. The event featured government officials and industry experts discussing responsible AI, pending legislation, and US/EU harmonization. The keynote was delivered by Rep. Jan Schakowsky, Chair of the Subcommittee on Consumer Protection in the House Energy and Commerce Committee. We hosted a fireside chat with Sorelle Friedler of the White House OSTP and heard remarks from Rep. Zoe Lofgren, Chair of the House Administration Committee. Following that, we hosted an expert panel that included Elham Tabassi, Chief of Staff, IT Lab, NIST; Alice Hau, Tech Fellow, US Senate Committee on Commerce, Science and Transportation; and Julie Babayan, Senior Manager for Government Relations and Public Policy, Adobe. The moderator was Issie Lapowsky, Chief Correspondent with Protocol. Politico featured the event as the top story in Morning Tech, which can be viewed here.

Highlights from the event include:

  • SIIA President Jeff Joseph introduced the event as the second in the Tech & series in 2022. He spoke to SIIA’s support for consumer data privacy protections that enable continued innovation and responsible use of emerging technologies such as AI. He noted that the U.S. government should make this a priority in its foreign policy, or risk ceding the regulatory playing field to those who don’t share our democratic values and support for a free and open internet. He also noted SIIA’s enthusiasm for Friday’s announcement of an agreement in principle with the EU on data privacy and data transfers, a good first step. Jeff underscored SIIA’s celebration of Women’s History Month, focused on highlighting leaders who are pushing the envelope on AI policy and advancing opportunities for women in the fields of AI and STEM.
  • The keynote was delivered by Chair Schakowsky, who discussed the importance of underlying data and algorithmic design in technology development. She focused on the need for the U.S. to advance federal privacy legislation as a compelling starting point for addressing algorithmic bias and as an important leveling of the playing field in keeping the United States innovative in the geopolitical race. We have to protect the privacy of the consumer data that feeds into AI algorithms, said Rep. Schakowsky (D-Ill.), head of the House Energy and Commerce Consumer Protection Subcommittee and a key player in the long-running (and long-stalled) conversation around comprehensive privacy legislation.
  • Next, the event featured a fireside chat between SIIA’s Senior Director for Data Policy, Dr. Divya Sridhar, and Sorelle Friedler, Assistant Director for Data and Democracy at the White House Office of Science and Technology Policy. Friedler shared her background in machine learning, her work as a technologist, and how both have shaped her career in academia and policymaking. She discussed the OSTP’s plans and opportunities for the U.S. to lead by building a rights-preserving approach to AI that would support marginalized communities with better tech and data design. She also touched on the future of funding for science and tech, which will require additional investments in computer science and opportunities to support young people pursuing trajectories in computer science, coding, AI, and much more.
  • In her remarks, Rep. Lofgren, Chair of the House Administration Committee, noted the importance of balancing rapid AI development with strong consumer privacy protections and shared more about her proposal, the Online Privacy Act. “It’s very important that we get the details right,” Lofgren said at the SIIA event on Monday. Privacy rules set by Congress in the next few years, she said, “could set the stage for responsible data collection and AI for decades to come.”
  • During the moderated panel, public and private sector experts discussed a range of topics, including: the NIST framework for responsible AI use and its recent report on reducing bias in AI; Congress’ plans for the National AI Initiative and proposals like the Algorithmic Accountability Act; the use of AI by government and industry; and how industry players like Adobe have made strides toward reducing AI bias through impact assessments and a review board dedicated to supporting that process. In closing, the group touched on the need for more diverse representation of women and technologists in government and the positive efforts underway to bolster the STEM pipeline in Congress.

A recording of the event can be viewed on the SIIA YouTube Channel, here.