‘The Stakes Are Just Simply Too High’; Diverse Voices and Attention to the Inherent Risks Are Key to Developing Responsible AI Use, Panel Says

The Building Responsible U.S. Leadership in AI webinar, presented by SIIA on March 28, featured government officials and industry experts discussing responsible AI, pending legislation, and U.S./EU harmonization.

SIIA President Jeff Joseph introduced the event as the second in the 2022 Tech & series.

He discussed the organization’s support for consumer data privacy protection, saying that it will “enable continued innovation and the responsible use of emerging technologies such as AI. Advancing a model for the responsible development and use of artificial intelligence is among the most important components of a U.S. approach to fostering economic competitiveness.” Joseph underscored SIIA’s celebration of Women’s History Month, which focused on highlighting leaders who are pushing the envelope on AI policy and advancing the representation of women in AI and STEM fields.

In keynote remarks, Rep. Jan Schakowsky (D-Ill.), chair of the House Energy and Commerce Subcommittee on Consumer Protection and Commerce, set the table for a vigorous discussion of the fairness risks involved in developing artificial intelligence—and the steps being taken to mitigate those risks.

“The future of responsible artificial intelligence is in the diverse voices that we need to hear from,” she said. “Over the last few years we have seen that AI is susceptible to bias, because the people who created it are susceptible to bias, especially against women and people of color.

“AI is being used to determine whether or not some people get parole, get homes, get into schools and even get a job. These are not trivial decisions; they form the foundation of our lives, and we have to take seriously the dangers that accompany the promise of AI. I call on technology companies using artificial intelligence to diversify their workforce… The stakes are just simply too high.”

Mitigating risk was a constant theme of the day and arose during the fireside chat between Dr. Divya Sridhar, SIIA’s Senior Director of Data Protection, and Dr. Sorelle Friedler, Assistant Director for Data and Democracy at the White House Office of Science and Technology Policy (OSTP). Friedler shared the OSTP’s plans and opportunities for the U.S. to lead by building a rights-preserving approach to AI that would support marginalized communities with better technology and data design.

“As we’re moving out of this public engagement phase, what we’re trying to do is think concretely about what guardrails can be built into automated systems,” said Friedler. “I know from my time as a software engineer that many of the guardrails that we need are already best practices within software development. But, the harms of these technologies are real. Instead of thinking about a risk-based approach, I find it useful to think about ways… of protecting people’s fundamental rights, their access to opportunities and their access to vital needs.”

The importance of balancing rapid AI development with strong consumer privacy protections was raised by Rep. Zoe Lofgren (D-Calif.), chairperson of the Committee on House Administration. “What we mean by AI can be so blurry… It’s constantly changing and can be hard to scope policy in that direction, and so I think it can be more useful to think about scoping policy based on its potential impact.

“It’s so important that we continue to foster opportunities for women in AI and other STEM fields, where representation has long been a problem.” She described “the central challenges of our age”—especially digital privacy—and the “profound risks that we’re only beginning to understand and [that] we’re doing virtually nothing to regulate.

“Now artificial intelligence is at the center of this,” she continued, “and, while we know that AI has tremendous potential to benefit humanity, it also comes with risk. Ultimately, we have to ensure that AI is not used to constrain or manipulate Americans, which means that regulating AI comes down to regulating privacy and digital data.”

Issie Lapowsky, chief correspondent with Protocol, moderated an engaging panel discussion, stating, “When we talk about responsible use of AI by government bodies or by military and law enforcement, it’s not just how it’s developed, but who gets to use it.”

Elham Tabassi, chief of staff of NIST’s Information Technology Laboratory, spoke about finding the “truth,” or middle ground, so to speak, between those who believe that AI is going to improve our lives and those warning about its risks and possible harm.

“And that’s exactly what we’re trying to do—define what the risks are so we can get the attention of the technology developers and designers to understand those terms. We want to advance the measurement science for being able to not only understand what those requirements and practices are but be able to measure them in quantitative or qualitative work.”

Alice Hau, tech fellow, U.S. Senate Committee on Commerce, Science and Transportation, sensed a growing urgency. “A number of lawmakers are now looking to pay attention to the field,” she said. “And, as it matures, think about different ways that agencies like the Federal Trade Commission can step in and actually hold companies more accountable to implementing responsible AI practices.

“At the same time, some of these proposals still feel a little bit young, and [there are] suggestions to establish a new bureau that would be responsible for implementing that law. That might warrant greater scrutiny in terms of questions of where the appropriate place within government might be to house something that would enforce this kind of transparency, oversight and accountability in companies that implement AI.”

Julie Babayan, senior manager for government relations and public policy at Adobe, spoke about the practicalities, giving an example from Photoshop of how diverse representation led to spotting a bias in the development of its new Neural Filters—allowing Adobe’s engineering teams to make improvements and update the AI dataset before the feature was released.

“We spent a year-long process to make sure that we were developing AI technologies in an ethical and responsible and inclusive way for our customers and for our communities,” she said. “When I read the NIST framework, [I saw] a desire to take best practices, not just from all of the many stakeholders that fed into this process but also take themes from the EU—and make sense of them and put them into perspective so that organizations that are developing AI are doing it in a responsible fashion.”

The entire webinar can be viewed here.