
‘Creating a Space That Gives You Support’; Employee Retention Takes New Thinking

Even with most offices opening up, the days of taking new employees around to meet their new colleagues may be behind us. "The world has changed," Anne Holland, co-founder of MJBiz (which was recently sold to Emerald for $120 million), told me. But that kind of connection did help employee retention and in-house collaboration. Have we replaced it?

Log in here to read new retention steps.



‘An Opportunity to Be the Glue’; Wearing Your Values on Your Sleeves Can Be Rewarding to You and Community

“At Brief, we are emphatic about placing the right people in the right jobs,” Elizabeth Green, CEO of Tulsa-based Brief Media, once told my colleague Matt Kinsman. “We do an extensive amount of personality profiling, including looking at motivating factors. My personal number one motivating factor is altruism.”

Log in here to read how flashing your values can be valuable to your organization.


‘The Stakes Are Just Simply Too High’; Diverse Voices and Attention to the Inherent Risks Are Key to Developing Responsible AI Use, Panel Says

The Building Responsible U.S. Leadership in AI webinar, presented by SIIA on March 28, featured government officials and industry experts discussing responsible AI, pending legislation, and U.S.-EU harmonization.

SIIA President Jeff Joseph introduced the event, as the second in the 2022 Tech & series.

He discussed the organization's support for consumer data privacy protection, saying that it will "enable continued innovation and the responsible use of emerging technologies such as AI. Advancing a model for the responsible development and use of artificial intelligence is among the most important components of a U.S. approach to fostering economic competitiveness." Joseph also noted SIIA's celebration of Women's History Month, which highlighted leaders who are pushing the envelope on AI policy and advancing the representation of women in AI and STEM fields.

In keynote remarks, Rep. Jan Schakowsky (D-Ill.), head of the House Energy and Commerce Consumer Protection Subcommittee, set the table for a vigorous discussion on the fairness-to-all risks involved in the development of artificial intelligence—and the steps being taken to overcome those risks.

“The future of responsible artificial intelligence is in the diverse voices that we need to hear from,” she said. “Over the last few years we have seen that AI is susceptible to bias, because the people who created it are susceptible to bias, especially against women and people of color.

“AI is being used to determine whether or not some people get parole, get homes, get into schools and even get a job. These are not trivial decisions; they’ve formed the foundation of our lives, and we have to take seriously the dangers that accompany the promise of AI. I call on technology companies using artificial intelligence to diversify their workforce… The stakes are just simply too high.”

Mitigating risks was a constant theme of the day and arose during the fireside chat between Dr. Divya Sridhar, SIIA's Senior Director, Data Protection, and Dr. Sorelle Friedler, Assistant Director for Data and Democracy at the White House Office of Science and Technology Policy. Friedler shared the OSTP's plans and opportunities for the U.S. to lead by building a rights-preserving approach to AI that would support marginalized communities with better tech and data design.

“As we’re moving out of this public engagement phase, what we’re trying to do is think concretely about what guardrails can be built into automated systems,” said Friedler. “I know from my time as a software engineer that many of the guardrails that we need are already best practices within software development. But, the harms of these technologies are real. Instead of thinking about a risk-based approach, I find it useful to think about ways… of protecting people’s fundamental rights, their access to opportunities and their access to vital needs.”

The importance of balancing rapid AI development with strong consumer privacy protections was raised by Rep. Zoe Lofgren (D-Calif.), chairperson of the Committee on House Administration. “What we mean by AI can be so blurry… It’s constantly changing and can be hard to scope policy in that direction, and so I think it can be more useful to think about scoping policy based on its potential impact.

"It's so important that we continue to foster opportunities for women in AI and other STEM fields, where representation has long been a problem." She described "the central challenges of our age"—especially digital privacy—and the "profound risks that we're only beginning to understand and [that] we're doing virtually nothing to regulate.

“Now artificial intelligence is at the center of this,” she continued, “and, while we know that AI has tremendous potential to benefit humanity, it also comes with risk. Ultimately, we have to ensure that AI is not used to constrain or manipulate Americans, which means that regulating AI comes down to regulating privacy and digital data.”

Issie Lapowsky, chief correspondent with Protocol, moderated an engaging panel discussion, noting, "When we talk about responsible use of AI by government bodies or by military and law enforcement, it's not just how it's developed, but who gets to use it."

Elham Tabassi, chief of staff, IT Lab, NIST, spoke about finding the “truth” or middle ground, so to speak, between those who believe that AI is going to improve our lives and those warning about its risks and possible harm.

“And that’s exactly what we’re trying to do—define what the risks are so we can get the attention of the technology developers and designers to understand those terms. We want to advance the measurement science for being able to not only understand what those requirements and practices are but be able to measure them in quantitative or qualitative work.”

Alice Hau, tech fellow, U.S. Senate Committee on Commerce, Science and Transportation, sensed a growing urgency. “A number of lawmakers are now looking to pay attention to the field,” she said. “And, as it matures, think about different ways that agencies like the Federal Trade Commission can step in and actually hold companies more accountable to implementing responsible AI practices.

“At the same time, some of these proposals still feel a little bit young, and [there are] suggestions to establish a new bureau that would be responsible for implementing that law. That might warrant greater scrutiny in terms of questions of where the appropriate place within government might be to house something that would enforce this kind of transparency, oversight and accountability in companies that implement AI.”

Julie Babayan, senior manager for government relations and public policy for Adobe, spoke about the practicalities, giving an example from Photoshop of how having diverse representation led to spotting a bias in the development of their new Neural Filters—allowing their engineering teams to make improvements and update the AI data set before the feature was released.

“We spent a year-long process to make sure that we were developing AI technologies in an ethical and responsible and inclusive way for our customers and for our communities,” she said. “When I read the NIST framework, [I saw] a desire to take best practices, not just from all of the many stakeholders that fed into this process but also take themes from the EU—and make sense of them and put them into perspective so that organizations that are developing AI are doing it in a responsible fashion.”

The entire webinar can be viewed here.



American Innovation and Choice Online Act – A Solution in Search of a Problem

A statement from Paul Lekas, SIIA SVP for Global Public Policy:

The American Innovation and Choice Online Act (S.2992) and its companion bill in the House (H.R.3816) are a solution in search of a problem. The sweeping pronouncements made yesterday by the acting head of the Department of Justice's Office of Legislative Affairs in support of those bills don't change that. Contrary to what the DOJ letter asserts, it is not at all clear that there is a need, let alone a pressing one, for Congress to augment or clarify existing antitrust law or for the government to try to influence how digital markets work in the manner these bills propose. Our current antitrust laws have served us well for over a century and continue to do so.

Beyond being unnecessary, however, there is every reason to believe that the bills present serious risks to our national security and, if enacted, would dangerously weaken our cybersecurity defenses at a time when both are under threat. The DOJ letter glosses over these concerns. Striking the right balance between the many and varied interests that these bills seek to address requires difficult tradeoffs and careful deliberation. Given what is at stake, we cannot afford to get this wrong.


For more on SIIA’s concerns with S.2992 and H.R.3816 see here.


“Unleashing” PETs to increase the potential of our digital economy

Divya Sridhar, SIIA's Senior Director for Data Protection Policy, published a piece in GRC Outlook on privacy-enhancing technologies (PETs). PETs are techniques that use cryptography and statistics to minimize the amount of identifiable personal data processed between entities, while still ensuring accuracy in the measurement and analysis of that data. Common examples of PETs include homomorphic encryption, anonymization, and differential privacy, techniques that provide additional protection by never revealing personal data in plain text.
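To make the idea concrete, here is a minimal sketch of one PET mentioned above, differential privacy, applied to a simple count query. The function name, dataset, and epsilon values are illustrative assumptions for this newsletter, not drawn from Sridhar's article:

```python
import random

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count of how many records satisfy predicate.

    A count query has sensitivity 1 (one person joining or leaving the
    dataset changes the true count by at most 1), so adding Laplace noise
    with scale 1/epsilon yields epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Illustrative data: ages of eight hypothetical survey respondents.
ages = [23, 35, 41, 29, 52, 38, 61, 27]
noisy_over_40 = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

A smaller epsilon means more noise and stronger privacy; the analyst sees only the noisy total, never the underlying records, which is the "never revealing personal data in plain text" property the article describes.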

Read her article here.