The strength of AI lies in its ability to recognize patterns and generate information that helps humans make decisions. Doing that successfully requires robust datasets and finely tuned algorithms capable of detecting patterns across the data and producing accurate predictions. Yet predictions made by image generators and large language models are of limited value if they fail to match user intentions or if they reflect unintended bias.
We saw this play out when Google first released the image generation feature of its AI tool, Gemini. It was, admittedly, not ready for prime time, producing images that were inaccurate and, as the company itself described, sometimes overcautious. Once the problem surfaced, Google made the responsible decision to swiftly pull the Gemini feature that generates images of people from public access.
Gemini was not the first AI product to be released prematurely. Remember Tay, one of Microsoft's first forays into AI: in less than twenty-four hours, Tay's Twitter account began to regurgitate racist and antisemitic comments it had found online.
OpenAI's ChatGPT also experienced issues. At launch, users reported ChatGPT's tendency to confidently report inaccurate information, and experts identified cybersecurity and privacy risks in the model. At the time, however, OpenAI and Microsoft chose to make adjustments while keeping the tool available, effectively allowing the American public to participate in the testing stage.
Plenty of other AI tools have amplified stereotypes without their developers acknowledging the errors. Output that amplifies stereotypes and bias, rather than presenting content patently at odds with our notions of reality, is much harder to detect and may cause less of a stir. But it is no less problematic.
This leads, of course, to the question of what to do once these problems arise.
Responsible industry practice demands that firms address problems proactively and head-on. For example, we would expect AI developers to identify potential cyber risks associated with AI and invest in ensuring that bad actors cannot use cyber-attacks to expose private personal information and business secrets. We would also expect a certain degree of civic responsibility, for example by limiting answers to election-related queries. The responsible deployment of generative AI requires taking preventative measures before release and also adjusting these products afterward, addressing concerns as they arise so that AI tools match our current reality.
That is exactly what Google did with Gemini. Although errors occurred, what happens in the aftermath matters, and Google quickly removed the problematic Gemini feature from public access. Overreacting to mistakes in new AI products with calls for heavy-handed regulation, however, risks reducing competition in the tech industry and stifling innovation. Gemini's rollout was far from perfect, but we should avoid overcorrecting on policy and vilifying responsible companies that are treating this complicated technology with care.