Written by: Sharan Sawlani
While most eyes were on the Senate Judiciary Committee’s hearing featuring OpenAI’s Samuel Altman and IBM’s Christina Montgomery, the Homeland Security and Governmental Affairs Committee (HSGAC) held its own hearing on artificial intelligence in government. The hearing addressed both the potential benefits of AI in public service and pitfalls such as bias, privacy risks, and the need for transparency.
In his opening statement, Chairman Gary Peters (D-MI) emphasized the need for the U.S. to maintain global leadership in the AI space. He recognized AI’s potential to improve services for Americans but cautioned that guardrails and planning for potential risks are necessary. These include bias, privacy harms stemming from extensive data collection, the need for transparency and accountability, and the workforce expertise required to manage and develop these systems within agencies.
Ranking Member Rand Paul (R-KY), however, pointed out that government agencies have long used existing, early-stage AI systems to clandestinely surveil, track, and otherwise monitor U.S. citizens, especially those engaged in disfavored political activism on both the right and the left. He emphasized the danger that more powerful AI systems could be used to further censor dissenting viewpoints under the guise of preventing the spread of misinformation.
Witnesses for this hearing included:
- Richard A. Eppink: Of Counsel, ACLU of Idaho
- Taka Ariga: Chief Data Scientist, GAO
- Lynn Parker, PhD: Associate Vice Chancellor and Director, AI Tennessee Initiative, University of Tennessee
- Daniel Ho: Professor, Stanford Law School
- Jacob Siegel: Writer
Their key recommendations centered on creating frameworks and standards for transparency and accountability in AI development and use, oversight and enforcement for high-risk use cases, resources to hire and retain experts across federal agencies, and the need to fill open AI leadership positions. Chairman Peters, Taka Ariga, and Daniel Ho also notably agreed on the necessity of a federal privacy law to mitigate some of the risks associated with the large amounts of data that can be collected through the use of AI. Additionally, the large troves of information collected to enable AI systems such as large language models (LLMs) in the first place may include sensitive data, such as health or consumer financial information, that raises heightened consumer privacy concerns.
This hearing follows the White House’s recent announcement of new initiatives to strengthen American leadership in AI technology and to encourage responsible use of AI within the U.S. government. Throughout the summer, we anticipate that the federal government will continue to pay especially close attention to the implications of AI use and innovation. As AI continues to shape our world, transparency, accountability, and respect for individual privacy will remain paramount.