SIIA joins many of our members in providing feedback on the National Telecommunications and Information Administration's (NTIA's) Request for Comment on AI Accountability Policy. First and foremost, our response focuses on a workable definition and scope for AI accountability.
We believe the development of accountable AI requires a risk-based approach. Existing legal frameworks already address many of the concerns we share regarding AI; a risk-based framework should therefore be the starting point for any new laws and regulations that fill the remaining gaps, calibrated to the risk of harm an AI system presents. That calculus is naturally different for AI systems that, for example, determine consumer eligibility for financial or employment opportunities than it is for pedestrian AI features such as autocorrect.
Furthermore, we strongly support sector-based governance of AI systems. Government agencies that already possess deep expertise in consumer protection issues implicating AI should not be supplanted; rather, they need additional funding to upskill their workforces to grapple with the challenges and opportunities posed by AI.
Finally, we believe federal legislation that clarifies and enhances oversight of AI systems could be beneficial. It would help avoid the inefficient and costly “patchwork” of regulations now proliferating in areas such as data privacy protection, and it would position the US to lead the world both in AI development and in sensible regulation informed by expert input.