Identifying Risks of AI Used in Law Enforcement and Immigration

Written by: Suher Adi

As with the law enforcement and immigration use cases outlined in the draft comment, there should be more tailored guidelines on what falls under these uses. In the law enforcement context, the language is broad and far-reaching, describing a wide variety of potential government actions. It is not clear what is and is not included, and there is a good chance that this language will be interpreted to cover a number of otherwise low-risk AI applications. This would impede effective and responsible adoption of AI systems for critical governmental purposes without meaningfully adding to the protection of individuals’ rights and privileges. It also goes beyond the intended scope of the AI Executive Order, which focuses on the use of AI in “areas where AI can enhance law enforcement efficiency and accuracy, consistent with protections for privacy, civil rights, and civil liberties.”

The immigration use case outlines the use of facial recognition technology and, more broadly, surveillance technology. Because the language is so broad, seemingly mundane uses, such as facial matching, would unnecessarily be flagged as “rights-impacting AI.” Facial matching, which we read to mean authentication and/or verification using facial recognition technology, presents significantly less risk than other uses of facial recognition technology or than many of the other uses contained in this category. It would therefore be helpful if this set of use cases were further delineated in accordance with a risk-based assessment.

There are several ways to address the use of AI in law enforcement and immigration contexts. First, we must disambiguate between law enforcement and surveillance-related risk assessments. While “law enforcement” has a well-understood meaning, the scope of “surveillance-related risk assessments” is unclear and could be interpreted to apply to anything from law enforcement to FDA inspections. There should also be clarification about the meaning of “the conduct of cyber intrusions.”

Second, the focus should be on those law enforcement purposes that have a direct impact on individuals’ rights, like litigation. Additionally, there must be limits on recourse mechanisms, such as notification to negatively affected individuals and opt-out rights. These limits are particularly important where such mechanisms would jeopardize important government ends, including the need to maintain confidentiality in ongoing investigations. Importantly, such recourse mechanisms are not required by either existing law or the Constitution. Clarifying these uses and assessments will help ensure that the use of AI for law enforcement and immigration does not infringe on people’s rights and that appropriate measures are included to mitigate potential harms in the future.