The Software & Information Industry Association (SIIA) has expressed constitutional concerns regarding the current draft of HB 184 in a letter. SIIA argues that the bill, as currently drafted, could impose unnecessary burdens on both government agencies and the private sector. The requirement for annual assessments of AI systems by the New Mexico General Services Department is criticized as imposing needless costs: assessments are unlikely to change significantly within a year, and the requirement could cause confusion rather than clarity. The transparency requirements for vendors, which mandate the disclosure of sensitive information, are deemed risky and possibly in violation of statutory requirements. SIIA acknowledges the importance of overseeing AI systems but suggests importing language from SB 130, a study bill, as a more efficient and less risky alternative for achieving the same goals, and it advocates against passing such a substantive bill in a short budget session with limited stakeholder input.
Identifying Risk of AI Used In Law Enforcement and Immigration
Written by: Suher Adi
Similarly, regarding the law enforcement and immigration use cases outlined in the draft comment, there should be more tailored guidelines on what is covered under these uses. In the law enforcement context, the language is broad and far-reaching, describing a wide variety of potential government actions. It is not clear what is and is not included, and there is a good chance that this language will be interpreted to cover a number of otherwise low-risk AI applications. This would impede effective and responsible adoption of AI systems for critical governmental purposes without meaningfully adding to the protection of individuals’ rights and privileges. It also goes beyond the intended scope of the AI Executive Order, which focuses on the use of AI in “areas where AI can enhance law enforcement efficiency and accuracy, consistent with protections for privacy, civil rights, and civil liberties.”
The immigration use case covers facial recognition technology and surveillance technology more broadly. Because of this broad language, seemingly mundane uses, such as facial matching, would unnecessarily be flagged as “rights-impacting AI.” Facial matching, which we read to mean authentication and/or verification using facial recognition technology, presents significantly less risk than other uses of facial recognition technology or than many of the other uses contained in this category. For this reason, it would be helpful if this set of use cases were further delineated in accordance with a risk-based assessment.
There are many ways to address the use of AI in law enforcement and immigration contexts. First, we must disambiguate between law enforcement and surveillance-related risk assessments. While law enforcement has a well-understood meaning, “surveillance-related risk assessments” could apply to anything from law enforcement to FDA inspections; the scope of the term is unclear and could be interpreted in different ways. There should also be clarification about the use of the phrase “the conduct of cyber intrusions.”
Second, the focus should be on those law enforcement purposes that have a direct impact on individuals’ rights, such as litigation. Additionally, there must be limits on recourse mechanisms, such as notification to negatively affected individuals and opt-out rights. These limits are particularly important where recourse would jeopardize important government ends, including the need to maintain confidentiality in ongoing investigations; notably, such recourse is not required by either existing law or the Constitution. Clarifying these uses and assessments ensures that the use of AI for law enforcement and immigration does not infringe on people’s rights and that appropriate measures are included to mitigate potential harms in the future.
IN THE NEWS: We Must Protect Kids Online, But Bills Currently Before Congress Fail to Meet the Needs of the American Public
Pulled from Real Clear Policy
By Chris Mohr
Congress has made it clear that protecting children online is a priority for 2024. And while scheduling blockbuster hearings, hosting panels, and litigating in the press can help draw attention to a pervasive issue that demands a resolution, many of the proposed legislative solutions fail to actually keep kids safe online.
Bills like STOP CSAM, KOSA, and EARN IT, while valiant in their intentions, miss the bigger picture. Kids are safest from exploitation when online platforms are not forced to collect their personally identifiable information and other data. Requiring every online platform kids could access, whether social media sites, news publications, or online stores, to take inventory of birth dates, driver’s licenses, and the like puts our kids’ (and adults’) most private information in the hands of just about anyone who hosts a website on the internet, raising the risks of all-too-common data breaches and identity theft.
Legislation like this also has the very real potential to backfire. Allowing the government to subjectively restrict access to news media and other information, or limit free speech online, raises significant constitutional and civil liberty issues.
However, just because the bills currently in front of federal – and some state – legislators are problematic does not negate the opportunity or potential for real solutions that marry the protection of data with the safety of kids navigating an increasingly online world. Before any piece of online safety legislation moves forward, policymakers should consider the following criteria as core guidelines.
First and foremost, companies should minimize the collection, use, and retention of children’s data. Collecting and using only the information that is necessary, rather than forcing the collection of sensitive information, will lead to more privacy protection, not less. Data minimization is especially important for processing children’s data because children are generally more trusting and less aware of the risks related to sharing personal information.
Second, any piece of online safety legislation should require companies to maintain robust security safeguards for users, including children, that are appropriate to the level of sensitivity of the personal information collected, used, or shared. Security safeguards are a fundamental principle of privacy protection. These safeguards should incorporate protections against risks such as loss and unauthorized access, destruction, use, modification, or disclosure of data. They should also include precautions against the deliberate abuse or misuse of information and facilitate the detection of any violation of the security system.
Third, easy-to-use and easy-to-access tools that empower users to customize privacy protections and exercise rights over their own data should become the industry standard. Instead of putting bureaucrats in charge of deciding what content families can and cannot access, companies should provide these tools and be transparent about their privacy practices, giving users concise, prominent, and accessible privacy policies written in clear and plain language.
Fourth, it is also critical that policymakers recognize the need to help educate children and parents about safe online practices. Just as we teach children to look both ways before crossing the street, we also need to teach them to think about managing the personal information they choose to share, or not share, online. Funding for digital citizenship programs and other educational opportunities for families, educators, and the general public could go a long way in supporting a holistic approach to protecting the privacy of America’s children.
To accomplish all of these goals, policymakers can adopt strong rules, such as a preemptive federal comprehensive privacy law, to make sure platforms remain accountable for protecting all children across the United States.
We can create a more attainable reality where kids can indeed stay both safe and connected online; it just takes the right legislative solution.
Chris Mohr is President of SIIA.