Online Speech and AI

Written by: Suher Adi

As discussed in the OMB AI Draft Memo, some uses of AI may have an impact on individuals’ rights and safety. In our previous blog post, we discussed ways OMB can and should narrow the scope of the assessments applied to high-risk AI. AI can affect people’s fundamental rights, and SIIA agrees that those specific impacts should be covered under a different risk assessment model and should trigger the minimum requirements review.

However, the discussion of online speech is not new to technology companies, and it is a topic that has long been debated in the courts. It is important to consider the ways OMB can refine the “rights-impacting AI” use cases so that they apply where they are most appropriate. Section 230 of the Communications Decency Act, passed in 1996, enshrines technology companies’ ability to moderate content online at their own discretion. Content moderation is increasingly important to ensuring that inappropriate and illegal content is not shared online. Integrating AI into content moderation tools could lead to less biased decisions and lower the cost of moderation strategies.

When OMB mentions issues in the AI Draft Memo regarding “blocking, removing, hiding or limiting the reach of protected speech,” it must be noted that this can be done with existing online content moderation strategies. For years, these strategies have relied on a combination of human review and machine learning to determine whether user-generated content comports with content policies. While the First Amendment provides robust protection for these practices in the private sector, it limits the federal government’s discretion to block, remove, hide, or limit the reach of protected speech. This use case should therefore specifically reference government-run websites and platforms and acknowledge online companies’ history of content moderation to date.

As the Supreme Court decides a variety of cases, including several related to content moderation, technology companies will continue, within their rights, to moderate harmful content online, regardless of whether they develop AI.
