Written by: Suher Adi

The European Union has been leading in the development of AI policy, first with the passage of the EU AI Act on March 13th and now with the Council of Europe’s Treaty to Protect Artificial Intelligence, Human Rights, Democracy and the Rule of Law. The Treaty was agreed to on March 14th and attempts to bring the principles of the AI Act to countries not formally part of the Council of Europe. The United States, United Kingdom, Canada, and Japan participated in the negotiations as observers and advocated for national security carve-outs as well as exemptions for private companies.

The Council of Europe’s AI Treaty applies only to government bodies and to private technology companies acting on a government’s behalf. It acknowledges the impacts AI can have on human rights, such as discrimination and expanding inequalities, while also promoting digital literacy and the development of AI in a trustworthy manner. The treaty is intended to uphold existing international instruments such as the UN Convention on the Rights of the Child and the Universal Declaration of Human Rights. Parties’ obligations under the treaty also include maintaining the integrity of democratic processes and institutions, such as access to justice.

The treaty discusses actions to mitigate risks from AI systems, including developing “adequate transparency and oversight requirements tailored to specific contexts and risks” and establishing measures to ensure accountability for “adverse impacts on human rights, democracy and the rule of law.” The treaty also echoes the data privacy principles of the EU’s GDPR, deferring to applicable domestic and international laws and frameworks.

Interestingly, the AI treaty also includes Article 13 on Safe Innovation, which promotes developing the technology within “controlled environments,” including the testing of AI systems. The article specifies that testing and development of AI systems should be done under the supervision of “competent authorities,” but it does not define what constitutes a competent authority.

Remedies for harms related to infringements of human rights and democracy should be consistent with domestic legal systems. Examples include providing authorized government bodies access to information and communicating violations to impacted individuals. As safeguards, the treaty outlines disclosure to users that they are interacting with an AI system and measures to mitigate risks posed by AI. The treaty also mentions “risk management measures” as well as bans on certain uses of AI systems that are “incompatible with the respect of human rights and the functioning of democracy.”

Article 19 focuses on consultation by government bodies with the public and multiple stakeholders regarding the use of AI. The goal is to ensure that the concerns of different public stakeholders are considered in the deployment of AI systems. Each party will provide a report to the Conference of the Parties within the first two years of becoming a party, and periodically thereafter, detailing the activities described above. There will also be an oversight mechanism for treaty compliance, which has yet to be determined by the Council of Europe. As with any treaty, it remains subject to domestic ratification by each state.