International Tech Policy Development Recap of 2023

Written by: Suher Adi

2023 was a big year for developments in technology policy and regulation around the world. As the United States, United Kingdom, China, and the European Union raced to pass legislation on emerging technologies like generative AI, there were also more diplomatic opportunities for governments to engage with one another on global governance and guidelines. As we look forward to the technological advancements of the year to come, we are also keen to follow the policy developments emerging from these international forums and initiatives.

February 2023: The International Organization for Standardization (ISO), along with the International Electrotechnical Commission (IEC), published a new standard on artificial intelligence. ISO/IEC 23894 focuses on risk management related to the use and integration of AI and on how to embed risk management effectively into organizational processes. We are likely to see more efforts to standardize AI management and integration on a global scale throughout 2024.

May 2023: The United Nations (UN) launched the Global Digital Compact. According to UN Secretary-General António Guterres, the Global Digital Compact sets out principles, objectives, and actions for advancing an open, secure, and human-centered digital future. Stepping away from historical discussions of internet governance, the UN aims to position digital governance as a new framework for thinking about regulating the digital world. The UN has not yet defined “digital governance,” but the Compact is a notable step in the conversation on international tech regulation.

May 2023: The Hiroshima AI Process was launched following the G7 Hiroshima Summit. After continued discussions among G7 leaders in September and October, the Hiroshima AI Process Comprehensive Policy Framework was released in October. It was seen as the first international framework to include guiding principles and a code of conduct aimed at promoting safe, secure, and trustworthy advanced AI systems. In October, the G7 leaders also agreed on the International Guiding Principles for Advanced AI Systems in addition to the AI Code of Conduct. The Principles serve as a foundational document showcasing the G7 countries’ agreements on governance for advanced AI systems; the Code of Conduct lays out practical steps organizations can take to ensure that their AI systems are deployed with safety and security measures in place.

May 2023: The EU-US Trade and Technology Council (TTC) worked to develop a “code of conduct” to prevent the harms of generative AI. This was seen as the impetus for the G7 AI Code of Conduct and a way for the EU and US to jointly show their leadership in the international AI policymaking landscape. Upcoming meetings of the TTC are worth watching, as they could serve as a precursor for broader international discussions on technology trade, data transfers, and other major pieces of technology policy.

May 2023: The New Zealand-UK Free Trade Agreement went into full effect. The agreement covers many areas, including technology. It includes commitments to developing and using a risk-based approach to AI, to aligning the two countries’ AI policies with each other and with other international policy developments to ensure interoperability, and to enabling cross-border data flows without data localization. The agreement is considered to be among the most robust with regard to technology policy.

June 2023: South Korea became the first non-founding member to join the Digital Economy Partnership Agreement (DEPA). Signed in June 2020 by Chile, New Zealand, and Singapore, the DEPA was established to promote these countries as platforms for the digital economy. On AI, the DEPA countries have focused on promoting ethical governance frameworks and have taken international principles and developments into consideration in their multilateral work. Costa Rica is considered likely to join the agreement as well.

July 2023: The United Kingdom officially joined the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP). The UK’s largest trade agreement since leaving the European Union, the CPTPP now spans Asia, the Pacific, and Europe, and will allow for easier imports from Asian countries. The CPTPP is notable for its cross-border data flow commitments, its prohibition on requiring access to source code as a condition for importing technology, and its provisions ensuring that policies developed are interoperable with international agreements and standards.

September 2023: The first Japan-UK Strategic Economic Policy and Trade Dialogue was held in London. This bilateral effort touches on all aspects of trade and investment between the two countries. The Japan-UK Comprehensive Economic Partnership Agreement, which was signed in 2020, is notable in two ways: it includes commitments on cross-border data flows, and it bars data localization. The most recent meeting was attended by each country’s ministers responsible for technology and focused on collaboration on semiconductors and AI.

October 2023: The UK Online Safety Bill passed Parliament and became law as the Online Safety Act. The law includes a duty of care to prevent users from being harmed on platforms, in addition to stipulations on risk assessments for various harms, mirroring the EU’s Digital Services Act. This includes provisions protecting children from hate speech and graphic content, as well as age verification requirements for websites. Platforms are also expected to promptly remove illegal material once they have been notified of its existence. Some have raised concerns that the law’s new obligations on platforms may undermine their ability to continue offering users high-standard end-to-end encryption. Platforms failing their duty of care can incur fines of up to £18 million or 10% of their annual turnover, whichever is higher. The UK’s communications regulator, Ofcom, will release implementing regulations in the coming months.

October 2023: The Global Partnership on Artificial Intelligence (GPAI) released its Ministerial Declaration. The Ministerial Declaration, also known as the New Delhi Declaration, addresses concerns around the spread of misinformation through AI, the employment impacts of AI, intellectual property, data protection, and threats to human rights and democracy through the use of AI. The Declaration is also seen as a step toward advancing transparency and fairness in the adoption of AI. It emphasizes collaborative efforts to cultivate skills and knowledge around AI, including in policy development, infrastructure investment, risk management frameworks, and other governance techniques. The Declaration also stresses the importance of making policy conversations inclusive, especially of low- and middle-income countries, so that everyone can harness the potential of AI advancements and manage the risks. India will be the Lead Chair of GPAI for 2024.

October 2023: The UN Secretary-General launched an AI Advisory Body to address AI governance. The Advisory Body includes 39 members spanning countries and sectors, including the public and private sectors, civil society, and academia. It will focus on building global scientific consensus on the risks and challenges of AI and will seek to create avenues for international cooperation on AI governance. It released an interim report on options for international AI governance in December; the final report is expected by August 2024.

November 2023: The UK government hosted an AI Safety Summit. The Summit brought together the U.S., China, India, Japan, and EU countries, which signed the Bletchley Declaration to create a baseline for future action and committed to using the London-based AI Safety Institute, which aims to “act as a global hub on AI safety,” for future AI safety testing. Alongside the governments that signed the Declaration, companies including OpenAI, Google DeepMind, Microsoft, and Meta agreed to safety testing of their AI models. It is not yet clear which AI models will be subject to testing or when testing will take place, but this is the first international declaration recognizing that we should not simply allow companies to test their own products or take them at their word; rather, it sets a precedent that more oversight is needed.

November 2023: The Organisation for Economic Co-operation and Development (OECD) AI Policy Observatory updated its definition of “artificial intelligence.” The goal was to create a definition that multiple countries could agree on, allowing for interoperability across jurisdictions. In a departure from previous iterations, the language shifted away from “human-defined objectives” to focus instead on a system’s objectives and on how it infers, from the inputs it receives, outputs that can influence physical or virtual environments. The OECD is regarded as having developed the foundational set of AI principles on which most other policy developments have been based.

December 2023: The G7 endorsed the AI Code of Conduct that came out of the Hiroshima AI Process. The Code of Conduct aims to help countries unify their approach to AI governance and address the privacy concerns and security risks associated with the technology. It provides “voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems.” The Code of Conduct urges companies to take appropriate measures to identify, evaluate, and mitigate risks across the AI lifecycle, and to tackle patterns of misuse after AI products have been deployed. Because the guidance is voluntary, it allows countries in different jurisdictions to approach the guidelines in their own way.

December 2023: The ISO and IEC published ISO/IEC 42001, a new standard on artificial intelligence management systems (AIMS). The standard specifies requirements for establishing and improving an AIMS to ensure the responsible development and use of AI systems. This is the first AI management system standard, and it is likely that the ISO and IEC will set more standards to help provide guidance in this rapidly evolving field.

December 2023: The European Parliament and Council reached a political agreement on the European Union’s Artificial Intelligence Act after months of negotiations. Through the AI Act, the EU hopes to set worldwide regulatory standards for the use of AI technology. The AI Act uses a risk categorization approach to mitigate harms: the greater the risk, the greater the restrictions on that use of the technology. It also bans multiple uses of AI, such as social scoring systems, bulk scraping of facial images, and most emotion recognition systems in workplace and educational settings. Providers of “high-risk” AI will be subject to reporting requirements, such as disclosure to public databases and human rights impact assessments. EU countries were able to secure exemptions from some aspects of the law for military and defense uses of AI. The AI Act’s rules come into effect in 2025, and companies found to violate the Act will face fines ranging from 1.5% to 7% of global sales.


We expect 2024 to be no less busy, and SIIA will keep you informed about major developments throughout the year. 
