Artificial Intelligence is in the sights of regulators around the world, with major and diverse new legislation on AI rules being brought forward in the EU, USA, and China, amongst others. Individual countries have also brought in regulation of specific AI tools in response to the recent rapid development of Large Language Models such as OpenAI’s ChatGPT or Google’s LaMDA.
At the end of April, the European Parliament reached agreement in committee stage on the proposed AI Act, which would seek to regulate AI systems in four categories, ranging from “Unacceptable Risk AI” to “Minimal Risk AI”. The Act, which was proposed by the European Commission in 2021, aims to be technology neutral. In its current form, it will ban some uses of AI where there is considered to be an unacceptable risk. In other cases, it will impose obligations such as requiring risk assessment, the use of high-quality data sets, and clear information to be provided to users. “Minimal risk AI” systems, which the European Commission has indicated covers most AI systems used in Europe, can be freely used and developed under the proposed rules.
Meanwhile, the federal government in the US is starting to consider its own set of AI regulations. In 2022, the White House published a ‘Blueprint for an AI Bill of Rights’, setting out five principles to guide the development and release of AI systems, covering “Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and Human Alternatives, Consideration and Fallback”. While the Blueprint itself is not legally binding, the White House has indicated that a number of the provisions in the five principles are already covered by the constitution and other laws. The US National Telecommunications and Information Administration, an advisory body to the White House, has issued a call for feedback on how policies can be developed to regulate AI, increase accountability, and limit potential harms from the technology.
These steps towards regulating AI come as groups across society, ranging from educators to law enforcement, employers, the media, and governments, grapple with how to respond to the impact that AI can have. In some cases, countries have taken steps to limit the use of specific AI systems. Italy’s data protection regulator required OpenAI, provider of the ChatGPT AI chat service, to stop processing Italian users’ data due to concerns that the system was in breach of the EU’s General Data Protection Regulation.
Beyond government actions, a number of academic institutions, private organisations and individuals, including CEPIS, have also recently signed an open letter calling for a pause in the development of large-scale AI systems to allow regulators to catch up. CEPIS Board member, Meltem Gönenç Eryılmaz, said, “While technologies are created, threats and processes are seen simultaneously, and appropriate regulations must be developed accordingly. Of course, the issues that the AI systems development race has caused cannot be solved in six months, but it would be a good start, especially to give time for the development of more in-depth knowledge and awareness about the issue.”
Reflecting a different approach to AI regulation, China’s Cyberspace Administration has also recently published draft rules on AI systems as major Chinese tech firms, including Alibaba and Baidu, release AI-powered chatbots and services. The proposed rules would require that discriminatory data is not used to train systems, ban the generation of false information, and require that generated content reflects the Chinese state’s values.