Artificial intelligence (AI), and associated technologies such as ChatGPT, have gained greater prominence in recent months. Regulatory bodies and governments are noticing the speed with which these technologies are integrating into everyday life, and the EU has now proposed an Artificial Intelligence Act to bring AI under regulatory control. But will the UK follow suit, and what does this mean for AI innovation?
ChatGPT has been headline news as its popularity grows, but it has also prompted concern. Students have started using ChatGPT for essay writing, with schools suddenly seeing a spike in some students’ grades. Some teachers, having noticed this sudden burst in productivity and learning, have employed AI detector tools which can flag whether content was written by ChatGPT. Essentially, one AI ratting out another.
Some job hunters are using AI to enhance their CVs and cover letters, or to write them entirely. On the other side, employers are also using AI to screen and vet the CVs that come in. Two AI systems working harmoniously, with both employer and prospective employee thinking the other party is none the wiser. It’s safe to say AI is certainly encroaching on everyday life.
Technology is meant to make life easier, and arguably using AI increases productivity; however, students and employees still need to acquire the knowledge and skills themselves to succeed in their careers. This is a growing concern for many, including Professor Stuart Russell of the University of California, Berkeley, who has written multiple academic papers on the possibilities of AI but also the threat it can pose. Others concerned include Tesla founder Elon Musk and Apple co-founder Steve Wozniak, who signed an open letter calling for a six-month pause on the development of advanced AI systems. Italy placed a temporary ban on ChatGPT at the end of March over concerns that data privacy was being breached. The ban has now been lifted, after Italy’s data protection authority confirmed that ChatGPT had made changes to be more transparent with users.
Now legislation is coming, and the EU is the first to take that step. The goal of the AI Act is to categorise AI systems by the risk they pose. “The first being applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.”
The classification system will determine the risk of an AI system: video games and spam filters, for example, will be deemed low risk, whereas biometric identification systems (such as facial recognition) in public spaces will be deemed high risk. Facial recognition is already highly contested in the UK. Big Brother Watch has frequently protested against identification systems used by police in public spaces. The campaign group found that between 2016 and 2023, 3,000 people were wrongfully stopped, with 85% of facial recognition matches proving inaccurate.
What happens if an AI development is deemed high risk?
If an AI project is categorised as high risk, it will be subject to rigorous testing. Developers and users must also maintain proper documentation of data quality and an accountability framework detailing human oversight. Examples of high-risk AI include self-driving cars, medical devices, and critical infrastructure machinery.
If developers don’t adhere to these rules, there will be penalties, with fines reaching up to €30 million or 6% of global turnover. Submitting false or misleading documentation will also attract fines. By introducing the first legislation on AI, the EU is leading the way in ensuring the technology is controlled whilst still allowing research and development.
The University of Oxford has created a tool called capAI, a procedure for conducting conformity assessments of AI systems in line with the EU Artificial Intelligence Act.
Human Rights Watch has shared its concerns about the Act. It says that “human rights and civil society groups can partner with these organisations to influence the standardisation process, but some have raised concern that they have neither the technical know-how nor the resources to participate” and that “it is unclear whether harmonised standards – which are voluntary and non-binding in principle, but could attract widespread adoption in practice – can be challenged in court.”
Human Rights Watch also disputed the testing process, which would score AI developments over a period of time for trustworthiness, stating that “the regulation should prohibit any type of behavioural scoring that unduly restricts or has a negative impact on human rights, including the rights to social security, an adequate standard of living, privacy, and non-discrimination. For example, scoring systems that try to predict whether people are a fraud risk based on records of their past behaviour, or serve as a pretext for regressive social security cuts, should be banned.”
In March this year, the UK government released a policy paper setting out its plans for AI regulation, aiming to support innovation alongside public safety. The framework sets out five principles to guide and inform the responsible development and use of AI in all sectors of the economy: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
The question is whether the UK will administer fines and follow the same testing approach as the EU. And how will it regulate facial recognition when the technology is already used by police in the UK? Does the UK have the means and the will to regulate AI in a similar way to its regional counterparts?
Author: Bronwen Latham
#AI #ChatGPT #EU #UK #Ethics