Forecasting and profiling, or bias and discrimination?
- EU Regulation
- December 28, 2022
Artificial Intelligence has already brought down one government in Europe.
In 2019, the Dutch tax authority used self-learning algorithms to create risk profiles in an attempt to spot child-care benefit fraud. As it became clear that families, mainly from ethnic-minority communities, had been flagged as fraud suspects and penalised on the basis of algorithm-generated profiles, the resulting political scandal brought down the government of Dutch prime minister Mark Rutte.
Rutte survived. But thousands of ordinary lives were ruined.
As Artificial Intelligence becomes an essential, albeit invisible, feature of our daily interactions, brace yourselves for even more EU-wide political wrangles, centring on European Commission proposals to regulate the sector.
The proposals, made in 2021, are now making their way through the European Parliament — where political groups have tabled over 3,000 amendments — and are expected to receive the Council greenlight in December. The EU’s AI Act will then be agreed next year, after inter-institutional negotiations.
Expect a showdown between the parliament and the council, however. MEPs are expected to push for stricter regulation and better protection of rights, while governments will likely argue for fewer rules, citing competitiveness and security concerns.
“The success of the text will lie in the balance we find between the need to protect the interests and rights of our citizens, and the interest to stimulate innovation and encourage the uptake and development of AI,” Romanian liberal MEP Dragoș Tudorache, one of the European lawmakers in charge of the file, said recently. “The real political discussions are yet to come.”
Bulgarian socialist MEP Petar Vitanov, one of the negotiators on the file, says the focus must be on ensuring that “fundamental rights and freedoms are safeguarded, there can be no innovation without fundamental rights.”
Key issues include the governance of the act, and the legislation’s definition of risk.
Lawmakers want to give the commission the power to extend the list of so-called “high-risk areas”, and to raise fines for non-compliance to €40m or seven percent of annual turnover.
Some EU governments are seeking exemptions for the use of AI by migration authorities and law enforcement, which could lead to more surveillance of communities, including ethnic-minority communities, that are already policed more heavily than others.
Some critics, such as the United Nations’ former human rights chief, Michelle Bachelet, go further, saying governments should put a moratorium on the sale and use of AI systems until the “negative, even catastrophic” risks they pose can be addressed.
A report by the UN on AI’s use as a forecasting and profiling tool warns that the technology could have an impact on “rights to privacy, to a fair trial, to freedom from arbitrary arrest and detention and the right to life.”
Bachelet acknowledged that AI “can be a force for good, helping societies overcome some of the great challenges of our times,” but suggested that the harms it could bring outweigh the positives. “The higher the risk for human rights, the stricter the legal requirements for the use of AI technology should be,” she said.
The commission proposal calls for a ban on AI applications that manipulate human behaviour (such as toys that use voice assistance to encourage dangerous behaviour in children) or systems that enable Chinese-style ‘social scoring’. The use of biometric identification systems, such as facial recognition, for law enforcement in public spaces is also prohibited.
Exceptions are allowed in the case of a search for victims of kidnapping, the identification of a perpetrator or suspect of a criminal offence, or for the prevention of imminent threats, such as a terrorist attack.
Digital-rights activists warn, however, that there are “loopholes” that allow mass surveillance.
Any exemption given to governments and companies for use of an AI system which is “purely accessory” and used in a minor matter could, in fact, “undermine the entire act”, warns Sarah Chander, who leads the policy work on Artificial Intelligence for European Digital Rights (EDRi), a network of non-profit organisations working to defend digital rights in the EU.
The Commission proposal focuses on so-called “high-risk” AI systems that may jeopardise people’s safety or fundamental rights in areas such as education (for example, the scoring of exams), employment (such as CV-filtering software for recruitment), or public services (for instance, credit-scoring used to deny people loans).
Companies that want to compete using AI systems in this “high-risk” category would have to meet EU requirements, such as explainability, risk assessment, and human oversight.
Some worry, however, that these requirements could discourage start-ups and businesses from investing in Europe in such AI systems, giving the competitive advantage to the US or China.
Companies that fail to comply with the legislation could face fines of up to €30m or six percent of their global turnover.
Chander highlighted that some of the biggest harms from AI systems could come in the delivery of public services, such as social services, policing (where predictive policing based on mass surveillance is a major concern), and migration. AI-based decisions in these areas are dangerous, she argues, because AI systems make assumptions look like facts.
Chander says the commission’s proposal does not go far enough in restricting the use of facial recognition. Her organisation also wants to ban the use of AI for predictive policing and migration control, as well as for predicting emotions.
Rights-defenders argue that companies should be obliged to make fundamental rights impact-assessments, and provide information on where and how an AI system would be used and its impact on individuals.
They also want clear information provided to the public, and say citizens should be able to seek explanations from public authorities or companies. Citizens should also be able to claim remedies if a company or an authority has violated the AI Act, or an individual has been impacted by a banned system.
Chander said there is a common misunderstanding that AI systems can be “perfected”, and policy-makers often ask her how to make these systems better and less prone to bias. But that is the wrong question, she argues, because AI systems replicate a system that is already discriminatory.
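That replication effect can be illustrated with a toy sketch (all numbers below are hypothetical, not drawn from any real case): if one group was historically investigated, and therefore flagged, twice as often for the same underlying behaviour, a naive model trained on those historical flags simply learns the past enforcement bias and presents it as “risk”.

```python
# Toy illustration (hypothetical data) of how a model trained on biased
# historical labels reproduces the bias instead of measuring real risk.

# Two groups with identical true behaviour, but group "B" was
# investigated twice as often in the past, so twice as many of its
# cases were flagged in the historical training data.
historical_flags = {"A": 10, "B": 20}   # flagged cases per group
cases_per_group = {"A": 100, "B": 100}  # total cases per group

def risk_score(group: str) -> float:
    """Naive 'risk model': share of a group's past cases that were flagged."""
    return historical_flags[group] / cases_per_group[group]

print(risk_score("A"))  # 0.1
print(risk_score("B"))  # 0.2 -- the enforcement bias, not real risk
```

The scores differ only because past scrutiny differed, yet a downstream system would treat the higher number as an objective fact about group “B”.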
“Some systems cannot be made better,” she says, adding: “We don’t want to make a perfect predictive-policing system, or a perfect lie detector.”