Brussels editorial office
06 October 2021 13:59
Transparent algorithms, a ban on private facial recognition databases, and an end to predictive policing that makes decisions about people based on their behavior. These are the cornerstones of the text approved this morning by the plenary of the European Parliament. The Strasbourg chamber also affirmed that artificial intelligence systems used by law enforcement and border control authorities must operate under human supervision. The issue has landed on the EU agenda given the applications of the new technology already in use in other countries.
The Singapore authorities, for example, have started testing patrol robots capable of ‘scolding’ people who engage in “undesirable social behavior”. The police robots, the British newspaper The Guardian explained, add to “an arsenal of surveillance technology in the tightly controlled city-state” that is fueling strong concerns about citizen privacy. From CCTV cameras to trials of next-generation street lamps equipped with facial recognition systems, Singapore is seeing an explosion of technologies for monitoring its citizens, a level of mass surveillance that privacy and civil liberties groups consider too invasive.
The intervention of European legislators is aimed at preventing the Singapore case from being replicated in the Old Continent as well. “Fundamental rights are unconditional,” declared the rapporteur of the text, Petar Vitanov, a Bulgarian member of the Socialists & Democrats group. “For the first time ever,” he added, “we are calling for a moratorium on the deployment of facial recognition systems for law enforcement purposes, as the technology has proven to be ineffective and often leads to discriminatory results.”
The resolution passed with 377 votes in favor, 248 against and 62 abstentions. The text states that many identification technologies already in use misidentify and misclassify people at disproportionate rates, especially harming members of certain racial or ethnic groups, LGBTI people, children, the elderly and women: a particularly worrying situation in the context of law enforcement and judicial operations. To ensure respect for fundamental rights, MEPs ask that algorithms instead be transparent, traceable and sufficiently documented, and that public authorities disclose their applications as open-source software.
“We clearly oppose predictive policing based on the use of artificial intelligence, as well as any biometric data processing that leads to mass surveillance,” Vitanov added. “This,” he concluded, “is a great victory for all European citizens.”