Artificial intelligence, three traps to avoid
by Gustavo Ghidini and Daniele Manca

The European Union is preparing to enact rules that will be effective in all member states. But in Italy the discussion has yet to begin

Last month in the UK, a black Uber driver had his account deactivated because the company's face-scanning software repeatedly failed to recognize him. The matter ended up in court. But we should ask ourselves on how many occasions, and which ones, artificial intelligence (AI) now makes decisions about our lives. Machine learning programs guide healthcare and medical procedures. Many banks use AI software to assess creditworthiness, deciding whether or not to lend money to people and companies. Even courts and judicial offices use self-learning programs to support judicial decisions. The problem, as we know, is that these programs are affected by all the prejudices of those who originally wrote them. It is no coincidence that Joe Biden's scientific advisers are developing a sort of bill of rights similar to the one that accompanies the founding fathers' American Constitution.

Europe, readier to regulate than to invest in new technologies, has already produced a major new proposal on artificial intelligence, now under discussion. The Union's primacy in rule-making was already demonstrated in the past by the General Data Protection Regulation (GDPR). With this proposal, the Commission wanted to start drawing the line between lawful and unlawful uses of AI. As soon as the rules are approved by the European Parliament, they will be immediately effective in all Member States. It is time the discussion began in Italy as well.

In the Commission's proposal, the Regulation distinguishes three levels of risk, determined by the possible applications (uses) of AI, that call for legal intervention. Not by chance, the scheme evokes the partition of the Divine Comedy, not least for the implicit but very clear ethical inspiration that guides this tripartition.

Unacceptable risk
The first level is that of absolute risk, which makes the application illegal and therefore prohibited. In this inferno fall uses that violate dignity, safety, and physical and mental health. Thus, for example, AI-based systems that employ subliminal techniques capable of distorting a person's behavior without their awareness, causing physical or psychological harm to that person or to others, are in essence prohibited. Applications such as killer robots, new poisonous substances, subcutaneous implants designed to affect the human psyche, and similar marvels are also, obviously, prohibited. The concerns that animate the proposed Regulation do not seem exaggerated: technicians and scientists are typically fixated, as in love, on the pursuit of success, and their minds are rarely preoccupied with ethical problems. One of the fathers of the atomic bomb, the physicist Hans Bethe, testified in 1954, at the Oppenheimer hearings, that moral qualms arose in them only after the Hiroshima and Nagasaki massacres. And Professor Fritz Haber, Nobel laureate in chemistry in 1918, never raised ethical objections, nor did he ever disown one of his infamous creations: the chlorine-based poison gas that exterminated thousands of French soldiers in the trenches of Ypres (the town that later lent its most famous and sinister name, yperite, to mustard gas).

Furthermore, systems adopted by public authorities to evaluate and classify the trustworthiness of people with a social score (social scoring), based on their social behavior in contexts unrelated to those in which the data was originally generated or collected, will be banned. This applies where such systems lead to discriminatory treatment of certain people or entire groups that is unjustified or disproportionate to the social behavior being assessed. Real-time remote biometric identification systems used by the police in publicly accessible spaces will also be banned, unless such use is strictly necessary to prevent an imminent threat to the life or physical safety of individuals, or a terrorist attack, and so on.

Acceptable risk: obligations of precaution, control, information

One level up, on the way to "see the stars again", stand the high-risk but acceptable applications described in Annex III of the proposal. Acceptable in the sense that they may be placed on the market only after a prior, rigorous assessment of compliance with stringent requirements covering the entire life cycle of the algorithmic application, from design to implementation. In particular, and mainly: a risk management system must be created and maintained; supervision of the system's functioning by natural persons (human oversight) must be ensured; the development process of a given AI system and its functioning must be documented; finally, transparency obligations towards users on the functioning of the system must be observed.

This category also includes cases of risk that are still sensitive but lower: a circle closer to paradise. The corresponding applications will be lawful provided only that the risk is declared, and therefore (implicitly) manageable through prudent human behavior. For example, AI applications in robot-assisted surgery belong to this category; systems for assessing the reliability of information provided by natural persons in order to prevent, investigate or prosecute crimes; systems for processing and examining asylum and visa applications; systems to assist judges (we will come back to this shortly). Again, when chatbots or voice assistants are used, users must be informed that they are not interacting with a human being, and must likewise know whether they are watching a video generated with deepfake techniques.

Minimal risk
Finally, AI systems that are essentially harmless to security and citizens' freedoms will be completely free. They can therefore be developed and used without specific legal obligations (the Commission nonetheless recommends voluntary adherence to codes of conduct to improve transparency and information). These include, for example, predictive maintenance systems, filters for spam and unwanted phone calls, and video games developed using AI systems. According to the Commission, the vast majority of AI systems currently used within the EU fall into this last category. The road has only just begun. But not intervening quickly would itself already be a choice.

October 26, 2021 (last modified October 26, 2021 | 19:29)

© REPRODUCTION RESERVED
