As the use of AI becomes increasingly common, the EU has decided to take a step forward with the publication of its first regulatory proposal, setting out stricter rules to govern the use of AI.
The draft rules describe how companies and governments across all Member States will be able to use this technology, setting limits on AI in a variety of activities: autonomous cars, hiring decisions, bank lending, school admissions selection and exam scoring, and even its use by law enforcement and judicial systems, as well as other areas that could threaten people’s safety or fundamental rights.
The EU shows its approach to AI with the newly proposed rules
The EU Commission focuses on high-risk applications of AI, providing guarantees for the security, rights, and interests of individuals without introducing major obstacles for companies that want to bring AI-based products and services to market.
The proposal follows a “risk-based approach”, classifying certain uses of AI according to their potential impact on people into four different levels; only military uses are excluded entirely.
At the lowest level, “minimal risk”, we find everyday uses such as the artificial intelligence included in some toys or video games, or applications for music creation or image editing, among others, for which the regulation specifies no restrictive measures.
Restrictions begin at “limited risk”, which covers AI systems that users can interact with directly, such as chatbots. These must now comply with a minimum level of transparency, with users warned at all times that they are not talking to a human.
The next level, “high risk”, includes systems that could adversely affect people’s safety or fundamental rights: uses in critical infrastructure that may affect health, education, personnel recruitment systems, public services, legislation, or justice.
There’s a category called “unacceptable”
In addition, remote biometric identification systems will also be considered high risk, and new, stricter requirements will apply to them. Although the EU Commission has not opted for an outright ban, it establishes that live facial recognition in publicly accessible spaces will be prohibited. There will, however, be some exceptions, such as the search for dependent or missing persons, the prevention of a specific and imminent terrorist threat, or the identification of the perpetrator or suspect of a serious crime.
At the highest level is the “unacceptable risk” category, which encompasses uses considered a clear threat to people, such as “social scoring by governments, exploitation of vulnerabilities of children, use of subliminal techniques, and – subject to narrow exceptions – live remote biometric identification systems in publicly accessible spaces used for law enforcement purposes.”
Finally, the need for human oversight and control has once again been highlighted: Artificial Intelligence must never replace human beings or exonerate them from their responsibility.
However, the new regulation has yet to be approved, a process that could delay the implementation of these measures by more than a year. It has already been clarified that, although for the moment only certain uses of Artificial Intelligence will be regulated, the rules will leave a “margin for innovation”, while already touching on other emerging topics such as robotics and 3D printing.