PROHIBITED AI PRACTICES
By way of introduction, it is worth mentioning that the first proposal for an AI Act was published by the European Commission in April 2021. Essentially, it aims to regulate Artificial Intelligence in a comprehensive manner.
Article 3 of the AI Act contains a glossary of terms and, inter alia, a definition of an AI system, which, under EU legislation, means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments (Article 3(1) of the AI Act).
Title II, Article 5 of the AI Act lists the prohibited AI practices, i.e., AI systems that reflect the idea behind this regulation: the need to protect human safety and rights. Those practices include:
- cognitive-behavioral manipulation of humans,
- biometric categorization systems,
- social scoring,
- the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces,
- AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage,
- AI systems that infer the emotions of a natural person in the areas of law enforcement, border management, the workplace, and education institutions,
- AI systems for the analysis of recorded footage of publicly accessible spaces through ‘post’ remote biometric identification systems, unless they are subject to a pre-judicial authorisation and strictly necessary for a targeted search connected to a specific serious criminal offence.
Of course, certain exceptions to prohibited practices are also provided in the regulation.
HIGH-RISK AI SYSTEMS
The European legislator has also classified AI systems according to the criterion of risk, dividing them into unacceptable risk, high risk, and low or minimal risk.
High-risk AI systems are systems that may negatively affect both safety and fundamental rights. The AI Act sets out the classification rules and identifies two main categories of high-risk AI systems:
- AI systems intended for use in products covered by EU product safety legislation, for example toys, aviation, and cars,
- other stand-alone AI systems impacting mainly on fundamental rights, belonging to the specific eight areas identified in Annex III to the AI Act, which will be subject to registration in the EU database.
Firstly, the EU legislator has regulated the obligations of high-risk AI system providers, who will be obliged to:
- ensure that their high-risk systems comply with the requirements set out in Chapter 2 of the AI Act,
- have a quality management system in place which complies with the provisions of the AI Act,
- draw up and keep the technical documentation of the AI system,
- when under their control, keep the logs automatically generated by their high-risk AI systems that are required for ensuring and demonstrating compliance with the AI Act,
- ensure that the AI system undergoes an appropriate conformity assessment procedure before it is placed on the market or put into service,
- comply with the registration obligations (referred to in Article 51 of the AI Act),
- take the necessary corrective actions (as referred to in Article 21 of the AI Act).
In addition, some information obligations have been imposed on providers and importers of high-risk AI systems.
Distributors of AI systems also have certain control obligations: before making an AI system available on the market, they have to verify whether it bears the required CE conformity marking, whether it is accompanied by the required documentation, and whether the provider and the importer have complied with their obligations set out in the AI Act.
Obligations for deployers of high-risk AI systems include, e.g., taking appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions of use accompanying the systems.
On the other hand, the Act introduces only limited transparency obligations for AI systems that do not fall into the high-risk category. Here the legislator sets out minimum transparency requirements that allow users to interact with AI systems consciously and, consequently, to make an informed decision to proceed. This mainly concerns AI systems that produce or manipulate image, audio, or video content (deepfakes).
What may be the consequences of non-compliance with the AI Act? Mainly administrative fines, the amount of which will depend on the type of infringement. The maximum fine may reach EUR 40,000,000 or 7% of the company’s total worldwide annual turnover for the preceding financial year, whichever is higher.
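For illustration, the “whichever is higher” ceiling can be sketched as a simple calculation. This is only an informal sketch using the figures quoted above; `max_fine_eur` is a hypothetical helper, not part of any official tool.

```python
# Illustrative sketch only: computes the upper bound of the administrative
# fine using the ceilings quoted above (EUR 40,000,000 or 7% of total
# worldwide annual turnover, whichever is higher).

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given annual turnover."""
    fixed_ceiling = 40_000_000.0          # EUR 40 million
    turnover_ceiling = 0.07 * worldwide_annual_turnover_eur  # 7% of turnover
    return max(fixed_ceiling, turnover_ceiling)

# For a company with EUR 1 billion turnover, 7% (EUR 70 million)
# exceeds the fixed ceiling; for a smaller company, the fixed
# ceiling of EUR 40 million applies.
print(max_fine_eur(1_000_000_000))  # 70000000.0
print(max_fine_eur(100_000_000))    # 40000000.0
```

The point of the two-pronged formula is that the turnover-based cap scales the sanction to the size of the undertaking, while the fixed amount sets a floor for companies whose turnover would otherwise make the percentage negligible.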
A separate chapter is devoted to administrative pecuniary sanctions which may be imposed on Union institutions, bodies, offices, and agencies.
WILL THE AI ACT BE SUFFICIENT?
In conclusion, the fundamental question to ask is: will this Act be sufficient? Without doubt, the EU legislator’s intention is to comprehensively regulate artificial intelligence, guided by the idea of respecting human rights. Parliament’s priority is also to eliminate the potential risks posed by AI systems. However, this is a field that is constantly evolving, and legislative regulation must keep pace with these dynamic changes. European jurisprudence will certainly assist in resolving the interpretative doubts that arise under the AI Act.
The amendments to the proposal for an AI Act adopted by the European Parliament have given the green light for the next stage, i.e., the start of negotiations with EU member states within the Council on the final form of the AI Act. According to the EU legislator’s assumptions, an agreement should be reached by the end of 2023.