The EU AI Act – let's be practical

The first in a series of publications on the practical implementation and application of the new law by companies in Israel.

By now you have probably read lots of summaries regarding the EU AI Act. 

It is important. But now let us offer some practical advice on the basic steps you should take regarding your particular use case vis-à-vis the Act.

The Act primarily concerns AI systems operating within, or having an impact on, the European Union (EU) market. To determine whether the Act applies to your AI-related operations, first ask yourself these three general questions:

  • Is the AI system you develop or distribute placed on, or put into service in, the EU market?
  • Do you deploy an AI system, and is your company established or located within the EU?
  • Is the output produced by the AI system in question used in the EU?

If the answer to any of the above is yes, the Act probably applies to you.
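The applicability screen above can be sketched as a simple checklist. The following is an illustrative sketch only, not legal advice; the function and parameter names are ours, not terms from the Act, and a real assessment requires case-by-case legal analysis.

```python
# Illustrative only: a simplified applicability screen based on the three
# questions above. Names are hypothetical, not taken from the Act.

def act_may_apply(placed_on_eu_market: bool,
                  deployer_established_in_eu: bool,
                  output_used_in_eu: bool) -> bool:
    """Return True if any of the three triggers suggests the Act may apply."""
    return any([placed_on_eu_market,
                deployer_established_in_eu,
                output_used_in_eu])

# Example: a company outside the EU whose system's output is used in the EU.
print(act_may_apply(False, False, True))  # True
```

Note that a single "yes" is enough: the triggers are alternative, not cumulative.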

If you wish to understand which provisions of the Act apply to your use case, you must first go through a risk-classification process. 
The classification is based on the level of risk associated with your particular use case of AI:

Unacceptable Risk Identification

  • Infringement of Rights: Does the AI system have the potential to infringe upon human rights, such as privacy or equality? For example, does it involve indiscriminate surveillance or social scoring?
  • Manipulation: Can the AI system manipulate people's behaviour in a way that can cause personal or societal harm, or exploit the vulnerabilities of specific groups, including children?

If you answer YES to either, the risk associated with your AI system should most probably be classified as Unacceptable Risk.

High-Risk Identification

  • Sectoral Consideration: Is the AI application designed for use in a sector considered as critical (such as healthcare, law enforcement, critical infrastructure, education, employment, essential public and private services, democratic processes)?
  • Purpose: Is the purpose of the AI system critical in terms of the potential impact on people’s safety, fundamental rights, or risks a significant adverse effect (such as biometric identification, management and operation of critical infrastructure, education, employment)?
  • Data Sensitivity and Decision Making: Does the system process sensitive data or make decisions that significantly affect individuals?

If you answer YES to any of these, chances are that the risk associated with your AI system should be classified as High Risk.

Limited Risk Identification

  • Transparency Requirement: Does the AI system directly interact with humans, and does it require transparency as per the Act’s requirements in this regard (for example indicating that an AI, not a human, is generating the response, such as chatbots, and transparency in AI-generated content, indicating it is such)?
  • Human Oversight: Does the AI system operate in a manner where insufficient or lack of human oversight would lead to potential risks, yet not to the level classified as high-risk?

If you answer YES to either, you are most probably in the area of Limited Risk.

If you didn’t find your place in any of the above categories, it seems you fall within the Minimal or No Risk category.
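The tiered screening described above can be pictured as a decision tree that checks the categories in order, from most to least severe. The sketch below is illustrative only; the question names are our shorthand for the bullet points above, not language from the Act, and it is no substitute for a proper legal classification.

```python
# Illustrative only: a simplified decision tree for the tiered risk screening
# described above. Question names are our shorthand, not the Act's wording.

def classify_risk(infringes_rights: bool,
                  manipulates_behaviour: bool,
                  critical_sector: bool,
                  critical_purpose: bool,
                  sensitive_data_or_decisions: bool,
                  interacts_with_humans: bool,
                  oversight_gap: bool) -> str:
    # Check tiers in order of severity; the first match wins.
    if infringes_rights or manipulates_behaviour:
        return "Unacceptable Risk"
    if critical_sector or critical_purpose or sensitive_data_or_decisions:
        return "High Risk"
    if interacts_with_humans or oversight_gap:
        return "Limited Risk"
    return "Minimal or No Risk"

# Example: a customer-service chatbot with no high-risk triggers.
print(classify_risk(False, False, False, False, False, True, False))
# Limited Risk
```

The order of the checks matters: a system that both interacts with humans and operates in a critical sector lands in the higher tier, which mirrors how the Act's stricter obligations take precedence.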

The classification process above should include the following measures: 

Documentation and Justification

  • Comprehensive Evaluation: Document each consideration’s evaluation process, including sectoral analysis, data type, decision-making impact analysis, and transparency requirements.
  • Justification of Classification: For each AI system classified, provide a justification for its classification based on the decision criteria.
  • Review and Update: Note that classifications should be reviewed periodically in light of new information, technological advancements such as the addition of new features, or changes in the legal framework.

Congratulations! You have managed to classify your AI system. Now what?
The requirements applying to you may vary from practically nothing to a comprehensive set of limitations and obligations (concerning data, documentation, logging of activity, record-keeping, transparency, human oversight, accuracy, and more), depending on the above classification and your role in relation to the AI system. Stay tuned for our next practical clients' alert.

Although the official timeline for implementation of the Act's various requirements is spread over time (from 6 to 36 months), NOW is the time to start preparing. Our AI team is here for you.

Please be advised that this document is intended solely to provide a general overview of selected aspects of the law and is not comprehensive in nature. It is not intended to cover all provisions, nuances, and exceptions of the law, and should not be used as a substitute for thorough legal analysis or advice based on the specifics of the matter. The contents of this document are for informational purposes only and do not constitute legal advice. The information provided in this document should not be used as a basis for making any legal, business, or other decisions, and reliance upon it is at the reader's own risk.
