The EU’s Artificial Intelligence proposal has arrived

KEYLANE September 20, 2021

While powerful Artificial Intelligence (AI) tools are already present in our day-to-day lives, AI is still in its relative infancy.

According to PwC research, AI could contribute up to $15.7 trillion to the global economy by 2030, so it is no surprise that regulation is now coming of age in this rapidly evolving market. In response to an increased awareness of the possible dangers this technology carries, the European Union (EU) has adopted a proposal in the first-ever international effort to regulate AI.

The proposed legal framework, called the Artificial Intelligence Act, is a positive step towards curtailing the potentially negative impacts of AI on individuals and society as a whole. Within its crosshairs are some of the most exciting and controversial technologies of recent history, such as autonomous driving, facial recognition, and the algorithms powering online marketing. The EU’s new act aims to reduce the negative impact of AI by approaching it in much the same way as product safety regulation, which sheds light on the development process and increases transparency for the people impacted.

The Artificial Intelligence Act will introduce important obligations surrounding transparency, risk management and data governance, and is likely to apply to AI providers such as FRISS and Keylane, as well as to their customers as ‘users’ of our AI. Like the General Data Protection Regulation (GDPR), a legal framework for protecting personal information in the EU, the fines that authorities will be able to issue are tiered according to severity. However, an upper-tier fine under the AI Act surpasses the GDPR’s, reaching €30 million (about $35.5 million USD) or 6% of annual worldwide turnover – whichever is higher.
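
To make the ‘whichever is higher’ rule concrete, here is a minimal sketch in Python (the turnover figure is illustrative and not taken from the proposal):

```python
def upper_tier_fine_cap(annual_worldwide_turnover_eur: float) -> float:
    """Upper-tier cap under the proposed AI Act: EUR 30 million or
    6% of annual worldwide turnover, whichever is higher."""
    return max(30_000_000.0, 0.06 * annual_worldwide_turnover_eur)

# Illustrative figure only: a firm with EUR 1 billion in annual turnover
# faces a cap of EUR 60 million rather than EUR 30 million.
print(upper_tier_fine_cap(1_000_000_000))  # 60000000.0
```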

How does the proposal approach AI?

The Commission has adopted a broad interpretation of AI – undoubtedly an intentional act to maximise the scope and effectiveness of the legislation.

AI has been broadly defined under Article 3 of the proposal as:

“Software that is developed with one or more [certain] approaches and techniques…

These approaches are listed under Annex I and include machine learning approaches, logic- and knowledge-based approaches, and statistical approaches.

…and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

The Commission’s approach has been to separate AI, and the corresponding requirements, according to risk level. It starts with “Prohibited AI”, which the Commission has classified as off-limits due to its exceptional risk. This is what many of us would consider the “too far” tier, and it includes technology like biometric facial recognition in public spaces or social scoring by authorities.

The Commission identifies certain AI as being “Low-Risk” or “Minimal Risk”. Technology within these tiers only needs to meet basic transparency obligations in order to be compliant with the proposal. Examples include chatbots, spam filters, and video games that use AI to emulate realistic human player behaviour.

The principal focus of the proposal is “High-Risk AI”, examples of which are listed under Annex III of the proposal as AI related to:

  • biometric identification and categorisation of natural persons;
  • management and operation of critical infrastructure;
  • education and vocational training;
  • employment, workers management and access to self-employment;
  • access to and enjoyment of essential private services and public services and benefits;
  • law enforcement;
  • migration, asylum and border control management;
  • administration of justice and democratic processes.

While it depends on how further guidance interprets Annex III, it is possible that Annex III 5(b) is intended to include certain Insurtech products. It is also important to note that this list is dynamic and may be updated by the Commission to capture future or newly identified systems determined to be High-Risk – which is why we will be regularly monitoring developments.
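
One way to picture the proposal’s structure is as a simple taxonomy. The sketch below is hypothetical Python that maps the examples mentioned in this article to the tiers discussed above; the tier names and mappings are ours, not the legal text’s:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "off-limits due to exceptional risk"
    HIGH = "principal focus of the proposal (Annex III)"
    LOW_OR_MINIMAL = "basic transparency obligations only"

# Hypothetical mapping of the examples mentioned in this article.
EXAMPLES = {
    "social scoring by authorities": RiskTier.PROHIBITED,
    "biometric facial recognition in public spaces": RiskTier.PROHIBITED,
    "access to essential private and public services": RiskTier.HIGH,
    "spam filter": RiskTier.LOW_OR_MINIMAL,
    "chatbot": RiskTier.LOW_OR_MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name}")
```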

 

What are the requirements?

Under the proposal, providers of high-risk AI systems must meet a number of requirements, such as: having a risk management system, applying data governance practices, maintaining technical documentation and record keeping, and ensuring the provision of transparent information to users of their systems (Article 16). In addition to these product-centric requirements, AI providers must also ensure that they have a quality management system in place, that they perform ‘conformity assessments’ and that they keep automatically generated logs. Providers must also affix a CE marking to high-risk AI systems (or their documentation) to indicate conformity. If non-compliance with any of the requirements is identified, providers must inform the appointed authorities and take any necessary corrective actions.
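
To give a feel for what ‘automatically generated logs’ could look like in practice, here is a minimal, hypothetical sketch of an audit record a provider might emit for each decision; the field names and retention format are our assumptions, not requirements from the Act:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """Hypothetical audit-log entry for one high-risk AI decision."""
    timestamp: str
    model_version: str
    input_summary: dict  # e.g. redacted or banded input features
    output: str          # the system's prediction or decision
    explanation: str     # human-readable reason for the output

def log_decision(record: AuditRecord, path: str = "ai_audit.log") -> None:
    # Append one JSON line per decision so logs can be retained and reviewed.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(AuditRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="claims-risk-1.4.2",  # invented version label
    input_summary={"claim_amount_band": "high", "prior_claims": 3},
    output="flagged for manual review",
    explanation="claim amount and prior-claims history raised the risk score",
))
```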

The users of high-risk AI systems are obliged to: follow the instructions that accompany the AI systems, ensure that input data is relevant for the intended purpose of the AI, monitor the operation of the AI and suspend the system if any risk presents itself, and maintain and retain the automatically generated logs for an appropriate period (Article 29).

The approach to the new regulation

As trusted advisors, FRISS takes regulatory standards extremely seriously. We welcome the use of AI because it never sleeps, is faster, has fewer errors and operates on the collective brainpower of an entire anti-fraud department. We utilise AI in such a way that we can provide real-time, holistic views of risk at policy request, renewal, and claims, increasing efficiency for our customers. However, transparency is equally crucial to us.

While this proposal marks the beginning of the legislative process, further steps remain before we see the finalised version; the proposal will next be revised by the European Parliament and the Council of Ministers. In the meantime, FRISS’ Compliance and Data teams will continue to closely monitor developments, identifying any changes within the core requirements and ensuring we are in complete alignment where possible and applicable.

From the beginning, our partner FRISS has been committed to incorporating key principles of responsible AI: reduction of bias, full transparency and risk management. We focus on the following aspects of each:

  1. Reduction of bias: We exclude obviously sensitive data points such as gender, marital status, nationality and ethnicity from our models, and our data scientists are trained to recognise possible proxies for them.
  2. Transparency: We apply explainable AI to all of our models, meaning end users can see exactly why a certain claim was flagged as high risk (see the sketch after this list).
  3. Risk management: We’ve built Data Protection Impact Assessments (DPIAs) into our development process, and take a privacy-by-design approach in line with the GDPR.
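
As a rough illustration of the kind of per-claim explanation described in point 2, here is a minimal sketch built around a linear risk score, where each feature’s contribution (weight × value) is an exact attribution; the feature names and weights are invented, and real FRISS models may use different techniques entirely:

```python
# Invented weights for a simple linear risk score; real models
# and features will differ.
WEIGHTS = {
    "claim_amount_band": 0.8,
    "prior_claims": 0.5,
    "days_since_policy_start": -0.3,
}

def score_with_explanation(features: dict) -> tuple[float, dict]:
    # For a linear model, weight * value is an exact per-feature attribution.
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"claim_amount_band": 2.0, "prior_claims": 3.0, "days_since_policy_start": 1.0}
)
print(f"risk score: {score:.1f}")
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.1f}")
```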

 

What we want you to know

The Artificial Intelligence Act is a positive step towards the ethical and responsible use of AI. We commend the EU’s enhanced focus on transparency and risk management, because we believe this should become a market standard for AI-powered products. We’re taking the time to analyse and observe developments in the proposal to make sure we understand what these requirements will mean for our products and our customers. Because we recognise that compliance is essential to maintaining “business as usual”, we’re on top of these changes. We’ll continue to monitor the regulatory requirements on the horizon and take responsible steps towards compliance as early as possible.