High-, Medium- and Low-risk AI

The EU has tried hard to rectify a major barrier to business for EU firms: confusion about legal liability for AI.  For 43% of EU companies, legal confusion about AI is the main obstacle to making use of it, and a ‘top 3’ barrier to business.

In June 2023, MEPs agreed to progress toward an AI Act in Europe:

  • Developers of high-risk AI will carry most of the risk, rather than users.
  • High-risk AI will have to be registered on an EU Commission database.

The EU is looking to finalise the AI Act (EU) as risk-based legislation:

  • to give users of AI some trust in the judicial system by giving them a fighting chance of proving blame when something goes wrong, and
  • to give developers responsibility for creating safe componentry in higher-risk AI.

Detecting and mitigating risk therefore becomes very important for developers of high-risk AI.

What might be the impact of the upcoming AI Act (EU) for UK firms?

  • The Act will affect UK firms that offer products for sale or use in the EU.
  • Legislative inspiration – in 2023 the UK government said that innovation and risk need to be managed and people’s rights respected.  The EU Commission said the same.  There is an enormous difference, however, between the voluntary-codes approach (supported by both the EU and the UK) and the game-change set out in the EU AI Act, which goes much further because it would attribute ‘fault’ to developers of higher-risk AI without requiring users to prove causation/nexus.

“The compensation gap”

  • Ordinarily in law, the victim of a civil wrongful act has to prove a nexus between the act and the damage.  However, AI can be highly complex, opaque technology, so users may struggle to identify who the civil wrongdoer is when they are harmed by AI itself.

“The certainty gap”

  • The emerging AI Act (EU) is an attempt to create legal certainty to boost adoption of positive forms of AI in sectors like healthcare, science and exploration.

EU law would not harmonise the detail of national rules about contributory negligence, legal limitation periods, and the like.  The proposed AI Act also takes a ‘minimum harmonisation’ approach, such that countries can impose stricter rules than those in the Act itself, leaving them free to impose strict liability for injury or harm.  However, the proposed new AI Act (EU) would cover claims for damages caused by an output of AI, or by its failure to give an output, where the fault of a person is involved (the person being either a provider or a user).  The EU Directive is careful to point out that there is no intention for the AI Act to cover damage caused by a human who assesses AI outputs after they are given.

There is an emerging sense that liability tips toward developers of high-risk AI where it is very difficult for the injured party to identify a liable human developer, owing to the complexity of the design and deployment of AI.

How should the makers of high-risk, opaque AI respond? The EU suggests:

  • Filing and registration is a lower priority in risk terms
  • Recognise there is a duty of care
  • Risk assess the AI
  • Be active in mitigating risks with measures
  • Consider robustness
  • Consider cybersecurity
  • View AI as a continuous, iterative process

A summary of the EU categories of AI design risk:

(a) AI which poses ‘unacceptable’ risk:

  • a clear threat to the safety, livelihoods and rights of people.

(b) High-risk AI:

  • key infrastructure where life and health of people might be at risk
  • educational access such as exam scores
  • safety componentry in products
  • credit scoring
  • access to self-employment products
  • migration and asylum access and authenticity
  • justice and democratic processes.

(c) Medium- and low-risk AI.

AI falls within a universe of four critically important technologies in the opinion of the EU Commission:

  • Artificial intelligence
  • Advanced semiconductors
  • Quantum
  • Biotechnologies

Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence, 28 September 2022.

Disclaimer

The blogs of Board Originator Ltd / SEEIO and any of its contractors, agents or employees are for the general interest of the readership only.  We do not endorse any news or information we may publish in our blog.  Our blog is not intended to and does not constitute legal or professional advice to any person or business.  Our posts are general news items or updates that may interest our followers; they consist of a brief overview, are therefore incomplete, and may contain errors at any time.  Readers are not to rely on our blog content, and those that do rely on it do so at their own risk.  We accept no responsibility to readers for our blog and we will not be held liable for statements in, or third-party links within, our blogs.  Any common law liability is also excluded as permitted by law.  We do not accept any liability for damages whether direct, indirect, special, consequential or otherwise under any circumstances, whether foreseeable or otherwise.  Please also see our extensive website terms and conditions in the footer of our website.