Much has already been said about the various use cases of Artificial Intelligence and their merits: how it can improve health care, make driving safer, increase efficiency in farming, and more. However, a faulty application of Artificial Intelligence can also cause unintended accidents.
That is where civil liability comes into play. The legal framework can ensure that victims are compensated for the harm they suffered and incentivize the liable party to avoid causing damage in the first place.
What do the Commission and the Parliament say about AI liability?
This is one of the issues that EU legislators are currently looking into. The Commission issued its report in February, and the European Parliament is working on its own position in the JURI and IMCO reports.
A couple of weeks ago, I filed my amendments, and we are now negotiating compromises among the political groups. Let me summarize the aspects I find important to take into account:
What should the future legislation look like?
What truly matters for consumers is that they are compensated for the harm they suffer, while European companies need clear and unified rules that facilitate the expansion of their business across the Union. However, it is difficult to create uniform rules for all Artificial Intelligence applications because they span many sectors and serve very different purposes: the problems arising from automated vehicles are not necessarily the same as those arising from content moderation systems or chatbots. I therefore support a risk-based approach, one that is more nuanced than simply dividing applications into high-risk (automated driving, drones) and low-risk (everything else). It should reflect the gravity of the potential harm, including any impact on fundamental rights.
Artificial Intelligence systems are complex and capable of performing tasks without every step being predefined by a human, which makes it increasingly difficult to trace damage back to a human fault. Transparency about a system's functionality, its processes, and its main criteria is therefore essential to allocating responsibilities properly, and we should consider reversing the burden of proof in specific cases.
Liability is not a silver bullet: it can only remedy a situation once the harm has already happened. We need to incentivize investment in ex-ante security measures so that accidents do not occur in the first place. To enable independent audits of Artificial Intelligence systems, all players should publish their source code. This would create better incentives for developers and attract more reviewers.
Both the Commission’s report and the expert group’s report increasingly consider software liability and joint liability, including for software developers. For instance, software embedded in a product cannot necessarily be treated as a distinct product, since the two can only function together. At the same time, we should be very careful not to jeopardize innovation or Free and Open Source Software projects, which are built on cooperation.