LAIWYERS: Law and AI: WaYs to Explore Robust Solutions

by M. Cole and Y. Le Traon

Artificial Intelligence (AI) is reaching into all sectors, and the law is no exception. Machine learning methods are being discussed as potential substitutes for human interaction or decision-making, or at least as a means of producing results on which human decisions are based.

In the area of law and its application, this can begin, for example, with police investigations, extend from the use of analysis in prosecution to final judicial decision-making, or range from the client-lawyer relationship to the decision about damages in civil proceedings. Although there is still a long way to go before human intervention in the field of law could be replaced, first use cases already show that there is an inherent danger in using AI solutions to replace the analysis of a specific situation and the predictions derived from it when legal consequences are attached to such decisions: the data used may be biased, or the way the data are applied may have unintended consequences, so that the results are unfair or, in terms of the law, unjust. The area of law in this context is only one possible example to illustrate the research project’s goal: firstly, to test machine learning (ML) applications that are used to replace, or serve as a basis for, human decision-making, in order to identify whether it is technically possible to obtain adversarially unfair results, and, secondly, to conclude whether what is technically possible is also desirable. If testing of existing AI methods and models, used e.g. in the context of law as described above, shows that decision outcomes are worse in light of the originally perceived goal (just/unjust) than outcomes based on human intervention alone, or if the outcome even contradicts legal or moral assumptions (e.g. fairness through non-discrimination), then a (technically) possible use of AI may still not be a favourable option.
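As a minimal, purely illustrative sketch of what such testing could look like, the following Python snippet probes a toy classifier for individually unfair behaviour by flipping a protected attribute while keeping all other case features fixed. The dataset, feature names and model are invented for demonstration and do not correspond to any system examined in the project.

```python
# Illustrative sketch only: a counterfactual ("flip the protected attribute")
# fairness probe on a toy classifier. All data and column names are invented
# and do not correspond to any real legal decision system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic features: one protected attribute (0/1) and two case features.
protected = rng.integers(0, 2, size=n)
severity = rng.normal(size=n)
prior_record = rng.normal(size=n)

# Deliberately biased "ground truth": the label partly depends on the
# protected attribute, mimicking historically biased training data.
labels = ((0.8 * severity + 0.6 * prior_record + 0.9 * protected
           + rng.normal(scale=0.5, size=n)) > 0.5).astype(int)

X = np.column_stack([protected, severity, prior_record])
model = LogisticRegression().fit(X, labels)

# Counterfactual probe: copy each case, flip only the protected attribute,
# and count how often the predicted decision changes.
X_flipped = X.copy()
X_flipped[:, 0] = 1 - X_flipped[:, 0]
changed = np.mean(model.predict(X) != model.predict(X_flipped))
print(f"Decisions that change when only the protected attribute flips: {changed:.1%}")
```

If a non-negligible share of decisions changes, the protected attribute alone is driving outcomes; surfacing such deficiencies technically, and then assessing them against legal standards, is the kind of exercise the project has in mind.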

By challenging the appropriateness of ML in the context of legal decision-making, and possibly also in other fields such as political decision-making, the project ultimately aims to identify in which cases and to what extent it is “safe” to rely increasingly on such solutions, and where the limits lie. The project does not aim to program a specific algorithm for use in the legal domain.

Nor is the idea to comprehensively analyse a specific legal domain, such as criminal law, and identify the specific problems in that area. What is needed is to define concrete cases with which it can be proven that a problem with the ML application exists and that there is no perfect solution to fix it. In that regard, the project is exploratory in nature: it shows what type of testing is necessary to uncover deficiencies of AI systems, drawing on computer science expertise, and what consequences such deficiencies have from a legal perspective. The latter is twofold: on the one hand, the domain of law and the judicial decision-making process will serve as a blueprint for an area in which, possibly with good reason, there is considerable scepticism about using ML. On the other hand, legal norms such as fundamental rights will be the benchmark for deciding whether the outcome of testing means an AI system is insufficient. With this, the project will provide the basis for further systematic research into specific domains, specific AI solutions or specific testing methodologies.
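As one hypothetical illustration of how a non-discrimination benchmark might be operationalised when judging test outcomes, the following Python sketch computes a group-level disparate impact ratio. The 0.8 threshold echoes the US “four-fifths rule” and is used here purely as an example, not as a standard adopted by the project; the decision data are invented.

```python
# Illustrative sketch only: a simple group-level disparity check that could
# serve as one input when comparing test outcomes against a legal
# non-discrimination benchmark. Data and threshold are toy examples.
import numpy as np

def disparate_impact(decisions: np.ndarray, protected: np.ndarray) -> float:
    """Ratio of favourable-decision rates between the protected group (1)
    and the reference group (0); values far below 1.0 indicate disparity."""
    rate_protected = decisions[protected == 1].mean()
    rate_reference = decisions[protected == 0].mean()
    return rate_protected / rate_reference

# Toy decisions produced by some model under test (1 = favourable outcome).
decisions = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1, 0])
protected = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])

ratio = disparate_impact(decisions, protected)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Outcome would warrant closer legal scrutiny under this toy benchmark.")
```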

In the meantime, another aspect of the project is to present interim findings to the general public via an open conference: there is an increasing awareness of “AI entering everyday life”, which for many is a fearful prospect, and therefore a need for science to contribute to the societal discussion in Luxembourg.

Prof. Dr. Mark Cole

Prof. Dr. Yves Le Traon