Method for Evaluating Legal Technologies (MELT)
This tutorial has been accepted for ICAIL 2023 in Braga, Portugal (19-23 June 2023).
Organisers: Pauline McBride, Laurence Diver, and Masha Medvedeva (Counting as a Human Being in the Era of Computational Law (COHUBICOL), Vrije Universiteit Brussel and Radboud University Nijmegen)
Topic, goals, and significance
The impact of Artificial Intelligence on law is underexplored. There is not enough consideration at present of how AI technologies affect (i) legal outcomes, (ii) the legal protection of citizens, and (iii) the institutions and practices that underpin legal procedure and the Rule of Law.
Failure to give adequate consideration to these impacts may result in the production and use of technologies that are ill-suited to the legal domain, that narrow the scope of legal protection, and that threaten the values associated with law as we know it. Addressing these impacts is imperative.
Our Typology of Legal Technologies aims to address these concerns: it offers a novel, systematic, and cross-disciplinary method for assessing such impacts, one that is sensitive to the specific commitments of the legal domain. This method is employed throughout the tutorial.
The tutorial’s theme
This tutorial is focused on the use of AI systems in legal practice, on the merits of constructive communication across communities, and on ethics and transparency.
We will highlight:
- how our method can help to reveal the wider impacts of AI technologies on law and legal protection, generating insights relevant to the design and use of legal AI
- the need for constructive communication between the lawyers, computer scientists, and NLP practitioners engaged in the design and deployment of legal AI
Participants will be confronted with legal and ethical issues that extend beyond the familiar concerns of accuracy, bias, and loss of privacy.
We hope to persuade participants of the need to reflect on the implications of the design and use of AI technologies for law and legal protection.
The tutorial is aimed at developers of AI technologies in the legal domain.
This includes a diverse range of legal tech, both code-driven and data-driven:
- prediction of judgment
- legal search
- summarisation of legal documents
- contract analytics
- litigation analytics
- automated compliance
- legal expert systems
- decision support
It is also relevant for researchers concerned with ethics and transparency in the use of AI technologies by citizens and in legal practice, public administration, and business.
This half-day tutorial will involve (i) a presentation of the Typology and (ii) an interactive session:
- What the Typology is, what is in it, and why it matters
- An in-depth look at a profile of a real-world legal tech system
- A discussion of methodology – the challenges and benefits of cross-disciplinary research
The interactive session
We want to encourage participants to think deeply about the implications of AI technologies for legal protection, and to provide them with an approach to help identify such implications.
- Participants form cross-disciplinary groups (max. 4 groups, each with max. 8 participants inc. lawyers, computer scientists, and software developers)
- Participants assess the public claims made by the providers of real-world legal tech systems
- Groups consider the systems using the Typology method:
  - identifying claimed essential features, rationale and benefits, and design choices
  - comparing assessments between groups
  - identifying potential technical issues and impacts on legal protection from the participants' own disciplinary backgrounds
The goal is not to make computer scientists become lawyers, or vice versa.
Instead, we want to:
- promote critical thinking about the different features, goals, and affordances of different legal tech systems
- highlight how different disciplinary perspectives shape how providers' claims are interpreted
- demonstrate how assessments of impact differ across disciplines.
The ultimate goal is to provide a benchmark against which current and future legal AI systems can be measured, ensuring that their deployment is in line with the Rule of Law, whatever field they are intended for.