
FAQs & methodology

Table of contents
  1. How to cite
  2. Frequently Asked Questions
    1. What is the Typology of Legal Technologies?
    2. Who is it aimed at?
    3. Who created the Typology?
    4. What are its goals?
    5. What do you mean by “potential legal impact”?
    6. Why have you included only 30 systems?
    7. Can I suggest a new entry?
    8. The profile for system X is out of date, will you update it?
    9. How do you define the terms that you’ve used?
    10. How can I give feedback?
  3. Background to this project
    1. Our research concerns
    2. Our objective
      1. The ‘effect on legal effect’?
      2. A typology, or a taxonomy?
      3. Why substantiation, not explainability and bias?
    3. Some history of the Typology’s development
    4. Statement about completeness

How to cite

L. Diver, P. McBride, M. Medvedeva, A. Banerjee, E. D’hondt, T. Duarte, D. Dushi, G. Gori, E. van den Hoven, P. Meessen, M. Hildebrandt, ‘Typology of Legal Technologies’ (COHUBICOL, 2022), available at

We are interested to know how you use it in your research – please get in touch and tell us!

Frequently Asked Questions

What is the Typology of Legal Technologies?

The Typology is a method, a mindset, and an analysis of a handpicked set of typical legal technologies (applications, scientific papers, and datasets).

We have assessed these systems based on the claims made by their developers and/or providers, and on the substantiation of those claims, with an eye to the kind of legal impact their deployment might have. Our particular focus is on systems that might alter or impact the concept of legal effect that lies at the heart of law-as-we-know-it. This focus informed the number and range of systems that are represented here (see also Why have you included only 30 systems?).

The Typology is also an online tool. Users are invited to play around with the various filters, thus mapping and comparing the legal technologies in different ways.

Our investigation relies on information that is made publicly available by the providers or developers and on our own computer science expertise regarding the relevant technology.

For a more detailed description of our motivation, see Our research concerns.


Who is it aimed at?

The tool is aimed at legal practitioners, developers of legal technologies, legal scholars, and computer scientists – but it may be of interest to wider audiences too.


Who created the Typology?

The Typology’s contents were researched and produced by the COHUBICOL team and affiliated researchers.

The structure and interface were designed and built by Laurence Diver, legal postdoc in the COHUBICOL project.


What are its goals?

We created the Typology with several aims in mind:

  • To enable further research into legal technologies, based on our investigation of the substantiation of claims made by their providers and the potential legal impact of their deployment.
  • To offer a strategy for review or evaluation of the different types of legal tech.
  • To provide a means of comparing aspects of legal tech, especially how they operate at the ‘back-end’.
  • To make sure our audience (primarily lawyers and computer scientists) can both navigate and understand the information we offer.

Our focus is on legal effect, that is, the effects brought about by written and oral speech acts that are recognised by law (e.g. a civil servant pronouncing a marriage, two parties agreeing to a contract, or a judge handing down a written judgment).

Given that legal effect (as we know it) relies on text as its underlying technology, any transition in legal practice toward systems that rely on code and data may disrupt the nature and the operation of legal effect. Such disruption might have positive or negative effects for legal effect and thus for legal protection, but in order to know this the effects must be investigated and anticipated. This means considering how legal technologies are and might foreseeably be deployed: by whom, in what contexts, and for what purposes – including in ways not intended by the system’s provider. We summarise this assessment in each Typology profile under the heading Potential legal impact.

For more, see The ‘effect on legal effect’? below.


Why have you included only 30 systems?

As mentioned above, our interest is in the potential impact on legal effect. We selected legal technologies with the potential to have such an impact, whether or not that is intended by their providers. (For a more detailed explanation of this important criterion, see Our research concerns.)

A typology is not a taxonomy; our goal is not to provide an exhaustive catalogue of legal technologies, whether current or historical. The aim is to canvass a representative spectrum of systems, across apps, papers, and datasets, different functionalities, intended users, and fundamental approaches (code- and data-driven, respectively).

Additionally, a typology does not aim for a comprehensive mapping based on mutually exclusive ‘attributes’, but for one based on potentially overlapping ‘affordances’ (i.e. what a specific type of technology enables, and what it restricts).

Based on this approach and on the structure of the Typology profiles, we hope readers will be empowered to ask the right kinds of question with respect to legal protection – both of existing systems and of systems that will be developed in the future.

For more about the distinction between a typology and a taxonomy, see A typology, or a taxonomy?.


Can I suggest a new entry?

The Typology is not a list but a research tool, providing both practical oversight and in-depth inquiry into specific types of legal technology. We do not intend to list systems arbitrarily, but to help lawyers and developers map and compare legal technologies, while also helping them do serious work on how the claims made on behalf of these tools can or cannot be substantiated.

See also Why have you included only 30 systems? for the criterion we applied in deciding what kinds of system might be represented.


The profile for system X is out of date, will you update it?

We do not guarantee completeness or accuracy of our interpretations or descriptions, as they are based on the limited information that was publicly available at the time of research. We aim to provide archive links for all quoted sources, valid at the time of writing. See our Statement about completeness below.


How do you define the terms that you’ve used?

Code-driven and data-driven
We define code-driven systems as all those systems that do not learn based on training data (for instance legal expert systems, rules as code); we group dedicated programming languages under ‘code-driven’, though they are not systems.
We define data-driven systems as all those systems that learn based on training data (whether via supervised, unsupervised or reinforcement learning); we include training datasets under ‘data-driven’, though they are not systems.
Intended users
We define intended users as the natural persons, law firms, courts, litigants or academic researchers by whom providers or developers intend the system, programming language, academic paper or dataset to be deployed. In many cases this will include end-users, though not always.
Form
We define form as the way the system is provided: as a proof of concept, a component of another system, a dataset, an application, or a platform.
Automation or support
We define automation as referring to a system that is meant to take decisions without human intervention, and support as referring to a system that is meant to support human decision-making.
In use
We define in use as referring to whether the system, programming language, proof of concept, or dataset is currently deployed by its intended users (law firms, academics, courts, natural persons).
Creators
We define the creators of the system, programming language, dataset or paper as those who developed the system, wrote the language, collected and curated the dataset, or authored the paper.
Access
We define access as referring to how users can access the system, programming language, dataset or paper, e.g. whether or not it is available in open access (a paper) or as open source (code).

How can I give feedback?

We are interested in feedback on the Typology, especially from those who have used it to inform their own research. To contact us about it, see the Get in touch page.


Background to this project

Our research concerns

The COHUBICOL project (Counting as a Human Being in the Era of Computational Law) enquires into the assumptions of both modern positive law and of computer science/software engineering. These assumptions inform legal practice and the development of legal technologies.

The idea is to target the implications of any incompatible assumptions that are embedded in the design of legal technologies, particularly with respect to legal protection.


Our objective

The primary goal of the Typology is to provide the means to answer the question: what is the effect on legal effect? Because the various dimensions of this question might not be immediately obvious, the tool speaks in terms of ‘legal impact’.

Law is not just a bag of rules but an instrument to guide societal welfare, justice and order. In constitutional democracies, the rule of law requires that legal norms are always both constitutive (enabling legal subjects to act in law, with real consequences) and limitative (restricting how legal subjects can act, by stipulating under what conditions which legal effect is attributed).

In the case of text-driven law, law’s technological articulation (in text) has dedicated affordances, due to the multi-interpretability of natural language. The text-driven normativity (TDN) that underlies modern positive law affords the kind of contestability that is key to democracy and the Rule of Law, while also offering closure. Legal effect is an affordance of TDN.

Computational ‘law’ is articulated in other technologies, such as code- or data-driven systems (prediction of judgment, legal search engines, rules as code, smart contracts). We cannot assume that the affordances of such technologies are equivalent to those of TDN. This raises the question of their effect, influence, and impact on what we now call ‘legal effect’. It also raises the question of whether there could be something called computational ‘law’, or whether law-as-we-know-it will simply fade out as the meaning of both ‘law’ and ‘legal effect’ is transformed.

If the design of a legal technology changes how performative effects are brought about, it changes one of the fundamental elements of positive law. For example, if a lawyer relies on a search application that uses natural language processing techniques, the outputs of that system might lead the lawyer to consider a different notion of what the law is than if she had assessed all the potentially relevant cases manually. The ultimate legal effect of the lawyer’s work (e.g. an argument in court) is thus affected, however indirectly, by the design of the legal tech: the notion of relevance that is embedded in the algorithm will affect the materials the lawyer works with to do her job, and thus in turn will alter the precise contours of the effect her work has on the legal state of affairs. The extent to which this is desirable will vary, but the point is to highlight that there will always be some form of impact.


A typology, or a taxonomy?

A taxonomy works with mutually exclusive concepts and assumes an ontology that maps the distinctions between them.

By contrast, a typology makes analytical distinctions without claiming any mutual exclusiveness between the relevant concepts. Instead, a typology highlights the overlaps and dynamics between entries, while (in this case) making no claims to completeness.

Our typology aims to help users to look at the same phenomenon (legal technologies) from different perspectives, by allowing them to play around and engage with relevant distinctions in order to develop a multifocal view of what matters.

A typology thus creates a multi-dimensional mapping that brings a variety of tokens (‘legal techs’) under a variety of types of legal tech (‘legal search’, ‘legal prediction’, ‘code-driven’, ‘data-driven’, ‘rules as code’, ‘scientific papers’, ‘applications’, ‘datasets’) without claiming completeness as to either tokens or types. We believe this is a more open way of understanding the domain of legal tech, preventing pre-emptive closure of what fits the domain and what does not, while nevertheless being explicit about relevance (eliciting systems that have a direct or indirect effect on legal effect). Note that relevance is a key concept of information retrieval and thereby key to any kind of legal search. We use the concept here in a way that does not lend itself to metrification, noting that in the end the decision on relevance is a matter of judgement rather than reckoning.

This typology takes a deep dive into how different types of legal tech (categories, connotation) are related to their tokens (examples, denotation), and vice versa. The types were chosen in view of (1) what is currently available on the market and in academic research and (2) what is the most relevant in terms of potential effect on legal effect. The tokens were chosen in view of the same.

The typology does not address systems used in the context of smart policing, nor does it cover technologies used for e-discovery, internal knowledge management for law firms, or techs that support business processes in courts. This is in part because we have limited time to explore the domain, and in part because we want to focus on systems that support or even replace the ‘making’ of law, whether by courts, law firms or legislators. The focus is on legal practice. We have included commercially available products and services, academic papers, training datasets and programming languages for rules as code. We deem them sufficiently representative of what we consider the most relevant types of legal tech at this moment and in the foreseeable future.


Why substantiation, not explainability and bias?

To assess explainability or bias, users first need to understand how these technologies actually operate and what they can possibly offer. This empowers them to situate issues of explainability and bias within a deeper context.

COHUBICOL rejects the trade-off between accuracy and opacity as being key to the assessment of legal technologies; instead we seek to assess upstream design decisions that potentially have major downstream impact – notably on legal protection.

The Typology provides the groundwork that enables the posing of more informed questions concerning explainability and bias.


Some history of the Typology’s development

Work on the Typology started in the autumn of 2020, when we decided on a mapping exercise to ground our research in actually available systems (whether proof of concept or operational on the market). The development of the types preceded the choice of systems, but was extensively reconfigured in the course of the exercise. A team of lawyers and computer scientists worked on the typology for around nine months, learning the ins and outs of what these systems are claimed to do and what they are probably capable of achieving, depending on myriad circumstances.

While working on the Typology we also worked on the development of a vocabulary of relevant computer science terminology:

  • to better grasp the internal perspective of those developing these systems,

  • to better understand how computer scientists reason about them, and

  • to comprehend what it means to validate, verify and test their functionality.

For a bit more on the technical underpinnings of the Typology, see ‘The medium is the message: some technical notes on our Typology of Legal Tech’ on the project’s blog.


Statement about completeness

We do not guarantee completeness or accuracy of our interpretations or descriptions, as they are based on the limited information that was publicly available at the time of research (we aim to provide archive links for all quoted sources, valid at the time of writing).

Each profile shows the month when the original research was completed. Our understanding of the systems is based on (i) what information was available when researching the techs, some of which we may have missed, and (ii) our necessarily limited understanding of detailed matters around e.g. jurisdiction, technical implementation, interoperability, etc.

We are happy to remedy any demonstrable mistakes made at the time of writing, but we do not intend to keep the Typology up to date. The goal is to show what these types of technologies are claimed to do, and how one could (and should) investigate whether those who invest in their deployment can understand what they actually offer.


Page updated 7 Nov 2022.