
Chapter 3: The impact of code-driven legal technologies

By Laurence Diver

On this page

  1. 3.1 Introduction
  2. 3.2 Rules as Code
    1. 3.2.1 The COHUBICOL lens
      1. Legality and legalism
  3. 3.3 The texture of code-driven normativity
    1. 3.3.1 Mixing legal and technological normativity
  4. 3.4 A spectrum of impact on law
    1. 3.4.1 Stage 1: Legal metadata and machine-readable documents
      1. Structural markup
      2. Drafting software
      3. Potential impact(s) of code-driven law
        1. Representing legal meaning and structure
        2. Extending the scope of legal protection
    2. 3.4.2 Stage 2: policy development and automated analysis of rules
      1. Logic checking
      2. Policy development and parallel drafting
      3. Potential impact(s) of code-driven law
        1. The threshold for formalisation
    3. 3.4.3 Stage 3: bespoke languages and digital-first laws
      1. RegelSpraak: a controlled natural language
      2. Catala: a domain-specific language
      3. A ‘single source of truth’
      4. Digital-first laws
      5. Potential impact(s) of code-driven law
  5. 3.5 Anticipating legal protection under code-driven law
    1. 3.5.1 Who benefits?
    2. 3.5.2 Increased complexity and maintenance
    3. 3.5.3 Interpretative authority
      1. The mirage of human-readable code
      2. Technological normativity and interpretation
      3. Technological normativity and discretion
      4. Formalisation and the shaping of policy
    4. 3.5.4 RaC and the effect on legal effect
      1. The mode of existence of code-driven law
        1. (Mis)translations between law and code
        2. The ecology of code versus the ecology of law
  6. 3.6 Conclusion: treading the line

3.1 Introduction

For the purposes of this chapter, we adopt a definition of code-driven law as ‘legal norms or policies that have been articulated in computer code, either by a contracting party, law enforcement authorities, public administration or by a legislator.’1 More specifically, we focus on the latter two categories, given the increasing interest and speed of development in the ‘rules as code’ space and the tangible efforts of public administrations in adopting it for real-world application.2 This can be contrasted with so-called ‘cryptographic law’ based on blockchain applications, which despite huge interest (and hype) in recent years has nevertheless lost a great deal of public and scholarly attention in light of the ongoing collapse of initiatives based on cryptocurrency and non-fungible tokens (NFTs).3

Keeping the emphasis on the articulation of public legal norms in computer code, this chapter focuses primarily on ‘rules as code’ (RaC) as a subset of code-driven law. RaC initiatives are becoming very real, and the articulation in code of legal norms which lies at its heart speaks directly to the notion of computational law having a potential ‘effect on legal effect’.4 The position of our first Research Study is that attributed legal effect is a central mechanism by which law and the Rule of Law can provide protection, and so it is essential to enquire whether and how both the concepts of attribution and of legal effect might be impacted by the introduction of certain kinds of computation.

In the remainder of this chapter, we first consider what kinds of system and approach ‘rules as code’ refers to, gleaning some definitions from various prominent commentators. We then attempt to make sense of the RaC landscape by placing its various systems and approaches on a spectrum of impact on law, identifying the potential issues raised by RaC systems as they progress further away from text-driven legality at one end of the spectrum towards code-driven legalism at the other. In the final section of the chapter, we take a step back from individual systems/approaches to consider what the COHUBICOL perspective can tell us more broadly about the implications of code-driven law for legal protection and the Rule of Law.

Some Rules as Code approaches have the potential to enhance legal protection and the Rule of Law, while others reflect an instrumental idea of what a legal rule is

Ultimately the analysis is nuanced: some Rules as Code approaches have the potential to enhance the practices that enable legal protection and the Rule of Law, while others reflect an instrumental idea of what a legal rule is and how it should be treated. The latter demonstrate a legalistic conception and application of the law, which is at odds with the idea of legality, whereby law is not just about rules, but also – and crucially – about affording spaces and procedures which allow for the interpretation, deliberation, and contestation of what those rules ought to mean in particular circumstances. Those affordances, which are readily supported by the text-driven ‘infrastructure’ of law-as-we-know-it, both give law its capacity to provide protection, and give democratic legitimacy to the processes by which legal rules are produced. The interplay between RaC approaches and existing processes of government is complex and multifaceted. The hope in this chapter is to highlight where the introduction of computational methods can enhance the specifically and properly legal character of legislative rules (with all the procedural and interpretative connotations that ought to come with that term), while at the same time avoiding the reductive casting of legal rules as no more than technical or commercial instructions for compliance.

3.2 Rules as Code

There is no settled consensus on what precisely RaC is, or what it should seek to be. This perhaps reflects the fact that people from a variety of professional backgrounds are expressing an interest in its development and use, and so bring a range of ideas about what it is and what it ought to be and do. In that sense we hope this Research Study, along with the prior Research Study on Text-Driven Law,5 might contribute something to the normative development of its scope and aims.

As Mohun and Roberts put it in the OECD’s working paper ‘Cracking the Code: Rulemaking for humans and machines’, ‘RaC envisions a fundamental transformation of the rulemaking process itself and of the application, interpretation, review and revision of the rules it generates.’6 They adopt de Sousa’s definition of RaC as ‘the process of drafting rules in legislation, regulation, and policy in machine-consumable languages (code) so they can be read and used by computers.’7

Kelly suggests that, ‘at its most basic’,

[RaC] is a granular agile project management methodology focussed on

  • Creating a transparent algorithmic law representation, centred on decision tree diagramming and structured languages;
  • Secure, cloud-based production platforms, allowing iteration, testing, access, and maintenance.8

Waddington highlights a variety of approaches that come under the RaC umbrella. He suggests that the approach is

not wedded to any … technology solutions so much as to the idea that the “coding” (or mark-up) of the legislation should be widely usable, traceable to the legislation, rather than adding material to reflect assumptions about procedures or implementation.9

He makes a nuanced normative argument about what RaC should (and should not) seek to achieve:

what if, hype aside, Rules as Code is not really intended to be magically transformative, and is not really about automating legal decisions or about programs that implement law themselves? … [Rules as Code] is about applying human intelligence, rather than AI, and about the less glamorous ways in which computers are already handling law and could do better in aiding humans.10

This is echoed in visions for RaC that are as much about the process of developing rules as the means by which they are represented, processed and enforced. Casanovas, for example, suggests that RaC is

not a new technology (there is no way to present it as such) but an attitude that includes technological and political planning for policy making and a clear will to cope with the demands of the digital age.11

In this context of policy development, de Sousa and Andrews suggest that by making laws ‘machine consumable, we can take a whole new approach to testing them, and modelling different legislative approaches’.12 To achieve this will require new ways of drafting legislation that mean we ‘draft the code and the human-readable text at the same time, and allow them to influence each other’.13

The impact of RaC will depend very much on who is afforded what uses by the system, and at what stage of the ‘lifecycle’ of a legal norm

Once the policy has been developed and the RaC translations created (whether directly by the legislature/executive, or subsequently by third parties), the focus can shift to enforcement, compliance and the provision of legal advice. Extensible platforms such as DataLex can ‘be used to develop legal reasoning applications in areas such as legal advisory services, regulatory compliance, decision support and Rules as Code’.14 ‘Low-code’ and ‘no-code’ platforms like Blawx and Neota enable the declaration of rules using intuitive graphical user interfaces that hide the logical rule engines beneath.15 Domain experts who might not have experience with logic or general-purpose programming can thus be closely involved in the process of defining rules.16
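The general pattern behind such ‘low-code’ platforms can be sketched in miniature: the rule is declared as data, and a generic engine evaluates it against a set of facts. The sketch below is illustrative only, with an invented rule and invented fact names; it does not reflect the actual design of DataLex, Blawx or Neota.

```python
# A minimal sketch of the idea behind 'low-code' rule declaration:
# the rule is data, and a generic engine evaluates it. The rule name,
# conditions and facts are all invented for this illustration.

# A hypothetical eligibility rule, declared as data rather than as code:
RULE = {
    "name": "winter_fuel_supplement",
    "all_of": [
        {"fact": "age", "op": ">=", "value": 66},
        {"fact": "resident", "op": "==", "value": True},
    ],
}

# The engine knows only about generic operators, not about any rule:
OPS = {
    ">=": lambda a, b: a >= b,
    "==": lambda a, b: a == b,
}


def evaluate(rule: dict, facts: dict) -> bool:
    """Generic engine: every condition in 'all_of' must hold for the facts."""
    return all(
        OPS[cond["op"]](facts[cond["fact"]], cond["value"])
        for cond in rule["all_of"]
    )


print(evaluate(RULE, {"age": 70, "resident": True}))   # True
print(evaluate(RULE, {"age": 60, "resident": True}))   # False
```

Separating rule definition from rule execution in this way is what allows a graphical interface to sit on top: the interface need only produce the rule-as-data, which domain experts can author without touching the engine.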

The impact of RaC will depend very much on who is afforded what uses by the system, and at what stage of the ‘lifecycle’ of a legal norm. The ‘who’ can be anyone from legislative drafters to public administrators to commercial enterprises to citizens to judges. The point of application can be anywhere from initial conception, through the development and passing of a norm into law, to the interpretation of its terms by those subject to them, and on to the authoritative determination of its meaning by a court (if that ever happens) – as well as many points in between.

3.2.1 The COHUBICOL lens

Taking a step back is part of COHUBICOL’s approach: attempting to tease out the implicit and explicit assumptions that are reflected in the design of legal technologies and the contexts where they are intended to be deployed. It is essential to properly delineate the ways in which humans (both citizen ‘users’ and legal practitioners, of all kinds) might be aided by technologies like RaC, to ensure that core legal values can be preserved and, where possible, enhanced by the appropriate use of such technology. In that light, some initial observations are worth making here, to frame the analysis later in the chapter.

Legality and legalism

Casting law as ‘regulation’, and citizens as ‘consumers of rules’ or ‘rule-takers’, risks adopting a reductive and technocratic view of legal norms as merely instruments of policy, and citizens and other legal subjects as passive targets whose duty is simply to comply. This flattens the relationships that properly constitute legality, as opposed to legalism, in which there is a ‘reciprocity of expectations’ between legislator and citizen.17 Legality is about the reciprocal rule of law, where citizens have a role (however distant) in the making of rules and their subsequent application, whereas legalism is about a top-down rule by law, where laws are commands of a sovereign that we must simply follow.18 In any given instance a failure to perfectly uphold this reciprocity might not matter too much to those involved: perfect legality is an aspiration, and unlikely to be attained in every case (if ever). But in aggregate, the failure to reflect its ethos could threaten the social, civic and professional ‘anchoring practices’ of ‘interpretation, justification, contestation and creative action’ that both give rules their legal character and enable the law to afford protection.19

These practices are forever in tension, requiring constant reinvigoration for law to have legitimacy and to be effective in structuring societal relations.20 When the (technological) conditions that enable those practices change, the question that must be asked is whether, and how, the practices themselves will change, and possibly falter, perhaps in unforeseen or non-obvious ways. That is the essential question the COHUBICOL project seeks to grapple with.

From that perspective, it is essential not to assume any determinism in the role played by technologies involved in the law: we cannot presume that they will have only beneficial or only negative effects, if we can even assume their introduction will have some kind of impact.21 Instead, to properly identify what that impact is or will be, we need to anticipate (i) who will be afforded what capacities by the technologies, and in what circumstances, (ii) which existing affordances will be changed or removed, and (iii) what impact they might have on the conceptual underpinnings of the law and its particular ‘mode of existence’ (these three elements are of course deeply intertwined).22 Ultimately the hope is that such technologies can be embraced where they facilitate and strengthen the specific type of protection that law affords, and resisted where they promote interests or centres of power that could or would otherwise have been constrained by the Rule of Law.

Casting law as ‘regulation’, and citizens as ‘consumers of rules’ or ‘rule-takers’, risks adopting a reductive and technocratic view of legal norms as merely instruments of policy, and citizens and other legal subjects as passive targets whose duty is simply to comply.

3.3 The texture of code-driven normativity

We mentioned above the idea that one way to make sense of the RaC landscape is to consider the spectrum of its potential impact on the law. More specifically, we want to tease out the difference between what a legal rule does and what a code rule does, and where particular approaches to RaC sit in relation to those two very different types of ‘normativity’ (the ways in which behaviour, action and practice are shaped through inducement, enforcement, inhibition or prohibition23).

In the first Research Study, we considered the texture or fabric of text-driven normativity from various angles, trying to identify its qualities and the conditions that make it possible. One task in this chapter is to attempt to do something similar for code-driven law (specifically RaC).

Different technologies exert different amounts of ‘normative force’, from suggesting behaviour and action, through to guiding them in ways that can be resisted, on to defining their character and limits from the outset, with no possibility of reconfiguration or resistance.24 From a Science, Technology and Society studies perspective this is true of all technologies; they inevitably shape the practices they are embedded within. This shaping is often imperceptible and can be a constitutive part of the practice, sometimes in ways that are not obvious even to those engaged in the practice.25 Text is perhaps an example of such a technology.26 Despite it playing a fundamental role in shaping the nature of legal rules and the character of their application, this fact can be easily missed because it is so familiar to us.27

Crucially, the normative force of technology operates both through the technology in which the rules are embedded, be that text or code, and on the people involved throughout the lifecycle of a rule – from drafters and policymakers to public administrators, compliance officers, citizens, litigants and judges. While it might be tempting to think of legal rules (and indeed any rule) as cleanly logical and susceptible to application free of external influence, their interpretation and the technological means by which they are produced, accessed and enforced all have an impact on their operation in the real world.

Normative impact flows not only from the RaC-modelled rules themselves, but also the technologies used to produce, disseminate and enforce them.

It therefore follows that, when it comes to RaC, normative impact flows not only from the RaC-modelled rules themselves (in whatever specific form they take), but also the technologies used to produce, disseminate and enforce them.28 With respect to production, we can think about the impact of how RaC technologies represent or facilitate foundational concepts of law, for example rights, duties, legal personhood, legal effect and justice.29 With respect to implementation or enforcement, on the other hand, we can consider how RaC affects the interpretative and adjudicative processes of law. In either case, the potential impacts will be different for different actors within the legal system, whose interests and roles are essential to consider when thinking about what it is that legal rules are meant to do, and how.

Text-driven law has a very specific type of normativity. It is a common misconception that rules written in natural language are frustrated commands that would self-enforce if only we could find a way to make that happen. If indeed that were the goal, then the wholesale formalisation and automation of law would make perfect sense, and there would be no need for natural language in law. That this has not happened, despite a huge number of attempts to achieve it, draws attention to the fact that natural language plays a much larger role than simply articulating the terms of the rule. Legal rules are only effective in context, and for them to have any value in structuring society they must be interpreted at point of application, in light of that context.30 Legal rules are also given legal effect in the knowledge that they cannot determine for themselves in advance precisely how they will be understood, or what their meaning will be over time as societal mores and priorities change.31 These are features of natural language that are constitutive of law; they are not bugs to be solved.

Natural language plays a much larger role than simply articulating the terms of the rule.

If text-driven law affords this specifically legal form of normativity, what kind of normativity might code produce, or exert? This distinction – between code and legal normativity – is of fundamental importance to legal practice, because of the structural implications that flow from each type of rule.32 In stark contrast to natural language rules, rules represented in self-executing code will change the state of the world without the need for the presence or oversight of a human to interpret anything prior to its execution.33
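The contrast can be made concrete with a small sketch (all names and values are invented for illustration): a coded rule applies its effect the moment its condition is satisfied, leaving no room for an interpretative step between condition and consequence.

```python
# Illustrative sketch only: a hypothetical coded rule that executes
# automatically, with no interpretative step between condition and effect.
# The names (Account, late_fee_rule) and values are invented.

from dataclasses import dataclass
from datetime import date


@dataclass
class Account:
    balance: float
    payment_due: date
    paid: bool = False


def late_fee_rule(account: Account, today: date) -> Account:
    """If payment is overdue, deduct a fixed fee -- immediately and
    unconditionally. The code cannot ask what 'overdue' ought to mean
    in this debtor's particular circumstances; it simply changes the
    state of the world."""
    if not account.paid and today > account.payment_due:
        account.balance -= 25.0  # fee amount fixed at design time
    return account


acc = Account(balance=100.0, payment_due=date(2024, 1, 31))
acc = late_fee_rule(acc, today=date(2024, 2, 1))
# The fee has already been applied; any dispute can only happen after the fact.
print(acc.balance)  # 75.0
```

In a text-driven regime, by contrast, the norm ‘pay a fee if you are late’ would be applied by a human who could weigh context (a postal strike, a banking error) before any state of the world changed.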

Between these two starkly contrasted ideas of rules, textual and computational, there are many different ways in which digital technologies (including RaC approaches) can have an impact on law. The question for the purposes of this chapter is whether they tend towards supporting a legal idea of a rule, or something else. To make some sense of this, we can think of a spectrum, with notionally ‘pure’ legal normativity at one end, and notionally ‘pure’ technological normativity at the other:

Figure 1. A spectrum of RaC normativity

Depending on the type of RaC and where it sits on this spectrum, it will to a greater or lesser degree mix the two types of normativity in one medium. Where the text of the law is embedded unchanged in a digital technology, the anchoring practices of interpretation, deliberation and adjudication that are afforded by text-driven law may well be preserved. But even where that is the case, the system will necessarily include some measure of technological normativity, because it will to some degree structure the behaviour and actions of those who use it, simply by existing. The extent of this will depend on the RaC approach in question, and where and how it is deployed. It might exert normative force directly (e.g. via the user interface, structuring interactions with the system and framing its output34), or indirectly, when the code-driven translations it embeds mediate the meaning of natural language rules (e.g. where a benefits calculator provides an output that is treated as if it is legally accurate, even if this is not true).35 In many cases it will do both.

Depending on whether the RaC system is used in the development and articulation of legal norms ex ante, or in the provision of advice and/or automated compliance and enforcement ex post, the mixture of the two types of normativity will have different effects, with the technological force subsuming a lesser or greater part of what would previously have been text-driven legal ‘force’.

For example, the technological normativity embedded in the design of an application for drafting legislation might have some influence on the legal normativity – the natural language rules – it is designed to assist the writing of. Similarly, the particular way the interface of a legal expert system requests answers from the citizen will frame their understanding of the system and their interactions with it. Again, the legal rules are mediated through the assumptions made by the designers of the system about what kinds of question to ask, and even how to design the question-and-answer interactions. Because those assumptions mediate the experience (one might even say ‘user experience’, or ‘citizen experience’) of the law in ways that are not neutral,36 it is crucially important to consider whether or not they reflect Rule of Law values, and indeed democratic values, not least equality of access.37 There can be no doubt that natural language legal texts are often obscure, complex, require expertise to understand, and can be expensive to access, and that digital technology has an important role to play in solving these significant problems.38 But there is at the very least a risk that we create more long-term structural issues than we solve if we attempt to address those grave challenges by replacing the fundamentally democratic technology of text as the essential foundation of law.

A central concern of COHUBICOL is what might happen when legal and technological normativity are combined in a single system, with legal force being mediated by or even converted into technological force. This is a spectrum of technological impact, and the picture painted above is at the extreme end where legal normativity is entirely supplanted by technological normativity. As we shall see below, that is not the goal of the vast majority of RaC approaches or their creators, although it is a vision that has been mooted by some.39 The hope here is to highlight some of the problems that might arise if we go too far down that road.

Having made this brief initial foray into legal theory, which we will return to at various points below, we can turn to consider the spectrum of impact on law.

3.4 A spectrum of impact on law

In this section, we look at some primary classes of RaC approach, each representing a particular point of equilibrium, or disequilibrium, between legal and technological normativity. As we traverse the spectrum, the potential for more foundational impact increases:

  • RaC is used to provide added information to natural language legal documents, to enhance their usefulness in terms of legal search, archiving, and knowledge management.

  • RaC approaches are used as a tool to augment policy development and enhance the development of text-driven natural language rules;

  • RaC translations are made available to third parties, who use them to implement legal norms directly in their own systems in order to achieve ‘compliance’;

  • The executive provides RaC translations of natural language legal rules that it uses in service delivery;

  • The legislature promulgates digital-first RaC rules that are taken to be law and enforced automatically.

In considering these approaches and their potential impacts, we deliberately maintain an internal perspective on what law is and is for, as set out in depth in the first Research Study on Text-driven Law.40 This means that we do not dive into the technical specifics of these RaC systems to assess how they perform on discrete computer science tasks developed according to the internal perspective of that discipline. In many cases those tasks have no relationship to the goals and purposes of the law, and the notion of performance that is considered bears little relationship to legal protection and the Rule of Law.41 Instead, we hope to highlight those claims that are particularly salient in terms of the kind of critical appraisal the COHUBICOL project aims to foster.42 This means considering questions such as: What do RaC approaches afford (and disafford), and to whom? How do they interface with or change the mode of existence of law? And what impact do they have on the capacity of the law to provide protection?

We saw above that the threshold between technological normativity and legal normativity will vary according to (i) the design of a system, and (ii) the extent to which we treat its output as having legal effect. The bigger the role that technological normativity plays, the further along the spectrum the system will sit – with the potential for a simultaneous diminution in the role played by legal normativity.

At the least contentious end of the normative spectrum are RaC approaches that provide mechanisms to tag or ‘mark-up’ the structure and elements of legal documents, and in particular legislation. Such approaches are closely connected with the vision of the ‘Semantic Web’, where metadata is added to documents to provide additional information on what they contain, which in turn allows them to be processed by computers in ways more relevant to their domain of use. To give an example, the elements of a legislative document can be tagged to specify their structure: recitals, chapters, parts, sections, paragraphs and articles are specified as such, rather than left as blobs of text that can only be processed without reference to their meaning within the relevant domain (crucially, this structure is different from the visual structure that can be achieved in an ordinary word processor using headings, indentation and numbered lists; the structured tagging referred to here is usually stored internally within the document’s file and is invisible to the reader).

Structural markup

One of the primary RaC technologies used for this purpose is Akoma Ntoso, an example of an eXtensible Markup Language (XML). Akoma Ntoso has been standardised, and as such is freely usable by anyone – and indeed it forms the backbone of numerous legal drafting, publishing and archiving initiatives.43

Figure 2. Akoma Ntoso markup of part of the Scotland Act 1998 (left, Figure 2a) and the generated HTML output (right, Figure 2b)

The left part of Figure 2 shows the markup, or tagging, within an Akoma Ntoso (AKN) version of section 1 of the Scotland Act 1998. The substantive legal text is shown in black, while the tags are shown in blue, with attributes in red. The tags specify the various elements of the Act, including its Parts, headings, sections, subsections and their numbering. From this source document further versions can be generated for human readers. The right part of Figure 2 shows a web page (HTML) version of the same section, generated from the same source, with identical binding text.44
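The generation step described here can be sketched in miniature. The XML below is loosely modelled on the flavour of Akoma Ntoso but is not valid AKN (the real schema is far richer); the point is that a single structured source yields human-readable output with the binding text unchanged.

```python
# A loose sketch of the markup-to-HTML generation step. The XML is
# modelled on the flavour of Akoma Ntoso but is NOT valid AKN --
# the real schema is far richer. One structured source produces
# human-readable output with the binding text untouched.

import xml.etree.ElementTree as ET

SOURCE = """
<section eId="sec_1">
  <num>1</num>
  <heading>The Scottish Parliament</heading>
  <subsection eId="sec_1__subsec_1">
    <num>(1)</num>
    <content>There shall be a Scottish Parliament.</content>
  </subsection>
</section>
"""


def to_html(xml_text: str) -> str:
    """Generate a simple HTML rendering from the structured source."""
    sec = ET.fromstring(xml_text)
    parts = [f"<h2>{sec.findtext('num')} {sec.findtext('heading')}</h2>"]
    for sub in sec.findall("subsection"):
        parts.append(f"<p>{sub.findtext('num')} {sub.findtext('content')}</p>")
    return "\n".join(parts)


html = to_html(SOURCE)
print(html)
```

The same source could equally drive a PDF renderer, a search index, or an archival system, which is precisely the ‘single source, many outputs’ affordance of structured markup.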

At this end of the normative spectrum, the affordances of RaC are perhaps unglamorous but nevertheless extremely useful. The fundamental structure provided by markup languages like Akoma Ntoso in turn creates a foundation which can be used to enhance other software systems that lawyers and citizens frequently rely on.45 For example, legal search systems can utilise the structure of AKN documents to facilitate more accurate and targeted search (for example by restricting results to those provisions tagged as recitals, or pinpointing a specific individual section of an enactment). Because the document structure is explicitly specified by the creator of the tagged document, who will usually be the legislative drafter, there is less reliance on fuzzy searches that treat the content of a document essentially as a bag of words with no structure.46 Although such search approaches have improved dramatically, the capacity to target with certainty and isolate a specific part or parts of a document and to access metadata about them is a powerfully generative affordance of this kind of RaC system. It has allowed legal archival and database systems to provide more granular access to the text of the law, and to track its evolution and status over time as it comes into force and is amended or repealed. Metadata about if and when a specific enactment has come into force allows for ‘point-in-time’ displays of which parts of a legislative instrument have legal effect at a given moment. The structured documents also allow for cross-references between provisions to be identified, enabling more comprehensive understanding of the interconnected legal effects of legislative provisions.
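Two of these affordances – targeted search over tagged elements, and ‘point-in-time’ views derived from commencement metadata – can be sketched as follows. The attribute names used here (`commences`, `repealed`) are invented for illustration; real schemes record such metadata differently.

```python
# Sketch of two affordances of structured markup: targeted search and
# 'point-in-time' views. The attribute names ('commences', 'repealed')
# and the document itself are invented for illustration.

import xml.etree.ElementTree as ET
from datetime import date

DOC = """
<act>
  <section eId="s1" commences="2000-01-01">Provision one.</section>
  <section eId="s2" commences="2005-06-01" repealed="2010-01-01">Provision two.</section>
  <recital eId="r1">A recital.</recital>
</act>
"""


def in_force(elem, on: date) -> bool:
    """Was this provision in force on the given date, per its metadata?"""
    start = date.fromisoformat(elem.get("commences"))
    end = elem.get("repealed")
    return start <= on and (end is None or on < date.fromisoformat(end))


root = ET.fromstring(DOC)

# Targeted search: only elements tagged <section>, not a 'bag of words'.
sections = root.findall("section")

# Point-in-time view: which sections had legal effect on a given date?
snapshot_2007 = [s.get("eId") for s in sections if in_force(s, date(2007, 1, 1))]
snapshot_2012 = [s.get("eId") for s in sections if in_force(s, date(2012, 1, 1))]
print(snapshot_2007)  # both sections in force in 2007
print(snapshot_2012)  # s2 repealed by 2012
```

Note that the accuracy of such a snapshot depends entirely on the accuracy of the metadata, a point taken up below in relation to the legal consequences of error.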

One prominent example implementing these affordances is the UK’s online public database of legislation, which provides machine-readable versions of most primary and secondary legislation.47 The provision of a combination of (i) authoritative legal text and (ii) a structured machine-readable format means that third parties can build applications with granular access to the official source of legal text via API.48

Drafting software

One example of a class of application that builds upon the foundation provided by structured markup is the specialist software used by parliamentary counsel to draft legislation. Two prominent examples of these ‘integrated development environments for legislation’ are Legislation Editing Open Software (LEOS),49 developed by the European Commission, and LawMaker,50 developed by the National Archives and used in the UK. Precise functionality varies, but these systems have replaced the use of word processors for drafting legislation, which required complex and unreliable template files and had limited capacity for integration with other systems or for collaborative working.51 At its core, legislative drafting software produces documents directly within a structured format such as Akoma Ntoso (instead of in a generic word processor or Web format). As new elements of a piece of legislation are drafted – articles, sections, subsections, paragraphs, etc. – they are immediately tagged with the relevant structural elements. This is usually done ‘in the background’, such that the drafter sees only the formatted natural language document.

As more legislation is produced directly in a structured format using such specialist software, the requirement to convert a traditional unstructured document is removed, which in turn minimises the likelihood of errors being introduced during conversion. On top of this core difference with traditional processes of creating and digitising legislation, these drafting systems are designed specifically to facilitate various aspects of the specialist work that legislative drafters do. This includes, for example, defining document structure to limit the potential for mistakes, facilitating collaborative editing across teams, tracking document versions, interlinking with legal databases for inserting/checking cross-references, allowing drafts of documents to be shared, and modularising common elements of the legislative workflow.52

Potential impact(s) of code-driven law

One can appreciate that by including additional metadata within the machine-readable structure of a document, approaches such as Akoma Ntoso can afford a deeper understanding of a particular piece of law which in turn might have a bearing on its legal interpretation. At the same time, there is in principle no effect on the natural language legislative document, as was shown on the right of Figure 2 – all things being equal, access to the law is not affected, nor are the traditional appearance or visual structure of the legislation. Machine-readable structured legal documents have the same basic interpretative affordances as do word processor or PDF versions of the same document.

The normative impact on law-as-we-know-it of providing basic machine-readable tagging of structure within legal documents appears minimal, provided what is tagged does not purport to provide the legal meaning of the elements of the document, but rather (and only) its unambiguous structure and the metadata required to capture it. This is fundamental, because to attempt to schematise the meaning of legal norms in an unambiguous and universally-accepted way is to elide one of their core affordances: the capacity to disagree about what they ought to mean. Attempting to codify the substantive meaning of legal norms is to do the job of the law before it gets the chance.

This concern is less acute with respect to the structure of legal norms or, even less problematically, the structure of the documents that contain their text. It is rarer for parties to disagree about whether a piece of text qualifies as a ‘section’ or ‘article’ of an enactment than it is for them to argue about what that section or article ought to mean – the latter type of argument is of course the bread and butter of litigation.53 But potentially problematic is the inclusion of metadata about, for example, the moment at which a provision came into force.54 If one hopes to rely on point-in-time snapshots of the state of the law, these need to be accurate to avoid potentially significant legal consequences. Dates and computers are not always good bedfellows,55 however, and even within text-driven law the question of how to specify them consistently is not without complexity.56 The consequences of inaccuracy could be significant, and a novel regime of liability might be necessary to account for them.57

If it is possible for such technical issues to be reliably overcome, and tagging of the text is limited to structural elements of legislation that are already recognised by the law, then the fundamentals of legal effect would seem to be unaffected. The mode of existence of legal norms is unchanged; legal rules are posited in natural language, they are produced by performative speech acts whose validity is governed by positive law (itself the product of the same essential process), and they become institutional facts within the legal-institutional order. The medium by which the legal texts are made available simply augments those texts with further information that may be contextually and legally relevant to their application in the real world, without affecting the capacity and methods of interpreting them. The capacity of the law to afford protection, built on the foundational capacity of text to afford multi-interpretation – and thus contest, stability, and geographical and temporal reach – is unchanged.

When systems are built on top of this foundation of structured documents, there is greater potential for normative impact on law and on legal effect. We saw above how searching and archival practices are extended by this type of RaC system, and how the affordances of this kind of RaC create new possibilities for legislative drafting practice. Improving the capacity to search, categorise and reference legal materials ought in principle to benefit users of the law. Affording access to the ‘raw material’ of law is a fundamental prerequisite of the capacity of citizens and their legal agents to develop novel arguments that are legally valid, a capacity that underpins legal protection.58 This is particularly true under the conditions of contemporary law, the volume and complexity of which make it difficult if not impossible for the citizen (let alone practitioner) to make sense of all the rules applicable in a given situation. The types of computational assistance afforded by this kind of RaC system may be not just desirable, but may be necessary, if the Rule of Law is to have genuine purchase in the contemporary world.59

Structured legal documents afford more than unstructured legal texts and the older methods of legal search built around them.

In that respect, then, structured legal documents afford more than unstructured legal texts and the older methods of legal search built around them. Structured documents extend what is already possible with natural language legal norms and facilitate the creation of systems that can increase and improve access to legal materials – though much will depend on the assumptions made in the design of those systems, in terms of how concepts such as relevance are handled. In principle, then, technologies at this point on the normative spectrum are an opportunity to strengthen the capacity of the law to provide protection, by facilitating more creative, forceful, or precise argumentation by reference to a wider range of relevant legal materials, the relationships between the provisions they contain, and the metadata that pertains to their status (such as validity and enforcement).

3.4.2 Stage 2: policy development and automated analysis of rules

The next stage on the spectrum of RaC normativity moves beyond the marking up of elements in the document to formalise additional information about the logic of those elements. The central motivating idea is that at a certain level legal rules can be abstracted into syllogisms: logical ‘if this, then that’ statements where conclusions flow deductively from certain premisses. The goal is to represent that essential logic.60
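The syllogistic abstraction can be sketched in a few lines. The predicates and rules below are invented examples; the sketch shows only the essential mechanics of ‘if this, then that’, where conclusions follow deductively once the premisses are established as facts.

```python
# A minimal sketch of the syllogistic abstraction: each rule pairs a set
# of premiss predicates with a conclusion, and conclusions are drawn by
# forward-chaining over established facts. Predicates are invented.
RULES = [
    ({"is_employed", "earns_below_threshold"}, "eligible_for_benefit"),
    ({"eligible_for_benefit", "has_applied"}, "benefit_payable"),
]

def derive(facts: set[str]) -> set[str]:
    """Forward-chain until no new conclusions can be drawn."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premisses, conclusion in RULES:
            if premisses <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(derive({"is_employed", "earns_below_threshold", "has_applied"}))
```

Note what the machine manipulates here: symbols and their formal relationships. Whether a real person ‘is_employed’ in the legal sense is precisely the interpretative question the formalism cannot answer.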

At the point of drafting legislation, logical representation can complement the structural markup at Stage 1 of the normative spectrum. At this stage, the goal is for computation to begin to interact with the meaning of the rules, or at least with how their meaning is likely to be interpreted once they are given legal effect.61 At Stage 1, the tagging allows for computational tools to manipulate the metadata to provide novel affordances (better search, linked cross-referencing, granular citation, etc.). This is sometimes referred to as the rules being ‘machine-readable’.

At Stage 2, what is manipulated are symbolic representations of the rules and the relationships between them. This has been referred to as ‘machine-consumable’ rules, to highlight the different levels of computational tractability. Here RaC approaches aim to capture the relationships between the symbolic representations of rules to enable conclusions to be drawn from them.62 The computer does not understand the linguistic or ‘semantic’ meaning of the rules, or their import within the legal domain – only the formal relationships between their symbolic representations. From both a legal theory and socio-legal perspective this is a fundamental limitation, but this does not mean such systems cannot play a role, for example in the production of higher-quality legislation.63

Logic checking

At the basic level, logic checking can be another unglamorous addition to the legislative drafter’s toolkit, where computation augments the drafting and subsequent comprehension of the statute that already takes place. As Waddington puts it,

It can involve merely highlighting the logical structures that the drafter is trying to create in the legislation, so that any use of that logic should always be traceable, explainable and open to correction or appeal in the same way as it is when a human follows the logic from the text.64

The end product is in the same medium as traditional legislative drafting – namely natural language text-driven rules – but these are the result of a process that has used some measure of testing and checking to ensure that they meet a base level of logical coherence. The result is legislation that contains fewer mistakes, for example syntactic ambiguities that create outcomes that are impossible to arrive at within the logic of the text, or cross-references to non-existent provisions.65 Since the logic of the output is non-formal and embodied in natural language text, it can still itself be contested, along with the meaning and ascription of the predicates it contains.66
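One of the mistakes mentioned here, a cross-reference to a non-existent provision, is exactly the kind of formal defect that can be checked mechanically during drafting. The sketch below uses invented section texts and a deliberately naive pattern match; real drafting tools work over the structured markup rather than raw text.

```python
import re

# Hypothetical draft: section numbers and cross-references are invented.
DRAFT_SECTIONS = {
    "1": "This Act applies subject to section 3.",
    "2": "A person under section 1 must comply with section 4.",
    "3": "An exemption may be granted under section 1.",
}

def dangling_references(sections: dict[str, str]) -> list[tuple[str, str]]:
    """Return (section, target) pairs where the cited target does not exist."""
    problems = []
    for num, text in sections.items():
        for target in re.findall(r"section (\d+)", text):
            if target not in sections:
                problems.append((num, target))
    return problems

print(dangling_references(DRAFT_SECTIONS))  # → [('2', '4')]
```

Catching such defects before enactment is unglamorous, but it is exactly the kind of base-level coherence checking described above: the output remains natural language text, just with fewer formal mistakes in it.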

When RaC is used in this way, the formal quality of the resulting legislation is higher, because when it comes to be interpreted, the chances are reduced of encountering a condition that cannot be made sense of in the real world, without recourse to a court. Although some forms of linguistic ambiguity are inherent features of natural language, and are fundamental to the contestability that lies at the heart of the Rule of Law,67 avoiding the drafting of patently irresolvable conditions in a statute can only be beneficial in terms of both legal protection and the proper reflection of the intention of the democratic legislator.68

While the courtroom is where law’s capacity to resolve such difficulties is most clearly demonstrated, litigation that arises from mistakes in legislation hardly reflects the higher aspirations of the law. Avoiding them in the first place frees up limited court capacity to focus on conflicts that are of more substantive importance to the individuals affected and to the community as a whole. This is true not just in terms of the closure provided by a specific ruling itself, but also in the deeper sense that the ongoing practice of legality is continued and upheld, and with it the democratic engagement of citizens with the norms and processes that structure and co-constitute society.69

Policy development and parallel drafting

Further along this stage of the normative spectrum, RaC can have a more structuring impact on the practices involved in developing and producing legal rules. As we saw with the definitions in section 3.2, the policy sphere is engaging with RaC as both a tool and a perspective at the interface between policymaking and legislative drafting. The OECD’s working paper ‘Cracking the Code’ refers to RaC as ‘a fundamental transformation of the rulemaking process’ and a ‘strategic and deliberate approach to rulemaking, as well as an output’.70 Waddington articulates this vision:

This could mean that legislative drafters and policy officers understand each other better during the drafting, that consultees can more easily grasp what is proposed and demonstrate how it could be changed, that inconsistencies in drafts can be spotted before they become problems, and that those who need to read the legislation can be helped to navigate complex sets of cross-references, conditions and exceptions to other exceptions. Those would represent significant benefits in themselves, without going anywhere near automating the implementation or enforcement of legislation.71

In the same vein, the New Zealand Government’s Better Rules for Government project seeks to bridge the perceived gap between policy intent and implementation, applying a service design approach to integrate policy development with rule drafting.72 It brings together teams from across the legislative process, including policy makers and analysts, legislative drafters, rule analysts, service designers and software developers. Instead of following a sequential process from policy to drafting to implementation, akin to the waterfall approach in software development, the idea is that direct and iterative (‘agile’) collaboration between each discipline will result in rules that better reflect the policy intent of the government. As the findings of the project’s Discovery Report put it:

We concluded that the initial impact of policy intent can be delivered faster, and the ability to respond to change is greater, with:

  • Multidisciplinary teams that use open standards and frameworks, share and make openly available ‘living’ knowledge assets, and work early and meaningfully with impacted people.

  • The output is machine and human consumable rules that are consistent, traceable, have equivalent reliance and are easy to manage.

  • Early drafts of machine consumable rules can be used to do scenario and user testing for meaningful and early engagement with Ministers and impacted people or systems.

  • Use of machine consumable rules by automated systems can provide feedback into the policy development system for continuous improvement.73

Other benefits include the development of standard patterns of legislative language that represent recurring policy requirements.74 These set out skeleton formulations for policy aims, and the questions that must be addressed to implement the pattern in a legislative draft. Such ‘modular’ rule formulations can then be used to develop software design patterns that can implement them, thus reducing the potential for misrepresentation of the rules after-the-fact, and the engineering problems of continually reimplementing what could otherwise be a robust standardised approach.75

Once again, the ultimate product is still text-driven legislation, but the process by which that product is arrived at is improved in various ways, owing to closer collaboration and understanding between the cross-disciplinary teams that are involved in the policy-to-legislation process. Although constitutionally speaking the legislature cannot decide in advance the meaning of the rules it produces, their quality as rules might be meaningfully improved where the teams involved in producing them have an understanding of one another’s practices and the constraints they work within, so that the ‘gearbox’ between democratic policymaking and legal drafting runs more smoothly.76

Potential impact(s) of code-driven law

The potential impact at this point on the spectrum depends to a large extent on whether the application of the approach is used ex ante during the drafting of the rules, or ex post in the attempt to deliver a service, automate compliance, or provide advice about what the rules mean.

As with the structural markup discussed at Stage 1, what comes out at the end of the process are rules that are, from a formal perspective, identical to what went before. Their fundamental affordances are unchanged (whatever their content might be). The text is still natural language, affording interpretation and contestation. The procedures of the Rule of Law are in principle still available; they take up where the legislative process leaves off, ready to deal with disagreements about legal right and duty in the normal way. And, where the goals of initiatives like the New Zealand Government’s Better Rules project are in fact realised, the quality of the resulting rules is improved – they contain fewer logical anomalies, and their structural translation is more faithful to the goals of the legislature’s policy than it might otherwise have been.

The threshold for formalisation

Various issues arise, however, in relation to the content of the rules. First is the question of which elements of the legislation should be modelled to undergo checking for logical correctness. As the authors of the Better Rules report recognise, ‘not all rules are suitable for machine consumption’.77 This raises the question of which rules are suited to computational representation and, crucially, who gets to decide this. Just as the distinction between ‘easy’ and ‘hard’ cases is not a simple one in text-driven law, the question of what rules are readily formalisable is similarly vexed. Any rule can be formalised; the question is how it is done, what the effects are of this, how they interplay with Rule of Law values and procedures, and who is affected by the change.

Any rule can be formalised; the question is how it is done, what the effects are of this, how they interplay with Rule of Law values and procedures, and who is affected by the change.

In one of the experiments undertaken by the Better Rules project, for example, only one part of one piece of legislation was considered, in order to ‘keep the problem (reasonably) discrete’.78 While this is an understandable decision in an exploratory scoping exercise, the question of scalability is absolutely key, because the meaning of legal norms is never limited to just the text of the statute containing them but is influenced by other sources of law, including constitutional enactments, case law, doctrine and principle.79

This underlines the importance of understanding from the outset the nature of text-driven law, lest the use of artificially restricted examples give the impression that success on a small scale will be applicable to the wider law and legal system. As Leith suggests, in the legal world (to say nothing of any other domain) it is no defence to say that the intention was to formalise only one area or piece of law, because by nature the discipline of law requires more than that:

legal knowledge, the sociology of law demonstrates, cannot be partitioned off into neat blocks which will fall, one by one, to the technicalism of the AI researcher. Rather, it is only by having a global appreciation of all the aspects of law which will allow each of those aspects to be properly understood – for law is an interconnected body of practices, ideology, social attitudes and legal texts, the latter being in many ways the least important.80

Whether this limited view is a problem will depend on the context in which the RaC system is used. Policy and drafting experts using RaC in the development stage of a statute will perhaps understand the proper (limited) role that the system can play within their broader practice (in any event, RaC in those contexts is mostly aimed at producing well-formed documents rather than pronouncing on the meaning of the law). RaC systems that purport to facilitate compliance or give advice, however, might prompt people to take actions that are misinformed as to the proper extent and meaning of the law. This question of interpretation is of central importance, and we will return to it in section 3.5 below.

3.4.3 Stage 3: bespoke languages and digital-first laws

At this part of the normative spectrum, new domain-specific [programming] languages (DSLs) enable the declaration of rules in a format directly susceptible to computational processing and automated enforcement.81 DSLs can be distinguished from general-purpose programming languages such as Python or Rust because they are designed for a particular class of problem or task.82

In the contemporary RaC context, the subset of DSLs known as ‘controlled natural languages’ (CNLs) are commonly adopted.83 These make programming rules more accessible for non-technical domain experts, i.e. policymakers and lawyers. In some cases, the language design brings the ‘grammar’ and keywords of the CNL closer to that of natural language legal text, to allow it to be read and understood as one. Despite this, as their name suggests their syntax is tightly constrained so that the rules follow a strictly predefined form.

The compilers of RaC DSLs embed different approaches to the logical aspect of legal reasoning, for example allowing for exceptions to rules (and exceptions to those exceptions), prioritisation of the applicable order of rules, and the role of time in the applicability of a rule.84 The ultimate goal is basically the same: to model the logical structure of rules to produce automated conclusions that can be used for compliance checking, the application of the law by officials, and to provide advice on how the law might apply in a given situation (or, as is more likely, some mix of all three).

RegelSpraak: a controlled natural language

A contemporary example of a RaC CNL is RegelSpraak, used for the calculation of tax liabilities by the Dutch Tax Authority.85 RegelSpraak is based on RuleSpeak, a ‘set of guidelines for expressing business rules in concise, business-friendly fashion’.86 Like logic programming more generally, the modelling of business rules has a long history that gives an insight into the lens through which legal rules are often viewed when approached from that perspective.87 RegelSpraak imposes a strict ‘[RESULT] IF [CONDITION]’ structure on rules.88 Those conditions compare attributes that can be Boolean (true/false), numerical, date, enumerative, or that define the role/object the rule is concerned with.89 As with other formalisms, the rules are defined ‘atomically’ as discrete units, allowing for the identification of broader rule patterns. This is intended to mirror modularisation in programming, and allows for recursive inductive reasoning about the rules.90

Figure 3. A RegelSpraak rule pattern (left) and substantive rule (right)
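RegelSpraak itself uses Dutch phrasing and grammar, so the sketch below is only a loose Python analogue of the ‘[RESULT] IF [CONDITION]’ shape with typed attributes (Boolean, numerical, date), not RegelSpraak syntax. The attribute names, the age threshold, and the discount rule are all invented for illustration.

```python
from datetime import date

# A loose analogue of the atomic '[RESULT] IF [CONDITION]' rule shape.
# Not RegelSpraak syntax; names and thresholds are invented.
def atomic_rule(result, condition):
    """Build an atomic rule: returns result when the condition holds."""
    def apply(case: dict):
        return result if condition(case) else None
    return apply

# Conditions compare typed attributes: Boolean, numerical, and date.
senior_discount = atomic_rule(
    "discount_applies",
    lambda c: c["is_resident"]
              and c["age"] >= 65
              and c["claim_date"] >= date(2024, 1, 1),
)

case = {"is_resident": True, "age": 70, "claim_date": date(2024, 3, 1)}
print(senior_discount(case))  # → discount_applies
```

Defining rules atomically in this way is what allows the broader rule patterns mentioned above to be identified and reused across a rule base.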

The aims of RegelSpraak are to be intelligible to non-technical users, to allow ‘automated semantic analysis’, and to facilitate ‘automated execution of the rules’.91 The Dutch Taxation Authority uses it to automate the execution of tax rules in its internal systems and on its website, to make it easier to handle the implementation of annual budget updates, and to provide a single, centralised ‘source of truth’ for fiscal rules that, despite their quasi-natural language representation, are ‘only interpretable in one way’.92 Given its intended audience, its implementation within the Dutch Taxation Authority (DTA) uses Dutch phrasing and grammatical construction to ‘maximize the resemblance to a natural sentence’.93

Catala: a domain-specific language

Another prominent example of a DSL used in RaC is Catala, a language originally designed for use in applying French fiscal law.94 It adapts the ‘literate programming’ approach to code documentation by putting the executable RaC code immediately adjacent to the legal text which it seeks to translate.95

Figure 4. Part of the US federal tax code, formalised in Catala96

Here, literate programming is not (just) intended to aid understanding of the code ex post, but is also integral to the ‘pair programming’ approach used in writing that code. There, an expert in Catala sits alongside an expert in the relevant fiscal law. Again, the cross-disciplinary approach to policy development discussed above can be facilitated: expertise in policy, programming and legal interpretation can be mutually beneficial for the end product, the legal or policy expert interpreting the legal norms and checking the programmer’s implementation in a cycle of iterative testing and refinement.97

The use of a DSL adds an extra layer to the RaC-enhanced policy development process discussed above at Stage 2, by providing a formalisation into which the text of the legal norms can be directly translated. The result is a text that is bi-directional; it is both a natural language text, intelligible by humans without a technical background, and a formalisation that can be compiled and executed by the machine.98 Where the compiler of the DSL is designed around formal verification principles, as Catala is, one can be certain that the compiled code reproduces the logic of the rules expressed in the DSL.99
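The literate layout that Catala adopts can be sketched without its actual syntax. In the Python sketch below, each executable rule sits immediately beside the legal text it purports to translate, so a legal expert can check the pairing clause by clause; the ‘articles’ and amounts are invented, and this illustrates only the layout principle, not Catala itself.

```python
# A loose sketch of the literate-programming idea: each executable rule
# sits beside the (invented) provision it translates. Not Catala syntax.

# "Article 1: The basic allowance is 5,000 euros."
BASIC_ALLOWANCE = 5_000

# "Article 2: For each dependent child, the allowance is increased
#  by 1,000 euros."
def allowance(dependent_children: int) -> int:
    return BASIC_ALLOWANCE + 1_000 * dependent_children

print(allowance(2))  # → 7000
```

The value of the pairing is that disagreement about the translation is localised: if the legal expert disputes the implementation of ‘Article 2’, the relevant code is directly adjacent to the disputed text.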

This builds on the logic checking capacity of logic programming to produce general-purpose code that can be used in production systems. The translation of the logical structure of the legislation into a medium that can be (i) checked computationally, and (ii) understood by a non-technical reader, means that the latter – usually a legal expert – can verify that the output of the code is congruent with how they expect the legal rules to operate.

A ‘single source of truth’

The compiler of the DSL converts the intelligible text of the DSL model into a form that allows the modelled logic to be computed just like any other programme. Some languages, such as Catala, are designed to allow trans-compilation of that logic into general-purpose programming languages. The resulting code can then be integrated directly into user-facing applications.100 To the extent that the application is required to comply with a specific law, or purports to enforce or give advice on what the law means, it can be said to be faithful to the model of the law that was written in the DSL.

The goal here is to create a ‘single source of truth’:101 a quasi-natural language version of the law that is endorsed by legal experts as a canonical translation of the text-driven law that it models. This is somewhat analogous to the structured versions of legislative documents we saw at Stage 1 of the normative spectrum. Like those structured documents, the initial DSL translation is generative, in that it can be used as a source from which to produce multiple further versions for use in different contexts. The significant difference at this point on the spectrum, however, is that what can be done with those versions is potentially more impactful because they are computable, automatable, and because they can be directly integrated into infrastructural or application code that will operate automatically in the real world. The threshold between technological normativity and legal normativity is thus quite different from RaC approaches at Stage 1 on the spectrum.

Digital-first laws

In the approaches just outlined, the goal is to reduce the friction of translation as far as possible, so that those involved in developing policy and the natural language rules that implement it can also be directly involved in the process of producing RaC translations for use in digital service delivery, compliance, or legal advice and knowledge management.

Going further, there is a push towards various visions of ‘digital-first’ drafting, including the direct use of DSLs for legal rules.102 Policymaking is thus oriented around simplifying rules and disambiguating terms, in order to reduce discretion and facilitate automated processing.103 Here the digital version is treated as the ‘single source of truth’. The law is expressed directly in – rather than translated into – an executable (and presumably still human-readable) DSL, and thus compliance is essentially guaranteed, provided the rules are then successfully integrated into the target systems. Friction between articulating a rule and it being implemented in a digital system is reduced as far as possible and, in an inversion of what was discussed above, natural language versions of the rules are generated from the DSL.
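The inversion described here, generating the natural language text from the structured rule rather than the other way round, can be sketched in a few lines. The rule content and the template are invented; the point is that in a digital-first setting the structured form is primary and the prose is derivative.

```python
# In a digital-first setting the structured rule is primary and the
# natural language version is generated from it. A tiny sketch; the
# rule content and template are invented for illustration.
rule = {
    "condition": "an applicant is ordinarily resident",
    "consequence": "the applicant qualifies for the allowance",
}

def to_natural_language(r: dict) -> str:
    """Render a structured rule as a prose sentence."""
    return f"If {r['condition']}, {r['consequence']}."

print(to_natural_language(rule))
# → If an applicant is ordinarily resident, the applicant qualifies for the allowance.
```

Even this trivial example shows where the authority lies: any nuance the template cannot express simply never appears in the generated prose, which is what gives the normative divergence discussed below its bite.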

Painting what is probably an extreme picture, Wong envisages that at this point legislative drafters would consider the digital RaC version to be authoritative, while the public would interpret and use the natural language version that is generated from it. Taken further still, the digital version comes to be treated as the official source of law both inside the administration and by the public.104 Both legislative and contractual norms are digital-by-default; just as digital audio, video and image formats have become the default means of representing previously ‘analogue’ media (music, film, imagery, etc.), so too can the law have its essential substance represented digitally.105 If the legislature were to get to this point, where law is represented directly in code, the layers and steps of the legislative process would be dramatically reduced, and the code-driven rules would presumably have the imprimatur of constitutional validity.106 We would have Rules as Code as Law.

Potential impact(s) of code-driven law

At this point on the normative spectrum, we are starting to depart from legal normativity per se. Constrained natural languages and domain-specific languages that are human-readable but machine-executable limit what can be validly expressed in them, which can have a shaping effect on the content of policy from the outset.

Even if the scope of policy is left unchanged (assuming that is possible in the shift from text to code – see the discussion at section 3.5.3 below), and all that is aimed for is a faithful or ‘isomorphic’ representation of the logic of the statute, this shaping effect remains because of the way technological normativity operates in contrast to text-driven normativity.107 At the point of drafting natural language rules, logic modelling can be helpful, as we have seen above, but at the point of execution or the provision of advice it necessarily elides large parts of what it means to interpret and apply a legal rule. For that reason, ‘isomorphism’ as a goal is fundamentally limited in ways that circumscribe its usefulness for real-world application of legal rules. Even a robust, formally-verified isomorphism must not be confused with the law itself, since rules, however high-quality they might be, are not the whole of the law, even when they are text-driven.

Articulating laws directly into code for the purposes of execution shifts us closer to computational legalism.

If rules are written directly into code, the effect is stronger still. In the latter case the disconnect between legal and technological normativity is complete: even if a natural language text is generated from the code and treated as notionally ‘authoritative’,108 in practice the normative divergence would mean those subject to the automated execution of the rules would be interpreting a vision of normativity (textual/legal) that is categorically different from what was being imposed in reality (i.e. technological). This would be deeply problematic in terms of the Rule of Law; the executive and the public would not be ‘reading from the same hymn sheet’, normatively speaking, which at the very least would undermine the Rule of Law principle of equality, and the affordance of procedural due process. Those with access to the code would have a different view of what the law is from those with access only to the natural language generated from it. The democratic affordances of textual interpretation would be sidelined, and power would accrue to those with access to the code and hence prior knowledge of the distinct technological normativity it will impose.

Articulating laws directly into code for the purposes of execution shifts us closer to computational legalism.109 It sidesteps the affordances of text-driven law, because automation is precisely the goal, eliding the direct and indirect values of those affordances. The relationship between legal and technological normativity shifts dramatically away from an equilibrium between rules posited ex ante and interpretation and procedure happening ex post. Rules become the central focus, and at the same time because those rules are computational rather than simply textual, imposing technological rather than legal normativity, their capacity to impose themselves is far stronger and their constitutional acceptability is accordingly much weaker.110

3.5 Anticipating legal protection under code-driven law

Above we set out a spectrum of the potential impacts of Rules as Code on law, and discussed some of the potential benefits and concerns that arise at the various points where RaC systems lie on that spectrum. In this section, we step back from individual systems and approaches to highlight some concerns about RaC more generally: who will benefit from its introduction? How might its maintenance requirements create additional burdens for the Rule of Law? How will it impact the concepts of interpretation and legal effect that are fundamental to legal protection?

3.5.1 Who benefits?

A fundamental question that must be answered with respect to each RaC application is: who benefits? While the intention is generally to improve access to justice, reduce cost, and/or more efficiently convert policy into legal rules, it will take some time to tell whether these laudable goals have been realised. Whatever the goal, however, it remains the case that formalisation means adopting a certain view of the world. At least when used to execute the law, or to advise on its meaning, it imposes a frame that in a plural society it should be the law’s role to keep open. As Schafer puts it,

Legal AI becomes the stalking horse of a very specific conception of justice, turning what should be a contested public debate about the values of law into a technocratic decision of what is computationally possible.111

For all its faults in practice – some of which can indeed be ameliorated by RaC – text-driven law is fundamentally democratic, insofar as natural language normativity affords both accessibility and the co-existence of a multitude of differing worldviews. The systems and procedures built around the technology of text might be flawed and in need of (in some cases serious) reform, but that is not a consequence of text as the central medium of law.

Formalisation requires adopting a certain view of the world… it imposes a frame that in a plural society it should be the law’s role to keep open.

Regarding access to justice, we suggested above that certain RaC approaches, such as structured documents, can materially enhance access to the text of the law, and provide affordances that are genuinely valuable for interpreting legal meaning and legal status. But other RaC systems threaten to create a two-tier justice system, where those without power and the means to access bespoke legal advice must instead make do with the commoditised output of a formalism. Those people will in many cases be in vulnerable positions and, given the normativity of the computational medium, might not appreciate how to manipulate the law to fit their needs or understand what their options are for contestation.112 The risk is therefore that existing problems of access to justice are potentially amplified rather than solved, with focus shifted away from potentially more effective measures such as increased investment in legal aid (likely to be more costly than RaC, and therefore less attractive to some).

Another consideration is how we conceive of and value lawyers’ skills. A great deal of what lawyers actually do lies outside the interpretation of rules.113 By focusing on rules and their automation, we potentially deskill lawyers and reduce the extent of their role as skilled interpreters of those rules in light of their clients’ circumstances, and those circumstances in light of the rules.114 A commodification of expertise might remove lawyerly practices that are societally valuable, both in terms of providing high-quality legal advice that successfully upholds clients’ legal interests, but also in terms of support, understanding, and solidarity, which in many cases will be of great value regardless of the outcome of the case.

Questions of user need might in fact be better answered in part by design thinking

At the same time, it is possible that automation might instead free up time for those elements of the role. Understanding the true impact will require further (empirical) research.115 What is necessary, however, is a close(r) understanding of what conditions legal protection and the Rule of Law rely upon for their operation, and thus what theoretical and technological tools practitioners need at their disposal. Indeed, such questions of user need might in fact be better answered in part by the ‘design thinking’ approaches pioneered by the New Zealand Government’s Better Rules project.116 To that extent they might be welcomed as a means of getting to the heart of how those who ‘do’ law can be better understood and supported by technology to uphold its central values.

3.5.2 Increased complexity and maintenance

Viewed through the lens of the Rule of Law, with its dynamically interconnecting parts, it is conceivable that rather than reducing complexity some RaC approaches will produce or even require more of it to work. There may be ripple effects in the legal system, depending on how RaC is adopted – particularly if its outputs are treated as having legal effect.117

For example, procedure and due process might need to be adapted to account for the speed of RaC outputs. Translations in one part of the system might necessitate a cascade of translations in other parts, if we are to avoid the complications of attempting to combine or interface legal and technological normativity in or around the same subject matter.118 Areas of law that might hitherto have been thought to be inappropriate for formalisation might come to require translation, for example interpretative and procedural provisions, in order to support those areas whose formalisation is thought to be uncontroversial. Alternatively, if they are still thought to be resistant to translation, a kind of parallel set of equivalent code-driven rules might need to be developed, just to support the code-driven parts of the law. Interfaces will be required to connect the code-driven body of law with the text-driven, particularly where the latter is deemed authoritative.119 The potential complexity of the interplay between textual rules and code-driven rules is something that will need to be properly anticipated.

The potential complexity of the interplay between textual rules and code-driven rules is something that will need to be anticipated.

Another area of potential complexity is the maintenance of the rules: will they be kept up to date and, if so, by whom? While we generally accept that officially published legislation is often not kept perfectly current, as we have repeatedly seen, technological normativity is quite different in its capacity to impose itself, and so the problem of inaccurate or out-of-date rules becomes hugely salient. RaC interpretations are at risk of failing to adequately reflect (i) the state of the rules, for example following legislative amendments or repeals, and (ii) the interpretations of those rules, either by the courts or in light of other instruments that have a bearing on their meaning (such as fundamental rights law). Legally invalid outputs produced from such RaC interpretations might be readily relied upon simply because of the medium that delivers and imposes them. If text-driven norms are ‘always speaking’, subject to purposive interpretation that allows for adaptation to meet new or unforeseen circumstances, the risk with inaccurate RaC translations is that they are ‘never listening’. Reliance on such translations, which might be inadvertent if they are automatically imposed or appear to have authoritative status, could have significant consequences.

3.5.3 Interpretative authority

There are significant normative implications inherent in decisions made about (i) what gets formalised, and in what ways, (ii) who makes that choice and under what authority, (iii) where the resulting system will be deployed and for what purposes, and (iv) which citizens and legal subjects it is aimed at, or who may come to be subject to its output.120

Where a RaC deployment purports to enforce the law or provide advice as to what it means, it is not sufficient to rely on the coders of RaC translations to ensure there is a ‘human-in-the-loop’.121 This upends the proper constitutional order: not only do those who write the rules get to decide on their interpretation and the logic of their imposition, they also decide on the extent and the nature of any ‘escape hatches’ for when something goes wrong.

This places too much power in the hands of those creating the rules, undermining the separation of powers and the role of the court in providing authoritative interpretations of the meaning of rules in particular cases (in the knowledge and with the foresight that those interpretations will have salience in future analogous cases). As Bennion puts it,

It is the function of the court alone authoritatively to declare the legal meaning of an enactment. If anyone else, such as its drafter or the politician promoting it, purports to lay down what the legal meaning is the court may react adversely, regarding this as an encroachment on its constitutional sphere.122

It also obscures the responsibility for ascribing or attributing a particular meaning to the rule in a particular case, since any such case has been reduced in advance to a set of abstract variables that are assumed to exhaust all the salient aspects of any future context where the rule ought to apply. This elides the justificatory element that ought to be inherent in any application of a legal rule, including the responsibility to provide reasons for the conclusion that was reached – essential aspects of the legitimate enforcement of rules.123 Instead, a significant chunk of the rule-application reasoning is front-loaded, in the belief that it is deductively universal and therefore not part of the open texture of the law.124 Compliance is foretold; legal subjects are objects of control, rather than agents who get to choose whether to comply, and how. Engagement with the community might be stunted if we no longer have to actively interpret the relevance and meaning of the rules within a given context.125

The mirage of human-readable code

As mentioned above, the use of quasi-natural language runs the risk that the body of rules that is developed gets shifted to suit what can be represented in the DSL, even within the policy development process. The fact that it looks much like natural language heightens this risk: some might believe that because it looks like natural language anything can be formalised in it, or, conversely, that anything that cannot be formalised in it is not worth including in the law. This is a framing effect that over time risks limiting the scope of substantive legal protection. It raises the question of whether RaC rules should in fact avoid being human readable.

In the context of applying the rules, given that the automated execution of digitised rules is of a nature categorically different from how a textual rule is ‘executed’, making RaC interpretations look as close to natural language rules as possible might in fact mislead as to their nature.126 The risk of blurring the line between legal normativity and technological normativity might imply that digital representations of the law, insofar as they are directed at application by and to citizens, should actively seek to avoid appearing too similar to natural language rules.

How will judges fulfil their constitutional role in a text-driven way, when the original norm is code-driven?

This ambiguity is deepened when we consider how judges should respond to gaps in digital-first RaC ‘law’.127 How would they fulfil their constitutional function in a text-driven way, when the original norm is code-driven? If judges produce a natural language judgment about a space left in a RaC translation, does that then need to be converted into additional code and added to the RaC implementation? Would the separation of powers require that the judges produce the code themselves? Or will they write orthodox (i.e. textual) legal orders that require RaC coders to amend the digital translation? In that case, which seems most plausible, we come back round to the problem of interpretative authority – the constitutionally empowered court decides, but it is the RaC coder, probably within the executive, who interprets that judgment and implements it in their system. We might end up in an infinite regress, with no acceptable closure, since the type of normativity the court can produce is categorically different from that which will be implemented in practice.

RaC systems should never be given legal effect, assuming we still wish the courts to have a text-driven adjudicative role.

This emphasises the point that RaC systems should never be given legal effect, assuming we still wish the courts to have a text-driven adjudicative role. To do so would introduce logical contortions like the one just described. It would undermine the fabric of the law and the way that legal rules fit into, and reflexively constitute, the complex web of interrelated practices that are oriented towards legal protection and the Rule of Law.

It might be that RaC formalisms should therefore emphasise their technological character, rather than seeking to ape natural language, in order to highlight that they are merely tools of implementation, rather than canonical sources of law. The difference between the two forms of normativity must at all times be clear.

Technological normativity and interpretation

Legislative drafters are enjoined to write laws that respect and uphold the Rule of Law; in seeking to produce an ‘internally coherent conceptual scheme’ of rules, they serve ‘core rules of law values of legal certainty, predictability, formal justice and equality.’128 As we saw above, law cannot be split into discrete self-contained parts, and by the same token drafting requires a sensitivity to the broader legal domain within which a new piece of legislation will sit, adapting terminology and conceptual structure to ensure coherence with what has gone before.

If the drafter fails to do this, or does it badly, the problem could be solved through interpretation.129 While legal reasoning can ultimately be presented as syllogistic logic, that comes only at the point of justification of an argument, after various interpretative hurdles have been passed and attributions made (some of which might result from arguments about what that logic itself ought to be), and not before.130 The gap between interpreting a text rule and following through on its implications means latent incoherences in the text can be identified, ignored, or, if necessary, contested. This inherent passivity is central to the nature of text-driven law and the spaces it affords for considered action.131

Code, however, is different. Its execution is effectively immediate, and clear-edged.132 When used for enforcement or advice, the output is the output, echoing the legalist idea that ‘the law is the law’.133 If the coded model fails to integrate properly with models of other legislation that are relevant to it, it will simply fail to execute as expected – perhaps without anyone being aware of that fact. Depending on the extent to which the output is treated as having legal effect, the consequences of this type of normativity will vary. As Barraclough, Fraser and Barnes emphasise, RaC representations ought therefore to be thought of not as translations of the law, but instead as individual interpretations of it.134 This highlights that any given RaC model is not the law per se but is just one interpretation of what the law says and means.135
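The clear-edged character of a coded interpretation can be illustrated with a deliberately simple, hypothetical sketch. The rule, the thresholds and all names below are invented for illustration and come from no real RaC system; the point is only that one fixed interpretation is baked in, and the output is the output:

```python
# Hypothetical sketch: a coded eligibility rule embeds one interpretation
# of the law and returns a clear-edged answer, even for cases the
# modeller never anticipated.

from dataclasses import dataclass

@dataclass
class Claimant:
    age: int
    weekly_income: float
    resident: bool

def eligible(c: Claimant) -> bool:
    # One fixed interpretation: no room here for purposive reading,
    # hardship exceptions, or discretion at the point of application.
    return c.resident and c.age >= 18 and c.weekly_income < 150.0

# A claimant whose circumstances a human administrator might treat as a
# special case simply receives a negative answer: the output is the output.
print(eligible(Claimant(age=17, weekly_income=0.0, resident=True)))
```

Note that nothing in the execution signals that a borderline or unanticipated case has been encountered; the conditional simply evaluates, echoing the point that a mis-specified model may fail to behave as expected without anyone being aware of it.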

But the role played by technological normativity creates a crucial difference between code-driven interpretations and other non-authoritative (i.e. non-judicial) interpretations. People, including lawyers, interpret the law all the time, and often they will get it wrong. But the difference with code-embedded interpretations and the systems that execute them is that the ‘user experience’ of the citizen is quite different: the embeddedness of the interpretation tends away from the capacity of those affected by it to question its validity or applicability. Depending on whether and how the RaC translation’s (lack of) authority is communicated, they might be misled into accepting the output as accurate or binding. In such situations the problem lies as much in the design of the application as it does in the specific formalism that is used. As Le Sueur suggests, the application in a very real sense becomes part of the law:

we should treat ‘the app’ (the computer programs that will produce individual decisions) as ‘the law’. It is this app, not the text of legislation, that will regulate the legal relationship between citizen and state in automated decision-making. Apps should, like other forms of legislation, be brought under democratic control.136

In this vein, Waddington makes an important distinction between a RaC system answering the question ‘what does the [original] act say’ on the one hand, and ‘what does the act mean (what are my rights)’ on the other.137 The difference is key, and whether or not the users of a RaC system understand it is in large part a question of design. As he observes, most jurisdictions publish legislation without later including alongside it all the relevant caselaw that has a bearing on its legal meaning. Pragmatically, if RaC outputs are treated as authoritative, then that is what they are, at least for those who take action based on their output. If a RaC system is taken to be giving concrete advice rather than a discrete, non-authoritative interpretation, the system has overreached. By the same token, if the limit of the interpretation’s authority is properly expressed at the point of interaction, in order to communicate that the output does not constitute advice or have legal effect, then insofar as the system benefits citizens’ understanding of their legal status it might be considered beneficial in terms of enhancing legal protection.138 Communicating this adequately is a question of design. Whether or not it will have a bearing on the perceived utility or commercial potential of any given RaC system remains to be seen.

Technological normativity and discretion

We saw above that in some cases an explicit goal of RaC is the reduction of discretion and the promotion of automation. As Plesner and Justesen describe the Danish approach,

whenever possible, legislation should build on simple rules and unambiguous terminology to reduce the need for professional (human) discretion, thereby allowing for the extended use of automated case processing across all types of public-sector organizations and policy areas.139

Lipsky’s concept of ‘street-level bureaucracy’140 is relevant here, and reflects the dignitarian idea, inherent to legality, that law ought to be applied with sensitivity to particular contexts and needs. He suggests that those who are involved in the implementation of legal rules ‘on the ground’ invariably use their contextually informed judgement, whether they are benefits administrators, the police or even the judiciary.141 Discretion shapes the interactions between such actors and citizens, which are informed by much more than the bare terms of the rule and the abstract status of the citizen. It is worth quoting Lipsky at length:

The essence of street-level bureaucracies is that they require people to make decisions about other people. Street-level bureaucrats have discretion because the nature of service provision calls for human judgment that cannot be programmed and for which machines cannot substitute. Street-level bureaucrats have responsibility for making unique and fully appropriate responses to individual clients and their situations… the unique aspects of people and their situations will be apprehended by public service workers and translated into courses of action responsive to each case within (more or less broad) limits imposed by their agencies. They will not, in fact, dispose of every case in unique fashion. The limitations on possible responses are often circumscribed, for example, by the prevailing statutory provisions of the law or the categories of services to which recipients can be assigned. However, street-level bureaucrats still have the responsibility at least to be open to the possibility that each client presents special circumstances and opportunities that may require fresh thinking and flexible action.142

This highlights the distinction between what bare laws require on the one hand, and the reality of their application in complex real-world contexts on the other. This resonates with the distinction between legalism and legality; the automatic application of bare rules absent reasoned interpretation on the one hand, versus choosing whether and how to apply a rule in light of context on the other.143 The requirement to treat individuals with individual dignity and autonomy is a core tenet of liberal legality, and though it may not be achieved as often as one might like, seeking to restrict the discretion that is a necessary part of realising it is not a solution.

A recent report by the Child Poverty Action Group on the UK’s digitised Universal Credit benefit provides a useful case in point.144 The authors concluded that the digitalised implementation of the benefit undermined Rule of Law principles of transparency, procedural fairness, and lawfulness. They found that the interpretation of the law embedded in the system diverged from the terms of the various pieces of underlying legislation. Perhaps more importantly, the design of the system did not provide the latitude for ‘street-level’ discretionary ‘work arounds’ that might have solved some of the problems caused by those misinterpreted translations, for example the capacity to submit claims earlier than usual in certain circumstances where this is justified on grounds of fairness. The effect was that those who perhaps needed the protection of the law the most were denied it. The authors noted that this was not an inevitable consequence of digitalisation per se, and with careful design choices some of the system’s pitfalls could have been avoided.145

With the accelerating shift towards the ‘digital state’, the relationship between law and its delivery – previously mediated by the discretion of street-level bureaucrats – will change. As Buffat argues, the impact of this might be ambiguous, and might not necessarily mean a diminution in the affordance of localised discretion.146 For example, citizens might in some cases be empowered by access to (digitised) information and resources that were previously unavailable to them, potentially helping to deliver the promise of greater access to legal materials and to justice.147 At the same time, administrators might decide to exercise discretion more readily, countering the imposition of automated determinations that fail to capture the complexity of the situations they are applied in and to.148

Empirical research will be required to determine whether that is true in the contexts where RaC systems are deployed. It does, however, raise the question of whether some RaC systems can be effective, even on their own terms. If they are consistently circumvented ‘on the ground’ in order to achieve effective, just or even workable outcomes, this implies something structural about their value in those contexts. It may create complexities and costs that are unforeseen. It also raises the possibility that – paradoxically – the administration of public services becomes less transparent, because administrators are forced to circumvent digital systems of governance more readily and comprehensively than they might otherwise have needed to under text-driven law, shifting practices ‘beneath the radar’ in ways that might resist reasonable oversight.149

Formalisation and the shaping of policy

The question of what is deemed formalisable raises a further reflexive issue, namely that the subset of rules which are deemed susceptible to formalisation might end up being treated differently in practice to those that are not. From a certain perspective this is simply inevitable – the purpose of producing RaC-translated rules is that they can be used subsequently in digital systems, which means those rules will be treated and experienced differently by citizens compared to those that are not so translated. This is per se neither good nor bad, but the effects of the divergence between these two paths must be anticipated.

One consideration is the extent to which formalisation might come to be seen as attractive per se. There may be pressure to extend the reach of RaC translation into domains where previously the rules were thought not to be amenable to formalisation.150 This is by no means inevitable, but nor is it inconceivable, as governments look for ever greater reductions in the cost of delivering services.151

Another consideration is the other side of this same coin, and is perhaps even more problematic, if somewhat extreme. Instead of expanding the areas of the law that are deemed formalisable to include those that were previously thought unsuited to it, the impact goes further upstream to frame policymaking from the outset by reference to what is formalisable. This risk is reflected in one of the key findings of the Better Rules report, where it states that ‘[i]t is difficult to produce machine consumable rules if the policy and legislation has not been developed with this output in mind.’152 There is an implication here that at least some policy and legislation should be developed with RaC in mind.

If there is a push (however subtle or unintended) toward producing policy and legislative rules that are amenable to formalisation, a significant risk is that policy initiatives that are not so susceptible will be deprioritised or even ignored because they do not fit within the adopted RaC approach. At this point, the concern is that areas deemed not susceptible to formalisation fall off the radar of policymakers, because they cannot find a way to make them ‘work’ within the RaC paradigm. The implicit pursuit of RaC compatibility may inadvertently constrain the legislator to express only those policy ideas that the formalisation can ostensibly cater for.153 The capacity of democratic legislation to articulate the interests and needs of the citizenry becomes circumscribed by whatever the (in)capacities are of the chosen computational representation.

The implicit pursuit of RaC compatibility may inadvertently constrain the legislator to express only those policy ideas that the formalisation can ostensibly cater for.

Shifting further towards formalisation might necessitate the implicit adoption of a utilitarian frame that views legal rules as technocratic tools of compliance. While such compliance might be easier to achieve, the cost might be too great a constraint on what policies the legislator can express in the code-driven medium. A vision of greater compliance might appear beguiling, but the Rule of Law and legality are about more than that: respect for autonomy and individual dignity necessarily come at the cost of some measure of certainty about whether and how we will comply.154

At the point of contesting the rules, the blunted interpretative capacities of the law mean that citizens are constrained to articulate their experiences in terms that might simplify or reduce them.155 Coupled with the law’s circumscribed capacity to respond (because of the same limited capacity imposed by code-driven abstractions), the scope for achieving proper justice that reflects the fullness of particular circumstances is thus limited in potentially significant ways.

Thus, even where the RaC translations do not have legal effect themselves, adopting the approach might nevertheless have an effect on text-driven law ‘by the back door’, by changing the ethos of the processes by which legal rules are developed. This would change the constellation of legal effect reflected in legislation, by shaping the earlier stages of the policy process in line with a computational framing of what is possible. This is an example of the first type of ‘effect on legal effect’, a concept to which we can now turn.

3.5.4 RaC and the effect on legal effect

In this final section we can turn to a central element of the broader COHUBICOL analysis: the effect on legal effect. By this we refer to two forms of potential impact. First, the capacity or risk that a technology will alter the process of attributing legal effect – that is, will alter the process by which performative speech acts create new institutional facts in specific cases. This can happen, for example, by shaping the legal resources that appear in search results and therefore inform the preparation of a case, or by affecting the text of a generated brief according to the statistical distribution contained in the large language model.156 There, the nature of the legal effect of the resulting court judgment is not changed; it still comes about by declarative speech act, recorded in natural language and susceptible to further interpretation and appeal/overturn/distinction by future courts. The means by which the legal effect is arrived at do change (e.g. via the introduction of statistical legal search), but the result is not different in kind.

The second notion of effect on legal effect is where the underlying concept is itself altered, shifting it away from institutionality as its fundamental characteristic, with all that is implied by it. This is more fundamental to the mode of existence of law, because as we saw in the discussion of normativity above, what makes an effect qualify as legal is its compatibility with and co-dependency on anchoring practices of attribution, interpretation, contestation and adjudication, and the legal-institutional artefacts or ‘objects’ that flow from them (norms, rights, duties, personhood, etc.). Taken together, these are ultimately what afford the law’s mode of existence, and its capacity to provide protection.

The mode of existence of code-driven law

As discussed in Chapter 1, the mode of existence of text-driven law is one of institutional facts brought into being by performative speech acts that accord with requirements laid down in positive law.157 Each of these elements affords a specific aspect of legal protection: the institutional fact is not physical; it cannot enforce itself in the way that e.g. a speed bump in the road can. The validity of the speech act is contingent on it properly reflecting the requirements laid down, textually, in the relevant positive law (legislative and judicial). Those requirements will cover a multitude of factors: who is capable of performing the act, under what circumstances, in which particular jurisdiction, and when. The essential contingency of textual meaning implies that the door is always open to the institutional fact being contested, on the basis of differing interpretations of what those requirements are or what they ought to have meant in a particular circumstance.158

(Mis)translations between law and code

If natural language is replaced by code as the basic ‘dependency’ underpinning the mode of existence of the law, then so too will these fundamental building blocks change.159 This may or may not be a good thing, but it will undeniably represent a profound shift, the implications of which are difficult fully to anticipate.

The further to the right of the normative spectrum a code-driven system sits, the greater the potential gap that must be bridged between a legal concept and its computational representation.

At the level of fundamental legal concepts such as right, duty and personhood, the individual institutional facts that are instances of those concepts must be represented in code before they can be automated. Code is not found, but must be created, and so there must be a conscious translation between the legal domain and the code domain. When this happens, the programmer’s understanding will necessarily mediate that translation and frame the representation of legal normativity via whatever computational methods they think are appropriate. How they make sense of legal concepts of right, duty, personhood, etc. will impact on the approach they take to representing those concepts within the programming tools that are available to them.160
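The kind of mediating choice described here can be made concrete with a minimal, hypothetical sketch, in which one possible programmer's framing collapses personhood into a user role and rights into an access-control list. All names and structures below are invented for illustration and are drawn from no real RaC system:

```python
# Hypothetical sketch of one programmer's framing of legal concepts
# within ordinary programming constructs.

from enum import Enum, auto

class Role(Enum):          # 'legal personhood' collapses into a user role
    CITIZEN = auto()
    ADMINISTRATOR = auto()

PERMISSIONS = {            # 'rights' collapse into an access-control list
    Role.CITIZEN: {"submit_claim", "view_own_file"},
    Role.ADMINISTRATOR: {"submit_claim", "view_own_file", "decide_claim"},
}

def may(role: Role, action: str) -> bool:
    # A 'right' becomes a set-membership lookup: present or absent, with
    # no space for the attribution, interpretation and contestation that
    # give the legal concept its character.
    return action in PERMISSIONS.get(role, set())

print(may(Role.CITIZEN, "decide_claim"))   # a citizen simply lacks the permission
```

The sketch is of course a caricature, but it illustrates the structural point: whatever nuance the programmer intends, the representational repertoire available to them (enumerations, sets, lookups) frames what the 'legal' concept can be within the system.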

The further to the right of the normative spectrum a code-driven system sits, the greater the potential gap that must be bridged between a legal concept and its computational representation. The more we move away from the text-driven mode of existence of law, the smaller the role played by attribution and interpretation and the flexibility they afford in our shared understanding of what the constellation of legal effect is at any given moment. Legal personhood becomes an instance of a user within a predefined role granted a certain range of permissions; rights become permissions and access control lists; duties become predefined paths that channel behaviour within the interface of the system; contracts and legislation become – as we have seen – ‘if this, then that’ algorithms. Complex interplays between these elements will arise, but these too will be defined in code, rather than natural language. This is inevitable, because those are central elements of modern software of any complexity.161

The ecology of code versus the ecology of law

What happens to legal concepts and the relationships between them when they are cast in code? Legal rights and duties are not directly analogous to computational permissions, even if superficially they might sometimes appear that way. What a human can do as a user within a software system is not the same as what a legal person can do within the legal system, however detailed the modelling.162 What goes on in the latter is of a different category to what goes on in the former. Once we step into the computational environment, and start to ‘do law’ there (either explicitly or de facto), we are forced to (re)frame legal concepts using the tools and representations that are available in that environment.


Looked at from an affordance perspective, we can think in terms of law as an ‘ecology of practice’ whose elements constitute its nature:163 personhood, rights, duties and norms have the character they have because of the ‘habitat’ within which they exist: a shared social world of institutional facts.164 To paraphrase Stengers, law has ‘no identity of a practice independent of its environment… the very way we define, or address, [legal] practice is part of the surroundings which produces its ethos’.165

Addressing the practice of law in code-driven terms risks changing its ethos and its identity. If the scope of the role played by the legal-institutional environment is reduced in favour of computation, the specifically legal nature of the concepts it can afford, including personhood, rights and duties, will also be curtailed. The legal nature of those concepts cannot be represented in code, no matter how complex the formalism: computational representations by definition exist within a computational environment.166 This is necessarily separate from the environment upon which law-as-we-know-it relies, namely (i) the materiality of text and the baseline set of affordances it provides that make law-as-we-know-it possible,167 and (ii) the legal-institutional ecology that is built on it, consisting of institutional facts and the affordances internal to the law that facilitate legal operations between legal subjects on an ongoing basis.

While computation, like law, relies on material facts (computers, keyboards, code, development paradigms etc. are all ‘in the world’),168 the ecology these establish is fundamentally different from the one law relies on. The text-driven legal ecology affords legal institutionality, whereas the ecology of computation has a categorically different set of affordances, built around a specific notion of information processing. Computation yields data structures, and operations that can be performed on those structures, all defined within a very specific and limited paradigm of informational representation, albeit one that is of course very powerful.169

If we seek to embody legal concepts directly and substantially within the computational ecology, rather than relying on it only as a useful tool for representing the form of those concepts (e.g. via digitised documents), we change their mode of existence: we see only the Umwelt of bits, rather than the Welt of shared institutional facts.170 Digital infrastructure can provide the tools to facilitate the latter, but it cannot replace them directly. Doing so might be an explicit choice in more radical forms of RaC, or it might be an unintended consequence of seeking to fully represent legal normativity via technological means.


3.6 Conclusion: treading the line

The shift to computation need not be as wholesale as was just described. Indeed, many RaC systems are not intended to supplant legal effect; they can be deployed as tools to be used in service of text-driven normativity, impacting legal effect in the first sense mentioned above, that is by shaping how legal effects are attributed in specific circumstances, rather than changing the underlying nature of the concept. That influence can be positive, in terms of legal protection and the Rule of Law, as for example where access to relevant legal information and interpretative materials strengthens understanding of the landscape of legal norms, and empowers people to avail themselves of the law’s protective capacity by enabling them to make stronger and more creative arguments in defence of their rights.

However, if natural language and the legal-institutional environment it affords are sidelined, the current concept of legal effect is necessarily also sidelined. With it go its affordances of attribution (declaring a legal state of affairs and thus bringing it into being), flexibility of interpretation (to determine what it means in a particular circumstance for a given person), contestability (to allow a formal mechanism to challenge that meaning), and adjudicative closure (to enforce the decision in a way that the community as a whole can understand and respect, even if they disagree). Each of these is an essential element of the law’s capacity to protect us.

Delineating the proper role of computation is therefore essential. We must ensure it complements and strengthens legal normativity, rather than sidelining it or converting it on-the-fly into a technological normativity that lacks the protective affordances of text-driven law.


  1. Mireille Hildebrandt, ‘Code-Driven Law: Freezing the Future and Scaling the Past’ in Simon F Deakin and Christopher Markou (eds), Is Law Computable? Critical Perspectives on Law and Artificial Intelligence (Hart 2020) 67. 

  2. For example, the Department for Work and Pensions in the UK recently contracted New Zealand-based firm Novallex to produce a rules as code implementation of Universal Credit, the UK’s first ‘digital-by-design’ benefit. See Sam Trendall, ‘DWP Looks to Embed Machine-Readable Laws into Digital “Universal Credit Navigator”’ (PublicTechnology, 24 October 2022) accessed 20 July 2023. 

  3. Although private contracting does have undoubted relevance here, notably via the notion of ‘smart contracts’, these have proven to be much less transformative than proponents of blockchains have argued. Nevertheless, the notion of treating a smart ‘contract’ as a legal contract is closely connected with the risk of ‘conceptual slippage’ in the idea of legal effect that is central to the project (see Laurence Diver and others, ‘Research Study on Text-Driven Law’ (COHUBICOL 2023) 137 accessed 18 September 2023). The question of how legal effect is conceptualised and created is still relevant to the putatively private law of blockchain applications, however, and so the analysis here can be read with those in mind. 

  4. See ‘FAQ and methodology’ in Laurence Diver and others, ‘Typology of Legal Technologies’ (Counting as a Human Being in the Era of Computational Law (COHUBICOL), 2022) accessed 6 November 2023, and the discussion in section 3.5.4 below. 

  5. Diver and others (n 3). 

  6. James Mohun and Alex Roberts, ‘Cracking the Code: Rulemaking for Humans and Machines’, vol 42 (2020) OECD Working Papers on Public Governance 16 accessed 23 October 2023. 

  7. Tim de Sousa, ‘Introduction: What Is Rules as Code?’ (Rules as Code Handbook, 19 March 2019) accessed 6 November 2023. 

  8. Adrian Kelly, ‘Evolution of Digital Law’ [2023] The Loophole – Journal of the Commonwealth Association of Legislative Counsel 43, 46. Kelly is a legislative drafter and is active in various governmental RaC projects around the world. 

  9. Cf. Matthew Waddington, ‘Research Note: Rules as Code’ (2020) 37 Law in Context. A Socio-legal Journal 179, 180. Waddington is also a legislative drafter active in the RaC community. 

  10. ibid 182. 

  11. Pompeu Casanovas, ‘Comments on Cracking the Code. A Short Note on the OECD Working Paper Draft on Rules as Code’, Comments on Cracking The Code: Rulemaking For Humans And Machines (August 2020 draft) (LawTech La Trobe Research Group 2020) 19 accessed 23 February 2022. 

  12. Tim de Sousa and Pia Andrews, ‘When We Code the Rules on Which Our Society Runs, We Can Create Better Results and New Opportunities for the Public and Regulators, and Companies Looking to Make Compliance Easier’ (The Mandarin, 30 September 2019) accessed 6 November 2023. 

  13. ibid. 

  14. ‘DataLex: AustLII’s Legal Reasoning Application Platform’ accessed 6 November 2023. For the COHUBICOL analysis of DataLex through the lens of the Typology of Legal Technologies, see ‘DataLex’ in Diver and others (n 4). 

  15. See ‘Blawx.Com – User Friendly Rules as Code’ accessed 7 November 2023 and, for the COHUBICOL analysis of Blawx through the lens of the Typology of Legal Technologies, see ‘Blawx’ in Diver and others (n 4). On Neota, see ‘Automating Processes Just Got Easier’ (Neota) accessed 7 November 2023. 

  16. On low- and no-code approaches, see Jason Morris, ‘Code vs. No-Code’ (Rules as Code Diary, 24 February 2022) accessed 7 November 2023. 

  17. Cf. Lon L Fuller, The Morality of Law (Yale University Press 1977) 209. (‘With a legal system… the existence of a relatively stable reciprocity of expectations between lawgiver and subject is part of the very idea of a functioning legal order.’ (my emphasis)). There is a significant literature on the crucial differences between legalism and legality; the former viewing law as chiefly concerned with top-down application of rules, and the latter having a more reflexive quality that respects interpretation, autonomy and judgement. See, for example, Mireille Hildebrandt, ‘Radbruch’s Rechtsstaat and Schmitt’s Legal Order: Legalism, Legality, and the Institution of Law’ (2015) 2 Critical Analysis of Law 42; Jeremy Waldron, ‘The Rule of Law and the Importance of Procedure’ (2011) 50 Nomos 3; Zenon Bańkowski and Neil MacCormick, ‘Legality without Legalism’ in Werner Krawietz and others (eds), The Reasonable as Rational? On Legal Argumentation and Justification; Festschrift for Aulis Aarnio (Duncker & Humblot 2000); Zenon Bańkowski, ‘Don’t Think About It: Legalism and Legality’ in Mikael M Karlsson, Ólafur Páll Jónsson and Eyja Margrét Brynjarsdóttir (eds), Rechtstheorie: Zeitschrift für Logik, Methodenlehre, Kybernetik und Soziologie des Rechts (Duncker & Humblot 1993). For discussion of legalism specifically in relation to computation, see Laurence Diver, Digisprudence: Code as Law Rebooted (Edinburgh University Press 2022) ch 3; Zenon Bańkowski and Burkhard Schafer, ‘Double-Click Justice: Legalism in the Computer Age’ (2007) 1 Legisprudence 31; Roger Brownsword, ‘Technological Management and the Rule of Law’ (2016) 8 Law, Innovation and Technology 100; Philip Leith, ‘The Application of AI to Law’ (1988) 2 AI & Society 31. 

  18. Mireille Hildebrandt, ‘Legal Protection by Design: Objections and Refutations’ (2011) 5 Legisprudence 223, 234 (‘This means that individual citizens have a means to challenge the administration’s interpretation of enacted law, thus preventing a mere rule by law that employs the law as a neutral instrument to achieve the goals of policy makers’). When the rules in question are embedded in digital architectures rather than text, this ‘rule by code’ threatens to reach a peak: ‘computational legalism’. See Diver, Digisprudence (n 17) pt 1 (‘Computational Legalism and the Rule(s) of Code’). The differences between the ‘force’ exerted by digital architectures and textual rules are discussed below. 

  19. Stephen J Toope, A Rule of Law for Our New Age of Anxiety (Cambridge University Press 2023) 173. 

  20. Cf. Radbruch’s antinomian concept of law: Gustav Radbruch, ‘Legal Philosophy’ in Kurt Wilk (ed), The Legal Philosophies of Lask, Radbruch, and Dabin (Harvard University Press 1950); Hildebrandt, ‘Radbruch’s Rechtsstaat and Schmitt’s Legal Order: Legalism, Legality, and the Institution of Law’ (n 17). 

  21. Reflecting Kranzberg’s maxim ‘technology is neither good nor bad; nor is it neutral’ (Melvin Kranzberg, ‘Technology and History: “Kranzberg’s Laws”’ (1986) 27 Technology and Culture 544, 545). 

  22. On the law’s current mode of existence, see Mireille Hildebrandt, ‘1. Introduction: The Mode of Existence of Text-Driven Positive Law’, Research Study on Text-Driven Law (COHUBICOL 2023) accessed 18 September 2023. 

  23. The distinction is set out in detail in Mireille Hildebrandt, ‘Legal and Technological Normativity: More (and Less) than Twin Sisters’ (2008) 12 Techné: Research in Philosophy and Technology 169. This chimes with Davis and Chouinard’s normative framing of affordances in terms of whether they request, demand, allow, encourage, discourage, or refuse a particular behaviour or action. See Jenny L Davis and James B Chouinard, ‘Theorizing Affordances: From Request to Refuse’ [2017] Bulletin of Science, Technology & Society. 

  24. Diver, Digisprudence (n 17) ch 2 (‘Code is more than law: a design perspective’). 

  25. See e.g. Katja de Vries and Niels van Dijk, ‘A Bump in the Road. Ruling Out Law from Technology’ in Mireille Hildebrandt and Jeanne Gaakeer (eds), Human Law and Computer Law: Comparative Perspectives (Springer Netherlands 2013) (discussing the salience of the media that underpin legality, in light of the ‘practice turn’ in law). 

  26. On text and the printing press as transformative technologies, see Walter J Ong, Orality and Literacy: The Technologizing of the Word (3rd edn, Routledge 2012); Elizabeth L Eisenstein, The Printing Revolution in Early Modern Europe (2nd edn, Cambridge University Press 2012). 

  27. On text as a technology underpinning text-driven law, see Mireille Hildebrandt, ‘2.4 The Texture of Modern Positive Law’ in Diver and others (n 3). See also Mireille Hildebrandt, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology (Edward Elgar Publishing 2015); Mireille Hildebrandt, ‘A Vision of Ambient Law’ in Roger Brownsword and Karen Yeung (eds), Regulating Technologies: legal futures, regulatory frames and technological fixes (Hart 2008); Laurence Diver, ‘Computational Legalism and the Affordance of Delay in Law’ (2021) 1 Journal of Cross-disciplinary Research in Computational Law. 

  28. There is an analogy here with Hart’s concepts of primary and secondary rules, where primary rules are directed at structuring behaviour and action, while secondary rules are about how validly to create them; the normativity embedded in the legal tech that frames the practice of rule-making and rule enforcement acts as another kind of ‘secondary rule’, or perhaps even a tertiary ‘rule’ (for a more detailed consideration of this idea see Diver, Digisprudence (n 17) 209–211). 

  29. See ‘Chapter 3. Foundational Concepts of Modern Law’ in Diver and others (n 3). 

  30. Neil MacCormick, Rhetoric and the Rule of Law: A Theory of Legal Reasoning (Oxford University Press 2005) ch 7. On the deep role that context and experience play in (legal) interpretation, see Hans-Georg Gadamer, Truth and Method (Joel Weinsheimer and Donald G Marshall trs, Bloomsbury 2013) 334ff; Stanley Fish, Doing What Comes Naturally: Change, Rhetoric, and the Practice of Theory in Literary and Legal Studies (Duke University Press 1989). 

  31. Cf. Martin David Kelly, ‘The “Always Speaking” Principle: Cracking an Enigma’ (3 August 2023) accessed 8 November 2023; Francis AR Bennion, Understanding Common Law Legislation: Drafting and Interpretation (Oxford University Press 2001) 17–20. 

  32. For an important analysis of the distinction, see Hildebrandt, ‘Legal and Technological Normativity’ (n 23). 

  33. Cf. James Grimmelmann, ‘The Structure and Legal Interpretation of Computer Programs’ (2023) 1 Journal of Cross-disciplinary Research in Computational Law accessed 10 November 2023. 

  34. See Andrew Le Sueur, ‘Robot Government: Automated Decision-Making and Its Implications for Parliament’ in Alexander Horne and Andrew Le Sueur (eds), Parliament - Legislation and Accountability (Hart 2016) 201. 

  35. For analyses of the challenges raised when code mediates the meaning and application of legal rules, see Anna Huggins, Alice Witt and Mark Burdon, ‘Digital Distortions and Interpretive Choices: A Cartographic Perspective on Encoding Regulation’ (2024) 52 Computer Law & Security Review 105895, 5–7; Laurence Diver, ‘Law as a User: Design, Affordance, and the Technological Mediation of Norms’ (2018) 15 SCRIPTed 4. For an important and damning real-world study of the latter example, see Rosie Mears and Sophie Howes, ‘You Reap What You Code: Universal Credit, Digitalisation and the Rule of Law’ (Child Poverty Action Group 2023). 

  36. Huggins, Witt and Burdon (n 35) 7; Le Sueur (n 34). 

  37. On those values in such contexts, see for example Monika Zalnieriute, Lyria Bennett Moses and George Williams, ‘Automating Government Decision-Making: Implications for the Rule of Law’ in Siddharth Peter De Souza and Maximilian Spohr (eds), Technology, Innovation and Access to Justice (Edinburgh University Press 2021). 

  38. Cf. Lisa Burton Crawford, ‘Rules as Code and the Rule of Law’ [2023] Public Law 402 (discussing the deficiencies of the status quo, and the potential of digital technologies, including RaC, to ameliorate them). 

  39. See e.g. Meng Weng Wong, ‘Rules as Code - Seven Levels of Digitisation’ (Singapore Management University Centre for Computational Law 2020) 21–23. 

  40. Diver and others (n 3). 

  41. Cf. P Leith, ‘Fundamental Errors in Legal Logic Programming’ (1986) 29 The Computer Journal 545, 100. 

  42. In line with the ‘method and mindset’ of the Typology of Legal Technologies (Diver and others (n 4)). 

  43. See accessed 11 November 2023. For the COHUBICOL analysis of Akoma Ntoso through the lens of the Typology of Legal Technologies, see ‘Akoma Ntoso’ in Laurence Diver and others, ‘Typology of Legal Technologies’ (COHUBICOL, 2022). For a short description of markup languages, see ‘Markup languages’ in ‘Computer Science Vocabulary’ (COHUBICOL, 9 October 2023) accessed 10 November 2023. 

  44. Other formats can be generated from the AKN source, including PDF, RDF and generic XML. 

  45. Like HTML, the markup language underpinning the Web, AKN provides a generic foundation (albeit one aimed at legal documents) upon which other systems can be built. 

  46. Though there are various tools intended to parse unstructured legislative text into structured formats such as AKN. See for example Francesco Sovrano, Monica Palmirani and Fabio Vitali, ‘Deep Learning Based Multi-Label Text Classification of UNGA Resolutions’, Proceedings of the 13th International Conference on Theory and Practice of Electronic Governance (ACM 2020) accessed 25 October 2023. 

  47. See accessed 11 November 2023. Examples of other systems that provide programmatic access to structured legislative documents include the European Union’s EUR-lex and the Australian Legal Information Institute (AustLII). Another interesting experiment currently under development by Hamish Fraser builds on the availability of structured documents to improve citation and extraction of legislative provisions. See Hamish Fraser, ‘A Love Letter to the Parliamentary Counsel of the World.’ (3 February 2023) https://hamish.dev/a-love-letter-to-the-parliamentary-counsel-of-the-world accessed 25 October 2023. 

  48. Note that it is itself an application built on top of the underlying affordances of the computational formats, like AKN, that it utilises. This highlights the infrastructural level at which RaC technologies such as Akoma Ntoso are situated. 

  49. See ‘LEOS - Open Source Software for Editing Legislation’ accessed 11 November 2023. For the COHUBICOL analysis of LEOS through the lens of the Typology of Legal Technologies, see ‘LEOS’ in Diver and others (n 4). 

  50. See ‘Legislative Drafting, Amending and Publishing Tools’ accessed 11 November 2023. 

  51. For an analysis of the affordances that an Integrated Legislative Drafting Environment, or ‘ILDE’, should have, see Elhanan Schwartz, Ittai Bar-Siman-Tov and Roy Gelbard, ‘Design Principles for Integrated Legislation Drafting Environment’ (SSRN, 30 August 2023) accessed 2 November 2023. See also Diver, Digisprudence (n 17) 234–236. 

  52. These features are selected from the profile of LEOS in the COHUBICOL Typology of Legal Technologies (Diver and others (n 4)). 

  53. A case law search on Westlaw at time of writing turned up no results for the former type of disagreement. 

  54. This is one affordance of Akoma Ntoso, see its OASIS standard specification at ‘Akoma Ntoso Version 1.0. Part 1: XML Vocabulary’ accessed 28 October 2023. 

  55. On the challenges of robustly calculating dates, see for example Ana de Almeida Borges and others, ‘FV Time: A Formally Verified Coq Library’ (arXiv, 28 September 2022) accessed 28 October 2023; Matthew Waddington, ‘Machine-Consumable Legislation: A Legislative Drafter’s Perspective – Human v Artificial Intelligence’ [2019] The Loophole - Journal of Commonwealth Association of Legislative Counsel 21, 46–47. 

  56. See for example the guidance on dates provided by the Scottish Government to legislative drafters: Parliamentary Counsel Office, ‘Drafting Matters!’ (Scottish Government 2018) 9–12. 

  57. Louis de Koker, ‘Rules as Code: The Need for an Impact Assessment to Inform Application’, Comments on Cracking The Code: Rulemaking For Humans And Machines (August 2020 draft) (LawTech La Trobe Research Group 2020) accessed 23 February 2022. The consequences of poor implementation can be severe, particularly for vulnerable constituencies who are perhaps more likely to be exposed to RaC systems because of the cost savings they promise. See for example the descriptions of mistranslated rules in the UK’s digitalised system for administering Universal Credit in Mears and Howes (n 35). 

  58. We have argued elsewhere, particularly in the context of data-driven law, that such access to unmediated ‘lossless law’ is essential to the proper operation of the Rule of Law. See Laurence Diver and Pauline McBride, ‘Argument by Numbers: The Normative Impact of Statistical Legal Tech’ (2022) 3 Communitas 6. 

  59. Cf. Crawford (n 38). 

  60. See generally Kevin D Ashley, Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age (Cambridge University Press 2017) ch 2. 

  61. Later we discuss what happens when computation potentially impacts the meaning of the rules. 

  62. For an approach that adopts defeasible reasoning to model legislative rules in the taxation domain, see Sarah B Lawsky, ‘A Logic for Statutes’ (2017) 21 Florida Tax Review 60. This rule-based understanding of statutory reasoning allows interim conclusions that are deduced from the rules to be ‘defeated’ by later rules, thus allowing for exceptions to be modelled. It has been influential in recent RaC initiatives, perhaps most notably in the design of the Catala language (see below). 

  63. Cf. Clement Guitton and others, ‘Pervasive Computational Law’ (2023) 22 IEEE Pervasive Computing 48. Decades of research into symbolic reasoning in law are testament to the desire to interface computation with law in a useful way. It is outwith the scope and aims of this Research Study to canvass the full extent of this significant body of research, not to mention the philosophical assumptions upon which it rests. For a useful survey see Trevor Bench-Capon and others, ‘A History of AI and Law in 50 Papers: 25 Years of the International Conference on AI and Law’ (2012) 20 Artificial Intelligence and Law 215. For a perspective that deeply interrogates this history from the perspective of the Rule of Law, see Gianmarco Gori, ‘Law, Rules, Machines: “Artificial Legal Intelligence” and the “Artificial Reason and Judgment of the Law”’ (PhD thesis, University of Florence 2021) accessed 27 October 2023. Regarding the background assumptions that have informed legal informatics research over the decades, see for example Thomas F Gordon, Guido Governatori and Antonino Rotolo, ‘Rules and Norms: Requirements for Rule Interchange Languages in the Legal Domain’ in Guido Governatori, John Hall and Adrian Paschke (eds), Rule Interchange and Applications, vol 5858 (Springer Berlin Heidelberg 2009); Layman E Allen, ‘Symbolic Logic: A Razor-Edged Tool for Drafting and Interpreting Legal Documents’ (1957) 66 The Yale Law Journal 833. 

  64. Waddington (n 9) 182. 

  65. Ashley (n 60) 46–47. Using computation for this purpose has a long pedigree in legal informatics. See for example Meldman’s use of the Petri net, a kind of mathematically verifiable visual graph of a system’s states, to model US federal civil procedure, which led to logical anomalies coming ‘right to the surface’: Jeffrey A Meldman, ‘A Petri-Net Representation of Civil Procedure’ (1977) 19 Idea 123, 145. 

  66. Waddington (n 55) 28–29. 

  67. Mireille Hildebrandt, ‘A Philosophy of Technology for Computational Law’ (David Mangan, Catherine Easton and Daithí Mac Síthigh eds, OUP, forthcoming) accessed 15 March 2021. 

  68. Various of Fuller’s principles of formal legality in rule ‘design’ speak to this: laws must be reasonably clear, they ought not contradict one another, they must not require the impossible, and there should be ‘congruence’ between the declared rule and the state’s application of it. See Fuller (n 17) ch 2. On the relationship with legislative intent, see Bennion (n 31) ch 3. 

  69. Cf. e.g. Roger Brownsword, ‘Lost in Translation: Legality, Regulatory Margins, and Technological Management’ (2011) 26 Berkeley Technology Law Journal 1321; John Gardner, ‘The Mark of Responsibility’ (2003) 23 Oxford Journal of Legal Studies 15. 

  70. Mohun and Roberts (n 6) 16–17 (emphasis in the original). The forms of RaC described in the previous section could be described as just ‘output’. 

  71. Waddington (n 9) 182 (our emphasis). 

  72. ‘Better Rules for Government - Discovery Report’ (New Zealand Government 2018) accessed 6 October 2020. 

  73. ibid 13. 

  74. For a set of examples and a list of legislation where they have been implemented, see Parliamentary Counsel Office, ‘Guidance on Instructing Counsel: Common Legislative Solutions’ (Scottish Government 2018) accessed 26 May 2023. 

  75. ‘Better Rules for Government - Discovery Report’ (n 72) 32 (suggesting RaC provides an opportunity to ‘identify legislative barriers to data and digital transformation to inform the development of standard clauses, drafting guidance materials, and potential future amendments (working with PCO, DPMC Policy Project, and Stats NZ)’). 

  76. On the question of deciding meaning in advance, see section 3.5.3 below. For a recent discussion that aims to develop a link between legislative and software engineering processes, see Gordon Guthrie, ‘Can Parliamentary and Digital Delivery Engines Ever Drive in Unison?’ (Apolitical, 6 September 2023) accessed 4 November 2023. 

  77. ‘Better Rules for Government - Discovery Report’ (n 72) 3. 

  78. ibid 16. The experiment focused on the provisions on holiday entitlement within the New Zealand Holidays Act 2003. 

  79. This incorrect presumption that legal rules can be clear and self-contained was exhibited in one of the foundational papers in legal formalisation, Marek J Sergot and others, ‘The British Nationality Act as a Logic Program’ (1986) 29 Communications of the ACM 370 (‘the British Nationality Act is relatively self-contained, and free, for the most part, of many complicating factors that make the problem of simulating legal reasoning so much more difficult. Furthermore, at the time of our original implementation (summer 1983) the act was free of the complicating influence of case law’). See Leith, ‘Fundamental Errors in Legal Logic Programming’ (n 41). 

  80. Leith, ‘The Application of AI to Law’ (n 17) 44. See also Frank Pasquale, ‘A Rule of Persons, Not Machines: The Limits of Legal Automation’ (2019) 87 George Washington Law Review 1. 

  81. Cf. ‘Level five’ in Wong (n 39) 20. 

  82. Arie Van Deursen, Paul Klint and Joost Visser, ‘Domain-Specific Languages: An Annotated Bibliography’ (2000) 35 ACM SIGPLAN Notices 26 (A DSL is ‘a programming language or executable specification language that offers, through appropriate notations and abstractions, expressive power focused on, and usually restricted to, a particular problem domain’). 

  83. Examples include RegelSpraak and Catala, discussed below, and Logical English (Robert Kowalski and others, ‘Logical English for Law and Education’ in David S Warren and others (eds), Prolog: The Next 50 Years (Springer Nature Switzerland 2023) accessed 2 November 2023). 

  84. See, respectively, Lawsky (n 62) (on default logic as a representation of statutory reasoning), Gerhard Brewka and Thomas Eiter, ‘Prioritizing Default Logic’ in Steffen Hölldobler (ed), Intellectics and Computational Logic (Springer Netherlands 2000) (on prioritized default logic), and Guido Governatori and Antonino Rotolo, ‘Changing Legal Systems: Legal Abrogations and Annulments in Defeasible Logic’ (2010) 18 Logic Journal of the IGPL 157 (modelling the state of rule applicability over time). 

  85. Mischa Corsius and others, ‘RegelSpraak: A CNL for Executable Tax Rules Specification’, Proceedings of the Seventh International Workshop on Controlled Natural Language (CNL 2020/21) (2021). For our analysis of RegelSpraak through the lens of the Typology of Legal Technologies, see ‘RegelSpraak’ in Diver and others (n 4). 

  86. See ‘RuleSpeak® – Let the Business People Speak Rules!’ accessed 2 November 2023. 

  87. There is a significant literature concerned with formalising compliance between ‘internal’ business processes and ‘external’ sources such as contracts and laws. See for example Guido Governatori and Shazia Sadiq, ‘The Journey to Business Process Compliance’ in Jorge Cardoso and Wil Van der Aalst (eds), Handbook of Research on Business Process Modeling (IGI Global 2009). See also Guido Governatori, ‘Comments on Cracking the Code’ in ‘Comments on Cracking The Code: Rulemaking For Humans And Machines (August 2020 Draft)’ (LawTech La Trobe Research Group 2020) accessed 23 February 2022. 

  88. Corsius and others (n 85) 3. 

  89. ibid. 

  90. Ilona Wilmont and others, ‘A Quality Evaluation Framework for a CNL for Agile Law Execution’, Proceedings of the Seventh International Workshop on Controlled Natural Language (CNL 2020/21) (2021) 6. 

  91. ibid 2. 

  92. Frans Fokkenrood, ‘RegelSpraak for Business Rules: Experiences in Building a Business Rules Compiler for the Dutch Tax Administration’ (2011) 12 Business Rules Community accessed 6 November 2023. 

  93. Corsius and others (n 85) 2. 

  94. Denis Merigoux, Nicolas Chataing and Jonathan Protzenko, ‘Catala: A Programming Language for the Law’ (2021) 5 Proceedings of the ACM on Programming Languages 1, 9. For our analysis of Catala through the lens of the Typology of Legal Technologies, see ‘Catala’ in Diver and others (n 4). 

  95. For the classic discussion, see Donald Knuth, ‘Literate Programming’ (1984) 27 The Computer Journal 97. 

  96. This is an indicative example, reproduced from Denis Merigoux and others, ‘Catala’ accessed 12 November 2023. 

  97. Merigoux, Chataing and Protzenko (n 94) 22; Denis Merigoux and Liane Huttner, ‘Catala: Moving Towards the Future of Legal Expert Systems’ (INRIA 2020) accessed 25 February 2021. 

  98. For an analysis of what the bi-directionality means for different parties, see ‘3.1 Code’s Bi-directionality: A Text for both Computers and Humans’ in Laurence Diver, ‘Interpreting the Rule(s) of Code: Performance, Performativity, and Production’ [2021] MIT Computational Law Report accessed 30 October 2023. 

  99. Denis Merigoux, Raphaël Monat and Jonathan Protzenko, ‘A Modern Compiler for the French Tax Code’ (ACM 2021) accessed 3 November 2023. This will also be true for trans-compiled code (see below). 

  100. Cf Merigoux, Chataing and Protzenko (n 94) 25, taking this approach to demonstrate a prototype for a web-based benefits calculator. See also Merigoux, Monat and Protzenko (n 99). 

  101. Wong (n 39) 5. See also the requirement specified by the policy stakeholders for the design of RegelSpraak: Corsius and others (n 85) 2. 

  102. See for example the efforts of the European Commission’s ‘Better Legislation for Smoother Implementation’ (BLSI) project (‘Better Legislation for Smoother Implementation – European Commission’ (27 October 2023) accessed 9 May 2023) and its associated SEMIC conference, and the Danish Government’s digital-ready legislation project (‘Digital-Ready Legislation – Agency for Digital Government’ accessed 17 May 2023). 

  103. Ursula Plesner and Lise Justesen, ‘The Double Darkness of Digitalization: Shaping Digital-Ready Legislation to Reshape the Conditions for Public-Sector Digitalization’ (2022) 47 Science, Technology, & Human Values 146, 147. 

  104. Wong refers first to ‘Code = Natural’ and then to ‘Code > Natural’ (Wong (n 39) 21). 

  105. ibid 23. This vision draws implicitly on the abstractions that underpin information theory (cf. the seminal analysis in Claude E Shannon, ‘A Mathematical Theory of Communication’ (1948) 27 The Bell System Technical Journal 379). As will be discussed in section 3.5, it does not and cannot capture the entirety of the law, in particular its institutional nature: see Mireille Hildebrandt, ‘Law as Information in the Era of Data‐Driven Agency’ (2016) 79 The Modern Law Review 1. 

  106. Tom Barraclough, Hamish Fraser and Curtis Barnes, ‘Legislation as Code for New Zealand: Opportunities, Risks, and Recommendations’ (Brainbox; The Law Foundation 2021) para 254 accessed 24 March 2022. In the UK, the Interpretation Act 1978 and Interpretation and Legislative Reform (Scotland) Act 2010 do not explicitly define legislation as textual. Multiple references are made to ‘enactments’ and ‘provisions’, but apart from various references to ‘words’, the nature of these is implied rather than specified. 

  107. Gordon, Governatori and Rotolo define isomorphism as ‘a one-to-one correspondence between the rules in the formal model and the units of natural language text which express the rules in the original legal sources’ (n 63) 285. 

  108. Wong (n 39) 21. 

  109. See Diver, ‘Computational Legalism and the Affordance of Delay in Law’ (n 27) and more generally Diver, Digisprudence (n 17) ch 3. 

  110. Cf. Marco Goldoni, ‘The Politics of Code as Law: Toward Input Reasons’ in J Reichel and AS Lind (eds), Freedom of Expression, the Internet and Democracy (Brill 2015); Bert-Jaap Koops, ‘Criteria for Normative Technology: The Acceptability of “Code as Law” in Light of Democratic and Constitutional Values’ in Roger Brownsword and Karen Yeung (eds), Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes (Hart 2008); Diver, Digisprudence (n 17) ch 3. 

  111. Burkhard Schafer, ‘Legal Tech and Computational Legal Theory’ in Georg Borges and Christoph Sorge (eds), Law and Technology in a Global Digital Society (Springer International Publishing 2022) 320 accessed 14 June 2022. 

  112. Cf. the experience of benefit claimants in ‘You Reap What You Code’ by Mears and Howes (n 35). See also the discussion of street-level bureaucracy below. 

  113. John Morison and Philip Leith, The Barrister’s World and The Nature of Law (Open University Press 1992); Philip Leith, ‘The Rise and Fall of the Legal Expert System’ (2016) 30 International Review of Law, Computers & Technology 94, 101–103. 

  114. See Tatiana Duarte, ‘3.5.2 Legal Reasoning and Interpretation’, Research Study on Text-Driven Law (COHUBICOL 2023) accessed 18 September 2023; Philip Leith, ‘The Problem with Law in Books and Law in Computers: The Oral Nature of Law’ (1992) 6 Artificial Intelligence Review 227. 

  115. Leith suggests that empirical research consistently finds that in practice the legal expert systems of the last generation did not provide lawyers and judges with much assistance. See Leith, ‘The Rise and Fall of the Legal Expert System’ (n 113) 101 (‘their needs are not met by a system which simply lists rules and indicates the ordering in which they were triggered’). 

  116. For an exploration of design thinking in the legal domain, see Rae Morgan, ‘Lawyers Are Still Lawyers. Except When They’re Not.’ in Emily Allbon and Amanda Perry-Kessaris (eds), Design in Legal Education (Routledge 2022). 

  117. See section 3.5.4 below. 

  118. The next section discusses some of the problems of mixing legal and technological normativity. 

  119. We will see below in section (‘The mirage of human-readable code’) why it is problematic to speak of natural language rules as authoritative when they are implemented via code-driven rules whose normativity is of an entirely different nature. 

  120. It is notable that in its recommendations the Better Rules report refers to the overlapping interests of various agencies, including the executive’s policy and service innovation teams, the Parliamentary Counsel’s Office, and the Internal Revenue department, but does not explicitly mention citizens or the courts in that context. 

  121. Wong suggests, for example, that ‘designers of systems are exhorted to leave in entry-points for human discretion’ (Wong (n 39) 23). 

  122. Bennion (n 31) 17. 

  123. Burkhard Schafer and Colin Aitken, ‘Inductive, Abductive and Probabilistic Reasoning’ in Giorgio Bongiovanni and others (eds), Handbook of Legal Reasoning and Argumentation (Springer Netherlands 2018) 310. 

  124. Pasquale (n 80) 4–5. 

  125. Roger Brownsword, ‘Code, Control, and Choice: Why East Is East and West Is West’ (2005) 25 Legal Studies 1. 

  126. Note that this is not an argument against the requirement for transparency or explanations/evidence about how they have operated in practice; affording these can be achieved for example through the better design of automated logging. This is an argument about the categorically different nature of the rules, and the risk that arises when their representations look (close to) identical. 

  127. Cf. Felicity Bell and others, ‘AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators’ (The Australasian Institute of Judicial Administration Incorporated 2022) 29. 

  128. Philip Sales, ‘The Contribution of Legislative Drafting to the Rule of Law’ (2018) 77 The Cambridge Law Journal 630, 633. 

  129. Bennion (n 31) chs 3–5. 

  130. MacCormick (n 30) chs 3–4 (At 71: ‘The conclusion to be drawn for all cases, the legal one included, is not that ascriptive decisions or determinations preclude or exclude deductive logic, but rather that they are a necessary precursor to any deductive reasoning whatsoever that is carried out with reference to the actual world. Every form of applied logic requires decisions as to the applicability of universals (predicate terms) to particular instances… If these require justification in given pragmatic circumstances, then certainly that “external” justification has to be provided before any syllogistic representation of a conclusion can be convincing.’). 

  131. Diver, ‘Computational Legalism and the Affordance of Delay in Law’ (n 27). 

  132. These are central aspects of ‘computational legalism’ that make technological normativity more problematic than even text-driven legalism (ibid.). 

  133. Cf. G Radbruch, ‘Five Minutes of Legal Philosophy (1945)’ (2006) 26 Oxford Journal of Legal Studies 13. 

  134. Barraclough, Fraser and Barnes (n 106) pt 3 (‘Code should not be legislation’), 42–79. 

  135. Cf. the discussion on the effect on legal effect in section 3.5.4 below. 

  136. Le Sueur (n 34) 201. 

  137. Waddington (n 9) n 17. 

  138. This is the approach taken by the legal expert system DataLex, for example, where the individual ‘consultations’ display prominent disclaimers that they should be used ‘only for education and testing purposes’, and the user must agree the output ‘will not be relied upon for any purpose’. See ‘DataLex: AustLII’s Legal Reasoning Application Platform’ (n 14). 

  139. Plesner and Justesen (n 103) 3 (our emphasis). The pursuit of ‘simple rules and unambiguous terminology’ connects back to the discussion above about policy being shaped by the RaC medium. 

  140. Michael Lipsky, Street-Level Bureaucracy: Dilemmas of the Individual in Public Service (30th Anniversary Edition, Russell Sage Foundation 2010). See also Le Sueur (n 34) 192–193. 

  141. Lipsky lists as typical street-level bureaucrats ‘teachers, police officers and other law enforcement personnel, social workers, judges, public lawyers and other court officers, health workers, and many other public employees who grant access to government programs and provide services within them.’ (Lipsky (n 140) 3). 

  142. ibid 161 (our emphasis). 

  143. Leith notes that in the context of applying welfare rights there is ‘an attempt to keep away from legalism and legal rules as much as possible’. See Leith, ‘The Application of AI to Law’ (n 17) 43. 

  144. Mears and Howes (n 35). Leith adverted to problems with this precise application of legal expert systems, as far back as 1988: ‘these tactics [computerising social security] would not simply be applying computers to ease present problems; they would also cause other problems. In the case of the DHSS [Department of Health and Social Security] it would mean that the client’s needs were being routinised more in accord with the needs of the bureaucracy than the needs of the client.’ (Leith, ‘The Application of AI to Law’ (n 17) 33). 

  145. Mears and Howes (n 35) 8. 

  146. Aurélien Buffat, ‘Street-Level Bureaucracy and E-Government’ (2015) 17 Public Management Review 149, 153. 

  147. ibid 156. 

  148. ibid 156–157. 

  149. In ‘The Problem with Law in Books and Law in Computers’ (n 114), Leith argues that orality and non-textual practices already make up a large part of legal practice. To the extent that this is a normal part of the balance of legality, putting trust in RaC to remove discretion might end up tipping the balance too far. 

  150. Cf. Luca Arnaboldi and others, ‘Formalising Criminal Law in Catala’, Programming Languages and the Law 2023 (ProLaLa) (ACM SIGPLAN) accessed 12 November 2023. 

  151. It is also the explicit vision of some proponents of computational law. See for example Michael Genesereth, ‘Computational Law: The Cop in the Backseat’ [2015] CodeX - The Stanford Center for Legal Informatics 1. 

  152. ‘Better Rules for Government - Discovery Report’ (n 72) 4. 

  153. As Meessen puts it, the decision to formalise ‘takes away the ability of the legislator to be as expressive about their intentions with the law and burdens them with the responsibility of correctly expressing ideas in a formal language.’ See PN Meessen, ‘On Normative Arrows and Comparing Tax Automation Systems’, Proceedings of the Nineteenth International Conference on Artificial Intelligence and Law (Association for Computing Machinery 2023) 3 accessed 28 October 2023. 

  154. Waldron (n 17) 19. See also Bańkowski and Schafer (n 17). 

  155. Cf. Emilie van den Hoven, ‘Hermeneutical Injustice and the Computational Turn in Law’ (2021) 1 Journal of Cross-disciplinary Research in Computational Law (discussing the ‘hermeneutical injustice’ potentially imposed by computational abstractions). 

  156. For a discussion of the normative implications of this kind of ‘third voice’ being introduced into legal practice, see Diver and McBride (n 58). 

  157. The standard example is a marriage, which cannot be pointed at but is nevertheless very real. Its creation is a speech act performed by an authorised celebrant, which to be successful must be done in accordance with the relevant legal provisions, while also always being subject to potential contestation on the basis of those provisions and other rules and principles that have a bearing on their meaning. 

  158. Mireille Hildebrandt, ‘The Adaptive Nature of Text-Driven Law’ (2021) 1 Journal of Cross-disciplinary Research in Computational Law. 

  159. Diver and others (n 3) ch 4. 

  160. Hohfeld’s seminal theoretical analysis of legal relationships has, for example, been a frequent subject of logical formalisation attempts. See e.g. Réka Markovich, ‘Understanding Hohfeld and Formalizing Legal Rights: The Hohfeldian Conceptions and Their Conditional Consequences’ (2020) 108 Studia Logica 129. For Hohfeld’s original analysis see Wesley Newcomb Hohfeld, ‘Fundamental Legal Conceptions as Applied in Judicial Reasoning’ (1913) 23 Yale Law Journal 16. 

  161. Albeit not of most contemporary RaC systems lying towards the left of the spectrum. 

  162. ‘It is not simply that business or law are more complex than computer configuration (that, say, they have more ‘variables’). Rather it is that they are qualitatively different.’ (Leith, ‘The Application of AI to Law’ (n 17) 34). 

  163. Isabelle Stengers, ‘Introductory Notes on an Ecology of Practices’ (2005) 11 Cultural Studies Review 183 (referring to an ecology of practice as ‘a tool for thinking through what is happening’). 

  164. Bruno Latour, An Inquiry into Modes of Existence: An Anthropology of the Moderns (Harvard University Press 2013) ch 13; Neil MacCormick, Institutions of Law: An Essay in Legal Theory (Oxford University Press 2007). 

  165. Stengers (n 163) 187. 

  166. AJ Wells, ‘Gibson’s Affordances and Turing’s Theory of Computation’ (2002) 14 Ecological Psychology 140, 171. 

  167. The distinction and interplay between what affords law, and what law affords, is important here. See Diver, ‘Law as a User’ (n 35) 22ff. 

  168. Even Turing’s abstract machine was ‘ecological’, in the sense that it was built around a very limited set of physical operations performed on a physical tape (Wells (n 166) 171). This idea is vividly portrayed in chapter 13 of Liu Cixin’s science fiction novel The Three-Body Problem (Ken Liu tr, Head of Zeus 2016). 

  169. Shannon (n 105). See also Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (Freeman 1976) ch 3; Hildebrandt, ‘Law as Information in the Era of Data‐Driven Agency’ (n 105). 

  170. See Laurence Diver, ‘Protecting the legal subject by protecting the mode of existence’ in Diver and others (n 3) 91. 

This page was last updated on 5 January 2024.