Chapter 2: The impact of data-driven legal technologies

By Pauline McBride

On this page

  1. 2.1 The rise of data-driven legal technologies
  2. 2.2 The affordances of data-driven technologies
    1. 2.2.1 Answering an objection
  3. 2.3 Data-driven technologies as change agents and influencers
    1. 2.3.1 Making way for a new normativity
    2. 2.3.2 Engines of influence
    3. 2.3.3 New seats of power
    4. 2.3.4 Summary
  4. 2.4 A closer inspection of agentive effects: effect on legal effect
    1. 2.4.1 Effect on legal effect
  5. 2.5 A closer inspection of agentive effects: making way for data-driven normativity
    1. 2.5.1 The texture of data-driven normativity
    2. 2.5.2 The implications of data-driven normativity: legal protection and the Rule of Law
  6. 2.6 Conclusion

In Chapter 1 we described a set of concepts and philosophical frameworks which are key to explaining why code-driven and data-driven technologies may transform law-as-we-know-it. Following Hildebrandt, we introduced the idea of law in its current mode of existence as an affordance of an information infrastructure which encompasses language, text and the printing press. We demonstrated that the concept of affordance, understood as the action possibilities arising from the relations between object and user, allows us to make sense of the dynamics of change brought about by new technologies. Complementary perspectives, which draw on postphenomenology, actor-network theory and narrative theory, also shed light on the agentive role of technologies. In this chapter we draw on these various perspectives to tease out the implications of data-driven technologies for law-as-we-know-it, the Rule of Law and the nature of the protection afforded by law.

2.1 The rise of data-driven legal technologies

Data-driven legal technologies – for our purposes those that employ machine learning techniques – are far from new. As early as 1974 Mackaay and Robillard used machine learning for the task of ‘prediction of judgment’.1 Neural networks were applied to the prediction of judgment task in the field of construction litigation in the 1990s.2 Lex Machina, one of the first commercial organisations to use machine learning to assist lawyers to predict litigation outcomes, launched in 2010.3 Since 2015 there has been a conspicuous and sustained increase in research in the field of prediction of judgment, prompted in part by renewed interest in deep learning, the introduction and impact of transformer models and improved access to high-quality, digitised data. Despite what Katz describes as ‘the march toward quantitative legal prediction’4 there are very few commercial products that offer prediction of judgment as a service.5 However, according to the European Commission for the Efficiency of Justice, ‘public decision-makers are beginning to be increasingly solicited by a private sector wishing to see these tools … integrated into public policies.’6 Senior judges in England and Wales have actively endorsed the use of prediction of judgment systems.7

Data-driven technologies which carry out other tasks, notably legal search, electronic discovery, document review and analytics, and compliance support have enjoyed commercial success.8 The online legal research service Westlaw has been using machine learning for case retrieval and natural language search for more than a decade.9 DiligenceEngine (now Kira), a contract analytics system that uses machine learning techniques, launched in 2010.10 Other commercial offerings (with launch dates in parentheses) which employ or employed machine learning include LexPredict (2013), Ross Intelligence (2014), Luminance (2015), Predictrice, Jus Mundi (2019), Squirro (2019), Della (2020), Manupatra and Afriwise.

Despite these developments, Surden, writing in 2021, suggested that the use of machine learning in law is ‘not extremely widespread’.11 It is difficult to obtain a clear picture of the use of data-driven legal technologies. Surveys about use within the profession often have a low response rate. Data about investment in legal tech companies suggests year-on-year growth but typically does not distinguish between code-driven and data-driven systems and may be an unreliable indicator of use.12 Use may vary from jurisdiction to jurisdiction. Large law firms likely made and continue to make use of a greater variety of data-driven legal technologies than small firms. However, by 2019 use of AI-enabled technologies was sufficiently widespread for the American Bar Association to issue new regulations concerning the use of such technologies.13 If, nevertheless, in 2021 use of data-driven legal technologies was not extremely widespread, that may be about to change.

The launch of ChatGPT in 2022 may prove to be a watershed moment for the adoption of data-driven legal technologies. The ‘generality and versatility of output’14 of so-called foundation models such as the GPT family make it particularly attractive to incorporate these models in commercial products. By February 2023, at least fourteen legal tech companies had announced that they were using GPT models in their product offerings.15 Casetext’s CoCounsel, which is built on GPT-4, was launched in March 2023.16 By July of that year Casetext had been acquired by Thomson Reuters, the global publishing company and owner of Westlaw.17 Big law has also shown an interest in the capabilities of foundation models; Dentons,18 Allen & Overy19 and Troutman Pepper20 have already launched systems built on OpenAI’s GPT family. A recent Thomson Reuters survey of mid-size and large firms in the US, Canada and the UK found that 82% of respondents believe that ChatGPT and generative artificial intelligence (AI) can be readily applied to legal work, and 51% said that it should be.21

Data-driven legal technologies have also become increasingly sophisticated. Consider, for example, developments in commercial legal research systems. Many such systems offer conceptual search as standard. Conceptual search enables lawyers to input natural language queries; the system finds and returns documents containing terms that are conceptually similar to the input terms.22 Contextual search is a more recent data-driven innovation. Lawyers can upload a document, such as a brief, into the system. The system assesses the context of the search from the document and uses the context to provide relevant results. One of the latest features offered by providers of legal research systems is the ability for the user to pose questions and receive answers. For example:

WestSearch Plus is a closed domain, non-factoid Question Answering system for legal questions that allows attorneys to zero in on the most salient points of law, related case law, and statutory law appropriate to their jurisdiction, in a way that traditional search and other legal research platforms cannot.23

Instead of merely offering enhanced search functionality, the system provides responses that resemble legal advice. Casetext’s CoCounsel, built on GPT-4, will provide answers to research questions in the form of a memo, summarise documents including contracts or legal opinions, and prepare for a deposition.24 ChatGPT (though clearly not marketed as a legal technology) can produce a draft contract (the jury is out on the utility of its outputs, even as a first draft).25 As Ko notes, ‘Increasingly, the output of artificially intelligent LegalTech resembles regulated activities that constitute legal practice.’26 Against this background, Hildebrandt’s anticipation of the emergence of data-driven ‘law’ appears perspicacious.

2.2 The affordances of data-driven technologies

Affordances, in Gibson’s account, are ‘subjective in that an actor is needed as a frame of reference.’27 Most commercial data-driven legal technologies target lawyers, though judges and citizens might also interact with these systems. Yet the affordances of things are also ‘objective in that their existence does not depend on value, meaning, or interpretation’.28 Identifying ‘objective’ affordances represents a challenge, particularly where the context of use is a material and institutional environment such as law. Typically, our understanding of the action possibilities afforded by a commercial product is deeply informed by the claims made by those who market the product. We may be inclined to interpret the action possibilities of the technology in relation to its intended user in the light of these claims, the target market and signifiers set out in the product interface. Identifying ‘objective’ affordances involves endeavouring to look beyond these framings29 even while recognising that ‘our perception is always already mediated by language and interpretation’.30

In this vein we suggest that machine-learning components in data-driven legal technologies offer the following broad affordances31 to users:32

  1. search of digitised materials using conceptual search (e.g. Westlaw Edge33, Elevate’s Analyse Documents,34 and Kira35). Conceptual search allows users to obtain relevant results even when their input query does not contain words that appear in the information that is retrieved.36

  2. refining search results by providing information (in the form of documents) about the context of search (e.g. CARA AI,37 Vincent38)

  3. obtaining insights (objectively, additional information) about collections of (usually textual) information (e.g. WestSearch Plus,39 Lex Machina,40 Uhura,41 Della,42 CoCounsel43)

  4. generating texts or textual responses (e.g. Mapping Bits,44 CoCounsel45)46

All these affordances may change behaviours and produce real-world effects by allowing users to carry out certain kinds of actions. They are noteworthy because they depend on functionality which, at least in humans, requires language understanding and human reasoning. Data-driven technologies possess neither. Where, for example, data-driven legal technologies are used to draft contracts, make predictions or summarise case law, the affordances of the technologies are realised without the technologies engaging in (legal) reasoning.47
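
The retrieval principle behind the first of these affordances, conceptual search, can be illustrated with a minimal, self-contained sketch. The ‘embeddings’, documents and query below are invented for illustration; commercial systems rely on vector representations learned from very large corpora. Query and documents are mapped to vectors, and documents are ranked by vector similarity rather than by keyword overlap:

```python
# Minimal sketch of conceptual (semantic) search: documents are retrieved by
# vector similarity rather than keyword overlap. The tiny hand-crafted
# "embeddings" stand in for the dense vectors a trained model would produce.
import numpy as np

EMBEDDINGS = {
    "car":        np.array([0.90, 0.10, 0.00]),
    "automobile": np.array([0.88, 0.15, 0.02]),
    "contract":   np.array([0.00, 0.10, 0.95]),
}

def embed(text):
    """Average the vectors of known words (a crude bag-of-embeddings)."""
    vecs = [EMBEDDINGS[w] for w in text.lower().split() if w in EMBEDDINGS]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

docs = ["automobile insurance dispute", "contract formation rules"]
query = "car accident claim"

# The top-ranked document shares no keyword with the query.
ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
print(ranked[0])  # -> "automobile insurance dispute"
```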

2.2.1 Answering an objection

Some will object to the assertion that data-driven technologies do not engage in legal reasoning. They will point to the outputs of these systems. Look, they will say, GPT-4 passed the US Uniform Bar Exam;48 prediction of judgment systems can achieve accuracy and F1-scores of over 90%;49 large language models can be prompted to output predictions in the form of legal syllogisms,50 a ‘chain of thought’51 or ‘reasoning steps’.

These claims, and their implications, deserve close scrutiny. Martínez points out various difficulties in verifying claims about GPT-4’s performance in the bar exam.52 He also notes, for example, that GPT-4 performed rather less well overall in essay questions than in multiple choice questions. Prediction of judgment systems that employ an appropriate experimental set-up typically obtain rather more modest accuracy scores.53 Jiang and Yang suggest that the fact that LLMs can be prompted to output text in the form of syllogisms indicates these systems are capable of deductive reasoning, but accept that their method does not involve the exercise of practical reasoning.54 Yu et al. propose a method to prompt GPT-3 to ‘think like a lawyer’.55 They maintain that ‘our analysis shows significant promise in prompt engineering for high-order LLM-based reasoning tasks’ but concede ‘it is questionable whether prompting actually teaches a LM to “think like a lawyer”’.56 Thinking like a lawyer includes reasoning by analogy. Machine learning systems have shown poor performance on tasks which, for humans, require analogical reasoning.57 Neither an output in the form of step-by-step reasoning nor an accurate output on a task which requires human reasoning should be taken as evidence of the exercise of reasoning. Appearance is not the same as reality.58
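
A deliberately crude worked example (all figures invented) shows why such headline scores warrant caution. Where case outcomes are imbalanced, a ‘model’ that always predicts the majority outcome posts a high accuracy score while having no predictive value at all:

```python
# Why headline scores deserve scrutiny: on imbalanced data, a trivial
# "classifier" that always predicts the majority outcome looks accurate.
from sklearn.metrics import accuracy_score, f1_score

y_true = [1] * 90 + [0] * 10   # 90 'violation' cases, 10 'no violation'
y_pred = [1] * 100             # always predict 'violation'

print(accuracy_score(y_true, y_pred))                          # 0.9
print(f1_score(y_true, y_pred, pos_label=0, zero_division=0))  # 0.0 on the minority class
print(f1_score(y_true, y_pred, average="macro"))               # ~0.47
```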

Data-driven technologies (for our purposes those that employ machine learning) from decision trees to GPT-4 employ statistical processes to learn patterns in their training data. A trained model takes an input and generates output (classifications, probability rankings, textual output) based on the patterns inferred from the data. A pre-trained large language model, for example, may take a textual prompt as an input and output text which is generated according to the model’s ‘(statistical) capacity to associate words’.59 The outputs can be impressive, but these models do not understand language as we do.60 They have no conception of the world beyond their training data.61 They have no sense of overarching principles,62 legal or not; there is no hierarchy in training data. Machine learning systems can output text that resembles the product of legal reasoning, but the processes by which they output such text have nothing to do with the exercise of legal reasoning.63 There is no poring over the constellation of facts at issue in a case or a contracting situation, no looking up the law, no exercise of judgment, no reflection on the demands of fundamental rights or of justice. There is no hesitation, no ‘re-tracings and re-attachment’64 of speech acts and speakers,65 no ‘legal trajectory’,66 no possibility of satisfaction of the felicity conditions for the speech acts of law. Such systems are oblivious to law’s ‘regime of veridiction’.67
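
The point can be made concrete with a toy sketch (the ‘corpus’ is invented for illustration). The snippet below generates plausible-sounding legal text purely by sampling from counted word co-occurrences; at no point does anything resembling understanding, judgment or orientation to legal norms enter the process:

```python
# A toy bigram "language model": it generates fluent-looking text purely by
# sampling from counted word co-occurrences in its training corpus --
# pattern association, not understanding.
import random
from collections import defaultdict

corpus = ("the court held that the contract was void "
          "the court found that the clause was unfair").split()

follows = defaultdict(list)            # which word follows which
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

random.seed(0)
word, output = "the", ["the"]
for _ in range(8):
    if word not in follows:            # dead end: no observed continuation
        break
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))                # plausible-sounding, purely statistical
```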

2.3 Data-driven technologies as change agents and influencers

‘What matter who’s speaking, someone said what matter who is speaking’68

Samuel Beckett

2.3.1 Making way for a new normativity

Why should it matter that data-driven legal technologies are simultaneously capable of generating insights and texts and incapable of engaging in legal reasoning? In Samuel Beckett’s words, ‘What matter who’s speaking’69 – and relatedly, why should it matter how they produce speech? On one view (we return to this in sections 2.3.2 and 2.3.3) it may not much matter, so long as we do not imagine that such systems are speaking ‘legally’, so long, that is, as we do not make the mistake of supposing that these systems are oriented to the felicity conditions of the speech acts of law. If we make that mistake,70 we risk undermining or eroding law’s distinctive mode of ‘speaking’. Consciously or not, we open the door to a very different mode of existence of law. A shift in the register of what counts as ‘legal’, involving a move from ‘the realm of law to the realm of statistics’,71 implies a commensurate departure from law-as-we-know-it. In law, as Latour tells us, who speaks and how they speak matters.72

We can speak ‘legally’ because law’s ‘enlanguaged’73 mode of existence allows us to comprehend the re-tracings and attachments of law described by Latour and attribute legal effect. The very notion of legal effect presupposes the performative effect of a network of speech acts.74 These performative effects establish and ‘define […] the legal protection that is offered by modern positive law.’75 Thus, the regime of enunciation of law allows us to orient our behaviour, anticipate legal outcomes, become legal subjects, engage with norms, speak of rights, recognise concepts such as ‘ownership’, ‘marriage’, ‘contracts’, ‘legal wrongs’. It allows us to make sense of the institutions of law, its connection to the state, our vulnerability to state-imposed sanctions. It is simultaneously a regime of enunciation and veridiction which allows us both to create new legal norms and to specify the conditions under which norms are ‘legal’. It ensures a high degree of coherence.76 However imperfect,77 this mode of existence of law preserves the rule of law value of respect for human autonomy.78 As Hildebrandt notes,

Autonomy, accountability and justification all depend on prediction; we cannot act if we have no idea of the effects, we cannot be held accountable for what we could not have foreseen, and we cannot claim justification if we cannot anticipate how others will evaluate our action.79

If we make the mistake of supposing that data-driven technologies can speak ‘legally’, we risk severing the connection between law and the shared communicative processes and understandings that make it possible for us to engage with law, to predict, foresee and anticipate legal effects.80 We mangle the idea of what it means to engage in legal reasoning and interpretation, divorcing these practices from the ‘web of meaning’81 and the iterative ‘re-tracings and re-attachment’82 on which positive law relies.83 We compound this risk if we imagine that these technologies generate legal norms, that is, enunciate speech acts which produce legal effects, in the same way as legislators or courts.84

The concern is not for the mode of existence of law as such, but for how citizens and legal subjects experience and engage with the law. To the extent that we mistake or substitute the outputs of data-driven legal technologies for the rulings of judges, the advice given by lawyers, the views of citizens with some knowledge of the law, we make way for a very different kind of normativity than that of law-as-we-know-it and a different source and form of ‘legal’ effect.85

2.3.2 Engines of influence

Let us suppose that we – citizens and lawyers – do not make the mistake of supposing that these systems can speak ‘legally’. Let us assume that we remain cognisant of the very different ‘reasoning’ processes by which data-driven legal technologies generate outputs. Data-driven systems may nevertheless operate as ‘engines of influence’86 in their context of use.

This influence may be exerted in different ways. As Verbeek points out, ‘[a]t the very moment human beings use them, artifacts change from mere “objects lying around” into artifacts-for-doing-something.’87 In the case of data-driven legal technologies they become artifacts-for-search, -for-drafting-contracts, -for-prediction-of-judgment. They become situated in a practice or set of behaviours; their action possibilities are made manifest in use. They acquire meaning.88

Coeckelbergh describes how data-driven technologies can be understood as ‘shaping the narrative’ of human actors, ‘giving them roles’, ‘influencing meaning making’ and ‘re-shaping a […] practice’.89 In the short term it may be that legal professionals will:

do less manual data assembly and initial analysis work but take on new tasks associated with interpreting and acting on the outputs of AI systems.90

In the longer term, as Coeckelbergh says of ChatGPT, such systems may ‘change the way we think and experience the writing process and ourselves as writers.’91

The concrete implications of the re-shaping of practice and of the meaning ascribed to data-driven legal technologies (that is, as for-doing-something) and their outputs may be hard to pin down.92 Sometimes, however, the influence of legal technologies and the potential effects of their use are more obvious. Ihde describes how technologies may present in relations of alterity, interacting with humans as a ‘quasi-other’.93 Data-driven legal technologies are often marketed as quasi-others – as ‘an automated associate assigned to write the first draft of your brief’ or a ‘CoCounsel’.94 Uhura Solutions claim that their technology ‘reads and understands contracts just as humans do’.95 Squirro say of their Augmented Intelligence Solutions that they ‘provid[e] a Smart Assistant-like experience’.96

Bylieva, following Coeckelbergh, argues that language capability – or at least the ability to engage in dialogue – increases the likelihood of a technology being seen as a quasi-other.97 This is relevant for data-driven systems which possess question answering functionality such as WestSearch Plus,98 Della99 and Kira.100 There is also anecdotal evidence to suggest that lawyers engage with some data-driven legal technologies as quasi-others.101 This need not imply deference to the technology,102 but it points to its role as an engine of influence.103

Moreover, data-driven legal technologies employed in tasks such as legal research, document review, analysis and drafting implicitly or explicitly recommend, suggest, caution104 and flag.105 They may not dictate the content of advice, contracts, court documents or courses of action. However, they inevitably exert influence.106 At the very least – as in the case of search, prediction of judgment, or systems used by court administrations to ‘triage’ cases to assess their relative importance – they influence a train of thought, a research strategy, consideration of options, courses of action.107 A system which creates a first draft or reviews an earlier draft is bound to have some effect on the final document.108 In this way data-driven legal technologies exercise a degree of influence over the advice given by lawyers, the judgments issued by judges,109 the content of contracts and ultimately the courses of action adopted by citizens.110 By shaping the speech acts of judges, they affect the authoritative ascription of legal effect.111 By influencing lawyers’ or citizens’ expectations concerning the ascription of legal effect, they affect the outcomes to which legal effect is ascribed. In both cases – and despite their inability to ‘speak legally’ – they have an effect on legal effect.112

2.3.3 New seats of power

We have cast data-driven legal technologies as agents of influence, considering, as it were, their prospective effect. However, it is important to recognise that they are also seats of power; they owe their existence to a network of actors with their own commitments, agendas, epistemologies, and regimes of veridiction. As Jongepier and Keymolen point out:

By focussing only on the output of a technology (the decision), we no longer take into account that this outcome is actually the interplay of a variety of associations of engineers, algorithms, data scientists, insurers, […] experts, hardware, corporations, software, regulators and other stakeholders.113

We might add to that list: researchers, funding organisations, financial institutions, major accounting firms and, notably, legal publishers.114

How much influence will lawyers, judges and, for that matter, citizens exert over the design of data-driven legal technologies, the selection of training data,115 algorithms, experimental set-up, the metrics used in testing the systems, the choice of ‘explainability’ techniques (if any), the documentation of risks?116 These choices matter. They impact on outputs117 and affect the assessment of performance.118 They have a bearing on whether a technology will be adopted.119 Most importantly, they determine the affordances of the technology in its contexts of use. Design choices may make information about legal norms and the likely effects of those norms more or less accessible; they may reduce or increase the likelihood of the system being treated as a quasi-other and an authoritative source; they may facilitate or restrict human oversight and control and make it more or less easy to independently assess the outputs of the system.120 Ultimately the developers and providers of these systems have the power to determine what ‘law’ is communicated by their technologies, to whom, at what price and for which uses and purposes.121

2.3.4 Summary

In this section, we demonstrated why and how data-driven legal technologies operate as change agents and influencers. They may have an effect on legal effect. As influencers, they may shape the speech acts of judges and affect the authoritative ascription of legal effect. They may shape lawyers’ or citizens’ expectations concerning the ascription of legal effect and so influence the outcomes to which legal effect is ascribed. This is the here and now of law and legal practice.

However, use of these technologies may bring about more fundamental change. If we fail to distinguish between law’s modes of enunciation and veridiction and the processes by which the technologies output texts, insights, answers, we put at risk the very mode of existence of law. We open the door to a very different kind and source of ‘legal’ effect, normativity and law. This is not yet the here and now of law and legal practice, but we should not be naïve. Financial pressures on justice systems and the interests of legal technology companies will play into the narrative that, at least for ordinary citizens and low value claims, data-driven ‘law’ is good enough.122

2.4 A closer inspection of agentive effects: effect on legal effect

Our examination in section 2.3 explored the dynamics through which data-driven legal technologies may impact on law and the practice of law. We distinguished between two kinds of impact, suggesting that in the here and now, data-driven legal technologies may have an effect on legal effect, but also noting the risk that use of such technologies may open the door to a different kind of ‘legal’ normativity. In this and the following section we offer a closer inspection of the implications of these effects by reference to Rule of Law values and the practices that sustain them.

2.4.1 Effect on legal effect

That data-driven legal technologies can shape legal outcomes or norms created by judges is not news. Lawyers use these technologies in the hope of achieving better or more cost-effective results. Few can imagine that such use is neutral in its effects. In many cases, the technologies make it possible to carry out analyses that would otherwise be impossible or prohibitively expensive.123 However, such use can also be problematic where there is overreliance on the technology and its outputs.124 Overreliance might be occasioned by laziness or poor practice, but it can also result from ignorance about the capabilities and limitations of the systems that are employed.125 Mart’s research about the very different results obtained by different commercial legal search systems is valuable; few will have appreciated the extent to which ‘search results may vary’.126 Similarly, Medvedeva’s research concerning prediction of judgment systems – demonstrating that the high accuracy scores touted by such systems should not be taken at face value – provides a welcome reality check about their effectiveness and utility.127 Few lawyers, we suspect, receive training about automation bias.128 Systems may explicitly encourage reliance – even if their contract terms say something different.129 Indeed a careful reading of the terms on which many data-driven legal technologies are supplied ought to put users of these systems on notice about their limitations!

Overreliance is a concern not only because of the risk of poor legal outcomes, but because it inappropriately puts power in the hands of the developers and providers of the technologies. However, at least in the case of lawyers, there are ways of managing the risk of overreliance – through monitoring use of the systems, training and education. Law schools,130 legal regulatory bodies131 and bar associations132 have a part to play here. Whether as part of pre- or post-qualifying education, lawyers should be equipped to understand, in broad terms, the capabilities, limitations and likely effects of the systems they use.133

The risk may also be tackled through system design; systems may be designed to prompt reflection and hesitation, employing what Passi and Vorvoreanu describe as ‘cognitive forcing functions’ (more prosaically, making you think).134 Lawyers’ professional obligations of independence and competence should act as a buffer against overreliance135 provided that legal regulatory bodies do not give in to calls for relaxation of these standards of practice.136 Lawyers and citizens forget at their peril that an independent judiciary and legal profession – with all that that entails – is crucial to democracy and the Rule of Law.137

2.5 A closer inspection of agentive effects: making way for data-driven normativity

In section 2.3 we argued that if we fail to distinguish, and keep a clear separation, between law’s modes of enunciation and veridiction and the processes by which the technologies output texts, insights and answers, we open the door to a very different kind and source of ‘legal’ effect and normativity, and a different mode of existence of law. We make way for a form of data-driven normativity. This is a transformation of a different order.

On one view, formidable obstacles stand in the way of this vision of the future of law. One of these is the limited functionality of most current data-driven legal technologies. In general, these are not fact-finding machines or evidence gatherers, nor are they capable of near-simultaneous dialogue with multiple persons.138 However, data-driven normativity need not depend on ‘robot’ judges; all that is necessary is that human judges or justice systems are not merely influenced by but defer to the outputs of data-driven technologies as though they spoke ‘legally’. If, across societies and jurisdictions, we have not yet embraced this new order, we have certainly flirted with it here and there. In France, judges in the Courts of Appeal in Douai and Rennes conducted a three-month trial of AI-powered software designed to reduce variability in the rulings of judges.139 The Shanghai intelligent assistive case-handling system for criminal cases (the ‘206 System’) has a feature which can provide an alert as to whether (according to the analysis carried out by the system) a draft judgment deviates from the approach adopted in previous similar cases.140 One of the most senior judges in England and Wales maintains that data-driven legal technologies ‘may also, at some stage, be used to take some (at first, very minor) decisions.’141 Yadong Cui, the former secretary and President of Party Committee of Shanghai Senior People’s Court, strongly advocates the adoption of data-driven legal technologies, describing the ‘dream’ of making ‘justice a real science by combining justice and science and technology, using modern scientific and technological means.’142

2.5.1 The texture of data-driven normativity

In the Research Study on Text-Driven Law we read that:

Because legal norms are enacted as written legal speech acts combined with the unwritten principles that are implied in the entirety of legal norms within a jurisdiction, their mode of existence is text-driven and thereby firmly grounded in natural language.143

Moreover:

law is not a system of static rules where logical consistence is a goal in itself … [it] is not a monologue based on deductive reasoning from immutable axioms, but a situated adversarial dialogue based on iterant constructive re-interpretation of the relevant legal norm. With law, we are not in the realm of mathematics but rather in the realm of practical reason, grounded in experience rather than logic.144

The text-driven nature of law-as-we-know-it allows and obliges us to find a trade-off between certainty and uncertainty in law. It affords stability without stagnation. It underpins the extraordinary coherence and flexibility of law, makes it possible for us to participate in law as rational actors, to find in law both reasons for actions and justifications for decisions. It leaves room for contestation and ensures accountability.145 What of the texture of data-driven normativity?

Legal theorists who have grappled with expressing the texture of code-driven normativity have offered evocative descriptions: computational legalism,146 or Double-Click justice,147 implying a mechanical application of the law,148 ‘not thinking about’149 or hesitating over how rules may apply. Such lack of hesitation is also characteristic of data-driven normativity but there is an important distinction. Legal rules are explicitly (if imperfectly) represented in code-driven legal technologies.150 In the case of data-driven normativity the connection to the rules or norms of law is much more attenuated. Machine learning systems that use decision trees may learn (technically, induce) rules from training data.151 However, quite differently from code-driven systems, legal rules are not explicitly represented in data-driven systems. As Suksi points out:

While … [the] previous decisions [forming part of the training data] may have a provision in the law as the point of departure, the new decision made based on a machine-learning algorithm has the pool of previous decisions as the point of departure, rather than the legal norm.152

The outputs of these systems have no legal-normative inflection. In such outputs the normativity that informed the texts and other inputs used as training data is a vestigial trace.153 This, if it is ‘law’, is a ‘law’ dissociated from legal normativity, the ‘web’ of legal powers,154 the performativity of speech acts, the grounding in legal reasoning and interpretation.155 Nevertheless, as we will show, the implications of this new normativity may very much depend on the extent to which the outputs of the technologies come to resemble outputs produced through the exercise of legal reasoning.
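
Suksi’s point can be illustrated with a minimal sketch using an off-the-shelf decision tree (features, values and outcomes are invented for illustration). The ‘rule’ the system induces is a statistical threshold found in the pool of previous decisions, not a representation of any legal norm:

```python
# A decision tree induces its "rules" from statistical patterns in past
# decisions; no legal norm is represented anywhere in the model.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [claim_value_kEUR, written_contract (0/1)]; label: claim upheld?
X = [[5, 0], [8, 0], [12, 1], [20, 1], [3, 0], [15, 1]]
y = [0, 0, 1, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The induced "rule" is a threshold found in the data pool, not a provision
# of any statute.
print(export_text(tree, feature_names=["claim_value_kEUR", "written_contract"]))
```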

2.5.2 The implications of data-driven normativity: legal protection and the Rule of Law

We can address the implications of data-driven normativity by answering a provocation posed by Volokh. His provocation may be read as an answer to the second limb of our question – what matter who’s speaking, and how they produce speech. Volokh suggests that how AI-enabled technologies produce outputs matters not. According to Volokh, when we ask whether AI-enabled technologies are ‘intelligent enough to do a certain task’ it is the outputs that matter, not the methods by which the outputs are produced.156 Volokh extends this argument to the task of judging, advocating for the promotion of suitably trustworthy AI systems to the role of judge. He offers a vision of full-blown data-driven normativity.

Volokh’s ‘thought experiments’ raise a crucial what-if question: what if these systems could issue judgments which, both in form and in substance, are indistinguishable from or would pass for those issued by human judges? Of course, this is a big ‘what-if’. Such judgments, as Volokh acknowledges, would ‘have to offer explanatory opinions and not just bottom-line results.’157 In law, a ‘result’ may be a finding of guilt or innocence, a sentence, the imposition of a fine or an order for damages, an order for divorce or any other order that may competently be granted by a court. ‘Explanatory opinions’ are justifications, provided by judges, which (1) are informed by and take the form of legal reasoning and (2) link the outcome of the case (the result) to the facts as established by the court (or agreed by the parties) and the relevant law.

As Schafer and Aitken point out:

It is essential that the legal process does not just try to give the right result. As a core requirement for the transparent administration of justice, the process has also to justify the result in a public way and to give reasons that can, at least in principle, be checked universally for correctness.158

This is an aspect of legality and is deeply connected to the idea that laws should be made known to those affected by them.159

Volokh’s commitment to the need for justifications as well as results explains why, of necessity, his argument is presented in a series of ‘thought experiments’. Given the current limitations of data-driven legal technologies we might expect – and do currently find – that such systems are incapable of producing legally relevant justifications.160 Developers and providers of such systems can offer explanations of how the systems work. Such explanations contribute to transparency but shed no light on the justification for the output results.161 Some systems can output information about the features of the input data which contribute most strongly to the system’s decision or classification.162 This approach provides some information about why the system reached its decision, but these are explanations about the statistical significance of the features rather than justifications based on the norms of law. Hybrid systems which combine machine learning and traditional rule-based approaches might attempt, in effect, to retrofit a ‘justification’ derived through case-based argumentation onto a machine learning system output.163 In the hybrid system envisaged by Prakken and Ratsma the ‘justification’ is not output by the machine learning system itself164 and may be inconsistent with the output of that system.165 Large language models can be prompted to output a prediction of judgment in the form of a ‘chain of thought’ or legal syllogism.166 However, such systems use the facts of already-decided cases for prediction; they are not reaching decisions about contested facts. The inability of data-driven legal technologies to link ‘results’ with legally relevant justifications remains a significant obstacle to the use of such technologies in legal decision-making.167
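
The gap between statistical explanation and legal justification can be seen in miniature in the following sketch on synthetic data. The classifier reports which features carried most weight in its decisions; nothing in those weights speaks to whether an outcome is justified under the applicable legal norms:

```python
# Feature-attribution style "explanations" report statistical influence,
# not legal justification. Toy data invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # stand-ins for text-derived features
y = (X[:, 0] + 0.2 * X[:, 2] > 0).astype(int)  # outcome driven mostly by feature_a

clf = LogisticRegression(max_iter=1000).fit(X, y)

for name, w in zip(["feature_a", "feature_b", "feature_c"], clf.coef_[0]):
    print(f"{name}: weight {w:+.2f}")          # influence, not justification
```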

Consider the implications of ‘bare’ results without justifications. There is no easy way for legal subjects to make sense of those results; no step-by-step reasoning, no way to identify the factual considerations which were judged to be relevant,168 no possibility of looking ‘backwards’ to the set of written legal norms which may have informed the result. It is, as Schafer and Aitken point out, impossible to check whether the result was justified according to legal norms. Moreover, while a ‘bare’ result may produce legal effects for the parties in the case, it has no wider legal normative effect. Figure 1, for example, shows the result (strictly, ‘order’ or ‘ruling’) in Toivanen v Finland:169

Figure 1: Order in Toivanen v Finland

The order, per se, does not operate as a general legal norm. It can no more offer a guide to future conduct (or the likelihood of wanted or unwanted legal effects flowing from courses of action) for legal subjects than provide its own justification. The absence of a justification also has implications for contesting the order.170 A bare order supplies no hint as to why the judge made the order. This presents difficulties not only for legal subjects who wish to contest the order but also for appellate courts who may wish to assess the soundness of the order by reference to the justification. There are implications therefore for judicial accountability; it is hard to hold a judge to account when it is impossible to scrutinise the basis on which their rulings have been made.171 Orders without justifications can therefore be seen to be at odds with principles associated with the Rule of Law172 including the idea that law should be publicly promulgated, general, prospective rather than retrospective, understandable, consistent, capable of being observed, stable and congruent.173 Such orders are not conducive to the values of contestability, accountability and participation in the discourse of law – core values afforded (though not guaranteed) by law-as-we-know-it.

Between current systems and Volokh’s ‘trustworthy’ systems there lies an entire spectrum of possibilities and problems which are far from solved. The inability of current systems to provide legally relevant justifications is not the only obstacle to the use of such technologies in legal decision-making. It is well-known that machine learning systems can replicate and amplify bias encoded in training data.174 Bias can also be introduced through the design of machine learning systems and as a result of how information is presented by the system.175 Other issues which affect performance of data-driven systems include an inability to generalise outside the distribution of the training data,176 spurious correlations,177 model degradation,178 data drift179 and concept drift.180 Retraining models can be costly but a failure to retrain may increase the risk of ‘freezing the future and scaling the past’.181 It can prove difficult to allocate responsibility for failures or harms on account of the ‘many hands’ involved in design and use of the systems.182 There is increasing recognition that such systems should be assessed not only for technical issues but a wide range of potential impacts and harms including environmental harms and dependency on harmful or exploitative labour conditions.183
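
Some of these failure modes are easy to demonstrate in miniature. In the sketch below, on entirely synthetic data, a classifier fitted on one input distribution performs well there but degrades sharply once the distribution shifts, and nothing in the model itself signals that the world has moved on:

```python
# Model degradation under drift, in miniature: a classifier fitted on one
# input distribution performs markedly worse once that distribution shifts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(shift):
    X = rng.normal(loc=shift, size=(500, 2))
    y = (X[:, 0] > shift).astype(int)   # the same 'concept', shifted inputs
    return X, y

X_old, y_old = make_data(shift=0.0)
clf = LogisticRegression(max_iter=1000).fit(X_old, y_old)

X_new, y_new = make_data(shift=3.0)     # the world has moved on
print(clf.score(X_old, y_old))          # high on the old distribution
print(clf.score(X_new, y_new))          # close to chance after the shift
```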

However, let us meet Volokh on his own terms. What if developers were capable of finding technical means of addressing interpretability and other issues relating to the performance and capabilities of these systems? What if the systems were capable of producing (in Volokh’s words) ‘trustworthy results’ accompanied by explanations in the form of legally relevant justifications? We must revisit the question, what matter who is speaking, and how they produce speech? It will not do, to answer this question, to say that we want judges to possess an ‘internal’ point of view on the normative effect of legal rules or that judges should demonstrate a commitment to law on the basis that these factors are relevant to the quality of the outputs.184 We have already (for the purposes of argument only) conceded that the systems are technically capable of producing qualitatively acceptable results. The issue, if there is one, must lie elsewhere.

The key to the question, we suggest, involves considering the effect on (in Gibson’s language) the wider environment or niche. In other words, it is not clear that as Volokh suggests:

The normative question whether we ought to use AI judges should be seen as turning chiefly on the empirical question whether they reliably produce opinions that persuade the representatives that we have selected to evaluate those opinions.185

For example, full-blown reliance on such systems would imply that humans would no longer judge. We would have abandoned the practice of the authoritative ascription of legal effect. Lawyers (if they still exist) would have to anticipate the outputs of the system. Just as ‘[t]o follow rules is to adopt a particular form of life’,186 if we are to be able to authoritatively ascribe legal effect, we must engage in the practice of authoritatively ascribing legal effect. We cannot evaluate outputs according to the standards of a practice if we do not engage in the practice.187 As a result, in the event of wholesale substitution of such systems for human judges we will be incapable of assessing the trustworthiness of these systems according to human standards of adjudication.188 Our ability to contest the outputs of the system will be constrained. We will no longer be able to hold the systems to account.189 We would also – inexplicably as it seems to us – have given up control of one of the mainstays of the system of checks and balances which provides protection against arbitrary power.

Such delegation may be even more far-reaching in its effects. Speaking ‘legally’ presupposes and anticipates the closure afforded by an authoritative decision of a court.190 More, it depends on legal subjects being able, in principle, to understand and engage with those decisions and the concepts (legal and social) on which they rely, and on courts, in turn, being able to connect with and resolve the concerns of legal subjects.191 It involves a communicative exchange between legal subjects and the courts which is mediated through language and is therefore capable of employing, absorbing and sharing new concepts and ideas, legal or otherwise.192 Wholesale delegation of adjudication to data-driven systems, even those which output ‘trustworthy’ results, may threaten this communicative exchange, and with it, our ability to speak ‘legally’ and actively participate in applying, anticipating and shaping the law.193

Contrary to the position adopted by Volokh, in the context of adjudication, who speaks and how they produce speech matters and has implications for contestability, accountability and participation – values that underpin the Rule of Law. Indeed, the current inability of data-driven legal technologies to produce legally meaningful justifications and (in a world where Volokh’s vision reaches its apotheosis) the threat to our ability to speak ‘legally’ and evaluate the outputs of these technologies according to legally relevant standards of judging may be understood as two sides of the same coin. There is a gulf between the ‘enlanguaged’ normativity of text-driven law which allows us to speak ‘legally’ and the statistically mediated normativity of data-driven law. It is not clear that the gulf can be bridged. With law, as Latour maintains:

Either you are inside it and you understand what it does – without being able to explain it in another language – or you are outside it and you don’t do anything ‘legal’.194

2.6 Conclusion

Our exploration of the implications of data-driven legal technologies for law-as-we-know-it grounds that inquiry in the affordances of these technologies. Drawing on Gibson, we argue that the formidable affordances of such technologies are not matched by an ability on the part of such technologies to speak ‘legally’ and engage in legal reasoning and interpretation. We examine the agentive effects of the technologies, showing how they may act as engines of influence, shaping the outcomes to which legal effect is ascribed and the authoritative ascription of legal effect. We draw attention to their emergence as new seats of power. This, we suggest, is the here and now of law and legal practice. It is already clear that there are reasons for some concern about the implications of overreliance on these technologies. However, there is also reason to suppose that these concerns may be addressed through novel approaches to education and training, renewed emphasis on lawyers’ professional duties of independence and competence, and a focus on the design of legal technologies.

Crucially, we draw attention to a more profound risk: that by mistakenly treating the outputs of data-driven legal technologies as though they were the product of legal reasoning and interpretation, we make way for a new normativity and a different source and nature of legal effect. We explore the implications of a data-driven normativity marked by its dissociation from the ‘web’ of legal powers, the performativity of speech acts and the practices of legal reasoning and interpretation.

Taking Volokh’s ‘thought experiments’ as a provocation, we examine the implications of deferring – or delegating – to data-driven technologies in the context of judging. There are implications for law-as-we-know-it whether the outputs are trustworthy or not. In their current form, data-driven systems have a range of technical limitations which make them unsuited for the task of judging. The most obvious limitation relates to their inability to output legally relevant justifications. This is presently an unsolved problem, but we should not assume either that it will remain so, or that such inability will prevent states or judicial systems from exploring their use. That inability has clear consequences for the Rule of Law values of contestability, accountability and participation. Even if the outputs are ‘trustworthy’ on Volokh’s account, we imperil these same values through wholesale deference or delegation to data-driven legal technologies in the task of judging. The implications, in both scenarios, are much more far-reaching than market effects.


References

  1. Ejan Mackaay and Pierre Robillard, ‘Predicting Judicial Decisions: The Nearest Neighbour Rule and Visual Representation of Case Patterns’, Band 3, Heft 3/4 November, 1974 (De Gruyter 1974) https://doi.org/10.1515/9783112320594-012 accessed 28 December 2023. 

  2. David Arditi, Fatih E Oksay and Onur B Tokdemir, ‘Predicting the Outcome of Construction Litigation Using Neural Networks’ (1998) 13 Computer-Aided Civil and Infrastructure Engineering 75. 

  3. Lex Machina, ‘Lex Machina Celebrates 10 Years of Legal Analytics’ (Lex Machina) https://lexmachina.com/media/press/lex-machina-celebrates-10-years-of-legal-analytics/ accessed 15 October 2023. 

  4. Daniel Martin Katz, ‘Quantitative Legal Prediction–or–How I Learned to Stop Worrying and Start Preparing for the Data-Driven Future of the Legal Services Industry’ (2013) 62 Emory Law Journal 823, 912. 

  5. Case Crunch and CourtQuant, for example, are no longer trading, prompting Artificial Lawyer to ask, ‘… is litigation prediction dead?’. artificiallawyer, ‘Litigation Prediction Pioneer, CourtQuant, To Close’ (Artificial Lawyer, 7 October 2020) https://www.artificiallawyer.com/2020/10/07/litigation-prediction-pioneer-courtquant-to-close/ accessed 15 October 2023. 

  6. European Commission for the Efficiency of Justice (CEPEJ), ‘European Ethical Charter on the Use of Artificial Intelligence (AI) in Judicial Systems and Their Environment’ 14 https://www.coe.int/en/web/cepej/cepej-european-ethical-charter-on-the-use-of-artificial-intelligence-ai-in-judicial-systems-and-their-environment accessed 16 October 2023. 

  7. Scottish Legal News, ‘Lord Chief Justice Anticipates AI Predictions of Case Outcomes’ https://www.scottishlegal.com/articles/lord-chief-justice-anticipates-ai-predictions-of-case-outcomes accessed 8 August 2023; Sir Geoffrey Vos, ‘Speech by the Master of the Rolls to the Bar Council of England and Wales’ (Courts and Tribunals Judiciary, 18 July 2023) https://www.judiciary.uk/speech-by-the-master-of-the-rolls-to-the-bar-council-of-england-and-wales/ accessed 16 October 2023. 

  8. Michael Mills lists legal research, e-discovery, compliance, contract analysis, case prediction and document automation as areas in which ‘Artificial intelligence is hard at work in the law…’ Michael Mills, ‘Artificial Intelligence in Law: The State of Play 2016’ (Thomson Reuters Institute, 23 February 2016) https://www.thomsonreuters.com/en-us/posts/legal/artificial-intelligence-in-law-the-state-of-play-2016/ accessed 16 October 2023. 

  9. Noah Waisberg and Alexander Hudek, AI for Lawyers: How Artificial Intelligence Is Adding Value, Amplifying Expertise, and Transforming Careers (Wiley 2021). 

  10. David Curle and Steve Obenski, ‘Ebook: AI-Driven Contract Analysis in Perspective and in Practice’ (10 September 2020) https://kirasystems.com/forms/guides-studies/ai-driven-contract-analysis-perspective-and-practice/ accessed 16 October 2023. 

  11. Harry Surden, ‘Machine Learning and Law: An Overview’, Research Handbook on Big Data Law (Edward Elgar Publishing 2021) 179 https://www.elgaronline.com/display/edcoll/9781788972819/9781788972819.00014.xml accessed 18 August 2023. 

  12. Chris Metinko, ‘Legal Tech Makes Its Case With Venture Capitalists, Tops $1B In Funding This Year’ (Crunchbase News, 23 September 2021) https://news.crunchbase.com/venture/legal-tech-venture-investment/ accessed 19 August 2023; Jane Croft, ‘Why Are Investors Pouring Money into Legal Technology?’ Financial Times (28 July 2022) https://www.ft.com/content/b6f0796e-0265-40c6-ad4c-a900cd788c39 accessed 19 August 2023. 

  13. Lance Eliot, ‘Latest Insights About AI And The Law With A Keen Spotlight On The American Bar Association Remarkable Resolution 604’ (Forbes) https://www.forbes.com/sites/lanceeliot/2023/08/09/latest-insights-about-ai-and-the-law-with-a-keen-spotlight-on-the-american-bar-association-remarkable-resolution-604/ accessed 19 August 2023. The text of the Resolution is available at https://www.americanbar.org/content/dam/aba/directories/policy/annual-2019/112-annual-2019.pdf. 

  14. Proposed Recital 60e in DRAFT Compromise Amendments on the Draft Report Proposal for a regulation of the European Parliament and of the Council on harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM(2021)0206 – C9 0146/2021 – 2021/0106(COD)). 

  15. Nicola Shaver, ‘The Use of Large Language Models in LegalTech’ (Legaltech Hub, 18 February 2023) https://www.legaltechnologyhub.com/contents/the-use-of-large-language-models-in-legaltech/ accessed 23 August 2023. 

  16. Casetext, ‘Casetext Unveils CoCounsel, the Groundbreaking AI Legal Assistant Powered by OpenAI Technology’ https://www.prnewswire.com/news-releases/casetext-unveils-cocounsel-the-groundbreaking-ai-legal-assistant-powered-by-openai-technology-301759255.html accessed 22 April 2023. 

  17. Casetext, ‘Casetext to Join Thomson Reuters, Ushering in a New Era of Legal Technology Innovation’ (27 June 2023) https://casetext.com/blog/casetext-to-join-thomson-reuters-ushering-in-a-new-era-of-legal-technology-innovation/ accessed 3 July 2023. 

  18. ‘Dentons to Launch Client Secure Version of ChatGPT’ https://www.dentons.com/en/about-dentons/news-events-and-awards/news/2023/august/dentons-to-launch-client-secure-version-of-chatgpt accessed 21 October 2023; ‘Product Walk Through: FleetAI, Dentons’ Gen AI Platform – Artificial Lawyer’ https://www.artificiallawyer.com/2023/10/09/product-walk-through-fleetai-dentons-gen-ai-platform/ accessed 21 October 2023. 

  19. ‘A&O Announces Exclusive Launch Partnership with Harvey’ (Allen Overy, 15 February 2023) https://www.allenovery.com/en-gb/global/news-and-insights/news/ao-announces-exclusive-launch-partnership-with-harvey accessed 15 October 2023. 

  20. ‘Troutman Pepper Launches GPT-Powered AI Assistant’ (Troutman Pepper - Troutman Pepper Launches GPT-Powered AI Assistant, 22 August 2023) https://www.troutman.com/insights/troutman-pepper-launches-gpt-powered-ai-assistant.html accessed 24 August 2023. 

  21. Thomson Reuters, ‘New Report on ChatGPT & Generative AI in Law Firms Shows Opportunities Abound, Even as Concerns Persist’ (Thomson Reuters Institute, 17 April 2023) https://www.thomsonreuters.com/en-us/posts/technology/chatgpt-generative-ai-law-firms-2023/ accessed 15 October 2023. 

  22. See, for example, Allison Hart, ‘Elevate’s Analyse Documents ELM Module: AI You Can Use’ (Elevate, 13 May 2021) https://elevate.law/expertise/elevates-analyse-documents-elm-module-ai-you-can-use/ accessed 5 November 2023. 

  23. Tonya Custis and others, ‘Westlaw Edge AI Features Demo: KeyCite Overruling Risk, Litigation Analytics, and WestSearch Plus’, Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law (Association for Computing Machinery 2019) https://doi.org/10.1145/3322640.3326739 accessed 16 October 2023. 

  24. Casetext (n 16). For a detailed analysis of CoCounsel see Pauline McBride and Masha Medvedeva, ‘Casetext’s CoCounsel through the Lens of the Typology’ (COHUBICOL, 4 July 2023) https://www.cohubicol.com/blog/casetext-cocounsel-openai-typology/ accessed 7 November 2023. 

  25. Ken Adams, ‘ChatGPT Won’t Fix Contracts’ (Adams on Contract Drafting, 9 December 2022) https://www.adamsdrafting.com/chatgpt-wont-fix-contracts/ accessed 16 October 2023. 

  26. Sebastian Ko, ‘The Dark Side of Technology in Law: Avoiding the Pitfalls’ in Susanne Chishti (ed), The Legaltech Book: The Legal Technology Handbook for Investors, Entrepreneurs and FinTech Visionaries (John Wiley & Sons 2020) 197. 

  27. Joanna McGrenere and Wayne Ho, ‘Affordances: Clarifying and Evolving a Concept’, Proceedings of Graphics Interface 2000 (2000) 2. 

  28. ibid. 

  29. Lialina advocates for an approach to affordances which ‘allow[s] oneself and others to recognize (and, potentially, to act upon) opportunities and risks of a world that is no longer restrained to mechanical age conventions, assumptions, and design choices’. Olia Lialina, ‘Once Again, the Doorknob: Affordance, Forgiveness, and Ambiguity in Human-Computer Interaction and Human-Robot Interaction’ [2019] Media Theory 49, 60.

  30. Mireille Hildebrandt, ‘The Artificial Intelligence of European Union Law’ (2020) 21 German Law Journal 74, 76. 

  31. Davis notes that ‘theories of affordance have long been central to understanding and intervening in the development and analysis of technological systems, yet ML has remained outside of the design studies purview.’ Jenny L Davis, ‘“Affordances” for Machine Learning’, 2023 ACM Conference on Fairness, Accountability, and Transparency (ACM 2023) 330 https://dl.acm.org/doi/10.1145/3593013.3594000 accessed 19 October 2023. 

  32. This analysis draws on our work in creating a Typology of Legal Technologies. Laurence Diver and others, ‘Typology of Legal Technologies’ https://publications.cohubicol.com/typology/

  33. ‘Westlaw Edge - A.I. Powered Legal Research’ https://legal.thomsonreuters.com/en/products/westlaw-edge accessed 30 October 2023. 

  34. Hart (n 22). 

  35. Waisberg and Hudek (n 9) 136. 

  36. Hart (n 22). 

  37. Valerie McConnell, ‘What Is CARA A.I. and How Do I Use It?’ https://help.casetext.com/en/articles/1971642-what-is-cara-a-i-and-how-do-i-use-it accessed 30 October 2023. 

  38. Susan Cunningham, ‘Introducing Vincent: The First Intelligent Legal Research Assistant of Its Kind’ (Medium, 20 September 2018) https://blog.vlex.com/introducing-vincent-the-first-intelligent-legal-research-assistant-of-its-kind-bf14b00a3152 accessed 30 October 2023. 

  39. ‘WestSearch Plus - Westlaw Edge’ https://legal.thomsonreuters.com/en/products/westlaw-edge/westsearch-plus accessed 30 October 2023. 

  40. ‘Legal Analytics by Lex Machina’ (Lex Machina) https://lexmachina.com/ accessed 30 October 2023. 

  41. Uhura, ‘An Introduction to Information Extraction from Unstructured and Semi-Structured Documents’ (14 May 2021) https://uhurasolutions.com/2021/05/14/an-introduction-to-information-extraction-from-unstructured-and-semi-structured-documents/ accessed 12 November 2023. 

  42. Della, ‘The Most Advanced AI on the Market for Legal Contract Review’ (Della AI) https://dellalegal.com/ accessed 30 October 2023. 

  43. Casetext (n 16). 

  44. Dmitriy Skougarevskiy and Wolfgang Alschner, ‘Mapping Investment Treaties’ (Mapping Investment Treaties) http://mappinginvestmenttreaties.com/ accessed 30 October 2023. 

  45. Casetext (n 16). 

  46. See Luciano Floridi and Massimo Chiriatti, ‘GPT-3: Its Nature, Scope, Limits, and Consequences’ (2020) 30 Minds and Machines 681, 690 (suggesting that GPT-3 allows us to ‘mass produce good and cheap semantic artefacts’).

  47. Floridi and Chiriatti maintain that ‘The real point about AI is that we are increasingly decoupling the ability to solve a problem effectively—as regards the final goal—from any need to be intelligent to do so.’ ibid 683. 

  48. Daniel Martin Katz and others, ‘GPT-4 Passes the Bar Exam’ (15 March 2023) https://papers.ssrn.com/abstract=4389233 accessed 17 April 2023. 

  49. See Masha Medvedeva, Identification, Categorisation and Forecasting of Court Decisions (University of Groningen 2022) 48; Masha Medvedeva and Pauline McBride, ‘Legal Judgment Prediction: If You Are Going to Do It, Do It Right’ in Daniel Preotiuc-Pietro and others (eds), Proceedings of the Natural Legal Language Processing Workshop 2023 (Association for Computational Linguistics 2023) https://aclanthology.org/2023.nllp-1.9 accessed 7 December 2023. 

  50. Cong Jiang and Xiaolei Yang, ‘Legal Syllogism Prompting: Teaching Large Language Models for Legal Judgment Prediction’, Proceedings of the Nineteenth International Conference on Artificial Intelligence and Law (Association for Computing Machinery 2023) https://dl.acm.org/doi/10.1145/3594536.3595170 accessed 24 October 2023. 

  51. Fangyi Yu, Lee Quartey and Frank Schilder, ‘Legal Prompting: Teaching a Language Model to Think Like a Lawyer’ (arXiv, 8 December 2022) http://arxiv.org/abs/2212.01326 accessed 5 November 2023. 

  52. Eric Martínez, ‘Re-Evaluating GPT-4’s Bar Exam Performance’ (8 May 2023) https://papers.ssrn.com/abstract=4441311 accessed 19 May 2023. See also Arvind Narayanan and Sayash Kapoor, ‘GPT-4 and Professional Benchmarks: The Wrong Answer to the Wrong Question’ (AI Snake Oil, 20 March 2023) https://aisnakeoil.substack.com/p/gpt-4-and-professional-benchmarks accessed 8 June 2023. 

  53. Medvedeva (n 49); Masha Medvedeva and others, ‘Automatic Judgement Forecasting for Pending Applications of the European Court of Human Rights’ in KD Ashley and others (eds), Proceedings of the Fifth Workshop on Automated Semantic Analysis of Information in Legal Text (ASAIL 2021); Masha Medvedeva, Martijn Wieling and Michel Vols, ‘Rethinking the Field of Automatic Prediction of Court Decisions’ (2023) 31 Artificial Intelligence and Law 195; Medvedeva and McBride (n 49).

  54. Jiang and Yang (n 50). 

  55. Yu, Quartey and Schilder (n 51). 

  56. ibid. As Duarte points out, the legal syllogism merely provides a ‘framework’ for the presentation of legal arguments or justifications. The major and minor premises of the syllogism must first be constructed through a process of interpretation. Tatiana Duarte, ‘Legal Reasoning and Interpretation’ in Laurence Diver and others, Research Study on Text-Driven Law (COHUBICOL, 20 September 2023) 105, 106 https://publications.cohubicol.com/research-studies/text-driven-law/ accessed 15 October 2023.

  57. John Pavlus, ‘The Computer Scientist Training AI to Think with Analogies’ (Scientific American) https://www.scientificamerican.com/article/the-computer-scientist-training-ai-to-think-with-analogies/ accessed 5 November 2023; Ian R Kerr and Carissima Mathen, ‘Chief Justice John Roberts Is a Robot’ (1 April 2014) 9 https://papers.ssrn.com/abstract=3395885 accessed 5 November 2023. 

  58. Daria Bylieva, ‘Language of AI’ (2022) 3 Technology and Language 111, 117 (noting that Searle’s ‘Chinese room’ experiment is relevant and ‘the ability to give adequate answers and understanding are different things’). For a critique of the appearance/reality dichotomy see Mark Coeckelbergh and David J Gunkel, ‘ChatGPT: Deconstructing the Debate and Moving It Forward’ [2023] AI & SOCIETY https://doi.org/10.1007/s00146-023-01710-4 accessed 23 August 2023.

  59. Floridi and Chiriatti (n 46) 689. 

  60. Gary Marcus and Ernest Davis, ‘GPT-3, Bloviator: OpenAI’s Language Generator Has No Idea What It’s Talking about’ (MIT Technology Review) https://www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/ accessed 29 October 2023; Emily M Bender and Alexander Koller, ‘Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data’ in Dan Jurafsky and others (eds), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (Association for Computational Linguistics 2020) https://aclanthology.org/2020.acl-main.463 accessed 8 November 2023; Emily M Bender and others, ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜’, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (ACM 2021) https://dl.acm.org/doi/10.1145/3442188.3445922 accessed 13 August 2023. Coeckelbergh and Gunkel agree that large language models ‘manipulate signs without knowing that to which these tokens refer … They generate different sequences of signs based not on actual meaning but according to statistically probable arrangements of difference.’ However, for these authors this characteristic might be a feature rather than a bug provided one accepts a non-representational view of language. Coeckelbergh and Gunkel (n 58).

  61. Bender and others (n 60). 

  62. Written legal norms must be interpreted in accordance with the sources of law and the principles of law relevant for the particular jurisdiction. Diver and others (n 56) 30. 

  63. Benjamin Alarie, Anthony Niblett and Albert H Yoon, ‘How Artificial Intelligence Will Affect the Practice of Law’ (2018) 68 The University of Toronto Law Journal 106, 120 (noting that AI is not yet capable of ‘reasoned judgment’).

  64. Katja de Vries and Niels van Dijk, ‘A Bump in the Road. Ruling Out Law from Technology’ in Mireille Hildebrandt and Jeanne Gaakeer (eds), Human Law and Computer Law: Comparative Perspectives (Springer Netherlands 2013) 106. 

  65. Bruno Latour, The Making of Law: An Ethnography of the Conseil d’Etat (Polity 2010) 218 (‘Everything happens as if law were interested exclusively in the possibility of re-engaging the figures of enunciation by attributing to a speaker what he or she said. Linking an individual to a text through the process of qualification; attaching a statement to its enunciator by following the sequences of signatures; authenticating an act of writing; imputing a crime to the name of a human being; linking up texts and documents; tracing the course of statements: all law can be grasped as an obsessive effort to make enunciation assignable’). 

  66. de Vries and van Dijk (n 64) 111, 113. 

  67. In language that may be more familiar to those brought up on a diet of Anglo-American legal theory, such systems can have no ‘internal point of view’ about the bindingness of legal rules. HLA Hart, The Concept of Law (3rd edn, Oxford University Press 2012) 115–117. Kerr and Mathen (n 57) 22, 27–30.

  68. Samuel Beckett, Stories & Texts for Nothing (Grove Press 1967) 85. Coeckelbergh and Gunkel raise this question in relation to the outputs of large language models such as ChatGPT, noting that ‘we now confront texts that have no identifiable author.’ Coeckelbergh and Gunkel (n 58).

  69. Beckett (n 68) 85.

  70. Cabitza notes that ‘Machines – especially those developed using Machine Learning (ML) techniques – can only make arguments and decisions, or even just “speak the truth” (which cannot be contested), to the extent that we allow them’. Federico Cabitza, ‘A Reply: Lost in Communication? We Need a More Conscious and Interactive Use of AI’ (2022) 1 Journal of Cross-disciplinary Research in Computational Law https://journalcrcl.org/crcl/article/view/10 accessed 10 November 2023 (original emphasis). 

  71. Markku Suksi, ‘Formal, Procedural, and Material Requirements of the Rule of Law in the Context of Automated Decision-Making’ in Markku Suksi (ed), The Rule of Law and Automated Decision-Making: Exploring Fundamentals of Algorithmic Governance (Springer International Publishing 2023) 71. See also Laurence Diver and Pauline McBride, ‘Argument by Numbers: The Normative Impact of Statistical Legal Tech’ (2022) 3. 

  72. Law, Latour tells us, ‘insists on asking whether there is a path from one particular utterance to another, or between a given utterance and a given enunciator’. Bruno Latour, An Inquiry into Modes of Existence: An Anthropology of the Moderns (Harvard University Press 2013) 370. 

  73. The term ‘enlanguaged’ was coined by Kiverstein and Rietveld. Julian Kiverstein and Erik Rietveld, ‘Scaling-up Skilled Intentionality to Linguistic Thought’ (2021) 198 Synthese 175. 

  74. Such speech acts are performative in the sense that they ‘do things with words’. They are also, in Austin’s terminology, ‘illocutionary’; they have a certain force through the operation of convention. See JL Austin, How To Do Things With Words (2nd edn, Harvard University Press 1975) (Lectures III and IX, on performatives and illocutionary acts respectively). See also Diver and others (n 56) 120.

  75. Diver and others (n 56) 123. 

  76. A high, but not perfect, degree of coherence, as Brownsword points out. Roger Brownsword and Karen Yeung (eds), Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes (Hart 2008) 134–159.

  77. Law has justice as a goal but should not be conflated with justice or a particular conception of justice. Mireille Hildebrandt, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology (Paperback edition, Edward Elgar Publishing 2016) 146–155. See also Mariano-Florentino Cuéllar, ‘Cyberdelegation and the Administrative State’ in Nicholas R Parrillo (ed), Administrative Law from the Inside Out: Essays on Themes in the Work of Jerry L. Mashaw (Cambridge University Press 2017) 156 (noting that ‘Human deliberation is replete with all the limitations associated with human cognition, but implicit in the endeavor is an aspiration for dialogue and exchange of reasons that are capable of being understood, accepted, or rejected by policymakers, representatives of organized interests, and members of the public’). The material set-up of law needs to be resourced. Although access to justice is recognised as a fundamental right, courts in many countries have a significant backlog of cases. See, for example, ‘Justice Delayed as Thousands of Cases Wait More than Two Years to Be Heard’ https://www.lawsociety.org.uk/contact-or-visit-us/press-office/press-releases/justice-delayed-as-thousands-of-cases-wait-more-than-two-years-to-be-heard accessed 30 December 2023.

  78. John Tasioulas, ‘The Rule of Algorithm and the Rule of Law’ (7 January 2023) https://papers.ssrn.com/abstract=4319969 accessed 24 October 2023. 

  79. Hildebrandt, Smart Technologies and the End(s) of Law (n 77) 59. 

  80. Wiggins suggests that law in turn sustains shared communication between persons. David Wiggins, Continuants: Their Activity, Their Being and Their Identity: Twelve Essays (Oxford University Press 2016) 91 (‘our sharing in a given specific animal nature and a law-sustained mode of activity is integral to the close attunement of person to person in language and integral to the human sensibilities that make interpretation possible’). See also Tasioulas (n 78) 17 in relation to ‘reciprocity’ between citizens and officials of the law.

  81. Diver and others (n 56) 43, 50, 71. 

  82. de Vries and van Dijk (n 64) 106. 

  83. We could, of course, choose formally to attribute legal effect to the output of these systems, but that would not resolve the difficulty. de Vries and van Dijk draw the same conclusion about the implications of democratically sanctioned rule by the scripts of technology. ibid 119 (‘We will then be in a situation in which every bit of script is created in accordance with a ‘rule of law’. But when no legal acts of reattachments are enunciated these technological intermediaries will not partake in legal enunciation’).

  84. For the text output by a data-driven technology to function as a speech act in law-as-we-know-it, more is required than that it should be intelligible or make sense as a set of words; it must ‘make sense’ by conforming to the felicity conditions of prior legal norms and setting the felicity conditions for future legal effects. In particular, Duarte notes that ‘Savigny establishes two felicity conditions for interpretation: the interpreter must (i) attempt to reconstruct the intellectual trail of the legislator and (ii) acknowledge the historico-dogmatic whole of the legal system and perceive its relations with text.’ Tatiana Duarte, ‘Legal Reasoning and Interpretation’ in Diver and others (n 56) 102. 

  85. We are therefore at odds with those who suggest, as Volokh does, that what matters is the output and not the method by which it is achieved. Eugene Volokh, ‘Chief Justice Robots’ (2019) 68 Duke Law Journal 1135. Susskind, in a similar vein, urges us to consider ‘whether machines can deliver decisions at the standard of human judges or higher, not by replicating the way that judges think and reason but by using their own distinctive capabilities (brute processing power, vast amount of data, remarkable algorithms).’ Richard E Susskind, Online Courts and the Future of Justice (First edition, Oxford University Press 2019) 280. See also John Armour and Mari Sako, ‘AI-Enabled Business Models in Legal Services: From Traditional Law Firms to Next-Generation Law Companies?’ (2020) 7 Journal of Professions and Organization 27. For contrary views see, for example, Kerr and Mathen (n 57); Reuben Binns, ‘Analogies and Disanalogies Between Machine-Driven and Human-Driven Legal Judgement’ (2021) 1 Journal of Cross-disciplinary Research in Computational Law https://journalcrcl.org/crcl/article/view/5 accessed 6 November 2023.

  86. Pasquale and Cashwell use this phrase to describe prediction of judgment systems. Frank Pasquale and Glyn Cashwell, ‘Prediction, Persuasion, and the Jurisprudence of Behaviorism’ (8 November 2017) 3 https://papers.ssrn.com/abstract=3067737 accessed 13 August 2023. 

  87. Peter-Paul Verbeek, Moralizing Technology: Understanding and Designing the Morality of Things (University of Chicago Press 2011) 97. 

  88. Mark Coeckelbergh, ‘The Grammars of AI: Towards a Structuralist and Transcendental Hermeneutics of Digital Technologies’ (2022) 3(2) Technology and Language 148, 151. 

  89. Mark Coeckelbergh, ‘Time Machines: Artificial Intelligence, Process, and Narrative’ (2021) 34 Philosophy & Technology 1623, 1627. Hildebrandt notes that ‘Clark and Latour have pointed out that the usage of tools basically integrates them into our extended mind or delegates cognitive tasks to things that subsequently restrict or enlarge our “action potential”.’ Hildebrandt, Smart Technologies and the End(s) of Law (n 77) 108. 

  90. James Faulconbridge, Atif Sarwar and Martin Spring, ‘How Professionals Adapt to Artificial Intelligence: The Role of Intertwined Boundary Work’ Journal of Management Studies 10 https://onlinelibrary.wiley.com/doi/abs/10.1111/joms.12936 accessed 2 November 2023. Such a dynamic appears to be very much in play in Norkute et al.’s account of the experience of Thomson Reuters’ legal editorial team. Milda Norkute and others, ‘Towards Explainable AI: Assessing the Usefulness and Impact of Added Explainability Features in Legal Document Summarization’, Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (Association for Computing Machinery 2021) https://dl.acm.org/doi/10.1145/3411763.3443441 accessed 17 August 2023 (‘Since this AI model [a legal text summarisation system] has been in active use, the primary task of the editors has become to review and edit the machine-generated summaries rather than creating them from scratch based on the long input documents’).

  91. Coeckelbergh and Gunkel (n 58). 

  92. Diver and McBride alert us to the risk of ‘robotomorphy’, whereby humans benchmark and align themselves according to the standards of the technologies they create. Laurence Diver and Pauline McBride, ‘Argument by Numbers: The Normative Impact of Statistical Legal Tech’ (2022) 2022 Theoretical and Applied Law 8, 16. See also Laurence Diver and Pauline McBride, ‘High Tech, Low Fidelity? Statistical Legal Tech and the Rule of Law’ [2022] Verfassungsblog https://verfassungsblog.de/roa-high-tech-low-fidelity/ accessed 3 November 2023; Henrik Skaug Sætra, ‘Robotomorphy’ (2022) 2 AI and Ethics 5. Choi et al. claim to have conducted ‘the first randomized controlled trial of AI assistance’s effect on human legal analysis.’ Jonathan H Choi, Amy Monahan and Daniel Schwarcz, ‘Lawyering in the Age of Artificial Intelligence’ (7 November 2023) https://papers.ssrn.com/abstract=4626276 accessed 13 November 2023. Empirical tests that address how the use of data-driven technologies can produce efficiencies are valuable, but do not take account of systemic effects.

  93. Don Ihde, Technology and the Lifeworld: From Garden to Earth (Indiana University Press 1990) 97. 

  94. Casetext (n 16). 

  95. ‘Uhura Solutions’ (27 May 2022) https://uk.linkedin.com/company/uhurasolutions accessed 31 October 2023. 

  96. ‘Unlocking the Power of AI for Business Users’ (Squirro) https://squirro.com/why-squirro/ accessed 31 October 2023. 

  97. Bylieva (n 58) 121. 

  98. ‘WestSearch Plus - Westlaw Edge’ (n 39). 

  99. Artificial Lawyer notes that along with Della, ‘there are a number of legal AI companies that allow you to pose questions to the system and get answers back from a doc stack.’ ‘Meet Della AI – A New Challenger in the Doc Review/Analysis Market’ (Artificial Lawyer, 21 January 2020) https://www.artificiallawyer.com/2020/01/21/meet-della-ai-a-new-challenger-in-the-doc-review-analysis-market/ accessed 13 July 2022.

  100. Kira Systems launched question answering capability in 2020. ‘Kira Systems Launches Answers & Insights, A New-to-Market Capability in Contract & Document Analysis’ (24 August 2020) https://kirasystems.com/company-announcements/kira-systems-launches-answers-insights/ accessed 31 October 2023. 

  101. Waisberg and Hudek quote a client of Kira Systems as saying ‘… when I told executives at a client we were going to use Kira and explained what it was, the GC [General Counsel] said “I haven’t met ‘her’ yet but I am glad we have her on the team.”’ The authors themselves describe Kira as a ‘virtual Noah’. Waisberg and Hudek (n 9) 83. The names of some of these technologies lend themselves to a degree of anthropomorphism, including Kira, Della and Legal Robot. Anthony Niblett, a co-founder of Blue J Legal, suggests that use of the system entails ‘… letting the data speak. It is not lawyers using their judgment about what is important.’ ibid 122.

  102. Alicia Ryan’s comments are instructive: ‘You will get users with expectations at both ends of the spectrum. Either they think it’s [the AI system is] never going to work and they never give it a chance, or they think it’s AI and therefore it’s going to be perfect, so they just rely on it without checking.’ Alicia Ryan, ‘The ROI of AI: How a large firm determines it’ in Waisberg and Hudek (n 9) 141. 

  103. According to Romele, ‘Technologies, probably more than language, have their materialities and their affordances. And yet, they are also, or even mostly, signs of authority, intended to be believed and obeyed as they are.’ Alberto Romele, Digital Habitus: A Critique of the Imaginaries of Artificial Intelligence (Routledge 2024) 98. 

  104. Wang’s review of the use of AI-powered systems in China’s judicial system notes that these ‘can “warn” human judges of similar cases and scenarios [where misjudgments or wrongful convictions were made] preventing the recurrence of past fallibilities’. Nu Wang, ‘“Black Box Justice”: Robot Judges and AI-Based Judgment Processes in China’s Court System’, 2020 IEEE International Symposium on Technology and Society (ISTAS) (2020) 59 (citation omitted). 

  105. Latour positions non-humans and humans, figurative and non-figurative technologies, flags and signs as actors. Wiebe E Bijker and John Law (eds), Shaping Technology/Building Society: Studies in Sociotechnical Change (reprint, MIT Press 2010) 244.

  106. Former Justice Mariano-Florentino Cuéllar notes that ‘people underappreciate the influence of certain technologies and information on their decisions.’ Cuéllar (n 77) 154. 

  107. Some ‘prediction of judgment’ systems are explicitly presented as means of reducing ‘excessive variability’ in court decisions. European Commission for the Efficiency of Justice (CEPEJ) (n 6) 42. 

  108. Compose is one example of such a system. Compose claims that the system ‘cures “blank page” syndrome and starts attorneys off right’. ‘Better Briefs. Less Time. Fewer Headaches.’ (Compose) https://compose.law/ accessed 31 October 2023. Shepherd offers an interesting reflection on the effects of use of AI in drafting and review. Jack Shepherd, ‘Lawyers: How Much Should You Rely on AI to Make First Drafts?’ https://jackwshepherd.medium.com/lawyers-how-much-should-you-rely-on-ai-to-make-first-drafts-69b7b0682c51 accessed 1 November 2023. 

  109. A senior judge in England and Wales has used ChatGPT to write part of a judgment. Hibaq Farah, ‘Court of Appeal Judge Praises “Jolly Useful” ChatGPT after Asking It for Legal Summary’ The Guardian (15 September 2023) https://www.theguardian.com/technology/2023/sep/15/court-of-appeal-judge-praises-jolly-useful-chatgpt-after-asking-it-for-legal-summary accessed 1 November 2023; Luke Taylor reports that ‘A judge in Colombia has caused a stir by admitting he used the artificial intelligence tool ChatGPT when deciding whether an autistic child’s insurance should cover all of the costs of his medical treatment.’ Luke Taylor, ‘Colombian Judge Says He Used ChatGPT in Ruling’ The Guardian (3 February 2023) https://www.theguardian.com/technology/2023/feb/03/colombia-judge-chatgpt-ruling accessed 1 November 2023. Other judges are less impressed. ‘Most Judges Haven’t Tried ChatGPT, and They Aren’t Impressed’ (The National Judicial College) https://www.judges.org/news-and-info/most-judges-havent-tried-chatgpt-and-they-arent-impressed/ accessed 8 November 2023. 

  110. It is interesting to note that Volokh advocates that data-driven legal technologies engaged as AI judges should be assessed according to their persuasiveness rather than their accuracy. Volokh (n 85) 1152. 

  111. This may also be true of judges’ clerks who may write the first drafts of judgments. Kerr and Mathen (n 57) fn 17 and associated text. No doubt recognising the risk that the outputs of data-driven systems may influence judicial opinions, in October 2023 the West Virginia Judicial Investigations Commission issued an opinion that the ‘use of AI in drafting opinions or orders should be done with extreme caution.’ Judicial Advisory Commission, ‘JIC Advisory Opinion 2023-22’ https://www.courtswv.gov/sites/default/pubfilesmnt/2023-11/JIC%20Advisory%20Opinion%202023-22_Redacted.pdf accessed 28 December 2023. As will be obvious to litigation lawyers, the precise choice of words in a judgment matters as much as the gist of the judgment. New guidance for judges in England and Wales is silent about the use of artificial intelligence for drafting judgments. The guidance permits use for text summarisation (in judgments?) but discourages use for legal research or analysis. ‘Artificial Intelligence (AI): Guidance for Judicial Office Holders’ https://www.judiciary.uk/wp-content/uploads/2023/12/AI-Judicial-Guidance.pdf accessed 30 December 2023.

  112. In Austin’s language, these technologies have a ‘perlocutionary’ effect; they influence or persuade and so bring about effects. Austin (n 74) Lecture IX. For a discussion of legal effect and how technologies have effect on legal effect see Diver and others (n 56) 57–61, 134–137. 

  113. Fleur Jongepier and Esther Keymolen, ‘Explanation and Agency: Exploring the Normative-Epistemic Landscape of the “Right to Explanation”’ (2022) 24 Ethics and Information Technology 49. 

  114. Davis notes that ‘Three groups can be seen as predominating in the development of AI legal solutions.’ These are legal publishers, the major accounting firms and ‘venture capital supported entrepreneurs’. Anthony E Davis, ‘The Future of Law Firms (and Lawyers) in the Age of Artificial Intelligence’ (2020) 16 Revista Direito GV e1945, 10. Legal publishers have shown considerable interest in legal tech companies. Thomson Reuters acquired Casetext, SurePrep and ThoughtTrace. Wolters Kluwer acquired Della. Caroline Hill, ‘What Wolters Kluwer’s Acquisition of Della Means for Customers of Both Companies’ (Legal IT Insider, 5 January 2023) https://legaltechnology.com/2023/01/05/what-wolters-kluwers-acquisition-of-della-means-for-customers-of-both-companies/ accessed 4 November 2023. The reach of these publishing giants is considerable. In 2020, Thomson Reuters announced that ‘Westlaw Edge is now in 100 per cent of U.S. law schools and nearly 50 per cent of AM Law 100 firms.’ The company also reported that it had signed ‘a multiyear contract … with the administrative office of U.S. courts.’ Anita Balakrishnan, ‘All US Law Schools Now Use WestLaw Edge, Says Thomson Reuters’ (Law Times, 26 February 2020) https://www.lawtimesnews.com/resources/legal-technology/all-us-law-schools-now-use-westlaw-edge-says-thomson-reuters/326751 accessed 25 October 2020.

  115. Cantwell Smith notes the use of ‘vast collections of data sets, where we do not know what normative standards, registrations schemes, ethical stances, epistemological biases, social practices, and political interests have wrought their influence across the tapestry.’ Brian Cantwell Smith, The Promise of Artificial Intelligence: Reckoning and Judgment (The MIT Press 2019) 80. 

  116. Yeung points to the ‘chronic asymmetry of power between those who design, own, and implement these algorithmic decision-making systems and have access to the voluminous and valuable data upon which they rely, and the individuals whose lives they affect.’ Karen Yeung, ‘Why Worry about Decision-Making by Machine?’ in Karen Yeung and Martin Lodge (eds), Algorithmic Regulation (Oxford University Press 2019) 36 https://doi.org/10.1093/oso/9780198838494.003.0002 accessed 3 November 2023. D’Ignazio and Klein point to the ‘privilege hazard’ associated with data science and artificial intelligence. Catherine D’Ignazio and Lauren F Klein, Data Feminism (The MIT Press 2020) 29. 

  117. Mart’s research into the variability of search results obtained through different legal research systems is instructive. Susan Mart, ‘Results May Vary’ [2018] ABA Journal https://scholar.law.colorado.edu/faculty-articles/964

  118. Medvedeva and others (n 53); Medvedeva (n 49); Medvedeva, Wieling and Vols (n 53); Medvedeva and McBride (n 49) (critiquing the use of data contained in already-decided judgments for testing the performance of models used in the ‘prediction of judgment’ task); Cor Steging, Silja Renooij and Bart Verheij, ‘Taking the Law More Seriously by Investigating Design Choices in Machine Learning Prediction Research’, Proceedings of the Sixth Workshop on Automated Semantic Analysis of Information in Legal Text (ASAIL 2023), June 23, 2023, Braga, Portugal (highlighting the effect of the choice of metrics for testing on performance scores).

  119. Martínez argues that ‘To the extent that capabilities estimates for generative AI in the context [sic] law are overblown, this may lead both lawyers and non-lawyers to rely on generative AI tools when they otherwise wouldn’t and arguably shouldn’t …’ Martínez (n 52) 3. 

  120. Passi and Vorvoreanu provide an insightful overview of the practices that may contribute to or militate against overreliance on AI generated outputs. They recommend that systems employ ‘cognitive forcing functions’ (ways of nudging people to reflect more carefully) and offer effective explanations of the system’s outputs to reduce the likelihood of overreliance. Samir Passi and Mihaela Vorvoreanu, ‘Overreliance on AI Literature Review’ (Microsoft Research, 2022). 

  121. On access to case law in a UK context, see Daniel Hoadley and others, ‘How Public Is Public Law? The Current State of Open Access to Administrative Court Judgments’ [2022] Judicial Review https://www.tandfonline.com/doi/abs/10.1080/10854681.2022.2111966 accessed 28 December 2023; Daniel Hoadley, Amy Conroy and Editha Nemsic, ‘Mission Possible! Free Access to Case Law and The National Archives’ (2023) 23 Legal Information Management 16. 

  122. See Ashwin Telang, ‘The Promise and Peril of AI Legal Services to Equalize Justice’ (Harvard Journal of Law & Technology, 14 March 2023) https://jolt.law.harvard.edu/digest/the-promise-and-peril-of-ai-legal-services-to-equalize-justice accessed 12 November 2023. 

  123. For example, AI-powered contract review and analytics systems allow lawyers to carry out contract reviews at scale instead of reviewing a sample. Waisberg and Hudek (n 9) 134, 143. 

  124. The example of the New York lawyers who relied on ChatGPT is as instructive as it is notorious. Sara Merken, ‘New York Lawyers Sanctioned for Using Fake ChatGPT Cases in Legal Brief’ Reuters (26 June 2023) https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/ accessed 7 November 2023. 

  125. See Katherine Medianik, ‘Artificially Intelligent Lawyers: Updating the Model Rules of Professional Conduct in Accordance with the New Technological Era’ 39 Cardozo Law Review 1529 (suggesting that in the early days of adoption of e-discovery tools lawyers trusted these systems blindly).

  126. Mart (n 117). 

  127. Medvedeva and others (n 53); Medvedeva (n 49); Medvedeva, Wieling and Vols (n 53); Medvedeva and McBride (n 49). See also Pasquale and Cashwell (n 86); Mireille Hildebrandt, ‘Algorithmic Regulation and the Rule of Law’ (2018) 376 Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 20170355; Mireille Hildebrandt, ‘Data-Driven Prediction of Judgment. Law’s New Mode of Existence?’ (Social Science Research Network 2019) SSRN Scholarly Paper 3548504 https://papers.ssrn.com/abstract=3548504 accessed 13 July 2022 (critiquing the experimental set-up of many of such systems). 

  128. Noting the implications of automation bias, Gentile argues that the legal profession will have to pass between ‘Scilla and Charybdis: the desire to “keep the law human” on the one hand, and blind faith in the “superior” powers of poorly understood and developing technologies (which will inevitably be flawed) on the other.’ Giulia Gentile, ‘LawGPT? How AI Is Reshaping the Legal Profession’ (Impact of Social Sciences, 8 June 2023) https://blogs.lse.ac.uk/impactofsocialsciences/2023/06/08/lawgpt-how-ai-is-reshaping-the-legal-profession/ accessed 7 November 2023. For a discussion of automation bias and the mechanisms by which it operates see Kate Goddard, Abdul Roudsari and Jeremy C Wyatt, ‘Automation Bias: A Systematic Review of Frequency, Effect Mediators, and Mitigators’ (2012) 19 Journal of the American Medical Informatics Association: JAMIA 121.

  129. See, in relation to Casetext’s CoCounsel, McBride and Medvedeva (n 24). 

  130. Mireille Hildebrandt, ‘Grounding Computational “Law” in Legal Education and Professional Legal Training’ in Bartosz Brożek, Olia Kanevskaia and Przemysław Pałka (eds), Research Handbook on Law and Technology (Edward Elgar Publishing 2023) https://www.elgaronline.com/view/book/9781803921327/chapter7.xml accessed 28 December 2023. 

  131. See for example the guidance issued by The Law Society of England and Wales. The Law Society, ‘Lawtech and Ethics Principles Report’ https://www.lawsociety.org.uk/topics/research/lawtech-and-ethics-principles-report-2021 accessed 28 December 2023. 

  132. According to the American Bar Association, at least seven US state bar associations have set up AI Task Forces. American Bar Association, ‘State AI Task Force Information’ https://www.americanbar.org/groups/centers_commissions/center-for-innovation/state-ai-task-force-information/ accessed 28 December 2023.

  133. Hildebrandt, ‘Grounding Computational “Law” in Legal Education and Professional Legal Training’ (n 130). 

  134. Passi and Vorvoreanu (n 120). 

  135. See the State Bar of California Standing Committee on Professional Responsibility and Conduct, ‘Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law’ https://www.calbar.ca.gov/Portals/0/documents/ethics/Generative-AI-Practical-Guidance.pdf accessed 28 December 2023 (noting that ‘Overreliance on AI tools is inconsistent with the active practice of law and application of trained judgment by the lawyer’).

  136. As to the risks presented by artificial intelligence tools to lawyers’ independence and competence see Peter Homoki, ‘Guide on the Use of Artificial Intelligence-Based Tools by Lawyers and Law Firms in the EU’ https://www.ccbe.eu/fileadmin/speciality_distribution/public/documents/IT_LAW/ITL_Reports_studies/EN_ITL_20220331_Guide-AI4L.pdf; Medianik (n 125) (calling for changes in the Model Rules of Professional Conduct to address the challenges of the use of AI tools by the profession).

  137. As to the crucial role of an independent judiciary and legal profession for the rule of law see Margaret Satterthwaite, ‘A/HRC/53/31: Reimagining Justice: Confronting Contemporary Challenges to the Independence of Judges and Lawyers - Report of the Special Rapporteur on the Independence of Judges and Lawyers’ (OHCHR) https://www.ohchr.org/en/documents/thematic-reports/ahrc5331-reimagining-justice-confronting-contemporary-challenges accessed 7 November 2023 (noting that ‘algorithmic decision-making brings promise and peril for the rule of law and for judicial independence’).

  138. Note however that the claimed functionality of the Shanghai intelligent assistive case-handling system for criminal cases includes the ability to convert speech to text, provide summaries of evidence, associate evidence with claims, identify missing evidence, and produce a draft judgment. Yadong Cui, Artificial Intelligence and Judicial Modernization (Cao Yan and Liu Yan trs, Springer 2020) 158–163. 

  139. European Commission for the Efficiency of Justice (CEPEJ) (n 6) 42. 

  140. Cui (n 138) 158, 163; Nyu Wang and Michael Yuan Tian, ‘“Intelligent Justice”: AI Implementations in China’s Legal Systems’ in Ariane Hanemaayer (ed), Artificial Intelligence and Its Discontents: Critiques from the Social Sciences and Humanities (Springer International Publishing 2022) https://doi.org/10.1007/978-3-030-88615-8_10 accessed 7 November 2023. Papagianneas notes the argument that ‘by trying to achieve consistency through technology, the judicial system risks surrendering its power, shifting the nexus of decision-making power to the algorithms behind the smart systems.’ Straton Papagianneas, ‘Towards Smarter and Fairer Justice? A Review of the Chinese Scholarship on Building Smart Courts and Automating Justice’ (2022) 51 Journal of Current Chinese Affairs 327, 336.

  141. Sir Geoffrey Vos, ‘Speech by the Master of the Rolls to the Law Society of Scotland’ (Courts and Tribunals Judiciary, 14 June 2023) https://www.judiciary.uk/speech-by-the-master-of-the-rolls-to-the-law-society-of-scotland/ accessed 12 November 2023. 

  142. Cui (n 138) xix. 

  143. Diver and others (n 56) 32 (citation omitted). 

  144. ibid 29 (original emphasis). 

  145. ibid 1–5. 

  146. Laurence Diver, ‘Computational Legalism and the Affordance of Delay in Law’ (2021) 1 Journal of Cross-disciplinary Research in Computational Law. 

  147. Zenon Bankowski and Burkhard Schafer, ‘Double-Click Justice: Legalism in the Computer Age’ (2007) 1 Legisprudence 31, 43. 

  148. See Lola v Skadden, Arps, Slate, Meagher & Flom, No 14-3845 (2d Cir 2015) (reviewing authorities to the effect that the practice of law presupposes some exercise of judgement). See also Augustus Calabresi, ‘Machine Lawyering and Artificial Attorneys: Conflicts in Legal Ethics with Complex Computer Algorithms’ 34 The Georgetown Journal of Legal Ethics.

  149. Bankowski and Schafer (n 147). 

  150. Diver (n 146). 

  151. John Zeleznikow, ‘The Benefits and Dangers of Using Machine Learning to Support Making Legal Predictions’ (2023) 13 WIREs Data Mining and Knowledge Discovery e1505, 7, 8. 

  152. Suksi (n 71) 72 (emphasis added). 

  153. In the words of de Vries and van Dijk, law becomes ‘pieces of historical evidence’. de Vries and van Dijk (n 64) 119.

  154. Diver and others (n 56) 2. 

  155. As to the relevance of all these factors for text-driven law, see Diver and others (n 56). It may be objected that Retrieval Augmented Generation (RAG) allows systems constructed on large language models to generate texts supported, for example, with links to legal texts. It is beyond the scope of this chapter to address this point in full. However, while there are advantages to RAG, its deployment imports known difficulties in information retrieval into the system. Parishad BehnamGhader, Santiago Miret and Siva Reddy, ‘Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model’ (arXiv, 6 May 2023) http://arxiv.org/abs/2212.09146 accessed 11 June 2023.

  156. Volokh (n 85) 1137, 1138. For contrary views see Kerr and Mathen (n 57); Binns (n 85); Tasioulas (n 78). 

  157. Volokh (n 85) 1138. 

  158. Burkhard Schafer and Colin Aitken, ‘Inductive, Abductive and Probabilistic Reasoning’ in Giorgio Bongiovanni and others (eds), Handbook of Legal Reasoning and Argumentation (Springer Netherlands 2018) 310. See also Geneviève Vanderstichele, ‘The Normative Value of Legal Analytics. Is There a Case for Statistical Precedent?’ (30 August 2019) 48, 49 https://papers.ssrn.com/abstract=3474878 accessed 28 December 2023. For a judicial expression of this principle see NJCM et al v The Dutch State (2020) The Hague District Court ECLI:NL:RBDHA:2020:1878 (SyRI). Concerning the obligations on courts to give reasons see Ashley Deeks, ‘The Judicial Demand for Explainable Artificial Intelligence’ (2019) 119 Columbia Law Review 1829.

  159. Lon L Fuller, The Morality of Law (revised edn, Yale University Press 1978) 43, 49–51.

  160. In Gori’s words, ‘In asking an explanation of machine decisions, the meaning of the “why” and “because” which introduce, respectively, the question and the answer potentially belong to different linguistic games, each of which has its own vocabulary and forms of explanation, and make reference to different kinds of rules.’ Gianmarco Gori, ‘Law, Rules, Machines: “Artificial Legal Intelligence” and the “Artificial Reason and Judgment of the Law”’ (PhD Thesis, 2021) 162. 

  161. Elena Esposito, ‘Transparency versus Explanation: The Role of Ambiguity in Legal AI’ (2022) 1 Journal of Cross-disciplinary Research in Computational Law https://journalcrcl.org/crcl/article/view/10 accessed 10 November 2023. 

  162. See for example, Masha Medvedeva and others, ‘JURI SAYS: An Automatic Judgement Prediction System for the European Court of Human Rights’, Legal Knowledge and Information Systems (IOS Press 2020) https://ebooks.iospress.nl/doi/10.3233/FAIA200883 accessed 10 November 2023. 

  163. Henry Prakken and Rosa Ratsma, ‘A Top-Level Model of Case-Based Argumentation for Explanation: Formalisation and Experiments’ (2022) 13 Argument & Computation 159. Branting notes that ‘it seems very probable that useful decision support systems for explainable legal prediction must have a hybrid, two-stage design that permits explanation both in terms of legal predicates and in terms of factual features to span the gap between legal predicates and the language of ordinary discourse.’ Karl Branting, ‘Explanation in Hybrid, Two-Stage Models of Legal Prediction’, XAILA@JURIX (2020) 8 https://api.semanticscholar.org/CorpusID:235827410

  164. Tasioulas suggests that ‘It is quite [sic] different thing, a fool’s gold version perhaps, to be given an ex post rationalisation of the decision that is causally inert, when the real cause of the decision is quite different.’ Tasioulas (n 78) 15. 

  165. Prakken and Ratsma (n 163) 187. 

  166. Jiang and Yang (n 50). The authors effectively concede that the system, in its current form, is incapable of ‘interpret[ing] the law and reconstruct[ing] the facts.’ 

  167. Zeleznikow suggests that ‘Perhaps, the most important challenge for using machine learning to support legal decision-making relates to explaining the derived decisions.’ Zeleznikow (n 151). Paradoxically, however, the ability for a system to output an explanation may increase the risk of overreliance. Cabitza (n 70). 

  168. As Vanderstichele observes, ‘context is essential to precedent’. Vanderstichele (n 158) 49. 

  169. Toivanen v Finland App no 46131/19 (ECtHR, 9 November 2023).

  170. Yeung (n 116) 24, 43 (pointing to ‘the legitimate interests of individuals in being able to identify a competent human person to whom they can appeal in contesting the decision’ and identifying ‘moral and legal rights to due process and participation, to be provided with an explanation of the reasons for adverse decisions, and to respect for one’s dignity and responsibility as a moral agent with capacity for self-reflection and self-control’). Article 6 of the European Convention on Human Rights (the right to a fair trial) implies a duty on the court to give reasons. European Union, European Court of Human Rights and Council of Europe (eds), Handbook on European Law Relating to Access to Justice (Publications Office of the European Union 2016) 44; Desara Dushi, ‘Human Rights in the Era of Automated Decision Making and Predictive Technologies’ (Global Campus of Human Rights - GCHR, 11 April 2022) https://gchumanrights.org/gc-preparedness/preparedness-science-technology/article-detail/human-rights-in-the-era-of-automated-decision-making-and-predictive-technologies.html accessed 13 November 2023.

  171. Dushi points out that this in turn impacts on the right to an effective remedy provided by Article 13 of the European Convention on Human Rights. Dushi (n 170). 

  172. For a discussion of the Rule of Law see Gianmarco Gori, ‘Rule of Law and Positive Law’ in Diver and others (n 56). 

  173. According to Fuller, these eight principles together express the ‘internal morality of law’. Fuller (n 159) 38, 39, 41–90. For a discussion of the implications of Fuller’s principles see Brownsword and Yeung (n 76) 118–128; Kristen Rundle, ‘The Morality of the Rule of Law: Lon L. Fuller’ in Jens Meierhenrich and Martin Loughlin (eds), The Cambridge Companion to the Rule of Law (Cambridge University Press 2021) 187 (arguing that Fuller’s eight principles evince the ‘distinctly moralized conception of reciprocity between lawgiver and legal subject that Fuller saw to be constitutive to the practice of the rule of law’).

  174. Ninareh Mehrabi and others, ‘A Survey on Bias and Fairness in Machine Learning’ (2021) 54 ACM Computing Surveys https://doi.org/10.1145/3457607

  175. ibid. 

  176. Gary Marcus, ‘The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence’ (arXiv, 19 February 2020) http://arxiv.org/abs/2002.06177 accessed 11 November 2023; Jiashuo Liu and others, ‘Towards Out-Of-Distribution Generalization: A Survey’ (arXiv, 27 July 2023) http://arxiv.org/abs/2108.13624 accessed 11 November 2023. 

  177. Parikshit Bansal and Amit Sharma, ‘Controlling Learned Effects to Reduce Spurious Correlations in Text Classifiers’ in Anna Rogers, Jordan Boyd-Graber and Naoaki Okazaki (eds), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (Association for Computational Linguistics 2023) https://aclanthology.org/2023.acl-long.127 accessed 11 November 2023. 

  178. Daniel Vela and others, ‘Temporal Quality Degradation in AI Models’ (2022) 12 Scientific Reports 11654. 

  179. Grace A Lewis and others, ‘Augur: A Step towards Realistic Drift Detection in Production ML Systems’, Proceedings of the 1st Workshop on Software Engineering for Responsible AI (ACM 2022) https://dl.acm.org/doi/10.1145/3526073.3527590 accessed 11 November 2023. As to the need to consider environmental dynamics associated with the deployment of machine learning technologies, see Sina Fazelpour, Zachary C Lipton and David Danks, ‘Algorithmic Fairness and the Situated Dynamics of Justice’ (2022) 52 Canadian Journal of Philosophy 44. 

  180. Zhixue Zhao and others, ‘On the Impact of Temporal Concept Drift on Model Explanations’ in Yoav Goldberg, Zornitsa Kozareva and Yue Zhang (eds), Findings of the Association for Computational Linguistics: EMNLP 2022 (Association for Computational Linguistics 2022) https://aclanthology.org/2022.findings-emnlp.298 accessed 11 November 2023. 

  181. Mireille Hildebrandt, ‘Code-driven Law: Freezing the Future and Scaling the Past’ in Simon Deakin and Christopher Markou (eds), Is Law Computable?: Critical Perspectives on Law and Artificial Intelligence (Hart Publishing 2020). 

  182. Mark Coeckelbergh, AI Ethics (The MIT Press 2020) 113. 

  183. Irene Solaiman and others, ‘Evaluating the Social Impact of Generative AI Systems in Systems and Society’ (arXiv, 12 June 2023) http://arxiv.org/abs/2306.05949 accessed 21 October 2023. 

  184. These are issues flagged by Kerr and Mathen and by Tasioulas respectively. Kerr and Mathen (n 57); Tasioulas (n 78). Note that Tasioulas suggests that a commitment to law on the part of a judge has intrinsic value which is ‘relatively autonomous’ from concerns about the ‘correctness’ of the decision. He maintains that litigants value the fact that human judges are answerable for their decisions. We agree that litigants value being heard (as Tasioulas puts it, having their ‘day in court’) but are less convinced that they attach value to the fact that the judge is ‘answerable’. Indeed, except to the extent that judgments are published and may be subject to appeal, judges are largely not ‘answerable’ for their judgments. Tasioulas also points to the fact that such systems, being incapable of making a commitment, ‘cannot stand back from an array of options, such as whether to commit morally to the legal system, or to the requirement of congruence, on the basis of deliberation about the pros and cons of doing these things.’ ibid 16. These limitations appear to be deeply connected to concerns about the ‘correctness’ of the output.

  185. Volokh (n 85) 1192. 

  186. Kerr and Mathen (n 57) 25 (citing Wittgenstein in relation to the notion of a ‘form of life’, original emphasis). 

  187. Patterson, drawing on Wittgenstein, observes that ‘… the meaning of a practice is an internal phenomenon. It is within the practice, and by virtue of the acts of the participants in the practice, that the practice has meaning … It is, therefore, against the specifics of a practice that claims for actions consistent with the practice are validated. Our perception of the objectivity of any particular decision is a function of the degree to which the act in question is in conformity with the demands of the practice as understood by the participants.’ Dennis M Patterson, ‘Law’s Pragmatism: Law as Practice & Narrative’ (1990) 76 Virginia Law Review 937, 966.

  188. Tasioulas notes ‘the potential attrition and loss of human capacities in the domain of legal adjudication that would result from the ever-increasing deployment of AI tools.’ Tasioulas (n 78) 8. 

  189. ibid (alerting to the ‘knock-on effect of diminishing our capacity to subject AI adjudicatory tools to effective critical scrutiny’).

  190. As Gutwirth maintains, ‘a thing becomes legal when it is processed or thought from a position that anticipates how a judge could or should do it’. Serge Gutwirth, ‘Providing the Missing Link: Law after Latour’s Passage’ in Kyle McGee (ed), Latour and the Passage of Law (Edinburgh University Press 2015) 130. 

  191. Van den Hoven positions such mutual understanding as an aspect of ‘hermeneutic justice’. Emilie van den Hoven, ‘Hermeneutical Injustice and the Computational Turn in Law’ (2021) 1 Journal of Cross-disciplinary Research in Computational Law https://journalcrcl.org/crcl/article/view/6 accessed 4 December 2023. Note that this requirement for mutual understanding is not exhausted by a mutual understanding of the law such as may be implied by Tasioulas’ concern for ‘reciprocity’ between citizens and officials of the law. Tasioulas (n 78) 17, 18. As van den Hoven explains, to ‘count as a human being’ before the law, a citizen must be able to give ‘an account of oneself’. van den Hoven 8. 

  192. See, for example, van den Hoven’s discussion of the emergence of the notions of ‘sexual harassment’ and ‘coercive control’ and the relevance of their recognition for hermeneutical justice. van den Hoven (n 191). See also, somewhat relatedly, Richard M Re and Alicia Solow-Niederman, ‘Developing Artificially Intelligent Justice’ (2019) 22 Stan Tech L Rev 242 (as to the value, in the case of human judges, of ‘natural cultural updating’ as opposed to deliberate software updating). 

  193. Using van den Hoven’s terminology, delegation of adjudication to data-driven systems may present a ‘systemic hermeneutical challenge to contestation’ of the outputs of such systems. van den Hoven (n 191) 11 (original emphasis). 

  194. Latour (n 72) 359. 
