Chapter 1: The mode of existence of text-driven positive law
- Principles of the rule of law as affordances of written legal speech acts
- The mode of existence of a code-driven positivist ‘law’?
- COHUBICOL foresees the need for computational counterclaims
In this chapter I outline the text-driven nature of what lawyers call ‘positive law’ and how this aligns with core elements of the rule of law and with the kind of legal protection it offers. This is followed by a brief mapping of the data- and code-driven ‘law’ that is the focus of the COHUBICOL project, summed up under the heading of ‘computational law’.
I highlight why the kind of legal protection that is inherent in the rule of law cannot be taken for granted in the era of computational law (without implying it could be taken for granted in the era of text-driven law). Legal protection might, however, be articulated in data- or code-driven architectures, to the extent that we learn to anchor ‘legal protection’ in law’s new mode of existence. I thus end this chapter by making the case for legal protection ‘by design’.
The study of law can be seen as the learning of a new language.
In 1992 René Foqué held his Inaugural Lecture under the title ‘The Space of the Law’,2 a scholarly treatise on the nature and importance of positive law and the foundational architecture of constitutional democracy. Foqué emphasised that law is a language and that the study of law can be seen as the learning of a new language, taking note of the interplay between the given language that lawyers and legal scholars tap into and their use of that language. Language and language use define each other, while leaving room for productive and confusing misunderstandings. As a result, scientific research into law is first and foremost an argumentative practice: legal argumentation is not about (mono)logical reasoning, but about arguing points of view in the face of their (potential) contestation. The contestability of law is created by the ambiguity or ‘open texture’ inherent in legal concepts.3 Together with ‘t Hart, Foqué developed the idea of ‘contrafactual conceptualisation’,4 asserting that conceptualisation in natural language is inherently unstable and, precisely because of this, has a subversive kernel that enables us to push back against whatever interpretation affects us. In this chapter I will clarify that the artificial nature of spoken and written speech makes our speech acts inherently contestable.5
The double play of contestability and predictability is thus at the heart of the rule of law.
The uncertainty associated with the use of natural language calls for the stabilisation of meaning (although not for its petrification). One way our society enacts this stabilisation is by issuing and enforcing positive (‘posited’) law. This provides legal certainty because the enactment provides closure after an adversarial debate, either when the legislator enacts legislation or when a court settles a dispute. With such dispute resolution, the judge in point of fact decides the meaning of the law for the case at hand and thus also for subsequent cases. After all, the administration of justice cannot be arbitrary. In his Inaugural Lecture Foqué referred to the Dutch legal historian Schönfeld,6 who came to the conclusion that Montesquieu’s famous qualification of the judge as ‘bouche de la loi’ (iudex lex loqui) must be understood against the background of an even older maxim that designated the king rather than the judge as ‘bouche de la loi’ (rex lex loqui). Montesquieu’s aim was to prevent both the legislature and public administration from playing the role of judge in their own case: in the final instance neither the legislature nor public administration decide on the legal effect conferred by the law. This is in the hands of an independent third party,7 that is, nevertheless, bound by prior case law and relevant legislation. In this way, the legislature and the administration are placed under the rule of law and democracy is saved from the tyranny of the majority.
The realm of text-based law is created by a complex interplay of speech acts8 that create a web of legal powers that in turn instantiates specific institutional checks and balances that sustain a system of countervailing powers. This safeguards the contestability of legal decision making, while simultaneously providing for closure (which must, however, be justified in the light of the arguments put forward in relation to the positive law that is in force).9 The double play of contestability and predictability is thus at the heart of the rule of law.
Ambiguity is not a ‘bug’ but a ‘feature’; it offers the possibility of constantly tuning in to changing circumstances and insights.
According to Waldron, this is the meaning of legal certainty,10 which should not be reduced to internal consistency but concerns the argumentative nature of such consistency. At the same time, legal certainty concerns what Dworkin has called the ‘integrity’ of the law,11 which entails more and maybe even less than logical consistency. To count as ‘legal’ rather than ‘logical’ consistency, the integrity of the law must be grounded in its moral foundations, or what Dworkin addressed as the ‘implied philosophy of the law’ (think of the interplay of freedom, equality, predictability, justice and effective protection thereof). The internal consistency of the law demands continuous reconstitution and new arguments due to the changing circumstances in which the law operates, while simultaneously the moral principles that ground the law may require reinterpretation in the light of those new circumstances. Fortunately, positive law — precisely because of the multi-interpretability of natural language — is fundamentally adaptive. And not in the sense of arbitrarily ‘bending any way the wind is blowing’, but in the sense of an iterative refinement of legal norms with a view to a fair and reliable administration of justice. Ambiguity, therefore, is not a ‘bug’ but a ‘feature’; it offers the possibility of constantly tuning in to changing circumstances and insights.
There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says ‘Morning, boys. How’s the water?’ And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes ‘What the hell is water?’12
It is not obvious to lawyers that modern positive law is anchored in the ‘technology of text’.13 The idea of text being a technology, let alone the idea that the nature of law is co-determined by that technology, will not easily occur to those who depend on what ‘text’ affords; text is to lawyers what water is to fish. Though, as lawyers, we are ‘naturally’ familiar with the idea of ‘law as text’, the extent to which positive law depends on the technology of text makes it hard to even acknowledge its performative effects. Text-based legality is our default, the lens through which we navigate the world it creates. The best way to grasp this is to remind ourselves that even unwritten law (principles, custom) depends on written law.
In a society without the script there is no unwritten law, but rather a normativity grounded in an orality that cannot be reduced to ‘the unwritten’. The latter necessarily assumes an infrastructure of reading and writing informed by and informing a new type of speech, whose impact is now reconfigured in relation to text. For instance, as indicated above, the ambiguity of natural language is not without consequences, whether part of an oral or a text-based normativity, but those consequences are reinforced by the ‘sedimentation’ of language in written or printed text.14 Text, as externalised speech, takes on a life of its own, emancipating itself, as it were, from the tutelage of its author. This becomes possible and even imperative where the reader ‘internalises’ the externalised speech acts of an author who is absent.15 The author, then, can no longer correct the way in which the reader ‘understands’ the text, which would be possible in face-to-face communication. This gives the text a certain autonomy in relation to the author, but also in relation to subsequent readers, because a text cannot be interpreted arbitrarily; this would deprive it of any meaning.
Written law shares a number of characteristics with the technology of the text. Since the printing press began to play an increasingly important role and since law became increasingly dependent on the printed word, the role of interpretation and the need for complex argumentation became more prominent in legal practice. The legislature has limited powers to determine how its legislation will be interpreted by those subject to its binding force, and at the end of the day it is the court that has the authority to decide the meaning of the law when disagreements arise. In the interplay between legislature (author), public administration and citizens (readers) and courts (vested with the authority to decide on interpretation), the law thus acquires a certain autonomy.16 The specific nature of the technology of the text thus leads to a shift from ‘rule by law’, i.e. the law as an instrument by which governments enforce their own interpretation of the norms they issue, to ‘rule of law’, i.e. the law as a system of checks and balances that institutes countervailing powers, such that public administration and even the legislature itself are brought under the rule of law. In that sense the core principles of the rule of law (such as contestability and accountability) are not merely historical artifacts but also technological artifacts, directly linked to the flexibility of natural language and the responsive autonomy of text-driven normativity.
Written law shares a number of characteristics with the technology of text.
This raises the question which principles qualify as informing the rule of law. If we focus on the most foundational elements of the rule of law we arrive at notions that are core to constitutional law, such as legality and purpose limitation, fair play of public administration, independent courts and effective protection of fundamental rights. These notions are neither mental representations of given moral precepts nor contingent on whatever a given order qualifies as law. They depend on the institutionalisation of countervailing powers that scaffold practical and effective legal protection. Legality is linked to legal certainty, which has been defined above as a particular combination of contestability and predictability, directly linked to the need to interpret and reinterpret the same binding legal text (whether legislation or case law) in the face of ever-changing circumstances. The fixation inherent in a text calls for flexibility in its use, without lapsing into arbitrariness (as this would result in a disruptive ‘anomie’). To this end, the authoritative determination of the right to contest is entrusted to an independent court that is guided by the whole ‘architecture’ of the law on the one hand and the ever-changing world it constitutes and regulates on the other hand. Thus, the task of a court is to sustain the tension between general rules and the singularity of acts and events. The task is not to resolve that tension; neither rigid application (legalism), nor ‘Einzelfallgerechtigkeit’ (arbitrary rule) will do.17
We can say that the principles of the rule of law are tied to the text-driven nature of positive law, without lapsing into technological determinism.
Effective protection of fundamental rights asserts the role of the state in effecting equal respect and concern for natural persons under its jurisdiction.18 Such respect implies consideration of the relationship between, for example, freedom and non-discrimination, privacy and freedom of information, between the presumption of innocence and security and, more generally, between subjective rights and the interest in having a state that is capable of protecting those rights – if needed against the state itself. The latter requires a well-designed institutionalisation of countervailing powers, an internal distribution of sovereignty, such that protection does not depend on the state’s willingness to protect, but on independent counter-play. This institutionalisation ‘consists of’ a set of speech acts which, for example, determine what office has which legal powers, what counts as a legally binding decision and under what conditions which legal consequences come about.
In short, we can say that the principles of the rule of law are tied to the text-driven nature of positive law, without lapsing into technological determinism. This is not a question of logic or causal coercion, but of what is made possible or impossible by the large-scale ‘use’ of text.
The world that law constitutes and regulates is an ever-moving target. This is not a new insight. However, if the information and communication infrastructure (ICI) of that world is transformed and if the ICI of law itself changes from text to code and computation, the way law exists cannot remain identical with its previous incarnations.
In the context of computational ‘law’, information and communication technology (ICT) is no longer a matter of word processors and electronic availability of court judgments, legislation and treaties, but a toolkit with which the ‘output’ of text-driven law can be searched intelligently. This involves, for example, locating relevant doctrines, mining lines of argument within a legal domain and predicting the outcome in pending cases.19 Until recently, legal method has been a matter of ‘close reading’, the diligent work of individual lawyers selecting relevant text corpora (perhaps after consulting one or more search engines) and then scrutinising them in close detail. Either in order to derive arguments for a particular point of view (reasoning by analogy or a contrario), or in order to abstract and reconstruct relevant argumentation patterns to be used when interpreting the law (doctrine).
In literary studies, the emergence of computational techniques such as machine learning has led to a new way of ‘reading’ large text corpora. Franco Moretti speaks of ‘distant reading’,20 where software mines huge text corpora to detect mathematical relationships in the data relevant for identifying genre, developments in the history of the novel, or previously invisible connections between authors, gender, genre, language, cultural background and so on. This type of technology is now also used within the law — often under the heading of ‘legal tech’.21
The techniques in question are part of the technology of ‘machine learning’, more specifically that of ‘natural language processing’ (NLP).22 Let us briefly summarise these techniques here under the heading of data-driven systems that deploy machine learning (ML). ML is about algorithms that ‘learn’ on the basis of so-called ‘training data’, by detecting mathematical and statistical correlations within the data.
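To make the notion of ‘learning correlations’ concrete, consider the toy sketch below — a deliberately naive bag-of-words model, with invented case descriptions and labels — which scores an unseen case description by how often its words co-occurred with each outcome in the training data. It detects lexical correlations, not legal reasons, which is precisely the point made above.

```python
from collections import Counter

# Hypothetical training data: (case description, outcome) pairs.
# Both the texts and the labels are invented for illustration.
training_data = [
    ("tenant withheld rent after landlord ignored repairs", "claim upheld"),
    ("tenant withheld rent without prior notice", "claim dismissed"),
    ("landlord ignored repeated repair requests", "claim upheld"),
    ("claimant gave no notice before suspending payment", "claim dismissed"),
]

# 'Training': count how often each word co-occurs with each outcome.
word_counts = {}
for text, outcome in training_data:
    word_counts.setdefault(outcome, Counter()).update(text.split())

def predict(text):
    """Score a new text by summed word/outcome co-occurrence counts.

    The model has no notion of legal merit: it only exploits the
    statistical association between surface words and past labels.
    """
    scores = {
        outcome: sum(counts[w] for w in text.split())
        for outcome, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(predict("landlord ignored urgent repairs"))    # prints: claim upheld
print(predict("no notice was given by the tenant"))  # prints: claim dismissed
```

Real NLP systems replace the raw word counts with high-dimensional vector representations and learned weights, but the underlying logic is the same: the ‘prediction’ is a function of statistical co-occurrence, not of the merits of the case.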
The use of such techniques stems from the high expectations that some people have of artificial intelligence as a solution to all kinds of problems, often based on a somewhat naive conception of what computer science can and cannot do. Precisely from the point of view of computer science itself one can question the reliability, accessibility and suitability of this type of system as a means to mine the law as if it were an oil field to be monetised.23 An example of such a system is the prediction of court judgments based on mathematical correlations within datasets consisting of relevant case law.24
In the upcoming working paper on data-driven normativity we will investigate the assumptions underlying these types of correlations and the reliability of the statistical relationships involved. Here I restrict myself to a succinct inventory of relevant issues, demonstrating that we are indeed confronted with novel understandings of what ‘makes’ legal knowledge legal.
The high expectations that some have of artificial intelligence as a solution to all kinds of problems are often based on a somewhat naive conception of what computer science can and cannot do.
For instance, we need to inquire into: the quality of the dataset (does it concern only the published judgments or also underlying evidence and memoranda, or does it also concern relevant cases that did not reach the court); the quality of the processing of the data (highlighting metadata such as the court hearing the case, the jurisdiction, the background of the plaintiff and defendant, the time span between bringing the case and ruling); the choice between ‘supervised’ and ‘unsupervised’ machine learning, and in the case of supervised learning, the labelling of the data (the choice of certain variables, the qualification of individual data in terms of the variables chosen). Also important are: the development of a hypothesis space (the selection of mathematical functions capable of making the right connections within the data); the determination of ‘performance metrics’ (such as ‘accuracy’, ‘precision’, ‘sensitivity’); and finally, the choice of mathematical optimisation techniques (such as loss and cost functions).
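The weight of the choice of performance metrics can be shown with a small, self-contained computation (all confusion-matrix numbers are invented): the very same classifier output yields a flattering or an unflattering verdict depending on whether one reports accuracy, precision or sensitivity.

```python
# Hypothetical evaluation of a judgment-prediction model on 100 cases,
# where 'positive' means the model predicted 'claim upheld'.
# All numbers are invented for illustration.
tp = 15  # upheld cases predicted as upheld (true positives)
fp = 5   # dismissed cases predicted as upheld (false positives)
fn = 15  # upheld cases predicted as dismissed (false negatives)
tn = 65  # dismissed cases predicted as dismissed (true negatives)

accuracy    = (tp + tn) / (tp + fp + fn + tn)  # share of all predictions that are correct
precision   = tp / (tp + fp)                   # how often an 'upheld' prediction is right
sensitivity = tp / (tp + fn)                   # share of actually upheld cases detected

print(f"accuracy={accuracy:.2f} precision={precision:.2f} sensitivity={sensitivity:.2f}")
# prints: accuracy=0.80 precision=0.75 sensitivity=0.50
```

A model that misses half of all upheld claims can still advertise 80% accuracy, simply because dismissals dominate the dataset; which metric is reported is thus itself a normative design decision, not a neutral measurement.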
This excursion into the methodology of data-driven techniques confronts both lawyers and citizens with their inability to grasp how ‘legal tech’ derives lines of argument, predictions or advice and what such a derivation actually means. Can we assume that the mappings of argument, precedent, legislation and doctrine are ‘true’, ‘correct’ or ‘probably true or correct’? On what would the answer to these questions depend and who could actually provide the answers: lawyers or computer scientists, both or neither?
Will lawyers have to learn to apply these kinds of techniques or can they quietly outsource their application to Big Tech, Big Law or to startups that identify a gap in the market of legal services?25 Do lawyers need to explain to computer scientists how law actually operates and why it is important that legal concepts are not disambiguated? Does ‘legal tech’ increase the space to adapt or even personalise the law because precision-justice can be done based on myriad circumstances (introduced as variables in a multidimensional feature space)?26 Or does ‘legal tech’ reduce the space to adapt the law, because the choice of variables implies an invisible form of interpretation decided upon by software developers, after which the system is, as it were, screwed onto that one interpretation?27 What does it mean that lawyers have no idea what choices have been made in the design of the software they use, let alone what trade-offs are involved? To what extent can lawyers assist litigants who want to dispute the outcome of ‘legal tech’?
Do lawyers need to learn a new language, namely that of machine learning, in order to be able to defend themselves against the output of their opponent’s ‘legal tech’? Or should lawyers refuse to do so and rely on their traditional skills, which to a large extent build on ‘close reading’ of legal texts? Or is it possible, without any knowledge of machine learning, to integrate ML-based systems by way of shortcuts; can we proceed to ‘distant reading’ of the legal sources and thus achieve a degree of efficiency that is desperately needed due to the ever-expanding reservoir of binding legal text?28
What does it mean that lawyers have no idea what choices have been made in the design of the software they use, let alone what trade-offs are involved?
Legal theory and philosophy of law distinguish different conceptions of law and the rule of law. This may regard the relationship between law and morality (natural law as opposed to formal positivism), or the core tenets of the rule of law (which can be understood in formal terms or in substantive terms). For our purpose the pivotal distinction is that between a positivist and a hermeneutical conception of law and the rule of law.
The first, positivism, makes a strict distinction between law and morality (the separation thesis) and thus between how the law ‘is’ (de lege lata) and how it ‘should be’ (de lege ferenda). From the perspective of a positivist, the task of a lawyer is to explain how the law ‘is’, whereas a discussion of how it should be is in the remit of the legislature and otherwise depends on the ethical insights of individual citizens. Positivism is associated with legalism and assumes that the law is either clear, and must be followed, or unclear, thus leaving room for judicial discretion.
The nature of computational normativity aligns more easily with a positivist approach.
The second, hermeneutical conception of law, acknowledges the text-based nature of positive law and the implied need for interpretation. Here, deciding the meaning of the law is always a matter of interpretation, whether done tacitly or explicitly, based on intuitive common-sense judgements or complex argumentation. The need to interpret a legal text in light of the facts of a case interacts with the need to interpret the facts of the case in the light of the relevant legal norms, thus entering a virtuous circle that requires keen attention to the text-driven normativity on the one hand and acuity as to its multi-interpretability in the light of the circumstances that apply. A hermeneutical approach embraces the polysemous nature of human language, whereas a positivist approach tends to disambiguate words, sentences and paragraphs. The first achieves closure after hearing arguments for different interpretations, the second prefers to arrive at closure even before the argument has begun.
The nature of computational normativity aligns more easily with a positivist approach. The idea of disambiguating a text as if it were a standalone device, after which it should be applicable in the same way to any new case, fits well with the need for disambiguation that is key to the formalisation that defines computational ‘law’. This means that legal positivism connects easily with ‘legal tech’. It also means that the deployment of legal technologies as part of a hermeneutical approach may be less intuitive and will require a bespoke design.
Above, I have argued that effective legal protection requires ‘consideration of the relationship between, for example, freedom and non-discrimination, privacy and freedom of information, between the presumption of innocence and security and, more generally, between subjective rights and the interest in a state that is capable of protecting those rights — if needed against the state itself. The latter requires a well-designed institutionalisation of powers and countervailing powers, an internal distribution of sovereignty, such that protection does not depend on the state’s willingness to protect, but on independent counter-play’.
The use of code-driven ‘law’ or ‘legal tech’ calls for contestation at the level of the technology, to reinstate the double play of contestability and predictability. Power and countervailing power must be anchored in the code-driven architecture to make sure that law’s new mode of existence remains true to the rule of law instead of imploding to a rule by law.
In the case of text-driven law, the counterplay is text-driven; adversarial and contradictory proceedings, objection, redress and appeal are embedded in the narrative, argumentative structure of natural language.
As with text-based law, the possibility of counterplay should not depend on the goodwill of those who develop or use the software; it is not about self-binding, but about the institutionalisation of countervailing powers.
In code-driven ‘law’ a similar type of counterplay will have to be built into the software, both at the level of the interface and at the backend of the system, where counterplay should lead to proper safeguards against emancipated citizens being nudged into well-behaved subjects. As with text-based law, the possibility of counterplay should not depend on the goodwill of those who develop or use the software; it is not about self-binding, but about the institutionalisation of countervailing powers. This will be a matter of design; the construction of checks and balances is no longer about written and spoken speech acts, but about design decisions that determine whether, when and which speech acts can be performed by whom.
In a seminal judgment of 2020 in the Netherlands, on the System Risk Indication (SyRI) that was developed for the automated detection of e.g. social security fraud and tax fraud, The Hague District Court29 quoted the advice of the Council of State on so-called ‘deep learning’ systems (consideration 6.46):30
The term “self-learning” is confusing and misleading: an algorithm does not know and understand reality. There are predictive algorithms that are now reasonably accurate in predicting the outcome of a lawsuit. However, they do not do so on the basis of the merits of the case. They cannot, therefore, justify their predictions in a legally sound manner, whereas this is required for every legal procedure in each individual case.
The reverse is also true: the human user of such a self-learning system does not understand why the system concludes that there is a connection. An administrative body that (partly) bases its actions on such a system cannot properly justify its actions and cannot properly motivate its decisions.
The Council of State hits the nail on the head. Even if we could explain how a predictive algorithm arrives at its prediction, this does not provide for legal justification.
The point, therefore, is not to know how ‘deep learning’ works, but whether the outcome is lawful and that means justifiable. Likewise, court decisions are not about whether a decision was made under the influence of a hot temper, a wrong diet, personal affinities or whatever. None of these can be used as a basis for a court decision. The law in point of fact restricts the court’s ‘decision space’. Judges — whatever their personal motivation or irritation may be — can only justify their decisions based on the applicable law.
The law restricts the court’s ‘decision space’. Judges — whatever their personal motivation or irritation may be — can only justify their decisions based on the applicable law.
This restriction of the decisional space also applies if judges were to employ ‘deep learning’ or other code-driven systems. However accurate a prediction may be from a statistical perspective, the judge must remain within the boundaries of a valid legal argumentation. And the validity of that argumentation does not depend on a statistical correlation with similar arguments in earlier judgments, but on the validity and the applicability of substantive and procedural legal norms.
What, then, is the meaning of computational counterplay? Should lawyers join forces with the developers of legal technologies to build in such counterplay? How can legal protection be incorporated and guaranteed ‘by design’?
We end this introduction with five recommendations, which require further elaboration in the other working papers:
- when preparing legislation and regulations, counterplay must be foreseen at the level of the legislature, by deciding whether and how code-driven ‘law’ can be employed; keen attention to the implications of legal tech cannot be outsourced to the level of ‘implementation’ because these implications concern the constitution of law,31
- when public administration or the judiciary develop or purchase legal technology, their purposes should be decided by the judiciary, the public prosecutor’s office or the police, and such purposes should be both mathematically and empirically testable, which will involve falsification rather than verification, i.e. attention must be paid to the extent to which and the way in which the software ‘does’ something other than what was intended,32
- deployment of code-driven ‘law’ must integrate counterplay at the computational level, implying that those subject to automated decisions are aware of this and are provided with the tools to contest them,33
- they must be able to defend themselves in a relatively simple manner against the way in which the system qualifies their actions, because such qualification may give rise to further investigation, discrimination, invisible manipulation, interference in private life, and legal consequences attributed on the basis of computational correlations rather than legal justification,34
- similarly, when it comes to ‘legal tech’, those concerned must be able to contest the legal effect created on the basis of, or by, the system. At the computational level, this requires a user-friendly environment where an ‘objection’ or ‘appeal’ button is ‘at hand’, with a layered backend system that enables smooth, understandable and effective interaction. This interaction involves human intervention, not as a ‘human in the loop’,35 but as a competent human agent. In a constitutional state it is the machine that, if that were to provide added value, belongs ‘in the loop’, not the human.
This interaction involves human intervention, not as a ‘human in the loop’, but as a competent human agent. In a constitutional state it is the machine that — if that were to provide added value — belongs ‘in the loop’, not the human.
This introduction is an adapted and extended version of M. Hildebrandt, Computationeel tegenspel: de nieuwe ruimte van het recht 211 Actioma 12–19 (2020). ↩
R. Foqué, De ruimte van het recht (1992). ↩
About ‘open texture’ H.L.A. Hart, The Concept of Law (1994). About the importance of ambiguity for agonism in democracy, see S. Kruks, Simone de Beauvoir and the Politics of Ambiguity (2012). ↩
R. Foqué & A.C. ‘t Hart, Instrumentaliteit en rechtsbescherming (1990). ↩
About the fact that man is artificial by nature, see H. Plessner & J. M. Bernstein, Levels of Organic Life and the Human: An Introduction to Philosophical Anthropology (2019); M. Hildebrandt, ‘The Artificial Intelligence of European Union Law’ (2020) 21 German Law Journal 74–79. ↩
K.M. Schönfeld, ‘Rex, Lex et Judex: Montesquieu and la bouche de la loi revisited’ (2008) 4 European Constitutional Law Review 274–301. ↩
D. Salas, Du procès pénal. Eléments pour une théorie interdisciplinaire du procès (1992). ↩
N. MacCormick, Institutions of Law: An Essay in Legal Theory (2007); H. van der Kaaij and J. Hage, ‘Rechtshandelingen als taalhandelingen’ (2012) 10 Ars Aequi 712-19. ↩
J. Waldron, ‘The rule of law and the importance of procedure’, (2011) 50 Nomos 3–31. ↩
R. Dworkin, Law’s Empire (1991). ↩
D. Foster Wallace, This Is Water: Some Thoughts, Delivered on a Significant Occasion, about Living a Compassionate Life (2009). ↩
W. Ong, Orality and Literacy: The Technologizing of the Word (1982); J. Goody, The logic of writing and the organization of society (1986). ↩
E. Eisenstein, The Printing Revolution in Early Modern Europe (2005). ↩
P. Ricoeur, Tekst en betekenis. Opstellen over de interpretatie van literatuur (1991). ↩
P. Nonet and P. Selznick, Law and Society in Transition: Toward Responsive Law (1978). ↩
This has to do with the role of discretion, see R Dworkin, ‘Judicial Discretion’ (1963) 60 The Journal of Philosophy 624–638. As to the discretion of the police M. Hildebrandt, ‘New Animism in Policing: Re-animating the Rule of Law?’, in The SAGE Handbook of Global Policing 406–428 (B. Bradford et al. eds, 2016). Cp. Wolswinkel, for example, who even advocates a ‘right to algorithmic decision-making’ within administrative law, suggesting that this would solve the problem of administrative arbitrariness. In my opinion, this confuses legality with legalism, and discretionary powers with arbitrary decisionism. See his provocative Inaugural Lecture (in Dutch): J. Wolswinkel, ‘Willekeur of algoritme?: Laveren tussen analoog en digitaal bestuursrecht’ (2020), 53-54, https://research.tilburguniversity.edu/en/publications/willekeur-of-algoritme-laveren-tussen-analoog-en-digitaal-bestuur. ↩
The idea that governments owe their ‘subjects’ equal concern and respect was put forward by Dworkin as the core tenet of both democracy and the rule of law, R. Dworkin, Law’s Empire (Fontana 1991). ↩
M.A. Livermore and D.N. Rockmore (eds), Law as Data: Computation, Text, and the Future of Legal Analysis (2019); K.D. Ashley, Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age (2017); M. Hartung, M.-M. Bues and G. Halbleib, Legal Tech: How Technology is Changing the Legal World (2018); S. Chishti et al. (eds), The LegalTech Book: The Legal Technology Handbook for Investors, Entrepreneurs and FinTech Visionaries (2020). ↩
F. Moretti, Graphs, Maps, Trees. Abstract Models for a Literary History (2005). ↩
The LegalTech Book, supra n. 19; Hartung, Bues, and Halbleib, supra n. 19. ↩
N. Aletras et al., ‘Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective’ (2016) 2 PeerJ Comput. Sci. e93; I. Chalkidis, I. Androutsopoulos and N. Aletras, ‘Neural Legal Judgment Prediction in English’, arXiv:1906.02059 [cs] (2019), http://arxiv.org/abs/1906.02059. ↩
‘Legal tech’ is therefore often discussed in the context of the ‘legal services industry’, and presented as an inevitable consequence of market forces, cf. D.M. Katz, ‘Quantitative Legal Prediction — Or How I Learned to Stop Worrying and Start Preparing for the Data-Driven Future of the Legal Services Industry’ (2012) 62 Emory L.J. 909–966. ↩
F. Pasquale, ‘A Rule of Persons, Not Machines: The Limits of Legal Automation’ (2019) 87 The George Washington Law Review 1–55; M. Hildebrandt, ‘Algorithmic regulation and the rule of law’ (2018) 376 Philos Transact A Math Phys Eng Sci; M. Hildebrandt, ‘Law as computation in the era of artificial legal intelligence: Speaking law to the power of statistics’ (2018) 68 University of Toronto Law Journal 12–35; G. Vanderstichele, ‘The Normative Value of Legal Analytics. Is There a Case for Statistical Precedent?’ (2019) https://papers.ssrn.com/abstract=3474878. ↩
R. Susskind, The End of Lawyers?: Rethinking the nature of legal services (rev. ed. 2010); D.M. Katz, ‘Quantitative Legal Prediction — Or How I Learned to Stop Worrying and Start Preparing for the Data-Driven Future of the Legal Services Industry’ (2012) 62 Emory L.J. 909–966; The LegalTech Book, supra n. 19; Hartung, Bues, and Halbleib, supra n. 19. ↩
P. Lippe, D.M. Katz and D. Jackson, ‘Legal by Design: A New Paradigm for Handling Complexity in Banking Regulation and Elsewhere in Law’ (2015) 93 Oregon Law Review 832–851. ↩
D.K. Citron, ‘Technological Due Process’ (2008) 85 Washington University Law Review 1249–1313. ↩
M. Hildebrandt, ‘The Meaning and Mining of Legal Texts’, in Understanding Digital Humanities: The Computational Turn and New Technology 145–160 (D.M. Berry ed., 2011); M. Hildebrandt, ‘Law as Information in the Era of Data‐Driven Agency’ (2016) 79 The Modern Law Review 1–30. ↩
The Hague District Court, 5 February 2020, ECLI:NL:RBDHA:2020:865. The District Court considers the use of the system unlawful due to violation of the right to privacy (Article 8 ECHR). This conclusion is based on a violation of the proportionality requirement, whereby the significant interference in privacy and the lack of transparency and contestability do not outweigh the potential benefits of achieving the legitimate aim (detecting fraud). ↩
Parliamentary Papers II 2017/18, 26643, 557, p. 13. See also M. Hildebrandt, ‘ICT en Rechtsstaat’, in Recht en computer 25–45 (S. Van der Hof, A.R. Lodder, & G.J. Zwenne eds., 2014), which includes a discussion of the SyRI system. ↩
See again the Advice of the Council of State on Information and Communication Technology (ICT), Parliamentary Papers II 2017/18, 26643, 557, 25–26. ↩
P. Polack, ‘Beyond algorithmic reformism: Forward engineering the designs of algorithmic systems’ (2020) 7 Big Data & Society 1–15. See also the way the Brazilian judiciary is handling this, in G. Gori, ‘Promoting Artificial Legal Intelligence while securing Legal Protection: the Brazilian challenge’, COHUBICOL Research Blog, 1 September 2020, available at https://www.cohubicol.com/blog/promoting-artificial-legal-intelligence-while-securing-legal-protection-the-brazilian-challenge/. ↩
Art. 22 in conjunction with Arts. 13–15 General Data Protection Regulation (GDPR) provide for a right to information about the fact that decisions have been taken on the basis of automated systems. See also EDPB (formerly Art. 29 Working Party), 3 October 2017, WP251rev.01, Guidelines on automated individual decision-making and profiling for the application of Regulation (EU) 2016/679. ↩
The qualification will often be derived from the ‘labelling’ of the training data and not be based on an individualised assessment. Although judgements based on generalisations will often be made even without the use of ‘legal tech’, the point here is that those subject to these decisions must be able to contest them. ↩
But not in the sense of Wolswinkel, supra n. 13, who in his Inaugural Lecture advocates a right to algorithmic decision-making, in other words a right to a ‘computer in the loop’. My point is that code-driven ‘law’ should be at the service of human beings and not the other way round. ↩