Digital technologies and AI models are rapidly changing not only the patterns of national political economy and the methods of governance, but legal systems and practices as well. The reviewed book is interesting in at least two respects: a) it brings together researchers from different scientific fields (law, political science and digitalisation), and b) it provides insights into how the process of "encoding regulations" can alter the essence of the legal profession.
Introduction
The authors rightly note that "AI is changing legal practice, government processes and individuals' access to those processes, encouraging us to consider how technological advances are changing the legal system".
Particularly notable is the composition of the authors' team, which distinguishes the book from traditional publications on digital-transition issues. The collective manuscript focuses on a "progressive merger" among three distant spheres: a) computational and digital technology methods; b) the transformation of legal rules in response to modern development challenges (e.g. sustainability, renewables, environmental quality); and c) the structural application of AI methods in the legal profession (supported by 177 references!).
Book citation: Tamo-Larrieux, A., Guitton, C. and Mayer, S. (2025) AI and Law: How Automation is Changing the Law. CRC Press. ISBN 9781032464527, 208 pages.
The book’s importance
The authors investigate digital automation processes and the corresponding algorithmic structures at a time when the "legal atmosphere" is changing: in legal analysis, in rulemaking and rule extraction, and in the practical application of legal rules to the needs of individuals, policymakers, civil servants and society at large.
Through many examples the authors show how the "automation process" is changing the law, a debate that already "revolves around the democratic legitimacy of the automation of legal processes". The feasibility of digital technology then adds specific features to "responsible automation" in dealing with a "closed" legal system.
Source: https://www.routledge.com/AI-and-Law-How-Automation-is-Changing-the-Law/Tamo-Larrieux-Guitton-Mayer/p/book/9781032464527
The book's chapter structure shows the authors' deep and constructive approach to the subject from different points of view:
– automation of law (with an analysis of three waves of legal automation);
– interactions between law and computer science (with sub-chapters on laws of code, encoding legal knowledge for machines, and machine learning);
– automatically processable regulation;
– challenges and controversies (including representative, balanced, transparent, controversial and responsible APRs, i.e. automatically processable regulations);
– the needed (public) debates on the digitalisation process in the social domain, answering such questions under the notion of "law for all" as: a) how to define a digital model that makes law accessible; b) to what extent the state should strive to make the law "digitally readable"; and c) how to promote digital legal design thinking.
And, finally, there is a vital chapter on "digital learning", in which the authors address the shift in legal education needed to deal with the new challenges facing the legal profession.
In addition, a special chapter includes so-called "exercises" (case studies presenting approaches to exemplary solutions).
Among the various aspects of the so-called "AI in law" issues, the authors discuss a "progressive merger" between digital methods and legal rules: the merger not only changes "the very structure and application of law itself", but AI models and "relevance ranking algorithms" are also changing legal analysis and rule-making (p. 4).
The authors remind us that an early digital formalisation concerned the British Nationality Act 1981, which was encoded as a logic program (published in 1986) in order to translate legal rules into a computer program.
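To make the idea of "translating legal rules into a computer program" concrete, here is a minimal sketch in Python. The rule below is a simplified, illustrative paraphrase of a citizenship-by-birth condition in the spirit of that 1986 logic-programming work; the field names and the exact condition are this reviewer's assumptions, not an authoritative encoding of the Act.

```python
# Illustrative sketch: encoding a (simplified) legal rule as executable logic.
# Not an authoritative encoding of the British Nationality Act 1981.
from dataclasses import dataclass

@dataclass
class Person:
    born_in_uk: bool
    born_after_commencement: bool
    parent_is_citizen: bool
    parent_is_settled: bool

def acquires_citizenship_by_birth(p: Person) -> bool:
    """Simplified rule: a person born in the UK after commencement
    is a citizen if, at birth, a parent is a citizen or is settled."""
    return (p.born_in_uk
            and p.born_after_commencement
            and (p.parent_is_citizen or p.parent_is_settled))

# A parent who is settled (though not a citizen) satisfies the rule:
print(acquires_citizenship_by_birth(Person(True, True, False, True)))  # → True
```

The appeal of this style, as the logic-programming pioneers showed, is that each statutory condition maps onto one explicit, inspectable clause of the program.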
The research perspectives
From a development perspective, as the authors note, a strong focus will be placed on identifying available data and the so-called "code base" used to transform regulations and laws into "automatically processable" forms (p. 58).
Modern digital science postulates that machine learning and AI models can be trained to issue court decisions, but only from available and updated (sic!) data on past cases, with little or no human mediation; such models can increase the efficiency of court rulings (pp. 66-67).
Still, the authors underline, the optimal use of AI models in law requires a "more general debate" to streamline the digital process and coordinate the legal dimension with political-economy issues.
The authors also identify one of the biggest problems in "legal digitalisation": whether existing legal drafts and adopted legislation can be represented in a so-called "digitally readable" format.
For example, a robot-judge would adjust (i.e. learn or train on) existing rules to fit a running case on the basis of past cases, an approach suited to precedent-based legal systems (p. 49).
Such a system could fit both into an automated (or algorithmic) process and into the "formalisation" of legal texts, followed by the final creation of digital solutions (p. 104).
The European specifics
The book will be a good reference tool for European researchers working in the actively developing field of "legal digitalisation".
It is important to mention that the EU law-making institutions adopted the Artificial Intelligence Act, the so-called AI Act (Regulation (EU) 2024/1689), in June 2024. It is not only the first binding EU-wide horizontal regulation on AI models; it also sets a common worldwide framework for the use and supply of numerous digital and AI systems.
The new act classifies AI systems under a "risk-based approach", with requirements and obligations tailored to each tier. AI systems presenting "unacceptable" risks are prohibited. Numerous "high-risk" AI systems that can have a detrimental impact on people's health and safety must obtain authorisation and meet a set of requirements and obligations before gaining access to the EU market. AI systems presenting limited risks because of their lack of transparency are subject to information and transparency requirements, while AI systems presenting only minimal risk for people face no further obligations.
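The tiered structure described above can be summarised as a simple lookup table. This is the reviewer's illustrative paraphrase of the risk tiers, not the Regulation's legal text, and the tier names are informal labels.

```python
# Illustrative summary of the AI Act's risk-based tiers (Regulation 2024/1689),
# paraphrased from the description above; not the Regulation's legal text.
AI_ACT_RISK_TIERS = {
    "unacceptable": "prohibited",
    "high": "authorisation plus requirements and obligations for EU market access",
    "limited": "information and transparency requirements",
    "minimal": "no further obligations",
}

def obligations_for(tier: str) -> str:
    """Return the (paraphrased) regulatory consequence for a risk tier."""
    return AI_ACT_RISK_TIERS[tier]

print(obligations_for("limited"))  # → information and transparency requirements
```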
The AI Act also lays down specific rules for general-purpose AI models (so-called GPAI), with more stringent requirements for GPAI models with "high-impact capabilities" that could pose a systemic risk and have a significant impact on the internal market.
Note. The European AI Act was published in the EU's Official Journal on 12 July 2024; it entered into force on 1 August 2024, and most of its provisions will apply from 2 August 2026. More on Regulation 2024/1689 (AI Act): https://eur-lex.europa.eu/eli/reg/2024/1689/oj