Sunday, June 2

Sun, 6/2: 8:00 AM - 9:45 AM, Hyatt Capitol B

Chair/Discussant: Hannah Bloch-Wehba, Drexel University Kline School of Law  

Artificial Intelligence Legal Literacy: Redefining Categories

Carolina Torres-Sarmiento, Universidad del Rosario  

Far from science fiction or a feminist tale, this paper analyzes the impact of technological logics on law. Applying technology to law involves rethinking existing legal systems and redefining traditional categories. The current relationship between technology and law, the application of liability regimes to fully autonomous systems, and the redefinition of ownership in an economy based on services and goods in the cloud are key elements that challenge the notions of “person” and “legal person”. My aim is to foster AI legal literacy by pointing out black holes, incoherence, and legal instability in existing legal frameworks, and by theorizing liability regimes to address present and future issues on the topic.

Techno-Legal Consciousness

Malcolm Langford, University of Oslo, Kjersti Lohne, University of Oslo, Kristin Sandvik, University of Oslo  

Advances in technology herald a potential transformation in the understanding and meaning of law as it circulates in social relations. Discourses abound on how technology is replacing law (‘Google Law’, ‘rule by algorithms’), displacing law (creating unregulated fields, shifting jurisdictions), and inflecting law (new legal technologies for dispute resolution and policing). Drawing on and developing the existing literature on legal, risk and technological consciousness, we propose the concept of techno-legal consciousness. In our view, such an analytical category may shed light on how citizens shape and accept rule by technology. In other words, how do the actions, imaginaries and silences of individuals constitute a new rule-based hegemony? After an initial scoping of media coverage and popular science literature, we have categorised citizen responses to techno-legalism in three ways: acceptance, appropriation and resistance. Acceptance is often utopian and hierarchical, accompanied by a belief that technology is positive for fairness, justice and efficiency (or social status). Appropriation is often anarchic and individualistic. Technology is a morally neutral game in which individuals create new legal rules, forms or spaces. Resistance is critical of techno-legalism. Sceptical of the sudden absence of traditional law (e.g., protection of our digital and physical bodies) or technology’s role in legitimating coercion (e.g., algorithmic sentencing), citizens and interest groups search for ways to dampen potential rule by technology. We conclude by sketching the contours of a research agenda that should place weight on (1) understanding this new form of power and the extent to which techno-legal consciousness is constitutive of it; (2) empirically analysing different forms of techno-legal consciousness and the routinisation of techno-legalism; and (3) studying the gendered, class and racial dimensions of techno-legal consciousness and its emancipatory potential.

The Sooner the Better: The Arguments for the Use of Extended Welfare Assessment Grids in Animal Welfare Cases

Rachel Dunn, Northumbria University

Animals are protected under national animal welfare legislation, usually against intentional acts of cruelty and negligence causing suffering. Many countries allow an animal that is suffering to be seized and, if the suffering is particularly severe, an owner to be disqualified from owning animals in the future. Evidence that an animal is suffering is necessary before organisations can take action, but by then the animal has already been subjected to distress and their dignity violated. Honess and Wolfensohn (2010) developed the Extended Welfare Assessment Grid (EWAG), a visualisation tool for mapping welfare impact, which has proved useful for assessing the welfare of animals used in laboratories. The tool has been so useful that some now use it in veterinary hospitals to help assess whether an animal is likely to deteriorate further due to disease and illness, and to show any short-term welfare impact on the animal (Williams 2018). This paper will explore the potential for the EWAG to be adapted to assess the welfare of animals whose owners are not meeting their welfare needs. Animal organisations, such as the Royal Society for the Prevention of Cruelty to Animals, could use it to support their assessments of an animal’s current welfare under a person’s ownership and of whether that welfare will deteriorate should the animal remain under that ownership. The EWAG could be a useful tool in appropriate cases in many countries, allowing organisations to intervene earlier and support claims of a risk to animal welfare.

Transparency’s Artificial Intelligence Problem

Hannah Bloch-Wehba, Drexel University Kline School of Law

AI has a serious transparency problem. Although machine learning technology is increasingly ubiquitous, its outcomes are difficult to explain and its processes impossible for lay users to understand. An increasingly widespread consensus recognizes that AI’s ability to explain itself, and our ability to understand what it is doing, are critical to social and political acceptance. Nowhere are these problems more keenly felt than in government, where critical decisions in areas as diverse as criminal law, financial regulation, and healthcare increasingly rely on automated reasoning. Existing efforts to promote accountability and transparency in machine learning are largely premised on the need to protect individuals from unfair treatment. But the government already has widespread transparency obligations, evinced in the First Amendment, the Freedom of Information Act, and other statutory and constitutional protections. These mechanisms reflect a different vision of transparency, primarily oriented toward protecting the public interest and premised on central democratic values of participatory governance. Solving AI’s transparency problem requires accommodating both the public interest and individual rights frameworks. Doing so also makes clear that AI’s transparency problem is deeply rooted in existing challenges to democratic governance; in other words, AI poses a threat to transparency itself. These challenges stem from three related areas: privatization, secrecy, and unclear chains of responsibility. More fundamentally, by accepting AI’s technocratic promise of speed, efficiency, and (supposed) objectivity, government risks jeopardizing values of democratic deliberation and political accountability that are comparatively expensive, time-consuming, and inefficient. New transparency paradigms must answer these challenges and reaffirm the value of democratic oversight.