Thursday, May 30
Technoscientific Legalities
Thu, 5/30: 8:00 AM - 9:45 AM, Hyatt Capitol B
This panel takes up the relationship between science, technology, and law. It sees the three terms as fundamentally yet unstably connected. Papers take up questions such as: How is law being mobilized to advance or curtail developments in science and technology? How do technological developments take on lawlike qualities, even functioning as a law unto themselves, or as alternative forums for doing the work usually ascribed to law (resource distribution, order production, conflict resolution, etc.)? How do commercial interests shape the relationship between law, technology, and science? And how are science and technology being used (or resisted) in the production, interpretation, and enforcement of law? Papers take up these questions while also considering the points of overlap and divergence between sociolegal studies and STS.
Chair/Discussant: Sally Engle Merry, New York University
CRISPR Gene Editing: A Case Study in Emerging Self-Governance
John Conley, University of North Carolina-Chapel Hill
For years, genomic medicine has been hailed as the future of clinical treatment. The general premise is that doctors will use detailed information about a particular patient’s DNA (and other “biomarkers”) to custom-tailor diagnoses, advice, drug choices and doses, and other specifics of treatment. Despite this hope and hype, genomic medicine has thus far had limited effect on the day-to-day practice of medicine. The Holy Grail of genomic medicine has always been the ability not just to identify dangerous gene mutations but to fix them. Now a new “gene-editing” technology, called CRISPR, may have the potential to provide a safe and effective way to cut out mutated sequences of DNA and paste in normal variants. This paper explores some of CRISPR’s significant implications for law, ethics, regulation, and health policy. Among the major issues are: the strengths and weaknesses of proposals to police the development and use of human gene editing through legislation or regulation; the legal, regulatory, and ethical issues involved in introducing gene editing into medical practice; the implications for health and the potential of CRISPR to exacerbate inequities in access to care; and the implications of “off-label” uses of therapeutic and preventive gene editing interventions to improve on normal human traits. This paper will focus on the last topic. Specifically, it will report on the early stages of a multi-sited study of an emerging international movement toward CRISPR self-governance. The paper examines this movement as a unique case study in “new governance.”
Gun Politics in Blue: Firearms, Police and the National Rifle Association
Jennifer Carlson, University of Arizona
This paper reconstructs 20th century US policing from the perspective of gun politics, focusing particularly on the relationship between the National Rifle Association and American public law enforcement. While police may be popularly defined by “the gun and the badge,” it was the NRA that forged the handgun as a tool of police expertise in the early 20th century. As the century unfolded, the NRA played a key role in brokering the relationship between police and firearms, most of the time garnering the support of public law enforcement but at other times galvanizing extreme opposition. As the NRA courted, then antagonized, and then courted again public law enforcement in the 20th century, it eventually chiseled a surgically precise brand of gun policy, often setting tropes of black criminality against the specter of law and order and thus animating and harnessing the racial politics of crime. This brand of gun policy was aggressively “tough on crime” for certain gun offenders but also reinforced police’s relationship to their own guns. This paper establishes that the NRA has been not just an advocate for the firearms industry or a shaper of American gun culture; it has also played a critical role, even if at times counterproductive to its own agenda, in linking together gun politics, public law enforcement, and the racial politics of crime, shaping how police understand both the guns of the suspects they may encounter and their own guns.
Internet Governance and the Ideology of Openness: Lessons from Brazil
Jeffrey Omari, Gonzaga University School of Law
This paper is inspired by ethnographic research conducted in Rio de Janeiro, Brazil, and is a comparative work juxtaposing internet law in Brazil with similar regulation of the internet in the US. In 2014, Brazil adopted the Marco Civil da Internet (MCI), an “Internet Bill of Rights” that regards internet access as a requisite for civil rights, protects net neutrality, and advances regulatory and ideological openness in the online realm. Unlike Brazil, the US does not have any such comprehensive internet laws. Thus, after thoroughly exploring the history of the MCI and the geopolitical climate that led to its enactment, I will compare three of the MCI’s component parts, (1) the right to internet access, (2) net neutrality, and (3) online speech, to their legislative counterparts in the US. I argue that juxtaposing the history, politics, and practice of internet legislation in Brazil with similar legislation in the US reveals similarities that may provide a glimpse of where internet governance is headed in both countries.
Is the Legalization of Marijuana Like the Pasteurization of France?
William Garriott, Drake University
Performing Analysis in Sexual Assault Adjudication
Sameena Mulla, Marquette University, Heather Hlavka, Marquette University
This paper explores the ways in which crime lab analysts perform expertise when they are called to testify during sexual assault trials. Following Pratiksha Baxi’s argument that there is no science outside of the law, we demonstrate how the law itself becomes the arbiter of the scientific standards brought into play during sexual assault adjudication. In this setting, analysts embody expertise through their self-presentation and their scripted and practiced mode of response. They selectively describe some phenomena with detail and precision, while simultaneously allowing other mechanisms and calculations to remain “black-boxed.” The crime lab analysts’ approaches are contingent upon their assumptions about the jurors’ collective knowledge and cultural dispositions towards race, sexual harm, and sexual consent. We argue that the court therefore reifies particular notions of biological race, sexual harm, and sexual consent: namely, the fiction that race has a genetic component and that sexual harm and the withdrawal of consent always manifest on the body.
Taming of the Abused: Docile Images, Threatening Victims and Virtual Evidence of Domestic Assault
Dawn Moore, Carleton University
The ocularcentrism of the courtroom is changing the ways in which justice is seen to be done. Images are especially important to the implementation of the ‘victimless prosecution’ of domestic violence. Presented as a benevolent intervention protecting victims from the assumed trauma of a criminal trial, the various iterations of this form of evidence collection rely heavily on image capture, particularly of the victim and especially of her injuries. Images of injuries create a ‘data double’ of the victim, a virtual proxy who is not always as protective of her originator as prevailing logics suggest. The data double plays the role of the docile, curated victim, rife with vulnerability and an uncanny ability to embody the fragility of victimhood written into scripts of domestic violence. The data double embodies the perfect victim in the sense that she is always and forever in temporal proximity to the assault. Her bruises do not heal, she cannot access time or reflection, and, most importantly, because she has no future self, she can never change her story. Fixed in image, she is a battered woman in perpetuity. The flesh and blood victim, in contrast, can and does move on; she heals, she changes, and in so doing she threatens the very processes meant to ‘protect’ her, hence the need to quiet the flesh in order that the virtual might speak instead. I interrogate the relationship between these two characters, theorizing that the images-turned-victim do violence to the flesh and blood victim by condemning her will (Ahmed, 2001), emotions (Walklate, 2014), and animation. I argue that the victim made of pure virtuality is simultaneously tamed in the court and weaponized against her originator. Drawing on an in-depth case study of State of West Virginia v. Peter Lizon, I show how the data double is made to accuse her originator of egregious digressions from the script of assault. It is the flesh victim who ultimately stands accused of betraying and victimizing herself.
Privacy, Data Protection, Security and Surveillance – Tensions and Issues of Accountability in a Transatlantic Perspective
Thu, 5/30: 10:00 AM - 11:45 AM, Hyatt Capitol B
Chair: Hartmut Aden, Berlin School of Economics and Law (FÖPS Berlin)
Discussant: Francesca Bignami, GWU Law School
Foreigners in U.S. Surveillance Law
Francesca Bignami, GWU Law School
In U.S. law, there are a number of different legal sources that govern data protection in the field of federal law enforcement and the related field of national security investigations. Many of these sources treat U.S. persons, i.e. U.S. citizens and legal permanent residents, and foreigners differently. This paper reviews the legal guarantees for data protection and the ways in which foreigners are singled out for different treatment: in constitutional rights, in national security surveillance, and, since the passage of the CLOUD Act, in surveillance connected to ordinary criminal investigations. After reviewing the law, the paper explains the historical reasons for the difference in treatment, rooted in the workings of American democracy, and examines whether they are still valid today. The paper also examines the links to the treatment of non-nationals in other areas of public law and argues that personal status, in addition to or as a substitute for territory, is emerging as a critical determinant of constitutional rights, undermining many of the ideals of the US Constitution.
Interoperability Between Policing and Migration Databases – Another Step Towards Mass Surveillance?
Hartmut Aden, Berlin School of Economics and Law (FÖPS Berlin)
In recent years, the European Commission has published numerous policy initiatives related to a ‘Security Union’. The Commission’s interoperability initiative, presented in December 2017, seeks new strategies for identifying dangerous individuals who employ multiple identities. This initiative constitutes a massive paradigm shift for data protection and privacy. If the proposal enters into force, the databases established for the European Union’s Area of Freedom, Security and Justice would become interconnected rather than remaining separate. The paper will show that cooperation between European police and intelligence services and agencies from third countries (especially the U.S.) makes this development relevant from a transatlantic perspective as well.
Sentenced to Surveillance
Kate Weisburd, GW Law
Even as courts and legislatures have increasingly demonstrated a willingness to limit government electronic surveillance of private data, one group is consistently excluded from this privacy-protective discourse: the five million people in the United States on some form of community supervision. This Article is the first to explore the emerging practice of requiring those on probation and parole to waive their Fourth Amendment rights and agree to continuous warrantless electronic searches as a condition of supervision. Such terms allow law enforcement to monitor and search supervisees’ social media activity, emails, texts, location, cellphone usage, and all other personal data and communications contained on electronic devices, twenty-four hours a day. This level of potential government surveillance of intimate and private activity of the non-incarcerated is unprecedented. This Article both explores this phenomenon in its own right and, more broadly, contends that these “consensual” electronic searches reflect two noteworthy but understudied shifts in criminal justice. First, the increased reliance on a probation-as-privilege narrative, as opposed to the traditional “special needs” doctrine, reflects a growing practice of deploying consent to justify increasingly invasive surveillance of people on community supervision. While the debate over criminal justice bargaining is not new, the blanket use of consent reveals blind spots in the existing debate. Second, these electronic searches reflect the rising and distortive influence of surveillance technology on community supervision, now the most common form of punishment in the United States. In particular, this Article argues that unlimited warrantless electronic searches jeopardize the noble rehabilitative goals of community supervision, both through the psychological effects of an electronic panopticon and by incentivizing “automated” supervision.
The Panoptic Administrative State in Europe
Deirdre Curtin, European University Institute
One way of framing the administrative state and its evolution over time and space is through global administrative law theories and the advent of a global administrative ‘space’ populated by private actors, regulatory networks, and international institutions. My focus is instead on the reach of executive power both within and beyond the state. Rather than starting from the global, I start from the European reality and explore how public administration, in its extended form including law enforcement, has meandered far beyond the confines of administrative law at any level, global, European, or national. The cutting edge of the debate in Europe concerns the way in which administrations join up beyond the state, in particular through the sharing of information, in a manner that is technologically enabled and connected. One of the key ways in which public administration has changed in recent decades is through processes of automation. Data can become more meaningful through aggregation or sharing. Data in proximity to other data may reveal previously unknown relationships. Governments around the world are building databases and deploying data mining across law enforcement, communications, and intelligence data. The contemporary ‘panoptic sort’ (Gandy, 1993) has enormous consequences for citizens and their relationship with the state and with entities beyond the state, in ways that are still largely unexplored. A crucial issue is political control and accountability over the drafting and composing of the rules (code) themselves, rather than legality and the rule of law as applied to individual cases. Automated information sharing systems create accountability fissures that administrative procedures cannot remedy. A new conversation is urgently needed among public lawyers about the significance of advanced data sharing through databases for current law enforcement frameworks and their blurred boundaries with intelligence, in Europe and globally.
Data, Human Dignity, Surveillance and Social Control in the Algorithmic Society: Law, Technology, Rights and Market
Thu, 5/30: 12:45 PM - 2:30 PM, Hyatt Capitol B
Chair: Amnon Reichman, Faculty of Law, University of Haifa
Discussant: John Morison, Queen’s University Belfast
Impoverished Algorithms: Misguided Governments, Flawed Technologies, and Social Control
Sarah Valentine, City University of New York School of Law
This article (forthcoming, 46 Fordham Urb. L.J., March 2019) examines governmental deployment of algorithms as mechanisms of social control used to contain and criminalize marginalized populations. Though recognition of the dangers inherent in the misuse of big data and predictive analytics is growing, insufficient attention has been paid to how governmental use of these systems specifically targets the poor, the disabled, and communities of color. As the criminal justice and social welfare systems have become fused, big data analytics increases the breadth of government control over those caught within these systems. Challenging governmental use of algorithms as instruments of social control requires understanding the fallibility of the technology, the historical and political forces driving governmental adoption of the systems, and the advocacy strategies that have been most effective against it. It also requires recognizing that the systems’ capacity to control and punish includes, but extends far beyond, use by law enforcement. State and local governments are increasingly purchasing and deploying algorithmic decision-making systems created by third-party vendors with little oversight or review. This effectively cedes to private corporations the power to determine who is eligible for social services, who is likely to abuse or neglect their child, or which public school teachers should receive tenure. At the same time, this process gives these private vendors access to enormous amounts of government data on individual citizens, which the companies use to design and train other algorithmic systems to market to other entities. This article discusses the most problematic aspects of governmental use of these technologies, especially as implemented in civil administrative systems.
On Dignity and Robots
John Morison, Queen’s University Belfast, Kathryn McNeilly, Queen’s University Belfast
This paper seeks to move beyond the rather elementary thinking currently taking place around the need to develop ideas of legal personhood for machines in general. Such thinking is partly about addressing problems of criminal responsibility and legal liability in relation to innovations such as self-driving cars and drones, but also, more widely, about finding ways of maintaining a tax take when human jobs are replaced and of living in a new machine age. This has led some to consider developing a rights regime around the new technology, most notably the Committee on Legal Affairs, Draft Report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). This suggests not just a network of rights to protect humans from machines, but also the possibility of rights for machines. This paper builds upon this background to look (briefly) at how robots can be used in a law and justice context, as either partly or fully autonomous decision-making systems, and uses this as a way of thinking more critically about the range of current notions of dignity, both generally and in the context of decision-making in a justice setting. Dignity is a central idea in contemporary political discourse, and it also underpins legal discourse and concepts like human rights. The rise of machines in both the legal sphere and everyday life more broadly raises questions about how we conceptualize dignity, how it is grappled with in legal contexts, and how its boundaries may need to be revised.
Privacy, Dignity and (Virtual) Social Control: Reconfiguring Due Process in the Algorithmic Era
Amnon Reichman, Faculty of Law, University of Haifa
As the market gathers immense numbers of data points and analyzes big data, and as machine learning and AI harnessed by local and multinational corporations and state agencies generate predictive profiles, processes designed with 19th-century technologies in mind to protect liberal democratic values are deeply challenged. This paper will focus on the right to privacy, one understanding of which places the “right to be let alone” at its center, and will suggest that these technological innovations, including the eco-system within which they flourish, call for a fundamental reconfiguration of this right. The paper will examine US, Canadian, and EU jurisprudence on point, including the debates surrounding the GDPR and the right to explainability, and will proceed to offer an alternative conceptualization, one that is sensitive to the concerns the Algorithmic Era raises. More specifically, the paper suggests viewing privacy as a collective right (to interact, not to be let alone), which includes a positive dimension (not merely a negative one), and whose purpose is to diffuse social control (not merely guarantee individual control of information). In so doing, the paper will connect privacy to the concept of Human Dignity and to a particular tradition of Due Process.
Regulating Algorithms and Robots for Responsibility?
Lyria Bennett Moses, UNSW Sydney
When lawyers encounter new technologies, an early question often posed is how we should regulate the new entities and activities associated with them. Thus, as autonomous programs and robots become unpredictable, concerns are often expressed as a need to regulate these things to ensure accountability, transparency and, of course, responsibility. However, before we rush to regulate the new, it is important to understand what the law demands of humans in the context of decision-making. This opens up a diversity of expectations: there are different requirements depending on who is making the decision, the impact of the decision (particularly whether it might breach any legal rules), and the context in which it is being challenged. In the law governing humans, certain kinds of decisions must be justified, others only sometimes, and, in some cases, the decisions must be kept secret from the public entirely (as in the context of national security secrecy laws). This paper reflects on the ways the law touches on human decision-making to ask whether and how this should change to deal with situations where decisions are made by software or robots. It argues for a narrowed focus on the kinds of circumstances in which account must be given or responsibility taken. It also explores whether certain automated systems are inappropriate in contexts where a high level of transparency, accountability, and responsibility is important. However, the paper argues that demands for transparency and accountability of algorithms in such contexts should be achieved not by regulating AI and robots per se, but by ensuring that what is required of humans and corporations (or ought to be required of them) is also required of all autonomous agents.
Unlawful Distinctions? Canadian Human Rights Law and Algorithmic Bias
Jane Bailey, University of Ottawa, Jacquelyn Burkell, University of Western Ontario
Canadian human rights legislation prohibits discrimination in the provision of goods and services to the public, in employment, and in accommodation based on certain grounds, such as race, gender, sexual orientation, and gender identity (e.g. Canadian Human Rights Act, ss. 3 and 5 (CHRA)). In so doing, it imposes equality-based restrictions on certain kinds of decision-making by public and private providers of goods, services, jobs and accommodation. Increasingly, in order to access these resources, Canadians use and are subjects of online search engines that rely on machine-based algorithms to profile and sort users to personalize search results and ad placement, and to understand and identify cultural categories (e.g. images of “professional hairstyles”). These algorithms not only affect access to these resources (e.g. who gets access to what information and what price we are charged for goods (O’Neil, 2016)), they also generate representations that can fundamentally affect our understandings of others and ourselves. This paper begins to explore algorithmic bias and its relationship to human rights, highlighting some of the challenges for obtaining meaningful responses to algorithmic discrimination under the CHRA as currently framed.
Regulation of Data & Data Companies
Thu, 5/30: 2:45 PM - 4:30 PM, Hyatt Capitol B
Chair/Discussant: Mira Burri, University of Lucerne
Data Flows Versus Data Protection: Understanding the Implications of Trade Law for Privacy
Mira Burri, University of Lucerne
Cross-border data flows have become essential to the contemporary economy. They underlie all digital transactions and call for international cooperation. At the same time, data issues have become the new battlefield of sovereign states, which attempt to impose borders in the digital space so as to protect vital interests, such as national security or privacy. The venue of trade agreements has, perhaps strangely, turned into the center of digital data discussions. The paper will map the developments in global trade law, in particular the new rules created in free trade agreements. Against this backdrop, the paper will focus on one distinct battle: that of data flows versus data protection. This is an apt case that clarifies the tension between the global and the local: between the desire of national regulators to foster the free flow of data with few or no hindrances, so as to make the best of the new digital economy, and their unyielding wish to protect interests and values fundamental to the nation state. This dilemma only becomes more difficult to solve as the Internet becomes an indispensable part of the strategies of many countries to become more efficient, productive, and innovative and to position themselves better in the global economy by embracing digital trade. At the same time, the externalities of digital trade have become more palpable: it is evident that digital trade can affect the capacity of domestic regulators to achieve their aims in areas as diverse as consumer protection or health. The area of privacy is both particularly strongly affected and highly controversial, as the divergence between the approaches of the EU and the US exemplifies. Ultimately, the paper seeks answers to the governance dilemma of how to match the freeing of trade with principles of regulating trade that are meant to ensure balance and provide for trust in cyberspace.
Guarding the Coin of the Realm: How Liability in the Market for Personal Data Can Protect Privacy in the Digital Age
Chris Hirokawa, Temple University Law School
Every day, each of us creates thousands of bits of information merely by living our lives in an increasingly digitally connected world. This information can include anything from phone numbers and personal interests to IP addresses and user device specifications. The market for this personal data has formed the cornerstone of what is expected to soon become a multi-trillion-dollar industry. However, not all participants within the data economy act with the best intentions, and the risk of malicious actors stealing personal data has grown alongside the proliferation of a fully connected world. This paper argues that the United States Congress should develop a comprehensive data privacy plan that synthesizes existing rules, develops mechanisms for individual recovery, and responds to the era of big data. Private companies and governments alike have struggled to develop clear and consistent standards to keep our personal data private and protected. Upon discovering a data breach, both companies and individuals must navigate a convoluted network of laws and regulations to determine how to respond and recover. In particular, the American legal system fails to protect individuals, who often have no legal remedy when hackers steal and exploit their personal data.
Navigating Welfare Technologies: Technological Monitoring and Monitoring Technologies
Kelsi Barkway, University of Alberta
In this paper I explore how people navigate the Ontario Works system in Toronto, Ontario, Canada, after the implementation of a new benefits card for people who do not have bank accounts. The City of Toronto was the first in Canada to phase out welfare cheques, requiring people without bank accounts to access their payments through a benefits card. I begin by providing context for the introduction of this card, highlighting some of the key debates within city council. I then draw on interview data from 50 participants who described their experiences with these new cards, exploring narratives about the potential for surveillance the cards introduced, as well as the strategies cardholders used to resist surveillance or to turn its potential to their advantage. Drawing on Viviana Zelizer’s work on the social nature of money, I consider how the implementation of new payment technologies shapes social relations, particularly in the case of social assistance, when people are operating from a position of disempowerment.