Abolish the #TechToPrisonPipeline - Coalition for Critical Technology

By Coalition for Critical Technology

[Image: A graphic of circuits made to look like the bars of a prison cell. Two hands hold the bars.]


Springer Publishing
Berlin, Germany
+49 (0) 6221 487 0
customerservice@springernature.com

RE: A Deep Neural Network Model to Predict Criminality Using Image Processing

June 22, 2020

Dear Springer Editorial Committee,

We write to you as expert researchers and practitioners across a variety of technical, scientific, and humanistic fields (including statistics, machine learning and artificial intelligence, law, sociology, history, communication studies and anthropology). Together, we share grave concerns regarding a forthcoming publication entitled “A Deep Neural Network Model to Predict Criminality Using Image Processing.” According to a recent press release, this article will be published in your book series, “Springer Nature — Research Book Series: Transactions on Computational Science and Computational Intelligence.”

We urge:
1. The review committee to publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it.
2. Springer to issue a statement condemning the use of criminal justice statistics to predict criminality, and acknowledging its role in incentivizing such harmful scholarship in the past.
3. All publishers to refrain from publishing similar studies in the future.

This upcoming publication warrants a collective response because it is emblematic of a larger body of computational research that claims to identify or predict “criminality” using biometric and/or criminal legal data.[1] Such claims are based on unsound scientific premises, research, and methods, which numerous studies spanning our respective disciplines have debunked over the years.[2] Nevertheless, these discredited claims continue to resurface, often under the veneer of new and purportedly neutral statistical methods such as machine learning, the primary method of the publication in question.[3] In the past decade, government officials have embraced machine learning and artificial intelligence (AI) as a means of depoliticizing state violence and reasserting the legitimacy of the carceral state, often amid significant social upheaval.[4] Community organizers and Black scholars have been at the forefront of the resistance against the use of AI technologies by law enforcement, with a particular focus on facial recognition.[5] Yet these voices continue to be marginalized, even as industry and the academy invest significant resources in building out “fair, accountable and transparent” practices for machine learning and AI.[6]

Part of the appeal of machine learning is that it is highly malleable — correlations useful for prediction or detection can be rationalized with any number of plausible causal mechanisms. Yet the way these studies are ultimately represented and interpreted is profoundly shaped by the political economy of data science[7] and their contexts of use.[8] Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world. These research agendas reflect the incentives and perspectives of those in the privileged position of developing machine learning models, and the data on which they rely. The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalize social hierarchies and legitimize violence against marginalized groups.[9]

Such research does not require intentional malice or racial prejudice on the part of the researcher.[10] Rather, it is the expected by-product of any field that evaluates the quality of its research almost exclusively on the basis of “predictive performance.”[11] In the following sections, we outline the specific ways crime prediction technology reproduces, naturalizes and amplifies discriminatory outcomes, and why exclusively technical criteria are insufficient for evaluating its risks.

I. Data generated by the criminal justice system cannot be used to “identify criminals” or predict criminal behavior. Ever.

In the original press release published by Harrisburg University, researchers claimed to “predict if someone is a criminal based solely on a picture of their face,” with “80 percent accuracy and with no racial bias.” Let’s be clear: there is no way to develop a system that can predict or identify “criminality” that is not racially biased — because the category of “criminality” itself is racially biased.[12]

Research of this nature — and its accompanying claims to accuracy — rests on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral. As numerous scholars have demonstrated, historical court and arrest data reflect the policies and practices of the criminal justice system. These data reflect who police choose to arrest, how judges choose to rule, and which people receive harsher or more lenient sentences.[13] Countless studies have shown that people of color are treated more harshly than similarly situated white people at every stage of the legal system, which results in serious distortions in the data.[14] Thus, any software built within the existing criminal legal framework will inevitably echo those same prejudices and fundamental inaccuracies when it comes to determining if a person has the “face of a criminal.”
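
To make the problem concrete, consider a minimal, hypothetical simulation (our illustration, not an analysis of the Harrisburg model; every rate below is invented). Two groups offend at identical rates by construction, but one group is policed far more heavily, so the arrest records that serve as training labels measure enforcement intensity rather than behavior. A classifier fit to those labels duly scores the over-policed group as more “criminal.”

```python
# A minimal, hypothetical simulation (not the Harrisburg model): two groups
# offend at identical rates, but arrests, the only observable "label", are
# recorded far more often for the over-policed group. A classifier fit to
# arrest labels therefore scores that group as more "criminal" even though
# underlying behavior is identical by construction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Group membership (0 = lightly policed, 1 = heavily policed), assumed 50/50.
group = rng.integers(0, 2, size=n)

# Underlying behavior: identical 10% offense rate in both groups.
offense = rng.random(n) < 0.10

# Observed label: an arrest happens only if an offense is detected, and
# detection depends on policing intensity, not on behavior.
detection_rate = np.where(group == 1, 0.60, 0.15)   # invented enforcement gap
arrested = offense & (rng.random(n) < detection_rate)

# Features available to the model: pure noise plus a proxy correlated with
# group membership (standing in for facial features, ZIP code, and so on).
noise = rng.normal(size=(n, 5))
proxy = group + rng.normal(scale=0.5, size=n)
X = np.column_stack([noise, proxy])

model = LogisticRegression(max_iter=1000).fit(X, arrested)
scores = model.predict_proba(X)[:, 1]

print("True offense rate, group 0 vs 1: %.3f / %.3f"
      % (offense[group == 0].mean(), offense[group == 1].mean()))
print("Mean 'criminality' score, group 0 vs 1: %.3f / %.3f"
      % (scores[group == 0].mean(), scores[group == 1].mean()))
```

No amount of additional data drawn from the same pipeline corrects this: the model faithfully learns the label, and the label measures policing.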

These fundamental issues of data validity cannot be solved with better data cleaning or more data collection.[15] Rather, any effort to identify “criminal faces” is an application of machine learning to a problem domain it is not suited to investigate, a domain in which context and causality are essential yet routinely misinterpreted. In other problem domains where machine learning has made great progress, such as common object classification or facial verification, there is a “ground truth” against which learned models can be validated.[16] The causality underlying how different people perceive the content of images is still important, but for many tasks, the ability to demonstrate face validity is sufficient.[17] As Narayanan (2019) notes, “the fundamental reason for progress [in these areas] is that there is no uncertainty or ambiguity in these tasks — given two images of faces, there’s ground truth about whether or not they represent the same person.”[18] However, no such ground truth exists for facial features and criminality, because having a face that looks a certain way does not cause an individual to commit a crime — there simply is no “physical features to criminality” function in nature.[19] Causality is nonetheless tacitly implied by the language used to describe machine learning systems. An algorithm’s so-called “predictions” are often not actually demonstrated or investigated in out-of-sample settings (outside the context of training, validation, and testing on an inherently limited subset of real data), and so are more accurately characterized as “the strength of correlations, evaluated retrospectively,”[20] where real-world performance is almost always lower than advertised test performance for a variety of reasons.[21]
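
The gap between a retrospectively evaluated correlation and a genuine prediction can be sketched just as briefly. In the hypothetical example below (all numbers invented), a model leans on a spurious proxy feature that happens to track the label in the sample used for training and testing; the advertised held-out accuracy is high, yet it collapses once that correlation weakens in a shifted “deployment” sample.

```python
# A sketch of why held-out "test accuracy" is a retrospective correlation,
# not a guarantee about deployment. The model below learns a spurious proxy
# that tracks the label in the training regime; when that correlation
# weakens in the "deployment" regime (distribution shift), the advertised
# accuracy collapses. All numbers are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def sample(n, proxy_strength):
    """Binary labels plus one 'proxy' feature that tracks the label with the
    given strength, alongside uninformative noise features."""
    y = rng.integers(0, 2, size=n)
    proxy = y * proxy_strength + rng.normal(size=n)
    noise = rng.normal(size=(n, 4))
    return np.column_stack([proxy, noise]), y

# Training and test data drawn from one regime (strong spurious correlation).
X_train, y_train = sample(5_000, proxy_strength=2.0)
X_test, y_test = sample(5_000, proxy_strength=2.0)

# "Deployment" data from a shifted regime where the correlation largely vanishes.
X_deploy, y_deploy = sample(5_000, proxy_strength=0.2)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Advertised test accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("Deployment accuracy:     ", accuracy_score(y_deploy, model.predict(X_deploy)))
```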

Because “criminality” operates as a proxy for race due to racially discriminatory practices in law enforcement and criminal justice, research of this nature creates dangerous feedback loops.[22] “Predictions” based on finding correlations between facial features and criminality are accepted as valid, interpreted as the product of intelligent and “objective” technical assessments.[23] In reality, these “predictions” materially conflate the shared, social circumstances of being unjustly overpoliced with criminality. Policing based on such algorithmic recommendations generates more data that is then fed back into the system, reproducing biased results.[24] Ultimately, any predictive algorithm that is based on these widespread mischaracterizations of criminal justice data justifies the exclusion and repression of marginalized populations through the construction of “risky” or “deviant” profiles.[25]
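
Footnote 24 summarizes the peer-reviewed simulation by Lum and Isaac (2016); the toy sketch below is our own, with invented parameters, and shows the same feedback mechanism in miniature. Two districts have identical underlying crime, but a small initial gap in recorded incidents flags one district as the “hotspot,” the hotspot receives most of the patrols, patrols record more incidents there, and the recorded disparity widens round after round.

```python
# A toy feedback-loop sketch in the spirit of Lum and Isaac (2016), as
# summarized in footnote 24 (the mechanism and parameters here are invented
# for illustration, not taken from their paper). Two districts have identical
# true crime, but the "hotspot" district, the one with more recorded
# incidents so far, receives most of the patrols each round. Patrols generate
# the records, so the recorded gap widens even though behavior never differs.
import numpy as np

rng = np.random.default_rng(2)
true_crime = np.array([100.0, 100.0])   # identical underlying crime per round
recorded = np.array([55.0, 65.0])       # small initial gap in recorded incidents

for step in range(10):
    # "Prediction": the district with more recorded crime is flagged as the
    # hotspot and receives 70% of patrols; the other receives 30%.
    hotspot = np.argmax(recorded)
    patrol_share = np.where(np.arange(2) == hotspot, 0.7, 0.3)
    # Recorded incidents grow with patrol presence, not with true crime rates.
    recorded = recorded + rng.poisson(true_crime * patrol_share)
    print("round %d: hotspot = district %s, recorded share in B = %.2f"
          % (step, "AB"[hotspot], recorded[1] / recorded.sum()))
```

Nothing in the loop ever consults the underlying behavior; the disparity is produced entirely by where the system chooses to look.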

II. Technical measures of “fairness” distract from fundamental issues regarding an algorithm’s validity.

Studies like the aforementioned reflect a growing crisis of validity in AI and machine learning research that has plagued the field for decades.[26] This crisis stems from the fact that machine learning scholars are rarely trained in the critical methods, frameworks, and language necessary to interrogate the cultural logics and implicit assumptions underlying their models. Nor are there ample incentives to conduct such interrogations, given the industrial interests driving much machine learning research and development.[27] To date, many efforts to deal with the ethical stakes of algorithmic systems have centered mathematical definitions of fairness that are grounded in narrow notions of bias and accuracy.[28] These efforts give the appearance of rigor, while distracting from more fundamental epistemic problems.

Designers of algorithmic systems need to embrace a historically grounded, process-driven approach to algorithmic justice, one that explicitly recognizes the active and crucial role that the data scientist (and the institution they’re embedded in) plays in constructing meaning from data.[29] Computer scientists can benefit greatly from ongoing methodological debates and insights gleaned from fields such as anthropology, sociology, media and communication studies, and science and technology studies, disciplines in which scholars have been working for decades to develop more robust frameworks for understanding their work as situated practice, embedded in uncountably infinite[30] social and cultural contexts.[31] While many groups have made efforts to translate these insights to the field of computer science, it remains to be seen whether these critical approaches will be widely adopted by the computing community.[32]

Machine learning practitioners must move beyond the dominant epistemology of computer science, in which the most important details of a model are considered those that survive abstraction to “pure” technical problems, relegating social issues to “implementation details.”[33] This way of regarding the world biases research outputs towards narrowly technical visions of progress: accuracy, precision and recall or sensitivity and specificity, F-score, Jaccard index, or another performance metric of choice, all applied to an ever-growing set of applications and domains. Machine learning does not have a built-in mechanism for investigating or discussing the social and political merits of its outputs. Nor does it have built-in mechanisms for critically exploring the relationship between the research conducted and the researchers’ own subject positions, group memberships, or the funding sources that make that research possible. In other words, reflexivity is not a part of machine learning’s objective function.

If machine learning is to bring about the “social good” touted in grant proposals and press releases, researchers in this space must actively reflect on the power structures (and the attendant oppressions) that make their work possible. This self-critique must be integrated as a core design parameter, not a last-minute patch. The field of machine learning is in dire need of a critical reflexive practice.

III. Conclusion: Crime-prediction technology reproduces injustices and causes real harm

Recent instances of algorithmic bias across race, class, and gender have revealed a structural propensity of machine learning systems to amplify historic forms of discrimination, and have spawned renewed interest in the ethics of technology and its role in society. There are profound political implications when crime prediction technologies are integrated into real-world applications, which go beyond the frame of “tech ethics” as currently defined.[34] At the forefront of this work are questions about power[35]: who will be adversely impacted by the integration of machine learning within existing institutions and processes?[36] How might the publication of this work and its potential uptake legitimize, incentivize, monetize, or otherwise enable discriminatory outcomes and real-world harm?[37] These questions are not abstract. The authors of the Harrisburg University study make explicit their desire to provide “a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime,” as a co-author and former NYPD police officer put it in the original press release.[38]

At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world.

To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging its role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.

1,320 professors, researchers, practitioners, and students spanning the fields of anthropology, sociology, computer science, law, science and technology studies, information science, mathematics, and more (full list below the footnotes)

__________________________

Footnotes

1 Scholars use a variety of terms in reference to the prediction of criminal outcomes. Some researchers claim to predict “anti-social” or “impulsive” behavior. Others model “future recidivism” or an individual’s “criminal tendencies.” All of these terms frame criminal outcomes as the byproduct of highly individualized and proximate risk factors. As Prins and Reich (2018) argue, these predictive models neglect population drivers of crime and criminal justice involvement (Seth J. Prins, and Adam Reich. 2018. “Can we avoid reductionism in risk reduction?” Theoretical criminology 22 (2): 258–278). The hyper-focus on individualized notions of crime leads to myopic social reforms that intervene exclusively on the supposed cultural, biological and cognitive deficiencies of criminalized populations. This scholarship not only provides a mechanism for the confinement and control of the “dangerous classes,” but also creates the very processes through which these populations are turned into deviants to be controlled and feared. As Robert Vargas (2020) argues, this type of scholarship “sees Black people and Black communities as in need of being fixed. This approach is not new but is rather the latest iteration in a series of efforts to improve cities by managing Black individuals instead of ending the police violence Black communities endure.” Robert Vargas. 2020. “It’s Time to Think Critically about the UChicago Crime Lab.” The Chicago Maroon June 11. (Accessed June 17, 2020). For examples of this type of criminalizing language see generally: Mahdi Hashemi and Margeret Hall. 2020. “Criminal tendency detection from facial images and the gender bias effect.” Journal of Big Data. 7 (2) . Eyal Aharoni, et al. 2013. “Neuroprediction of future rearrest.” Proceedings of the National Academy of Sciences 110 (15): 6223–6228. Xiaolin Wu and Xi Zhang. 2016. “Automated inference on criminality using face images.” arXiv preprint arXiv:1611.04135: 4038–4052. Yaling Yang, Andrea L. Glenn, and Adrian Raine. 2008. “Brain abnormalities in antisocial individuals: implications for the law.” Behavioral sciences & the law 26 (1): 65–83. Adrian Raine. 2014. The anatomy of violence: The biological roots of crime. Visalia: Vintage Press.

2 AI applications that claim to predict criminality based on physical characteristics are part of a legacy of long-discredited pseudosciences such as physiognomy and phrenology, which were and are used by academics, law enforcement specialists, and politicians to advocate for oppressive policing and prosecutorial tactics in poor and racialized communities. Indeed, in the opening pages of Hashemi and Hall (2020), the authors invoke the criminological studies of Cesare Lombroso, a dangerous proponent of social Darwinism whose studies have been overturned and debunked by the authors cited below. In the late nineteenth and early twentieth century, police and other government officials relied on social scientists to create universalized measurements of who was “capable” of criminal behavior, based largely on a person’s physical characteristics. This system is rooted in scientific racism and ultimately served to legitimize a regime of preemptive repression, harassment, and forced sterilization in racialized communities. The connections between eighteenth and nineteenth century pseudoscience and facial recognition have been widely addressed. For examples of the historical linkage between physiognomy, phrenology, and automated facial recognition, see Blaise Agüera y Arcas, Margaret Mitchell, and Alexander Todorov. 2017. “Physiognomy’s New Clothes.” Medium, May 6; on links between eugenics, race science, and facial recognition, see Sahil Chinoy. 2019. “The Racist History Behind Facial Recognition.” New York Times, July 10; Stephanie Dick, “The Standard Head,” YouTube.

3 For example, Wu and Zhang (2016) bears a striking resemblance to the Harrisburg study and faced immense public and scientific critique, prompting the work to be rescinded from publication and the authors to issue a response (see Wu and Zhang, 2017.). Experts highlighted the utter lack of a causal relationship between visually observable identifiers on a face and the likelihood of a subject’s participation in criminal behavior. In the absence of a plausible causal mechanism between the data and the target behavior, and indeed scientific rejection of a causal mechanism, the model is likely not doing what it claims to be doing. In this case, critics rightfully argued that the published model was not identifying criminality — it was identifying historically disadvantaged ethnic subgroups, who are more likely to be targeted by police and arrested. For a summary of the critique see here. The fact that the current study claims its results have “no racial bias” is highly questionable, addressed further below in Sections I (for whether such a thing is possible) and II (whether metrics for bias really capture bias).

4 As Jackie Wang (2018) argues, “‘police science’ is a way for police departments to rebrand themselves in the face of a crisis of legitimacy,” pointing to internally generated data about arrests and incarcerations to justify their racially discriminatory practices. While these types of “evidence based” claims have been problematized and debunked numerous times throughout history, they continue to resurface under the guise of cutting-edge techno-reforms, such as “artificial intelligence.” As Chelsea Barabas (2020, 41) points out, “the term ‘artificial intelligence’ has been deployed as a means of justifying and de-politicizing the expansion of state and private surveillance amidst a growing crisis of legitimacy for the U.S. prison industrial complex.” Sarah Brayne and Angèle Christin argue (2020, 1) that “predictive technologies do not replace, but rather displace discretion to less visible — and therefore less accountable — areas within organizations.” Jackie Wang. 2018. Carceral capitalism (Vol. 21). MIT Press. Chelsea Barabas. 2020. “Beyond Bias: Reimagining the Terms of ‘Ethical AI’ in Criminal Law.” 12 Geo. J. L. Mod. Critical Race Persp. 2 (forthcoming). Sarah Brayne and Angèle Christin. 2020. “Technologies of Crime Prediction: The Reception of Algorithms in Policing and Criminal Courts.” Social Problems.

5 The hard work of these organizers and scholars is beginning to gain public recognition. In recent weeks, major tech companies such as IBM, Amazon, and Microsoft have announced commitments to stop collaborating with law enforcement to deploy facial recognition technologies. These political gains are the result of years of hard work by community organizations such as the Stop LAPD Spying Coalition, Media Mobilizing Project (renamed Movement Alliance Project), Mijente, The Carceral Tech Resistance Network, Media Justice, and AI For the People. This on-the-ground work has been bolstered by research led by Black scholars, such as Joy Buolamwini, Timnit Gebru, Mutale Nkonde and Inioluwa Deborah Raji. See: Joy Buolamwini and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Conference on Fairness, Accountability and Transparency. Inioluwa Deborah Raji, Timnit Gebru, Margaret Mitchell, Joy Buolamwini, Joonseok Lee, and Emily Denton. “Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.” arXiv preprint arXiv:2001.00964, January 3, 2020. Mutale Nkonde. 2020. “Automated Anti-Blackness: Facial Recognition in Brooklyn, New York.” Harvard Kennedy School Review: 30–36.

6 The Algorithmic Justice League has pointed out this blatant erasure of non-white and non-male voices in their public art project entitled “Voicing Erasure,” a project inspired in part by the work of Allison Koenecke, a woman researcher based at Stanford whose work uncovering biases in speech recognition software was recently covered in the New York Times. Koenecke was not cited in the original New York Times article, even though she was the lead author of the research. Instead, a number of her colleagues were named and given credit for the work, all of whom are men. In “Voicing Erasure,” Joy Buolamwini pushes us to reflect: “Whose voice do you hear when you think of intelligence, innovation and ideas that shape our worlds?”

7 Timnit Gebru points out that “the dominance of those who are the most powerful race/ethnicity in their location…combined with the concentration of power in a few locations around the world, has resulted in a technology that can benefit humanity but also has been shown to (intentionally or unintentionally) systematically discriminate against those who are already marginalized.” Timnit Gebru. 2020. “Race and Gender.” In Oxford Handbook on AI Ethics. Oxford Handbooks. Oxford University Press. Facial recognition research (arguably a subset of AI) is no different — it has never been neutral or unbiased. In addition to its deep connection with phrenology and physiognomy, it is entwined with the history of discriminatory police and surveillance programs. For example, Woody Bledsoe, the founder of computational facial recognition, was funded by the CIA to develop systems that could purportedly identify criminals and criminal behavior: Leon Harmon. 2020. “How LSD, Nuclear Weapons Led to the Development of Facial Recognition.” Observer, Jan 29. Shaun Raviv. 2020. “The Secret History of Facial Recognition.” Wired, Jan 21. See also: Inioluwa Deborah Raji and Genevieve Fried, “About Face: A Survey of Facial Recognition Datasets.” Accepted to the Evaluating Evaluation of AI Systems (Meta-Eval 2020) workshop at the AAAI Conference on Artificial Intelligence 2020. Likewise, FERET, the NIST initiative and first large-scale face dataset that launched the field of facial recognition in the US, was funded by intelligence agencies for the express purpose of use in identifying criminals in the war on drugs. This objective of criminal identification is core to the history of what motivated the development of the technology. P. Jonathon Phillips, Harry Wechsler, Jeffery Huang, and Patrick J. Rauss. “The FERET Database and Evaluation Procedure for Face-Recognition Algorithms.” Image and Vision Computing 16, no. 5 (1998): 295–306.

8 As Safiya Umoja Noble (2018, 30) argues, the problems of data-driven technologies go beyond misrepresentation: “They include decision-making protocols that favor corporate elites and the powerful, and they are implicated in global economic and social inequality.” Safiya Umoja Noble. 2018. Algorithms of Oppression. New York: New York University Press. D’Ignazio and Klein (2020) similarly argue that data collection environments for social issues such as femicide are often “characterized by extremely asymmetrical power relations, where those with power and privilege are the only ones who can actually collect the data but they have overwhelming incentives to ignore the problem, precisely because addressing it poses a threat to their dominance.” Catherine D’Ignazio and Lauren F. Klein. 2020. Data Feminism. Cambridge: MIT Press. On the long history of algorithms and political decision-making, see: Theodora Dryer. 2019. Designing Certainty: The Rise of Algorithmic Computing in an Age of Anxiety. PhD diss., University of California, San Diego. For an ethnographic study that traces the embedding of power relations into algorithmic systems for healthcare-related decisions, see Beth Semel. 2019. Speech, Signal, Symptom: Machine Listening and the Remaking of Psychiatric Assessment. PhD diss., Massachusetts Institute of Technology, Cambridge.

9 As Roberts (2019, 1697) notes in her review of Eubanks (2018), “in the United States today, government digitization targets marginalized groups for tracking and containment in order to exclude them from full democratic participation. The key features of the technological transformation of government decision-making — big data, automation, and prediction — mark a new form of managing populations that reinforces existing social hierarchies. Without attending to the ways the new state technologies implement an unjust social order, proposed reforms that focus on making them more accurate, visible, or widespread will make oppression operate more efficiently and appear more benign.” Dorothy Roberts. 2019. “Digitizing the Carceral State.” Harvard Law Review 132: 1695–1728. Virginia Eubanks. 2018. Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press. Audrey Beard. 2020. “The Case for Care.” Medium May 27, 2020. Accessed June 11, 2020. See also: Ruha Benjamin, Troy Duster, Ron Eglash, Nettrice Gaskins, Anthony Ryan Hatch, Andrea Miller, Alondra Nelson, Tamara K. Nopper, Christopher Perreira, Winifred R Poster, et al. 2019. Captivating Technology: Race, Carceral Techno-science, and Liberatory Imagination in Everyday Life. Durham: Duke University Press. Chelsea Barabas et al. 2020. “Studying up: reorienting the study of algorithmic fairness around issues of power.” Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. Anna Lauren Hoffmann. 2019. “Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse”. Information, Communication & Society 22 (7): 900–915. Meredith Broussard. 2018. Artificial Unintelligence: How Computers Misunderstand the World. Cambridge, MA: MIT Press. Ben Green. 2019. “‘Good’ Isn’t Good Enough.” In NeurIPS Joint Workshop on AI for Social Good. AC. Sasha Costanza-Chock. 2020. Design justice: Community-led practices to build the worlds we need. Cambridge, MA: MIT Press.

10 As Ruha Benjamin (2016, 148) argues, “One need not harbor any racial animus to exercise racism in this and so many other contexts: rather, when the default settings have been stipulated simply doing one’s job — clocking in, punching out, turning the machine on and off — is enough to ensure the consistency of white domination over time.” Ruha Benjamin. 2016. “Catching our breath: critical race STS and the carceral imagination.” Engaging Science, Technology, and Society 2: 145–156.

11 By predictive performance we mean the strength of correlations found, as measured by, e.g., classification accuracy, metric space similarity, true and false positive rates, and derivative metrics like receiver operating characteristic curves. This is discussed by several researchers, most recently Rachel Thomas and David Uminsky. 2020. “The Problem with Metrics is a Fundamental Problem for AI.” arXiv preprint arXiv:2002.08512.

12 Scholars have long argued that crime statistics are partial and biased, and their incompleteness is delineated clearly along power lines. Arrest statistics are best understood as measurements of law enforcement practices. These practices tend to focus on “street crimes” carried out in low income communities of color while neglecting other illegal activities that are carried out in more affluent and white contexts (Tony Platt. 1978. “‘Street Crime’ — A View From the Left.” Crime and Social Justice 9: 26–34; Laura Nader. 2003. “Crime as a category — domestic and globalized.” In Crime’s Power: Anthropologists and the Ethnography of Crime, edited by Philip C. Parnell and Stephanie C. Kane, 55–76, London: Palgrave). Consider how loitering is treated compared to more socially harmful practices like wage theft and predatory lending. Similarly, conviction and incarceration data primarily reflect the decision-making habits of relevant actors, such as judges, prosecutors, and probation officers, rather than a defendant’s criminal proclivities or guilt. These decision-making habits are inseparable from histories of race and criminality in the United States. As Ralph (2020, xii) writes, with reference to Muhammad (2019), “since the 1600s, and the dawn of American slavery, Black people have been viewed as potential criminal threats to U.S. society. As enslaved people were considered legal property, to run away was, by definition, a criminal act…Unlike other racial, religious, or ethnic groups, whose crime rates were commonly attributed to social conditions and structures, Black people were (and are) considered inherently prone to criminality…Muhammad [thus] argues that equating Blackness and criminality is part of America’s cultural DNA.” Khalil Gibran Muhammad. 2011. The Condemnation of Blackness: Race, Crime, and the Making of Modern Urban America. Cambridge, MA: Harvard University Press; Laurence Ralph. 2020. The Torture Letters: Reckoning with Police Violence. Chicago: University of Chicago Press. See also: Victor M. Rios. 2011. Punished: Policing the Lives of Black and Latino Boys. New York: NYU Press.

13 Yet, criminal justice data is rarely used to model the behaviors of these powerful system actors. As Harris (2003) points out, it is far more common for law enforcement agencies to use their records to justify racially discriminatory policies, such as stop and frisk. David A. Harris. 2003. “The reality of racial disparity in criminal justice: The significance of data collection.” Law and Contemporary Problems 66 (3): 71–98. However, some data science projects have sought to reframe criminal legal data to center such powerful system actors. For example, the Judicial Risk Assessment project repurposes criminal court data to identify judges who are likely to use bail as a means of unlawfully detaining someone pretrial. Chelsea Barabas, Colin Doyle, JB Rubinovitz, and Karthik Dinakar. 2020. “Studying up: reorienting the study of algorithmic fairness around issues of power.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAccT ’20). Association for Computing Machinery, New York, NY, USA, 167–176. Similarly, the White Collar Crime project is a satirical data science project that reveals the glaring absence of financial crimes from predictive policing models, which tend to focus on “street crimes” that occur in low income communities of color. Brian Clifton, Sam Lavigne, and Francis Tseng. 2017. “White Collar Crime Risk Zones.” The New Inquiry 59: ABOLISH (March).

14 Decades of research have shown that, for the same conduct, Black and Latinx people are more likely to be arrested, prosecuted, convicted and sentenced to harsher punishments than their white counterparts, even for crimes that these racial groups engage in at comparable rates. Megan Stevenson and Sandra G. Mayson. 2018. “The Scale of Misdemeanor Justice.” Boston University Law Review 98 (731): 769–770. For example, Black people are 83% more likely to be arrested for marijuana compared to whites at age 22 and 235% more likely to be arrested at age 27, in spite of similar marijuana usage rates across racial groups (Ojmarrh Mitchell and Michael S. Caudy. 2013. “Examining Racial Disparities in Drug Arrests.” Justice Quarterly 2: 288–313). Similarly, Black drivers are three times as likely as white drivers to be searched during routine traffic stops, even though police officers generally have a lower “hit rate” for contraband when they search drivers of color. “Ending Racial Profiling in America”: Hearing Before the Subcomm. on the Constitution, Civil Rights and Human Rights of the Comm. on the Judiciary, 112th Cong. 8 (2012) (statement of David A. Harris). In the educational sector, Nance (2017) found that schools with a student body made up primarily of people of color were two to eighteen times more likely to use security measures (metal detectors, school and police security guards, locked gates, “random sweeps”) than schools with a majority (greater than 80%) white population. Jason P. Nance. 2017. “Student Surveillance, Racial Inequalities, and Implicit Racial Bias.” Emory Law Journal 66 (4): 765–837. Systematic racial disparities in the U.S. criminal justice system run historically deep as well. As early as 1922, white Chicagoans who testified on a report that city officials commissioned following uprisings after the murder of 17-year-old Eugene Williams asserted that “the police are systematically engaging in racial bias when they’re targeting Black suspects” (Khalil Gibran Muhammad, quoted in Anna North. 2020. “How racist policing took over American cities, explained by a historian.” Vox, June 6. Accessed June 18, 2020). These same inequities spurred William Patterson, then-president of the Civil Rights Congress, to testify to the United Nations in 1951 that “the killing of Negroes has become police policy in the United States.” In addition, Benjamin (2018) notes that institutions in the U.S. tend toward the “wiping clean” of white criminal records, as in the case of a Tulsa, Oklahoma officer who had any evidence of her prosecution for the murder of Terrance Crutcher, a 43-year-old unarmed Black man, removed from her record altogether (Ruha Benjamin. 2018. “Black Afterlives Matter.” Boston Review, July 28. Accessed June 1, 2020). All of these factors combined lead to an overrepresentation of people of color in arrest data.

15 On the topic of doing “ethical” computing work, Abeba Birhane (2019) avers that “the fact that computer science is intersecting with various social, cultural and political spheres means leaving the realm of the ‘purely technical’ and dealing with human culture, values, meaning, and questions of morality; questions that need more than technical ‘solutions’, if they can be solved at all.” Abeba Birhane. “Integrating (Not ‘Adding’) Ethics and Critical Thinking into Data Science.” Abeba Birhane (blog), April 29, 2019. It is worth mentioning the large body of computer vision, machine learning, and data science research that acknowledges the gross ethical malfeasance of the work typified in the offending research, reveals the impotence of data “debiasing” efforts, and argues for deeper integration of critical and feminist theories in computer science. See, for instance: Michael Skirpan and Tom Yeh. 2017. “Designing a Moral Compass for the Future of Computer Vision Using Speculative Analysis.” In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 1368–77. Honolulu, HI, USA: IEEE. Hila Gonen and Yoav Goldberg. 2019. “Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But Do Not Remove Them.” arXiv preprint arXiv:1903.03862, September 24. Ben Green. 2019. “‘Good’ Isn’t Good Enough.” In NeurIPS Joint Workshop on AI for Social Good. Rashida Richardson et al. 2019. “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice.” NYU Law Review Online 192, February 13. Available at SSRN. Audrey Beard and James W. Malazita. “Greased Objects: How Concept Maintenance Undermines Feminist Pedagogy and Those Who Teach It in Computer Science.” To be presented at the EASST/4S Panel on Teaching interdependent agency: Feminist STS approaches to STEM pedagogy, August 2020.

16 In these applications, both groupings of pixels and human-given labels are directly observable, making such domains suitable for machine learning-based approaches. Criminality detection or prediction, on the other hand, is not, because criminality has no stable empirical existence. See also: Momin M. Malik. 2020. “A Hierarchy of Limitations in Machine Learning.” arXiv preprint arXiv:2002.05193.

17 Yarden Katz. 2017. “Manufacturing an Artificial Intelligence Revolution.” SSRN.

18 Arvind Narayanan. 2019. “How to Recognize AI Snake Oil.” Arthur Miller Lecture on Technology and Ethics, Massachusetts Institute of Technology, November 18, Cambridge, MA.

19 By insisting that signs of criminality can be located in biological material (in this case, features of the face), this research perpetuates the process of “racialization,” defined by Marta Maria Maldonado (2009: 1034) as “the production, reproduction of and contest over racial meanings and the social structures in which such meanings become embedded. Racial meanings involve essentializing on the basis of biology or culture.” Race is a highly contingent, unstable construct, the meaning of which shifts and changes over time with no coherent biological correlate. To imply that criminality is immanent in biology and that certain kinds of bodies are marked as inherently more criminal than others lays the groundwork for arguing that certain categories of people are more likely to commit crimes because of their embodied physicality, a clearly false conclusion. This has motivated leading scholars to move beyond analysis of race and technology to race as technology. In Wendy Hui Kyong Chun’s (2013, 7) words: “Could race be not simply an object of representation and portrayal, of knowledge or truth, but also a technique that one uses, even as one is used by it — a carefully crafted, historically inflected system of tools, mediation, or enframing that builds history and identity?” See also: Simone Browne. 2010. “Digital Epidermalization: Race, Identity and Biometrics.” Critical Sociology 36 (1): 131–150; Simone Browne. 2015. Dark Matters: On the Surveillance of Blackness. Durham: Duke University Press; Alondra Nelson. 2016. The Social Life of DNA: Race, Reparations, and Reconciliation After the Genome. Boston, MA: Beacon Press; Amande M’Charek. 2020. “Tentacular Faces: Race and the Return of the Phenotype in Forensic Identification.” American Anthropologist doi:10.1111/aman.13385; Keith Wailoo, Alondra Nelson, and Catherine Lee, eds. 2012. New Brunswick: Rutgers University Press; Marta Maria Maldonado. 2009. “‘It is their nature to do menial labour’: the racialization of ‘Latino/a workers’ by agricultural employers.” Ethnic and Racial Studies 32 (6): 1017–1036; Wendy Hui Kyong Chun. 2013. “Race and/as Technology, or How to do Things to Race.” In Race after the Internet, 44–66. Routledge; Beth Coleman. 2009. “Race as technology.” Camera Obscura: Feminism, Culture, and Media Studies 24 (70): 177–207. Fields across the natural sciences have long employed the construct of race to define and differentiate among groups and individuals. In 2018, a group of 67 scientists, geneticists, and researchers jointly dissented from the continued use of race in scientific discourse as a way to define differences between humans, and called attention to the inherently political work of classification. As they wrote, “there is a difference between finding genetic differences between individuals and constructing genetic differences across groups by making conscious choices about which types of group matter for your purposes. These sorts of groups do not exist ‘in nature.’ They are made by human choice. This is not to say that such groups have no biological attributes in common. Rather, it is to say that the meaning and significance of the groups is produced through social interventions.” “How Not To Talk About Race And Genetics.” 2018. BuzzFeed News, March 30. (Accessed June 18, 2020).

20 For further reading on why “strength of correlations, evaluated retrospectively,” is a more accurate term for “prediction,” see Momin M. Malik. 2020. “A Hierarchy of Limitations in Machine Learning.” arXiv preprint arXiv:2002.05193; Daniel Gayo-Avello. 2012. “No, You Cannot Predict Elections with Twitter.” IEEE Internet Computing, November/December 2012; and Arvind Narayanan (2019), cited above.

21 These reasons for real-world performance being less than test set performance include overfitting to the test set, publication bias, and distribution shift.

22 Hinton (2016) follows the construction of Black criminality through the policies and biased statistical data that informed the Reagan administration’s War on Drugs and the Clinton administration’s War on Crime. She tracks how Black criminality, “when considered an objective truth and a statistically irrefutable fact…justified both structural and everyday racism. Taken to its extreme, these ideas sanctioned the lynching of black people in the southern states and the bombing of African American homes and institutions in the urban north before World War II…In the postwar period, social scientists increasingly rejected biological racism but created a new statistical discourse about black criminality that went on to have a far more direct impact on subsequent national policies and, eventually, serve as the intellectual foundation of mass incarceration” (19). Elizabeth Hinton, 2016. From the War on Poverty to the War on Crime. Cambridge, MA: Harvard University Press. See also: Charlton D. McIlwain. 2020. Black Software: The Internet & Racial Justice, from the AfroNet to Black Lives Matter. Oxford, UK: Oxford University Press. Data-gathering enterprises and research studies that uncritically incorporate criminal justice data into their analysis fuel stereotypes of African-Americans as “dangerous” or “risks to public safety,” the history (and violent consequences) of which is reviewed in footnotes 12 and 14. The continued propagation of these stereotypes via academic discourse continues to foment anti-Black violence at the hands of the police. It is within this historically embedded, sociocultural construction of Black criminality and Blackness as inherently threatening that police often find their justification for lethal uses of force. Today, Black Americans are twice as likely as white Americans to be murdered at the hands of police. (Julie Tate, Jennifer Jenkins, and Steven Rich. 2020. “Fatal Force: Police Shootings Database.” Washington Post, May 13). As of June 9, 2020, the Mapping Police Violence project found that 24% of the 1,098 people killed by the police in 2019 were Black, despite the fact that Black people make up only 13% of the population in the U.S.

23 Wendy Hui Kyong Chun (2020) points to the performativity of predictive ML more broadly: “predictions are correct because they program the future [based on the past].” She offers a way to reimagine their use to work against an unwanted future: “In contrast, consider global climate-change models — they too make predictions. They offer the most probable outcome based on past interactions. The point, however, isn’t to accept their predictions as truth but rather to work to make sure their predictions don’t come true. The idea is to show us the most likely future so we will create a different future.” Wendy Hui Kyong Chun and Jorge Cottemay. 2020. “Reimagining Networks: An Interview with Wendy Hui Kyong Chun.” The New Inquiry.

24 Barocas et al. (2019): “A 2016 paper analyzed a predictive policing algorithm by PredPol, one of the few to be published in a peer-reviewed journal. By applying it to data derived from Oakland police records, they found that Black people would be targeted for predictive policing of drug crimes at roughly twice the rate of whites, even though the two groups have roughly equal rates of drug use (Lum and Isaac 2016). Their simulation showed that this initial bias would be amplified by a feedback loop, with policing increasingly concentrated on targeted areas. This is despite the fact that the PredPol algorithm does not explicitly take demographics into account.” (Solon Barocas, Moritz Hardt and Arvind Narayanan. 2019. Fairness and Machine Learning; Kristian Lum and William Isaac. 2016. “To Predict and Serve?” Significance 13 (5): 14–19).

25 As Reginald Dwayne Betts (2015, 224) argues, “How does a system that critics, prisoners, and correctional officials all recognize as akin to torture remain intact today? The answer is simple: we justify prison policy based on our characterizations of those confined, not on any normative belief about what confinement in prison should look like.” Reginald Dwayne Betts. 2015. “Only Once I Thought About Suicide.” Yale LJF 125: 222. For more on the construction of deviant profiles as a means of justifying social exclusion, see: Sharon Dolovich. 2011. “Exclusion and control in the carceral state.” Berkeley Journal of Criminal Law 16: 259; David A. Harris. 2003. “The reality of racial disparity in criminal justice: The significance of data collection.” Law and Contemporary Problems 66 (3): 71–98; Michael J. Lynch. 2000. “The power of oppression: Understanding the history of criminology as a science of oppression.” Critical Criminology 9: 144–152.

26 This crisis is nothing new: Weizenbaum noted some of the epistemic biases of AI in 1985 (ben-Aaron 1985), and Agre discussed the limits of AI methods in 1997 (Agre 1997). More recently, Elish and boyd directly interrogated the practices and heritage of AI. Diana ben-Aaron. 1985. “Weizenbaum Examines Computers and Society.” The Tech, April 9. Philip E. Agre. 1997. “Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI.” In Bridging the Great Divide: Social Science, Technical Systems, and Cooperative Work, edited by Geof Bowker, Les Gasser, Leigh Star, and Bill Turner. Hillsdale, NJ: Erlbaum. M. C. Elish and danah boyd. 2018. “Situating Methods in the Magic of Big Data and AI.” Communication Monographs 85 (1): 57–80.

27 This is perhaps unsurprising, given the conditions of such interventions, as Audre Lorde (1984) points out: “What does it mean when the tools of a racist patriarchy are used to examine the fruits of that same patriarchy? It means that only the most narrow parameters of change are possible and allowable.” Audre Lorde. 1984. “The Master’s Tools Will Never Dismantle the Master’s House.” The specific challenges of “ethical” AI practice (due to a lack of operational infrastructure, poorly-defined and incomplete ethics codes, and no legal or business incentives, among others) have been well documented in the past several years. Michael A. Madaio, Luke Stark, Jennifer Wortman Vaughan, and Hanna Wallach. “Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI.” In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 2020. Luke Stark and Anna Lauren Hoffmann. 2019. “Data Is the New What? Popular Metaphors & Professional Ethics in Emerging Data Culture.” Journal of Cultural Analytics. Daniel Greene, Anna Lauren Hoffmann, and Luke Stark. 2019. “Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning.” In Proceedings of the 52nd Hawaii International Conference on System Sciences. Maui, HI. Thilo Hagendorff. 2019. “The Ethics of AI Ethics — An Evaluation of Guidelines.” arXiv preprint arXiv:1903.03425. Jess Whittlestone, Rune Nyrup, Anna Alexandrova, and Stephen Cave. 2019. “The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions.” In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. Anna Jobin, Marcello Ienca, and Effy Vayena. 2019. “The Global Landscape of AI Ethics Guidelines.” Nature Machine Intelligence, September 2.

28 Worthy of note are other discourses of “ethics” in AI, like transparency, accountability, and ethics (which, together with fairness, comprise the FATE framework), and trust. For discussion around fairness and bias, see Chelsea Barabas. 2019. “Beyond Bias: Reimagining the Terms of ‘Ethical AI’ in Criminal Law.” S.M. West, M. Whittaker, and K. Crawford. 2019. “Discriminating Systems: Gender, Race and Power in AI.” AI Now Institute. However, many scholars have noted the tendency of research and design within the Fairness, Accountability, Transparency and Ethics (FATE) streams of machine learning to over-simplify the “interlocking matrix” of data discrimination and algorithmic bias, which is always differentially (and disproportionately) experienced (Costanza-Chock, 2018). Others have argued that the focus on fairness through antidiscrimination discourse from law, policy and cognate fields over-emphasizes a liberal framework of rights, opportunities and material resources (Hoffmann, 2019: 908). Approaches that bring the lived experience of those who stand to be most impacted to bear on the design, development, audit, and oversight of such systems are urgently needed across tech ethics streams. As Joy Buolamwini notes, “Our individual encounters with bias embedded into coded systems — a phenomenon I call the ‘coded gaze’ — are only shadows of persistent problems with inclusion in tech and in machine learning.” Joy Buolamwini. 2016. “Unmasking Bias.” Medium, Dec 14. In order for “tech ethics” to move beyond simply mapping discrimination, it must contend with the power and politics of technological systems and institutions more broadly. Sonja Solomun. 2021. “Toward an Infrastructural Approach to Algorithmic Power.” In Elizabeth Judge, Sonja Solomun and Drew Bush, eds. Power and Justice: Cities, Citizens and Locational Technology. UBC Press. Forthcoming.

29 Greene, Hoffman, and Stark 2019. Chelsea Barabas, Colin Doyle, JB Rubinovitz, and Karthik Dinakar. 2020. “Studying up: reorienting the study of algorithmic fairness around issues of power”. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAccT ’20). Association for Computing Machinery, New York, NY, USA, 167–176. Sasha Costanza-Chock. 2018. “Design Justice: towards an intersectional feminist framework for design theory and practice”. Proceedings of the Design Research Society (2018).; Madeleine Clare Elish and danah boyd. 2018. “Situating methods in the magic of Big Data and AI”. Communication monographs 85, 1 (2018), 57–80.; Andrew D Selbst and Solon Barocas. 2018. “The intuitive appeal of explainable machines”. Fordham Law Review 87 (2018), 1085.

30 We borrow verbiage from set theory here to illustrate the deep complexity of such contexts, and to illustrate the peril of attempting to discretize this space.

31 In outlining parallels between archival work and data collection efforts for ML, Eun Seo Jo and Timnit Gebru (2020) bring forth a compelling interdisciplinary lens to the ML community, urging “that an interdisciplinary subfield should be formed focused on data gathering, sharing, annotation, ethics monitoring, and record-keeping processes.” Eun Seo Jo and Timnit Gebru. 2020. “Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning.” Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. For other great examples of this kind of interdisciplinary scholarship, see: Chelsea Barabas, Colin Doyle, JB Rubinovitz, and Karthik Dinakar. 2020. “Studying up: reorienting the study of algorithmic fairness around issues of power”. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAccT ’20). Association for Computing Machinery, New York, NY, USA, 167–176.

32 Several key organizations are leading the charge in forwarding reflexive, critical, justice-focused, and anti-racist computing. Examples include Data 4 Black Lives, which is committed to “using the datafication of our society to make bold demands for racial justice” and “building the leadership of scientists and activists and empowering them with the skills, tools and empathy to create a new blueprint for the future” (Yeshimabeit Milner. 2020. “For Black people, Minneapolis is a metaphor for our world.” Medium, May 29. Accessed June 4, 2020). Another example is Our Data Bodies, which is “based in marginalized neighborhoods in Charlotte, North Carolina, Detroit, Michigan, and Los Angeles, California,” and tracks “the ways [these] communities’ digital information is collected, stored, and shared by government and corporations…[working] with local communities, community organizations, and social support networks, [to] show how different data systems impact re-entry, fair housing, public assistance, and community development.” A third example is the Algorithmic Justice League, which combines “art, research, policy guidance and media advocacy” to build “a cultural movement towards equitable and accountable AI,” which includes examining “how AI systems are developed and to actively prevent the harmful use of AI systems” and “[preventing] AI from being used by those with power to increase their absolute level of control, particularly where it would automate long-standing patterns of injustice.”

33 Abstraction as epistemology in computer science was independently developed by Malazita and Resetar (2019) and Selbst et al. (2019). James W. Malazita and Korryn Resetar. 2019. “Infrastructures of Abstraction: How Computer Science Education Produces Anti-Political Subjects.” Digital Creativity 30 (4): 300–312. Andrew D. Selbst, danah boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. “Fairness and Abstraction in Sociotechnical Systems.” In Proceedings of the Conference on Fairness, Accountability, and Transparency — FAT* ’19, 59–68. Atlanta, GA, USA: ACM Press.

34 Sareeta Amrute (2019, 58) argues that the standard, procedural approach of conventional tech ethics “provide[s] little guidance on how to know what problems the technology embodies or how to imagine technologies that organise life otherwise, in part because it fails to address who should be asked when it comes to defining ethical dilemmas” and “sidesteps discussions about how such things as ‘worthy and practical knowledge’ are evaluated and who gets to make these valuations.” In so doing, it risks reinforcing “narrow definitions of who gets to make decisions about technologies and what counts as a technological problem.” Alternatively, postcolonial and decolonising feminist theory offer a framework of ethics based on relationality rather than evaluative check-lists, in a way that can “move the discussion of ethics from establishing decontextualized rules to developing practices to train sociotechnical systems — algorithms and their human makers — to being with the material and embodied situations in which these systems are entangled, which include from the start histories of race, gender, and dehumanisation” (ibid.; Sareeta Amrute. 2019. “Of Techno-Ethics and Techno-Affects.” Feminist Review 123 (1): 56–73). In other words, the conventional frame of “tech ethics” does not always acknowledge that the work of computer science is inherently political. As Ben Green (2019) states, “Whether or not the computer scientists behind [racist computational criminal prediction projects] recognize it, their decisions about what problems to work on, what data to use, and what solutions to propose involve normative stances that affect the distribution of power, status, and rights across society. They are, in other words, engaging in political activity. And although these efforts are intended to promote “social good,” that does not guarantee that everyone will consider such projects beneficial.” See also: Luke Stark. 2019. “Facial recognition is the plutonium of AI.” XRDS: Crossroads, The ACM Magazine for Students 25 (3): 50–55. For efforts that exemplify the relational approach to ethics that Amrute endorses and include the people most marginalized by technological interventions in the design process, see Sasha Costanza-Chock. 2020. Design Justice: Community-Led Practices to Build the Worlds We Need. Cambridge, MA: MIT Press. For an example of an alternative ethics based around relationality, see Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite. 2018. “Making kin with the machines.” Journal of Design and Science.

35 Power is here defined as the broader social, economic, and geopolitical conditions of any given technology’s development and use. As Ruha Benjamin (2016; 2019), Safiya Umoja Noble (2018), Wendy Hui Kyong Chun (2013; 2019), Taina Bucher (2018), and others have argued, algorithmic power is productive; it maintains and participates in making certain forms of knowledge and identity more visible than others.

36 Crime prediction technology is not simply a tool; it can never be divorced from the political context of its use. In the U.S., this context includes the striking racial dimension of the country’s mass incarceration and criminalization of racial or ethnic minorities. Writing in 2020, the acclaimed civil rights lawyer and legal scholar Michelle Alexander (2020, 29) observes that “the United States imprisons a larger percentage of its black population than South Africa did at the height of apartheid.” Michelle Alexander. 2020. The New Jim Crow: Mass Incarceration in the Age of Colorblindness. The New Press.

37 The Carceral Tech Resistance Network (2020) provides a useful set of guiding questions for evaluating new projects, procurements, and programs related to law enforcement reform. These questions are centered in an abolitionist understanding of the carceral state, which challenges the notion that researchers and private actors can “fix” American policing through technocratic solutions that are largely motivated by profit rather than by community safety and reparations for historical harms. For a comprehensive record of the risks that law enforcement face recognition poses to privacy, civil liberties, and civil rights, see Clare Garvie, Alvaro Bedoya, and Jonathan Frankle. 2016. “The Perpetual Line-Up: Unregulated Police Face Recognition in America.” Georgetown Law Center on Privacy & Technology.

38 Harrisburg University of Science and Technology. 2020. “HU facial recognition software predicts criminality.” Press release, May 5. See also Janus Rose. 2020. “University Deletes Press Release Promising ‘Bias-Free’ Criminal Detecting Algorithm.” Vice, May 6.

__________________________

Signatories

Audrey Beard, MS student of Computer Science, Rensselaer Polytechnic Institute

Sonja Solomun, Research Director, Centre for Media, Technology & Democracy, McGill University

Chelsea Barabas, PhD Candidate, Massachusetts Institute of Technology

Beth Semel, Postdoctoral Associate, Massachusetts Institute of Technology

Theodora Dryer, Faculty Researcher, New York University (AI Now Institute)

Meredith Whittaker, Research Professor, NYU; Co-founder, AI Now Institute, NYU

Cathy O’Neil, author of Weapons of Math Destruction and founder of ORCAA

Wendy Hui Kyong Chun, Simon Fraser University’s Canada 150 Research Chair in New Media

Chris Gilliard, Independent Privacy Researcher

Sarah Myers West, Postdoctoral Researcher, New York University (AI Now Institute)

Kelly Gates, Associate Professor, UC San Diego

Amba Kak, Director of Global Programs, New York University (AI Now Institute)

Momin M. Malik, Postdoctoral Data Science Fellow, Berkman Klein Center for Internet & Society at Harvard University

Taylor Owen, Beaverbrook Chair in Media, Ethics and Communications, Max Bell School of Public Policy, McGill

Ben Green, PhD Candidate, Harvard University

James W. Malazita, Assistant Professor, RPI

Carrie Anne Streeter, PhD Candidate, University of California, San Diego

Vincent Southerland, Executive Director, Center on Race, Inequality, and the Law at NYU Law

Meredith Broussard, Associate Professor, NYU; author of Artificial Unintelligence

Ethan Zuckerman, Associate Professor, University of Massachusetts, Amherst

Brook Hopkins, Executive Director, Criminal Justice Policy Program, Harvard Law School

Alex Hanna, Research Scientist, Google

Os Keyes, Ada Lovelace Fellow, University of Washington

Neda Atanasoski, Professor of Feminist Studies and Critical Race and Ethnic Studies, UC Santa Cruz

Jeffrey Selbin, Clinical Professor of Law, UC Berkeley School of Law

Marc S. Janowitz

Asher Waite-Jones, Berkeley Law Faculty

Rodrigo Ochigame, PhD Candidate, MIT

Morgan Klaus Scheuerman, PhD Student of Information Science, University of Colorado Boulder

Michelle Carney, Lecturer, Stanford d.school

Simone Wu, Software Engineer, Google

Madeleine Clare Elish, Program Director, Data & Society Research Institute

Robyn Caplan, Data & Society Research Institute

Sareeta Amrute, Associate Professor, Director of Research, and Principal Researcher, University of Washington and the Data & Society Research Institute

Michele Gilman, Venable Professor of Law, University of Baltimore School of Law

Dan Bouk, Associate Professor of History, Colgate University

Matt Goerzen, Researcher, Data & Society Research Institute

Eden Medina, Associate Professor of Science, Technology, and Society, MIT

Gillian Smith, Associate Professor of Computer Science, Worcester Polytechnic Institute

Will Hawkins, Research Associate, DeepMind

Vinodkumar Prabhakaran, Research Scientist, Google Research

Stefan Helmreich, Professor of Anthropology, MIT

Andrew Smart, Researcher, Google

Catherine D’Ignazio, Assistant Professor of Urban Science & Planning, MIT

Nick Seaver, Assistant Professor, Tufts University

Luke Stark, Postdoctoral Researcher, Microsoft Research

Dwai Banerjee, MIT

Verónica Uribe A., Communication Department & Science Studies Program, UC San Diego

Yelena Gluzman, PhD Candidate, UC San Diego

Emanuel Moss — PhD Candidate, Anthropology — CUNY Graduate Center

Matthew Vitz, Associate professor of History, UC-San Diego

Magdalena Donea, PhD Student, UC San Diego

Tawana Petty, Director, Data Justice Program at Detroit Community Technology Project

Cathy Gere, professor of history of science, UC San Diego

Elena Spitzer, Program Manager, Google

Veena Dubal, Professor of Law, University of California, Hastings

Jamie Morgenstern, Assistant Professor, University of Washington.

Seth J. Prins, PhD MPH, Assistant Professor of Epidemiology and Sociomedical Sciences, Columbia University

Cristopher Moore, Professor, Santa Fe Institute

Erin McElroy, NYU (AI Now Institute)

Jonathan Zong, PhD researcher, MIT CSAIL

Ben Hutchinson, Researcher, Google

danah boyd, Partner Researcher, Microsoft Research

Megan Graham, Clinical Supervising Attorney, UC Berkeley School of Law

Laurence Ralph, Professor of Anthropology, Princeton University

Lisa Nakamura, professor and director of digital studies, U. Michigan Ann Arbor

Rhea Rahman, Assistant Professor of Anthropology, Brooklyn College, CUNY

Ezra J. Teboul, Ph.D., Rensselaer Polytechnic Institute

William Clyde Partin, Research Analyst, Data & Society

Faithe Day, CLIR Postdoctoral Fellow in Data Curation, Purdue University

Prathima Muniyappa, Researcher, MIT Media Lab

Joy Buolamwini, Founder of the Algorithmic Justice League

Emily Boardman Ndulue, Researcher, Center for Civic Media, MIT Media Lab

Arwa Mboya

Lauren Klein, Associate Professor, Department of Quantitative Theory and Methods, Emory University

Ellen Long, MIT Media Lab

Neha Narula, Director, Digital Currency Initiative, MIT Media Lab

Ian Condry, Comparative Media Studies / Writing, MIT

Mary Anne Smart, PhD Student, University of California, San Diego

Stacy Godfreey-Igwe, Undergraduate Student, MIT

James Kilgore, Media Fellow, MediaJustice

Igor Rubinov, co-founder, Dovetail Labs

Alexa Hagerty, PhD. Associate Researcher, University of Cambridge

Dean Jansen, Executive Director, Participatory Culture Foundation

Matthew Battles

Cindy S. Bishop, MIT Civic Media

Manuel Sabin, PhD Candidate, UC Berkeley

Dr. Kerry McKenzie, University of California, San Diego

Deb Raji, AI Now Institute, New York University

Ola Kowalewski, Faculty, Singularity University

Samir Dayal, Professor, Bentley University.

Pat Pataranutaporn, MIT Media Lab

Chijindu Obiofuma, Legal Fellow, Criminal Justice Policy Program, Harvard Law School

Nicholas Krapf

Jack Reid, PhD Candidate, Massachusetts Institute of Technology

Matthew J. Bietz, Researcher, University of California, Irvine

Daniel Engelberg, MIT

Sahithi Madireddy, MIT

Jahrid Clyne, Undergraduate, MIT

Sharon Lin, Undergrad, MIT

Alia Husain Rizvi, MIT Department of Urban Studies and Planning Undergrad

Jonnie Penn (University of Cambridge)

Edward L. Platt, PhD Candidate, University of Michigan School of Information

Sabelo Mhlambi, Technology & Human Rights at Harvard University Fellow, Carr Center for Human Rights Policy

Colin Doyle, Climenko Fellow and Lecturer on Law, Harvard Law School

Mike Nellis, Emeritus Professor of Criminal Justice, University of Strathclyde, Glasgow

Azzo Seguin, MIT

Elizabeth Popkov, Massachusetts Institute of Technology

Zoe Gong, MIT

Andrea Arias, undergraduate student at MIT

Brandie Nonnecke, Director, CITRIS Policy Lab, UC Berkeley

Elena Sobrino, PhD Candidate, MIT

Michelle Kornberg, MIT

Camille Crittenden, Executive Director, Center for IT Research in the Interest of Society (CITRIS), University of California

Ali Alkhatib, Research Fellow, Center for Applied Data Ethics

Maia Campbell, Undergraduate, MIT

Nathaniel Poor, Underwood Institute

Morgan G. Ames, Assistant Adjunct professor, School of Information, U.C. Berkeley

Thomas Krendl Gilbert, PhD candidate, UC Berkeley

Samantha Robertson, PhD Student, UC Berkeley

Julia Irwin, PhD Student, UC Berkeley

Morgan Livingston, Undergraduate, University of California, Berkeley

Niloufar Salehi, Assistant Professor, UC, Berkeley

Jesse Livezey, Postdoctoral Researcher, Lawrence Berkeley National Lab

Dr. Jack Poulson, Executive Director, Tech Inquiry

Marc Faddoul

Pratik Sachdeva, Graduate Student, University of California, Berkeley

Devin Michelle Bunten, Assistant Professor of Urban Economics and Housing, MIT

Ryo Morimoto, Assistant Professor of Anthropology, Princeton University

Justin Steil, Associate Professor, MIT

Brian Bartz, MFA, UC Berkeley

Zeerak Waseem, Ph.D. Candidate, University of Sheffield

Katherine Mohr, MIT

Cherise McBride, Ph.D., University of California, Berkeley Graduate School of Education

Alex Saum, Associate Professor of New Media and Spanish, UC Berkeley

Jared Joseph, Ph.D. Candidate, University of California — Davis

Natalia Bilenko

Rachel Lawrence, PhD student, UC Berkeley

Ryan Ikeda, University of California, Berkeley

Madison Stoddard

Vidushi Marda, Senior Programme Officer, ARTICLE 19

Gabriel Pereira, PhD Fellow, Aarhus University (Denmark)

Victor Vicente-Palacios, PhD (Philips Healthcare)

Adam Greenfield, writer

Nicholas de Monchaux, Professor and Head of Architecture, MIT

Prof. Dr. Ines Weizman, Bauhaus-Universität Weimar

Hannah Sassaman, Policy Director, Movement Alliance Project

Eber Nolasco-Martinez

Shumi Bose, Senior Lecturer, University of the Arts London & Royal College of Art

Sasha Costanza-Chock, Associate Professor of Civic Media, MIT

Hemank Lamba, Postdoctoral Associate, Carnegie Mellon University

Alex Reinking, PhD student, UC Berkeley

Edward McFowland III, Assistant Professor, University of Minnesota

Sheila Baber, Undergraduate Student, MIT

Keller Easterling, Professor Yale University

Molly Jane Nicholas, Graduate Student Researcher, Electrical Engineering and Computer Science department, University of California, Berkeley

Jackie Wang, Incoming Assistant Professor of Culture and Media Studies, The New School

Jillian C. York

David Sengeh, Chief Innovation Officer, DSTI

Greta Byrum, Co-Director, Community Tech New York

Casey Hong, MIT

Lilly Irani, Associate Professor, UC San Diego

Khahlil Louisy, Harvard University

Pedro Reynolds-Cuéllar, Ph.D Student, MIT

Tony Platt, Distinguished Affiliated Scholar, Center for the Study of Law & Society, University of California, Berkeley

R Mishael Sedas, Research Assistant, University of California, Irvine

Alexander J. Root, Undergraduate, Massachusetts Institute of Technology

Marco Lu Nocito, Undergraduate Student, MIT

Kaili Glasser, Undergraduate, Massachusetts Institute of Technology

Emily Levenson, MIT Department of Urban Studies and Planning undergraduate

Kierstin Torres, Undergraduate Massachusetts Institute of Technology

Natasha Hirt, Massachusetts Institute of Technology School of Architecture and Planning

Roderic Crooks, Assistant Professor of Informatics, UC Irvine

Micol Seigel, Professor, Indiana University, Bloomington

Jon Turner, PhD Student, University of California, Berkeley

Gilbert Bernstein, Post-Doc, UC Berkeley

Meital Hoffman, Graduate Student, Massachusetts Institute of Technology

Mehtab Khan

Lydia T. Liu, PhD student, UC Berkeley

Renata Barreto, JD / PhD Candidate, Berkeley Law

Jacob Gaboury, Assistant Professor of Film and Media, University of California, Berkeley

Anne Jonas, PhD Candidate, UC Berkeley

Jola Idowu, Graduate Student, Massachusetts Institute of Technology

Kara Schechtman, MPhil Student in Philosophy at Trinity College Dublin

Jesse Anderson, B.S. Candidate Bioinformatics, University of Maryland

Richmond Wong, PhD Candidate, UC Berkeley

Matt Hodel, Massachusetts Institute of Technology

Roel Dobbe, Postdoctoral Researcher, NYU (AI Now Institute)

Alfonso Parra Rubio, MIT Center for Bits and Atoms

Rubez Chong, MIT Media Lab, Center for Civic Media

Niklas Mannhardt, undergraduate, MIT

Paul “Khalid” Alexander, Founder and President of Pillars of the Community

Safiya Umoja Noble, PhD, Associate Professor and Co-Director, UCLA Center for Critical Internet Inquiry

Astra Taylor, Co-Founder, The Debt Collective

Lucianne Walkowicz, Astronomer, The Adler Planetarium

Chanda Prescod-Weinstein, Assistant Professor of Physics and Core Faculty in Women’s and Gender Studies, University of New Hampshire

Renata Avila, Race and Technology Fellow, Stanford Institute for Human-Centered Artificial Intelligence / Center for Comparative Studies in Race and Ethnicity (CCSRE)

Dr Jess Wade, Imperial College London

Desmond Upton Patton, Columbia University

Sarah T. Roberts, Associate Professor and Co-Director, Center for Critical Internet Inquiry (C2i2), UCLA

Dvir Yogev, PhD student, UC Berkeley

Joseph Klett, PhD, research staff, Science History Institute

Joanne McNeil

Timnit Gebru, Senior Research Scientist, Google

Samy Bengio, Distinguished Scientist, Google Research, Brain Team

Dr Cory Doctorow, visiting professor of computer science, Open University UK; visiting professor practice and library science, University of North Carolina; research affiliate, MIT Media Lab

Wendy Liu, writer

Dr. Rumman Chowdhury, Global Lead, Responsible AI, Accenture

Greg M. Epstein, Humanist Chaplain at Harvard and MIT

Abeba Birhane, PhD candidate, University College Dublin

Jevan Hutson, Tech Policy Advocate / HCI Researcher, University of Washington School of Law

Caroline Sinders, Convocation Design + Research

Jonathan Sterne, Professor, Department of Art History and Communication Studies, McGill University.

Donna Lanclos, PhD, Anodyne Anthropology LLC

Francis Tseng, Lead Independent Researcher, Jain Family Institute

Chris Dulhanty, MASc Candidate, University of Waterloo

Saeid Tizpaz-Niari, PhD, CU Boulder

Dr. Frank Schimmel

Acacia Ackles, PhD Student, BEACON Center at Michigan State University

Dr Fintan Sheerin, MA MB BChir MRCP FRCR, Consultant Neuroradiologist, Oxford University Hospitals

Ryan Shaw, Associate Professor, School of Information and Library Science, University of North Carolina at Chapel Hill

Kushal Sood, Consultant Solicitor-Advocate, Instalaw Solicitors

Dr Shane A. McGarry, Data Scientist

S.T.O.P. — The Surveillance Technology Oversight Project

Martin Felipe Pastor Iglesias Rouco Conde De Cea

Giulio Valentino Dalla Riva, Lecturer in Data Science, University of Canterbury | Te Whare Wananga o Waitaha

Dr Hyo Yoon Kang, Senior Lecturer in Law, Kent Law School, United Kingdom

Min Baek, Founder, Philosophy of Computation at Berkeley

Zanele Munyikwa, PhD Candidate, MIT Sloan School of Management

Jeffrey Bigham, Associate Professor, Carnegie Mellon University

Celeste Kidd, Assistant Professor, UC Berkeley

Michael Nitabach, Professor, Yale School of Medicine

Romy Rasper, Science and Technology Studies (STS) Master’s Student, Technical University Munich (TUM), Munich Center for Technology in Society (MCTS)

Anhong Guo, PhD Candidate, Carnegie Mellon University

Stuart Watt, PhD, CTO at Turalt Inc

Tarek R. Besold, Chief Science Officer, Alpha Health, Telefonica Innovation Alpha

Olly R. Teregulova, PhD Candidate, Durham University

Joey Paulsen, Data Scientist, C.H. Robinson

Jutta Treviranus, Inclusive Design Research Centre, OCAD University

Michelle Galeas

Natasha Mhatre, Canada Research Chair in Invertebrate Neurobiology, Western University

Vladan Joler, Professor, University of Novi Sad; Co-founder, SHARE Foundation

Charlton McIlwain, Vice Provost and Professor of Media, Culture, and Communication, NYU

Rupesh Kumar Srivastava, Senior Research Scientist, NNAISENSE

Derek S Lundberg, Postdoc, Max Planck Institute for Developmental Biology

Manon Ironside, PhD candidate in Clinical Science, UC Berkeley

Dylan Mulvin, Assistant Professor, London School of Economics and Political Science

Cynthia Bennett, Ph.D. Carnegie Mellon University

Susie Swithers, Professor of Psychological Sciences, Purdue University

David Murakami Wood, Associate Professor, Surveillance Studies Centre, Queen’s University, Ontario.

Yoshua Bengio, full professor, scientific director of Mila, University of Montreal

Sasha Luccioni, Postdoctoral Researcher, Université de Montréal

David Rolnick, Postdoctoral Fellow, University of Pennsylvania

Doina Precup, McGill University, Mila & DeepMind Montreal

Alexia Jolicoeur-Martineau, PhD Student, Mila

Kushal Arora, Ph.D. Student, Mila and McGill University

JS, Software Engineer, Microsoft

Matthew Andres Moreno, Graduate Student, Michigan State University

Teffera Teffera, Graduate student, 3A Institute at ANU

Jules Gagnon-Marchand

Nantina Vgontzas, PhD Candidate, New York University

João Felipe Santos, PhD, NVIDIA

Emily M. Bender, Professor of Linguistics, University of Washington

Monique Desnoyers, MDes, Canadian federal government employee

Patrick R. Alba

Benson Mwangi, PhD, Assistant Professor, The University of Texas Health Science Center at Houston (UTHealth)

Steve Sloto, Data Engineer, Amazon Web Services

Jacob Eisenstein, Research Scientist, Google AI

Julia Hockenmaier, Associate Professor, Computer Science, University of Illinois

Siddharth Agarwal, KU Leuven MAI

Suresh Venkatasubramanian. Professor, University of Utah.

Hansenclever Bassani, Associate Professor, Universidade Federal de Pernambuco

Christopher J. Morten, Fellow and Supervising Attorney, NYU School of Law

Andrew Garrett, Professor of Linguistics, University of California, Berkeley

Nicolas Le Roux, senior research scientist, Canada CIFAR AI Chair, Google Brain

Simone Browne, Professor, University of Texas at Austin

Gonzalo G. De Polavieja, Champalimaud Research, Portugal

María Angel, PhD Student, University of Washington School of Law

Casey Fiesler, Information Science, University of Colorado Boulder

Jeremy Barnes — Postdoctoral Fellow — University of Oslo

Arturo Magidin, Associate Professor of Mathematics, UL Lafayette

Ushnish Sengupta, PhD Candidate, University of Toronto

Neil Thawani

Dr Kaye Rolls, University of Wollongong

Jennifer Aronson, University of Washington Law Graduate, Class of 2020

Sarah Tuttle, Assistant Professor of Astronomy, University of Washington, Seattle

Alan Lundgard, Teaching Fellow, MIT

Prasanna Parthasarathi, PhD Candidate, McGill University.

Gabriel Pettier — software engineer

Lisa Davidson, Professor of Linguistics, New York University

A Kinikar, PhD Student, ETHZ

Maggie Oates, PhD Student, Carnegie Mellon University

Nathan Oseroff-Spicer, researcher

Nikita Srivatsan, PhD Student, Carnegie Mellon University

Aasakiran Madamanchi, Lecturer, University of Michigan School of Information

Leif Nepstad

Carolyn Jane Anderson, PhD candidate, University of Massachusetts, Amherst

Natasha Warner, Professor, University of Arizona

Fabiola Henri

Simon Guiroy, PhD Student, Mila — Université de Montréal

Professor Kathy Bowrey, Faculty of Law, University of New South Wales, Australia.

Nazanin Sepahvand, McGill

Krystal Maughan, PhD student, University of Vermont

Ryan A. Cannistraci, Ph.D. Candidate, University of Tennessee

Ted Pedersen, Professor, Department of Computer Science, University of Minnesota, Duluth

Mazen Alotaibi

Dr. Sonia Balagopalan, Assistant Professor, School of Mathematical Sciences, Dublin City University

Alex Davis, Associate Professor, Carnegie Mellon University

Blakeley H. Payne, MIT Media Lab Alumna ‘20

Jeremy Kahn, Strategy Linguist, AR/VR — Facebook

Sara Beery, PhD Candidate, Caltech

Mutale Nkonde, fellow Berkman Klein Center of Internet & Society and Digital Civil Society Lab, Stanford University

Rahul Dandekar, Postdoctoral Fellow, IMSc, Chennai, India

Robert White, academic Senate Vice President and physics instructor, Butte College

R. Stuart Geiger, Assistant Professor, UC San Diego, Dept of Communication and Halıcıoğlu Data Science Institute

Catherine Holmes, J.D University of Washington School of Law alumni

Jaime Ullinger, Associate Professor of Anthropology, Quinnipiac University

Natalie Bernat, PhD Candidate, Caltech

Frédéric Bastien, NVIDIA, principal deep learning compiler engineer

massimo caccia, phd, MILA

Su Lin Blodgett, PhD Candidate, UMass Amherst

Bo Wang, Dr, University of Oxford

Brigitte Rooney, PhD, California Institute of Technology

Eldan Goldenberg, data analyst.

Román Corfas, Postdoc, Caltech

Claire Fontaine, Postdoctoral Researcher, University of Pennsylvania, School of Social Policy & Practice

Sanmi Koyejo, Assistant Professor, University of Illinois at Urbana-Champaign

Jorge Garcia Flores, Research Engineer, CNRS-Université Sorbonne Paris Nord

Clayton Lewis, Professor of Computer Science, University of Colorado Boulder

Hillary J. Haldane, PhD. Professor of Anthropology, Quinnipiac University

Eugenio Tisselli Vélez, PhD, Programa ACT — UNAM

Jack B. Muir, Graduate Student, Seismological Laboratory of the California Institute of Technology

Lauren M. Sardi, PhD, Associate Professor of Sociology, Quinnipiac University

Dr. William Janeway, University of Cambridge

Jon Pincus, A Change Is Coming

Jerome Hodges, Managing Director and Chief Research Officer, Jain Family Institute

William Wentworth

Brandeis Marshall, Professor, Spelman College

Jacinta González, Senior Campaign Organizer, Mijente

Haley Bauser, Applied Physics PhD Candidate, Caltech

Sarthak Jain, PhD student, Northeastern University

Qin Shi Huang, Masters Student, Tsinghua University

Paulus A J M de Wit MSc., PhD candidate, Universidade Federal de Santa Catarina

Hugo Larochelle, Senior Staff Research Scientist, Canada CIFAR AI Chair, Google Brain

Smarika Lulz, PhD Researcher, Dept. of Law, Humboldt University Berlin

Han Kim, PhD Student, Caltech

Nikolaus Howe, MSc Student, Mila + University of Montreal

Daniel Currie Hall, associate professor, Program in Linguistics, Saint Mary’s University, Halifax, Nova Scotia

Brian Nord, Associate Scientist, Fermi National Accelerator Laboratory (FNAL)

Margarita Boyarskaya, PhD Candidate, NYU Stern dept. of Technology, Operations, and Statistics

Rachel Thomas, Director of University of San Francisco Center for Applied Data Ethics

Philip Alston, Professor, New York University School of Law

Katie Albanese, Attorney and Physicist

Meng Cao, Master in Computer Science, McGill University/Mila

Chris Brew, Linguist, Facebook AR/VR

Wendy Norris, PhD Candidate, University of Colorado Boulder

Nicolas Gontier; PhD; Mila

Pierre-Luc Bacon, assistant professor, Mila University of Montreal

Kiran Samuel, PhD student, Columbia University

John Philip, Software Engineer, BuzzFeed

Heidi Harley, Professor, University of Arizona

Aronis Mariano, Machine learning engineer

Henry M. Clever, Ph.D. Candidate, Georgia Institute of Technology

Howard Huang, PhD Student, Mila & McGill University

Joshua Loftus, Assistant Professor of Statistics, New York University

Corey Lynch, Research Engineer, Google Brain

Ruth Rouvier, PhD Candidate, UC Berkeley

Jennifer Medina, Graduate Student, California State University Fullerton

Sadik Muzemil

Cynthia Matuszek, UMBC

Aparajithan Venkateswaran

Michael Katell, PhD Candidate, University of Washington

Matthew Hernandez, J.D. — University of Washington School of Law

Fiona J McEvoy

Nandita Sampath, MPP, UC Berkeley

Daniel Estrada, Lecturer, NJIT

Chung-chieh Shan, Associate Professor, Indiana University

Julian Stephens, Undergraduate, Georgia Institute of Technology

Prof. William J. Bowman, University of British Columbia

Manuela Girotti, researcher, Mila — Université de Montréal and Concordia University

Anisha Karnail, BS-MS Student, IISER Pune

Caitlin Green

Alvin Grissom II, Ph.D., Assistant Professor of Computer Science, Haverford College

Paul Crouther, MSc Student, Mila + University of Montreal

Hannah Mieczkowski, PhD candidate, Stanford University

Amy Fountain, PhD, University of Arizona

Emma Ward, PhD Candidate, Donders Institute for Brain, Cognition and Behaviour

Vincent Michalski, Ph.D. candidate, Université de Montréal/Mila

Daniel Johnson, Google Research

Milan Roberson, B.S. Caltech 2020

Leonardo Gonzalez

Anastasia Schaadhardt, PhD Student, University of Washington Information School

Tegan Maharaj, PhD student, Mila (Montreal Polytechnic)

Lev M Tsypin (PhD Candidate at the California Institute of Technology)

Brendan Thesingh

Dr Abigail Gilbert, Institute for the Future of Work

Michael Israel, Associate Professor of English Language, University of Maryland, College Park

Professor Gina Neff, Oxford Internet Institute

Charlie Negri

Dr T. Timan, TNO, The Netherlands

Neil Girdhar

Angela Cardoso, Philosophy, Science and Values master’s student, Universidad del País Vasco

Aparna Krishnan, BS in Computer Science Candidate, University of Texas at Austin

Olivia Guest, PhD

Jonathan Pekar, MS, UC San Diego

Achref Jaziri, Goethe University Frankfurt am Main

Sarah Lamm, Graduate Student, Kansas State University

Mariya I. Vasileva, PhD candidate, University of Illinois at Urbana-Champaign

David Stap

Dr. Albert Ali Salah, Utrecht University

Habtamu Bekele, Addis Abeba University

wuletawu.abera@cgiar.org

Reubs Walsh, Vrije Universiteit Amsterdam

Leeza Soulina, J.D., University of Washington School of Law

Prof. Dr. Anne Koelewijn, Machine Learning & Data Analytics Lab, FAU Erlangen-Nürnberg

Titouan Vayer, PhD Student, IRISA.

David Dao, PhD Student, ETH Zurich

Dr Andrew Princep, Keeley-Rutherford Junior Research Fellow, University of Oxford

Dr. Matan Yah Ben Zion, ESPCI, Paris

Lorijn Zaadnoordijk — Postdoctoral Researcher — Trinity College Dublin

Marlo Souza, PhD, Universidade Federal da Bahia

Kirstie Whitaker, Programme Lead for Tools, Practices and Systems, Alan Turing Institute

Mr Henry Wilde, Cardiff University

Dr. Christina Bergmann, Max Planck Institute for Psycholinguistics

Scott Robbins, PhD Researcher, Delft University of Technology

Jesse Benjamin, PhD Student, University of Twente

Esther Payne, privacy and community advocate, Librecast.

Geoffrey Jobson

Nick Bestor, Lecturer, University of Texas at Austin

Parker Rose

Ayodele Odubela, Data Scientist, SambaSafety

Imani, Student, Rotterdam University of Applied Sciences

Markus Andrezak, überproduct GmbH, Germany

Mário Platt

Molly Nagele, MIT 2020

Ana Valdivia

Dr Garfield Benjamin, Researcher, Solent University

Moe Bakheit; MEASC International

Kalim Ahmed, UCL

Prabhant Singh, Research engineer, TU Eindhoven

dr. Felienne Hermans, Leiden University

Bram van Es, PhD, University Medical Center Utrecht

Elida Maiques

Grady Booch, IBM Fellow, IBM Research

Jelle van Dijk, University of Twente

Pieter Sleutels, student of medical history, Radboud University Nijmegen

Dr. Bernat Guillén Pegueroles, Data Scientist, Google

Dr Cynthia C. S. Liem MMus, Delft University of Technology

Amrit Purshotam, Machine Learning Engineer, Takealot

Valérie Nowak, MCTS

Celia Cintas, PhD

Edwin Lopez, Software Engineer, independent contractor, Colombia/Spain

Laurie Winkless

Tristan Bergh, BSc (Eng), Data Science Lead, EOH Mthombo

Tessa Darbyshire, Scientific Editor, Patterns, Cell Press

Dr Beth Singler, Homerton College, University of Cambridge

Ethan P. White, Associate Professor, University of Florida

Marc Evers

Tracy Sweet, Associate Professor, UMD

Riccardo Angius, Machine Learning Research Scholar, Università degli Studi di Padova

Igor Brigadir, PhD, University College Dublin

Florian Dreher, Senior Data Scientist, Evolutionizer

Misgina Tsighe Hagos, Associate researcher, Ethiopian Biotechnology Institute

Kendra Clarke, VP Data Science and Product Development, sparks & honey

Michele Lewis, Assoc Professor, WSSU

Aman Tiwari, Engineer, Ctrl-Labs

Dr. M. Dingemanse, Radboud University, Nijmegen

Félix Harvey, Mila, Québec, Canada

Joshua Thorpe, PharmD

Andrew Janjigian

Associate Professor Jane Anderson, New York University

David Colby Reed, Researcher, MIT Media Lab

Yong Xin Hui, graduate student, University of Pittsburgh

Prof Andy Way, DCU

Federico Micheli, PhD student

Laurence Perreault Levasseur, Assistant Professor, Mila, Université de Montréal

Silvester Sabathiel, NTNU Trondheim

Adriana Heguy

Surya Karthik Mukkavilli, Project Scientist, University of California and affiliate — US DOE (LBNL/PNNL), McGill School of CS

Anya E Vostinar, PhD, Carleton College

Mélisande Teng, Mila

Sarah T. Hamid, Policing Tech Campaign Lead at Carceral Tech Resistance Network

Paul Zivich, PhD Candidate, University of North Carolina at Chapel Hill

Sean McDonald, Digital Public

Tom Johnson, VAP, Skidmore College

Yseult Héjja-Brichard, Phd student, Cerco

Scarlett Winters, Data Analyst

Dani Shuster

Jonathan Lebensold, Ph.D. Researcher, Mila

Milo Phillips-Brown, Postdoctoral Associate in the Ethics of Technology, MIT Philosophy

Ather Shabbar. Inclusive Designer, OCAD U.

Jacqueline D. Wernimont, Distinguished Chair of Digital Humanities and Social Engagement, Dartmouth

Dylan Phelan, Master’s Student, Tufts University.

Sarah Amandes

Tadeusz Zawidzki, Associate Professor and Chair of Philosophy, George Washington University

Daniel Brennan, PhD Student CUNY Neuroscience

David Cox, IBM Director, MIT-IBM Watson AI Lab, IBM Research

Anne Rochon Ford

Vivian Song, MIT alumnus

Anna Chung, Alum, MIT

Kade Crockford, Director, Technology for Liberty Program, ACLU of Massachusetts

Mara Mills, Co-Director, NYU Center for Disability Studies

Frederic Osterrath, Research software development manager, Mila

Aimi Hamraie, Vanderbilt University and Critical Design Lab

Kerime Alejo

Dr. Joy Lisi Rankin, Research Lead, AI Now Institute, NYU

Tom Mullaney, Professor of History, Stanford University

Blake Richards, Canada CIFAR AI Chair, Mila/McGill

Whitney Brim-DeForest, PhD, University of California

Dr Pierre Petitet, University of Oxford

Egemen Sert, Undergraduate Researcher, Middle East Technical University

Eray Ozkural, Celestial Intellect Cybernetics (Machine learning researcher)

Jianyuan Zhong

Elizabeth Shabbar, Concerned retiree

Shane O’Connell, PhD candidate at NUI Galway

Heidi Overhill, Professor, Sheridan College Institute of Technology and Advanced Learning

Jonathan Peck, M.Sc., Ghent University

Josh Glaser, Postdoctoral researcher, Columbia University

Vinod Subramanian, PhD student, Queen Mary University of London

Florian Golemo, Postdoc, Mila & ElementAI

Zoe Mitchell, M.L.I.S

Faye Ginsburg, Kriser Professor of Anthropology, NYU

tania

Mimi Onuoha, Visiting Assistant Arts Professor, NYU

Emily Cunningham, User Experience Designer, Amazon Employees for Climate Justice

Amar Ashar, Berkman Klein Center

Eric Lawton, Executive IT Architect, retired.

Kat R Matchett, BA, University of Windsor

Lora Johns

Phil Torres, Global Catastrophic Risk Scholar

Rebecca Smith

Miranda Wei, PhD Student, University of Washington

Meredith Ringel Morris, Sr. Principal Researcher and Research Manager, Microsoft Research

Jez Humble, Google and UC Berkeley

Sebastian Harko

Roger McNamee

Russell Richie

Molly Moroz

Ore Ogundipe, Software Engineer @ Microsoft

libi rose striegl, PhD, University of Colorado at Boulder

Samuel, Engineer

Dr. Leon Derczynski, Assistant Professor, IT University of Copenhagen

Douglas Blank, emeritus professor computer science, Bryn Mawr College

Jordi Vitria, Professor, Universitat de Barcelona

Waseem Abbas, PhD, University of Barcelona

Sarah Fox, Assistant Professor, Carnegie Mellon University

JS Tweedie

Angela Zito, Associate Professor, Anthropology, NYU

Maxime Gasse, PhD, Polytechnique Montréal

MATHANA, Tech Ethicist

Ari Morcos, Research Scientist, Facebook AI Research

Sarah Atchinson, 2020 J.D., University of Washington School of Law

Brian Rice

Michael Vogelsang: “This is clearly harmful work, and does not belong in any rigorous publication.”

Michael Brennan, CS/Math Student, PCC

Sandeep Silwal, Graduate Student, MIT

Jeremy Kun

Josue Ortega Caro, Machine Learning Graduate Student, Baylor College of Medicine

M. Stella T, Data Scientist, Musixmatch

marcantonio rendino

Nilesh M Negi, Systems/SW Engineer, Hewlett Packard Enterprise

Chenda Bunkasem

Emma Bluemke, PhD Student, University of Oxford

Elizabeth Anne Watkins, Columbia University, Data & Society Research Institute

Dr. Manuchehr Aminian, Cal Poly Pomona

Rayna Rapp, Professor, New York University

Nic Fishman, Stanford University

Sivaramakrishnan Subramanian, Data Scientist, AppOrchid Inc.

Dr. Abraham Hmiel

Cheryl Giraudy, Assoc. Prof. OCAD University

Clay ONeil, data scientist

Richard Coca, Stanford

Fred Myers, Professor of Anthropology, New York University

Jacob Sujin Kuppermann, M.S Student, Stanford University

Alex Rudnick, Google AI

Ashok Khosla, President, The Khosla Foundation

Maxime Lenormand, Junior Remote Sensing Engineer, and just a concerned machine learning practitioner + citizen

Michael Brent, Data Ethics Officer, Enigma Technologies

Raphael Labaca Castro

Sohini Upadhyay, Research Engineer, IBM Research AI

Mariah Peebles, Managing Director, AI Now Institute at NYU

Jacob Miller, Postdoctoral Fellow, Mila

Javier Iranzo-Sánchez, PhD Fellow, Universitat Politècnica de València

Daniel Valdenegro, University of Leeds

R. Luke DuBois, Associate Professor, NYU Tandon School of Engineering

Simone Montali, student at Università di Parma

Cora Went, PhD Candidate, Caltech

Francis Hunger, Bauhaus Universität Weimar, Germany, PhD candidate

Sayan Goswami, Senior Year Undergraduate, Jadavpur University, India.

Bhaskar Dutta

Julia Morcos

Susan Brown, PE, Unitarian Universalist

Antoine Prouvost

Phurushotham Shekar, Student, Rutgers University

Noelle Campbell-Smith, sr. Interaction designer, Ontario Digital Service

Christine Geeng, PhD Student, University of Washington

Anand D. Sarwate, Assistant Professor, Rutgers University

Boury Mbodj — McGill University Graduate

Lucy Suchman, Professor, Anthropology of Science and Technology, Lancaster University, UK

Leo Stewart, PhD Student, Information Science at University of Washington

Travis Chamberlain, MSc LSE, UCSD Philosophy and Rady School of Management

Karin Sattler

Gauthier Gidel, Ph.D. student, Mila and DIRO, Université de Montréal

Maria Y. Rodriguez PhD, MSW, Assistant Professor, University at Buffalo School of Social Work

Mikael Gramont, Software Engineer

Andy Birch, Product Designer

Lee Clement, PhD

Christopher Carlson, MBA Candidate 2021, USC Marshall School of Business

D Pham

Avital Oliver, Senior Research Engineer, Google Brain

Nikhil Thorat, Google, Software Engineer in Google Brain (signing as a citizen)

Eloy Geenjaar, BSc, Delft University of Technology

Dr Richard Hull, Programme Director for MA Social Entrepreneurship, Goldsmiths, University of London

Jack Gold, web developer

Trevor Ortega, Computer Vision Research Assistant, Western Washington University

Rajesh Sundaram, Machine Learning developer, Texas A&M University.

David Gil de Gómez Pérez, PhD Student, University of Eastern Finland

Nicolas Castellanos, Student, Dalhousie University

Dr. Stefan Fridriksson

Joseph Fridman

Matthew Scicluna. Researcher at the Montreal Heart Institute.

Sarah Ng, PhD student, University of California-Irvine

Megan — Engineering Student at Dalhousie University

Dr Alex Voss, Lecturer in Software Engineering, School of Computer Science, University of St Andrews

Michael D. Ekstrand, Assistant Professor, Boise State University

Alexis Hope, MIT Media Lab

Andrew Lampinen, Graduate Student, Stanford University

Sierra Sparks, Senior Year Electrical Engineering Student, Dalhousie University

Baasit Abubakr, Ph.D scholar, Jawaharlal Nehru University (JNU), Delhi

Ole Winther, University of Copenhagen, Professor

Shubham Sah

Dr. Ingmar Weber, Qatar Computing Research Institute, HBKU

Parker Koch, University of Michigan PhD Candidate

Kaelen Watters

Dustin Wright, PhD Student, University of Copenhagen

Carol Klassen

Stella Biderman, AI Researcher, Booz Allen Hamilton

Kate Leitch, PhD, Caltech

Pierre Gutierrez, ML engineer. Face is not linked to criminality except from existing bias. This cannot help justice. This has to be stopped.

Jeremy Pinto, Applied Research Scientist, Mila

Cole Gleason, Ph.D. Candidate, Carnegie Mellon University

Deepak George, Law Student

Gabriel Dulac-Arnold, Researcher, Google Research

Arun Sai Suggala, Carnegie Mellon University

Frank Blendinger, Senior Software Engineer, Method Park

David Samuel, bc., Charles University

Jessica Faure

Dr. Piotr Mirowski, Staff Research Scientist, DeepMind

Brandon Bohrer, PhD Candidate, Carnegie Mellon University

Sophia Sun, PhD student

Roman Werpachowski, Research Engineer, DeepMind

Paromita Shah, Executive Director, Just Futures Law

Madeline Smith

Dr. Andrew Quitmeyer

McKane Andrus, Research Associate, Partnership on AI

Bianka Hofmann, Head of Communications, Fraunhofer MEVIS

Budhaditya Deb, Principal Scientist, Microsoft

Cara Hall

Dr. Álvaro Cabana, Universidad de la República, Uruguay

Harshvardhan Uppaluru, PhD Student , George Washington University

Julia Kramer

Danny Spitzberg, UX researcher

Shamelle Richards, JD Candidate, Yale Law School

Maria De-Arteaga, Assistant Professor, UT Austin

Margaret Levi, Professor of Political Science, Stanford University

Rebecca Tingley, Electrical and Software Engineer, Dalhousie University

Clarissa Forbes, Postdoctoral Fellow, University of Arizona

Cybele Sack

John Rigoni

Riley Miladi, Machine Learning Research Engineer at Embark Studios

NM Amadeo, Software Engineer, Google

Barry Lathrop, Law Student, Temple University Beasley School of Law ’21

Sheshera Mysore, PhD Student, UMass Amherst

Lawrence Han, Undergraduate, Carnegie Mellon University

Edward Ongweso JR

Cathleen Fry, Agnew Postdoctoral Fellow, Los Alamos National Laboratory

Moira Weigel, Harvard Society of Fellows

Mia Judkins, Technical Program Manager

Michelle Kuchera, Assistant Professor of Physics, Davidson College

Elena Zheleva, University of Illinois at Chicago

Gabriel Tseng, Machine Learning Engineer

Sydney Corona

Christopher Tang, Ph. D. Candidate, California Institute of Technology

Sun-ha Hong, Assistant Professor, Simon Fraser University

libi rose striegl, PhD, Media Archaeology Lab, University of Colorado at Boulder

Professor Kavita Philip, History of Science and Technology, UC Irvine

Dr. Rachael Tatman, Senior Developer Advocate, Rasa Technologies

Anah Shabbar, Greenbelt Arts

Matthew L Leavitt, PhD, Facebook AI Research

Tianyu Zhang, Incoming MSc Student, Montreal Institute for Learning Algorithms (Mila)

Lorenzo Porcaro, PhD Student, Universitat Pompeu Fabra

Dr. Sergio Guadarrama, Staff Software Engineer, Google Research

Ana M. Tarano

Dr. Matthias Korn, Social Informatics, Germany

Molly Quinn, University College Dublin

Ben Verhoeven, PhD in Computational Linguistics, Antwerp, Belgium

Andrea Reyes Elizondo, researcher, Leiden University

Seda Gurses

Robert Elliott Smith PhD FRSA, University College London

Jeffrey Liu, PhD, MIT Lincoln Laboratory

Dr Harry Farmer, Lecturer in Psychology, University of Greenwich

Melissa McCradden, Bioethicist, University of Toronto

Dr. Dennis Müller, Computer Science, University Erlangen-Nürnberg

Ieke de Vries, Assistant Professor, Florida State University

Paul Feigelfeld

Michael Barany, University of Edinburgh

P M Krafft, Senior Research Fellow, Oxford Internet Institute

Samyu Comandur, Computer Science and Statistics Student, University of South Carolina

Dr. Mona Sloane, New York University

Dr Shauna Concannon, University of Cambridge

Henry Choi, Assistant Professor at Handong Global University

Masataka Nakajima

Jeremy Clark, Concordia University

Ken Norton, Product Manager, Google

Molly Des Jardin, data analyst (University of Pennsylvania)

Stacy Wood, Assistant Professor in the School of Computing and Information at the University of Pittsburgh

Graham Jones, Associate Professor of Anthropology, MIT

Tom Williams, Assistant Professor of Computer Science, Colorado School of Mines

Dr Catherine Flick, Reader in Computing and Social Responsibility, De Montfort University, UK

Kathleen Mills, PhD Candidate, Memorial Sloan Kettering

Gabriel Nicholas

Scott Fitzgerald, Industry Associate Professor, NYU Tandon

Megan Doerr, MS, LGC

Toby Walsh, Professor of AI, UNSW Sydney

Sarah Villeneuve

Gemma Galdon Clavell, PhD, Eticas Research and Consulting

Steven Umbrello, Managing Director, Institute for Ethics and Emerging Technologies

Jared M. Field, McKenzie Fellow, University of Melbourne

Dr Alison Powell, Associate Professor, London School of Economics and JUST AI Project, Ada Lovelace Institute

Sarada Mahesh

Laura Forlano, Associate Professor, Illinois Institute of Technology

Nicholas Kroeger, PhD Student, UF

Hala Iqbal, Postdoctoral Scientist, NYU Langone

Gretchen Krueger

Jonathan Soffer, Professor of History and Chair, Dept. of Technology, Culture, and Society, NYU Tandon School of Engineering

Dr Jared M. Field, McKenzie Fellow, University of Melbourne

Loubna Benabbou, Professor, University of Quebec

Jane Anderson, Associate Professor New York University

Ben Winters

Hee-seung Yun

Evan Selinger, Professor, Rochester Institute of Technology

Michael Zimmer, Associate Professor of Computer Science, Marquette University

Judeth Oden Choi, PhD student, HCII, Carnegie Mellon

Benjamin Prud’homme, Executive Director, AI For Humanity, Mila

Tiffany Vazquez, Editor, GIPHY

Florian Bordes, PhD Candidate, Université de Montréal

Andrew Williams

Shea Swauger, Senior Instructor, University of Colorado Denver

Kay Kirkpatrick, Associate Professor, University of Illinois

TJ Kolleh

Jason Clarke

Jacinthe Mongrain, IT Validation Specialist at Optel, supplier of traceability systems

Heng Ji, Professor, University of Illinois at Urbana-Champaign

Kaila Colyott, PhD, University of Kansas

Mahta Ramezanian, Research Assistant, Mila

Mutale Nkonde, AI for the People

Dr Kristopher Wilson, Faculty of Law, University of Technology Sydney

Sinan Ozdemir, Director of Data Science at Directly

Ms Uma Zalakain, University of Glasgow

Teemu Roos, PhD, Professor of Computer Science, University of Helsinki

Joanne Boisson

Andy Stuhl, PhD Student, McGill University

Emma Manning, PhD Candidate, Georgetown University

Kate Crawford, NYU

L Jean Camp, Professor, Indiana University

Andrew M.C. Dawes, Professor of Physics, Pacific University

Mehdi Merai, CEO at Dataperformers

Michael G. Lerner, Associate Professor of Physics and Astronomy, Earlham College

Tom Price (PhD, Theory of Condensed Matter Group, Cambridge); Software Engineer

Thomas Nigl, Graduate Student, Penn State University, USA

Michael W. Busch, PhD, SETI Institute

Ömer Sümer, PhD Candidate, University of Tübingen

Jiri Hron, PhD candidate, University of Cambridge

Jean Gallagher, Professor, NYU Tandon School of Engineering

Myrthe Reuver, MSc Student Cognitive Science & AI

Konstantinos Drossos, Tampere University

Danya Glabau, Industry Assistant Professor, NYU Tandon School of Engineering

Linnet Taylor, Tilburg University, NL

Graham Pash, Graduate Student, University of Texas at Austin

Mehak Sawhney, PhD student, McGill University

Bernard Geoghegan, Senior Lecturer in the History and Theory of Digital Media, King’s College London

Sriram Mohan, PhD candidate, Communication and Media, University of Michigan

Keith O’Hara, Associate Professor of Computer Science, Bard College

Dorothy Roberts, University Professor of Africana Studies, Law & Sociology, University of Pennsylvania

Felix Stalder, Professor for Digital Culture, Zurich University of the Arts

Ana María Ochoa, Professor, Department of Music, Columbia University

Demetrius Davis, PhD candidate, Virginia Tech

Erik Wijmans, PhD Student, Georgia Institute of Technology

Nanna Bonde Thylstrup, Associate Professor , Copenhagen Business School

Christopher Coenen, Karlsruhe Institute of Technology

Dr. Marcel Bollmann, Researcher and MSCA Fellow, University of Copenhagen

Ky Grace Brooks, PhD Candidate, McGill University

Pierre-Alexandre Fournier, CEO Hexoskin

Arlene Ducao, Instructor, NYU

Alexander Trott — Senior Research Scientist — Salesforce Research

Kyle DeCoste, PhD Candidate, Columbia University

Álvaro Peris, PhD

Rob Arbon, Bristol University

Diana M. Rodriguez, doctorate student, Columbia University

Lauren Alexandra, Artificial Intelligence and Machine Learning Student at Colorado State University

Brendan McQuade, Assistant Professor, Criminology Department, University of Southern Maine

Ahmed Ansari, Industry Asst. Professor, NYU Tandon

Kendra Albert, Clinical Instructor, Harvard Law School

Gavin Steingo, Associate Professor, Princeton University

Subho Majumdar, Senior Inventive Scientist, AT&T Labs Research

Steve High

Lauren Hay, Graduate Student, University at Buffalo (SUNY)

Burcu Baykurt, University of Massachusetts Amherst

Jason Edward Lewis, University Research Chair, Concordia University

Chris Peterson, MIT

Micalyn Struble, Undergraduate student of Computer Science, Duke University

Brendan Chambers, Applied Research Scientist, QuillBot

Kendra DelaCadena

David Murphy, Postdoctoral Associate, MIT CTP

Emiliano Falcon-Morano, Policy Counsel, ACLU of Massachusetts

Samuel R. Bowman, Assistant Professor of Linguistics, Data Science, and Computer Science, NYU

Laurent Najman, professor in machine learning, University Gustave Eiffel, France

James Anthony, Software Developer, BASc Computer Science (McMaster University)

Sam DiBella

Daniel Greene, Assistant Professor of Information Studies, U Maryland

Andrew Ó Baoill, Lecturer, School of English and Creative Arts, National University of Ireland Galway

Laura Mamo, Professor

Rahul Bhargava, Research Scientist, MIT Center for Civic Media

Prof Chris Lintott, University of Oxford

Julian Posada, PhD Student, University of Toronto, Canada

Dan Saint-Pierre

William R. Frey, doctoral student, Columbia University

Lassana Magassa

Jordan Jackson

Christopher Marks, Data Scientist

Natália da Silva Perez, Centre for Privacy Studies, University of Copenhagen

Christian Hudon

Maria Sobrino, undergraduate student, University of Michigan

Derek Arnold

Marc-Anthony Brooks Snead II

Jay D. Aronson, Director, Center for Human Rights Science, Carnegie Mellon University

Barbara E. Bullock, Professor, University of Texas

Lauren Chambers, Technology Fellow, ACLU of Massachusetts

Robin L. Zebrowski, Assoc. Professor of Cognitive Science, Beloit College

Indrapramit Das, author

Dr. Kaitlin Stack Whitney, Assistant Professor, Science, Technology & Society, RIT

Sandeep Mertia, PhD Candidate, New York University

Jacob Ratliff, UX Researcher/Designer in AI

Grace Abuhamad

Venkatesh Srinivas, Software Engineer, Google

CJ Valasek, Lecturer, University of California San Diego

Ana Brandusescu, Professor of Practice, McGill University

David R. Ambaras, Professor of History, North Carolina State University

Jeremy Crampton, Professor, Newcastle University

Michael Perry

Rhema Linder, Ph.D., Human-Computer Interaction

Dr DL Clements, Imperial College London

Hal Daumé III, Professor, University of Maryland and Sr Principal Researcher, Microsoft Research

Jason Bowen, Columbia University

Sarah Semel

Dr. Kevin M. Hines — Cornell University

Shea Brown, Lecturer, Department of Physics & Astronomy, University of Iowa

Tim Schwartz, Los Angeles Cryptoparty

Denise McLane-Davison, Assoc. Professor of Social Work, Morgan State University

Tim A. Miller, SVP Engineering, Quizlet

Professor Tom Buchanan

Travis Hall, PhD

Mar Hicks, Illinois Institute of Technology

Sarah Appleby, PhD student, University of Edinburgh

Madeleine Maxwell, Researcher, Independent

Cindy Lin Kaiying, PhD Candidate, University of Michigan, Ann Arbor

Rachel Bergmann, Social Media Collective, Microsoft Research New England

Seth Erickson, Assistant Librarian, Penn State

Charles Logan, Educational Technologist, The Ohio State University

Sophia Searcy, Executive Director of Product, Metis

Harini Suresh

Louis Gomez, Stevens Institute of Technology

Sreeja Kondeti, Health Policy MPH Candidate, Yale University

Carlos Scheidegger, University of Arizona

Teresa Heffernan, Professor, Saint Mary’s University

Dorothea Salo, Distinguished Faculty Associate, UW-Madison Information School

Emma McKay, York University

Aakash Gautam, PhD candidate, Virginia Tech

Amanda Cercas Curry, PhD Student, Heriot-Watt University

sava saheli singh, Postdoctoral Fellow, Department of Criminology, University of Ottawa

Nikki L. Stevens, PhD Candidate, Arizona State University

Dana Wheeler, engineer

Vince Fong

Dr. Elizabeth Henaff, Assistant Professor, NYU Tandon School of Engineering

Gabriel Grill, PhD Student, University of Michigan — School of Information

Artie Vierkant

Crystal Lee, PhD candidate, MIT

Nina Lutz, Graduate Student, MIT

Ashley Shew, Assistant Professor, STS, Virginia Tech

Dr. Elinor Carmi, Research Associate at Liverpool University, UK

Nicholas Selby, Graduate Student, MIT

Steven Clark, Online Course Facilitator, University of South Australia

Alexander Criswell, PhD Student in Astrophysics, University of Minnesota

Dan Lockton, Imaginaries Lab, Carnegie Mellon University

Jessica F Cantlon, Associate Professor, Carnegie Mellon University

Matt Nish-Lapidus, University of Toronto

David C. Sorge, Doctoral Candidate in Sociology, University of Pennsylvania

Amandalynne Paullada, PhD Candidate, University of Washington

Emma Strubell, Assistant Professor, Carnegie Mellon University

Nate Beard, PhD Student, College of Information Studies, University of Maryland

Daniel Nkemelu

Nadine M. Finigan-Carr, PhD; Research Associate Professor; University of Maryland, Baltimore

Gabriel Teninbaum, Assistant Dean of Innovation, Strategic Initiatives, & Distance Education; & Professor of Legal Writing; Suffolk University Law School

Aashka Dave, Researcher, MIT Center for Civic Media

Ken Holstein, Assistant Professor, School of Computer Science, Carnegie Mellon University

Andrew McStay, Prof., Bangor University

Aldo Ahkin Barrera-Machuca, MSc, Simon Fraser University

Dr Michael Dempster, MNeuro, MSci, PhD

Mohamed Sofiene Kadri

Benjamin Winokur, York University

Matt May, Head of Inclusive Design, Adobe

Dr. Catherine Stinson

Jesse Josua Benjamin, PhD Student, University of Twente

Trent Fulton

Ludovic Righetti, Associate Professor, New York University

Alex Bigelow, Data Science Fellow / Postdoctoral Researcher, University of Arizona

Christian Szegedy, AI researcher

Nancy Baym, Sr Principal Researcher, Microsoft

Tim Highfield, Lecturer in Digital Media & Society, University of Sheffield

Gabriella Coleman, Wolfe Chair in Scientific and Technological Literacy McGill University

Fernando Diaz, Senior Principal Researcher and Assistant Managing Director, Microsoft Research Montréal

Petros Terzis, PhD student, University of Winchester

Oskar Austegard

Benjamin Wolf, Senior Software Engineer, Google

Aaron Clauset, Associate Professor of Computer Science, University of Colorado Boulder

Erik Harpstead, Systems Scientist, Carnegie Mellon University

Laurent Dinh, Research Scientist, Google Brain

Tim O’Brien, GM AI Programs, Microsoft

Ananya CHAKRAVARTI, Associate Professor, Georgetown University

Sajid Ali, PhD Candidate, Northwestern Univ.

Hilde Weerts, Research Engineer, Eindhoven University of Technology

Manasvin Rajagopalan, PhD Student in Comparative Literature, UC Davis

Damien Patrick Williams, PhD Candidate in Science, Technology, and Society at Virginia Tech

Georgios Bakirtzis, PhD Candidate, University of Virginia

Devon Persing, Accessibility Specialist, MS in Information

Lindsay Weinberg, Clinical Assistant Professor, Purdue University

Kathryn Kosmides, CEO, Garbo

Ellen Wondra

Pauline van Mourik Broekman, PhD student, RCA, London and co-editor, Mute magazine

Frank Edwards, Rutgers University

William Pettersson, Research Associate, University of Glasgow

Aviel Roshwald, Professor of History, Georgetown University

Neal Patwari, Professor of Electrical Engineering and Computer Science

Dennis Boella North (Retired Public Sector IT Consultant)

A Knuppel, Visiting Assistant Professor, Lawrence University

Mounaim Zaryouhi, Software engineer

Stephen P. Smith, PhD, Michigan State University, Retired,

Gary Weissman, MD, MSHP, Assistant Professor of Medicine, University of Pennsylvania Perelman School of Medicine

AJ Alvero, PhD Candidate, Stanford University Graduate School of Education

B. D. R. Bogart, PhD

Robert Soden, Columbia University

Jacob John Jeevan, CS MS

Arthur Borem, Software Engineer

Julian Michael, Phd Candidate, University of Washington

David A. Banks, co-chair, Theorizing the Web

Paul Dourish, Professor of Informatics, University of California Irvine

Annie Zhang, Software Engineer

Paul Roquet, Associate Professor, Comparative Media Studies, MIT

Anjana Vakil, Software Engineer & Educator

Ashleigh Thomas, MS student in Genetics and Genomics UC Davis; BS Computer Science Johns Hopkins University

Yoehan Oh, PhD student, Department of Science and Technology Studies, Rensselaer Polytechnic Institute (RPI)

Kate Kadash-Edmondson, PhD

H. Malik

Rachel Douglas-Jones, Associate Professor of Anthropological Approaches to Data and Infrastructure, IT University of Copenhagen Denmark

Jentery Sayers, Associate Professor, University of Victoria

Sarah Deutsch, Attorney and Board Member, Electronic Frontier Foundation

Jeannette Bohg, Assistant Professor, Stanford

Aaron Harrison

Alexis Logsdon, Digital Scholarship Librarian, University of Minnesota Libraries

Liz B. Marquis, PhD Candidate, University of Michigan School of Information

Stephanie T. Douglas, Astronomer

Schalk Steyn

Lucia Donatelli, Postdoctoral Researcher, Saarland University

I. Elizabeth Kumar, PhD Student, University of Utah School of Computing

Oz Amram, PhD Student, Johns Hopkins University, Physics Department

Jackson Wright

Adji Bousso Dieng

David Cecchetto, Associate Professor, York University (Canada)

Mahdi Cheraghchi, Assistant Professor, University of Michigan

Zachary Furste, Postdoctoral Fellow in Software Curation, Carnegie Mellon University

Miss Rachel Silvester Williams (University of Glasgow)

Britt S. Paris — Assistant Professor of Library and Information Science, Rutgers University

Melinda Sebastian, Postdoctoral Fellow, Syracuse University

Chiara Addis, PhD candidate, Salford Business School

Kai-Hsin Hung, HEC Montreal

Sefa Ozalp, Lead Data Science Researcher, HateLab, Cardiff University

Rushi Shah, CS PhD student at Princeton’s CITP, JD student at Harvard Law School

Elena Razlogova, Associate Professor of History, Concordia University, Montreal

Dr Andrew Wood

Balazs Bodo, social scientist, University of Amsterdam

Dan Rubins, CEO, Legal Robot

Krishna Venkatasubramanian, Assistant Professor, University of Rhode Island

Phillip Kuznetsov, Software Engineer, Pixie Labs

Priya C. Kumar, PhD Candidate in Information Studies, University of Maryland-College Park

Amy Isard, University of Hamburg

Jacob Wolf

Brenda Ruch

Alexander Ronald Altman, Graduate Student, University of Illinois at Urbana-Champaign

Taylor C. Nelms, Filene Research Institute

Binh Phan

David Eadington

Josh Guberman, PhD Student at the University of Michigan School of Information

Lynne Goerner, Software Engineer, Google

Lucy Archer, Technologist, Cambridge Consultants

Sandra Milosevic

Miguel Rodriguez Basalo, psychologist, graduate of Universidad Complutense de Madrid

Julia Mendelsohn, PhD Student, University of Michigan

Samuel Klinkenborg

Matthew Guzdial, Assistant Professor, Computing Science Department of the University of Alberta

Takeshi Takahashi

Dr. Marc Schulder, Hamburg University

Julien Girard-Satabin, PhD student in Artificial Intelligence Safety, CEA LIST/ INRIA

Alicia Jarvis

Laura South, PhD Student, Northeastern University

Khalid El-Arini, Facebook

Kai Caspar — Biologist, PhD student at University of Duisburg-Essen, Germany

Hana Marčeteić

Yen-Chia Hsu, Project Scientist, Carnegie Mellon University

Seny Kamara, Associate Professor, Brown University

Geoffrey Lehr

Mike Ananny, Associate Professor, Annenberg School for Communication & Journalism, University of Southern California

Gemma Milne, journalist & author of Smoke & Mirrors: How Hype Obscures the Future and How to See Past it

Yuriko Furuhata, Associate Professor, McGill University

Davide Nunes, University of Lisbon, Portugal

Adam Summerville, Assistant Professor, Cal Poly Pomona

Kathryn Spiers, Committee on Liberatory Information Technology

Asia Minor

Cynthia Taylor, Assistant Professor, Oberlin College

Michael Champlin, Experience Designer

Dr Nick Rush-Cooper, Newcastle University (UK)

Alexander D’Amour, Research Scientist, Google

Roland Crosby

Nathan Jurgenson, Co-Founder and Co-Chair of Theorizing the Web

Dr. Luke Dicken, Director of Applied AI, Zynga Inc.

Vivek Gupta, Graduate Student, University of Utah

Kathleen McDermott, Industry Assistant Professor, NYU Tandon School of Engineering

Myle Ott, AI Researcher, Facebook AI Research

Andrew Drozdov (PhD Student, University of Massachusetts Amherst)

Kyle Montague, PhD Northumbria University

Karan Balaji

Benjamin VanderSloot, Assistant Professor, University of Detroit Mercy

Houda Lamqaddam, PhD student, KU Leuven

Matthew Puentes, Graduate student, WPI

Aaron Welsher, Developer

Colin Fredericks, Sr. Project Lead, Harvard University

Oz Blake

Ani Nenkova, Associate Professor, University of Pennsylvania

Jordan Harrod, PhD Student, Massachusetts Institute of Technology

Ian Goldberg, Canada Research Chair in Privacy Enhancing Technologies, University of Waterloo

Seth Johnson

Max Maass, PhD Student, TU Darmstadt

Rey Arndt

Patrick Thomson, Senior Research Engineer, GitHub, Inc.

Sebastián Herrera Gaitán. Sociologist, Universidad Nacional de Colombia.

Jiri Zlatuska, Faculty of Informatics, Masaryk University, Brno, Czech Republic

August Taylor, Computer Science Student, University of Oxford

Shayla Nikzad, PhD Candidate, Stanford Chemical Engineering

Alexandra Schofield, Assistant Professor, Harvey Mudd College

Becca Ricks, Researcher, Mozilla Foundation

Dr Shawn Graham, Professor of Digital Humanities, Carleton University

Chinasa T. Okolo, Cornell University

Volha Litvinets, PhD student, Sorbonne University

Dr. Jonathan May, Research Assistant Professor, University of Southern California Information Sciences Institute

Elias B. Khalil, Assistant Professor, University of Toronto

Lundy Braun

Elizabeth Resor, PhD Student UC Berkeley

John Phillpot, Site Reliability Engineer, Google

Jake Strang, Scientific Software Engineer, JHU/APL

Mario Pecheny, Professor of Sociology of Health, University of Buenos Aires & CONICET

René Mahieu, PhD candidate fundamental rights in the digital age, Vrije Universiteit Brussel

Juliane Jarke, PhD, University of Bremen

Carolyn Ten Holter, researcher, University of Oxford

Samantha Kleinberg, Associate Professor, Stevens Institute of Technology

Lily Xu, PhD Candidate, Harvard University

Omiros Pantazis, machine learning PhD student at UCL

Gillian R Hayes, Kleist Professor of Informatics, UC Irvine

Martim Brandao, Post-Doctoral Researcher, King’s College London

Sara Woodbury, PhD student, William & Mary

Ryan McMahon, Data Scientist, Google

Tim Vaughan, Sr. Software Engineering Manager, Microsoft

Landon Morrison, College Fellow, Harvard University

Michael Madaio, PhD Candidate, Carnegie Mellon University

Candace Ross, PhD Candidate, MIT CSAIL

Joseph Renner

Brittany Wills, Software Engineer, Twitter Inc.

Hamid Eghbal-zadeh, PhD, Johannes Kepler University

Edwin Brady, Lecturer, University of St Andrews

Nicole Grove, Associate Professor, University of Hawaii at Manoa

Gwyn Evans

Melanie Mitchell, Davis Professor, Santa Fe Institute

Haarman Nicolas, wine advisor

Erwan Moreau (postdoc), Trinity College Dublin

Mr. B. S. Herring

Neil Ryan, University of Washington

Roberto Iriondo, Carnegie Mellon University

Colin Bayer; cofounder, anti software software club LLC; former staff, University of Washington Allen School of Computer Science and Engineering

Dr. Mihaela Vorvoreanu, Microsoft

Mikaela Meyer, PhD Student in Statistics and Public Policy, Carnegie Mellon University

Andrew S. Hoffman PhD, Postdoctoral Researcher, Radboud University, the Netherlands

Shivangi Narayan, PhD Candidate, Centre For Study For Social Systems, Jawaharlal Nehru University, New Delhi, India

Cameron Raymond, University of Oxford

Jenna Burrell, Associate Professor, UC-Berkeley

Andrew Whalen, software QA tester

Yong Xin Hui, graduate student, University of Pittsburgh

Mannat Kaur, PhD candidate, TU Delft

William Merrill, Predoctoral Young Investigator, Allen Institute for AI

Jeremy Hyrkas, PhD student, UCSD

Bhaskar Mitra, Principal Applied Scientist, Microsoft

Jasmine Noonan

Jaime Alvarez, Undergraduate Student, University of Texas Rio Grande Valley

Benedikt Boecking, PhD Student, Carnegie Mellon University

Eren Alay — Research Assistant — Stevens Institute of Technology

Sarita Schoenebeck, University of Michigan

Rebecca Nutter

Wojciech Nawrocki, postdoctoral fellow, VU Amsterdam

Prerana Sunkara, Activist, Gen Z

Kevin Lobo

Mike Marcin, Lead Programmer, Bethesda

Stephanie Dick, History and Sociology of Science, University of Pennsylvania

Joëlle Skaf, Staff Software Engineer, Google

Kym Harbin

Dr Sam Ladkin, University of Sussex

Orestis Papakyriakopoulos, Technical University of Munich

Jadynn Evans

Lizzie Grosso

Susan Mazur Stommen, Principal, Indicia Consulting

Kristina M. Sawyer, PhD Candidate, UIC

Enda Brophy, School of Communication, Simon Fraser University

Owen Leddy, Ph.D. Student, MIT

Caitlin Doughty, PhD Candidate, NMSU

Xiaowei Wang, UC Berkeley, Logic Magazine

Grace, a student

Lauren Miller — Human — Earth

James Alexander Feldman-Crough, Software Engineer

Julia Bilby, student

Jack Hester, Brown University

Karrie Jackson

Marco

Erik Thomas-Hommer, SDE, Amazon

B. B. Schieffelin, Professor, New York University

Tekela Robinson

Tara Maldonado

Brandon Osborn, Doctoral Candidate, UC Irvine

Tyler Lian

Shannon Vallor, Baillie Gifford Chair in Ethics of Data and AI, University of Edinburgh

Manny Patole, Project Manager, NYU

Cindy Wolff

Benjamin Gorman, PhD, Bournemouth University

Johanna Strömberg, Uppsala University

Manlin Yao, User Researcher

Gemma Auxiliadora Morillas Cerezo

Nate Ballarino, Entrepreneur

Mariel Deluna

Garreth Tigwell, Assistant Professor, Rochester Institute of Technology

Clare Kim, Postdoctoral Associate, Washington University in St. Louis

Christina Schoux Casey, Associate Professor, Aalborg University

Elizabeth Donger, JD Candidate, NYU School of Law

Mayra Duran, Industrial Engineer

Shobita Parthasarathy, Professor, University of Michigan

Jordan Holt, Director, LazLabs

Stuart Wilson, Digital Engineer, Best Buy

Thomas G. Dietterich, Professor (Emeritus), Oregon State University

Krzysztof Chwała, Yale University

Devin Kennedy, PhD // New-York Historical Society

Lauren Morvin

Vicky Zeamer, Design Researcher at IDEO

Emilija Gagrčin, PhD candidate, Free University of Berlin

Varoon Mathur, Technology Fellow, AI Now Institute

Héctor Beltrán, Assistant Professor, MIT

Brienna Rodgers

Yvonne Lin, PhD Student, University of California, Berkeley

Danielle Medellin, Data Scientist

Cedric Whitney, incoming PhD, University of California - Berkeley

Katherine Wolf, Doctoral Student, University of California at Berkeley

Kat Sullivan, Visiting Industry Assistant Professor, Integrated Digital Media, NYU

Anis Rahman, Department of Communication, University of Washington, Seattle

Richard Tomsett, IBM Research Europe

Clare DuVal, Data Analytics Intern

Garrett Kelly

Caroline Tracey, PhD Candidate in Geography, UC Berkeley

Jenny Brennan, Researcher, Ada Lovelace Institute

Nader Akoury, CS PhD Student, UMass Amherst

Adrian Hayes, A concerned citizen

Kate Devlin, Senior Lecturer/Assoc Prof in Social and Cultural Artificial Intelligence, King’s College London

Isaac Murchie, Senior Software Engineer, BenevolentAI

Shreeharsh Kelkar, Lecturer, Interdisciplinary Studies, University of California, Berkeley

Joshua Delman, Director of Engineering at Snaps Media, Inc.

Kumar Ramanathan, PhD candidate in Political Science, Northwestern University

Dr Matt Luckcuck, University of Liverpool

Professor Matthew Cobb, University of Manchester, author of The Idea of the Brain: The Past and Future of Neuroscience

Jared M. Wright, PhD Candidate, Purdue University

Courtney Denardo

Gavin Jones

Nina Solomun

Chris Lavelle

Darren Byler, Postdoctoral Research Fellow, University of Colorado

Arica Tuesday, concerned citizen

Landon Getz, PhD Candidate, Department of Microbiology and Immunology, Dalhousie University

Chad Geidel, Software Developer IV, Colorado Department of Human Services

Margaret Spires, Librarian, Utica College

William Agnew, PhD Student, University of Washington

Zachary Gill, Senior Game Developer

Eric Moon, Senior Software Engineer, AquaSeca Inc

Kadija Ferryman, PhD, Industry Assistant Professor, NYU Tandon School of Engineering

David Russell, PhD Student at Oakland University

Neil Bickford, Developer Technology Engineer, NVIDIA Corporation

Christoph Becker, Associate Professor of Information, University of Toronto

Madelyn Fetzko, Jr Art Director, Edelman

Vlad Niculae (post-doc, Instituto de Telecomunicações, Portugal)

Rowan Hampton

Phoebe Campbell

Julia Dressel, Co-author of "The accuracy, fairness, and limits of predicting recidivism"

Fernanda Barrientos, biomedicine student

Daniel Shiffman, Associate Arts Professor, ITP/IMA, New York University

Rashida Richardson, Director of Policy Research, AI Now Institute

Josh Faust, CTO, Torch 3D

Valentina Fuentealba-Fernández, Biomedical Engineering Student, Universidad de Concepción.

Julie Carpenter, PhD, Research Fellow, Ethics + Emerging Technologies group

Hannah Beierman

Ash Brent-Carpenter

Chris Miller

Ryan S Moss

Lizzie Turbett, BS Nutrition & Dietetics

Brad Berkemier, Security Researcher

Chris Miller

Nadia Wendt

Lauren Wolfe (Research data specialist at Fred Hutchinson)

Will Payne, Ph.D. Candidate, Geography (and New Media), UC Berkeley

Alexandra Nilles, PhD Candidate, UIUC

Maria Annichia Riolo, Postdoctoral Researcher, Santa Fe Institute

Aya Selman

Asa Kalish, Undergraduate, Washington University in St. Louis

Giovanni Campagna, PhD Candidate, Stanford University

Roya Pakzad, Taraaz

Dr Khalil Thirlaway, NHM

Katie Latimer, PhD student, UC Berkeley

Joe Near, Assistant Professor, University of Vermont

Rhys Goodall, University of Cambridge

Zachary Katz, Engagement Manager, Recidiviz

Constanza Vásquez, M. Sc. (C) in Computer Science, Universidad de Concepción

Zach F.

Emma Stamm, Ph.D, Instructor, Virginia Tech Department of History

John F Dickson

Dr. Alex Ketchum, Institute for Gender, Sexuality, and Feminist Studies of McGill University

Dr Jennifer Andrew, trade union researcher

Aaron Karper, Software Engineer

Sam Kluck

Kade Keith, Stanford University

Callie, Undergraduate Student, Washington University in St. Louis

Abigail Swenson

Rosamond Thalken, PhD Student, Cornell University

Catherine Cronin, PhD, National Forum for the Enhancement of Teaching and Learning in HE

Georgina Garcia, Ms.

Ulrich Junker

Elena Maris, Postdoctoral Researcher, Microsoft Research

K. Philip

Corrado Monti, Postdoctoral Associate, ISI Foundation, Italy

Nawaf Al-Rashid

Prof Elizabeth Lawrence

Juniper Jackson

Katy Weathington, PhD Student, University of Colorado Boulder

Pippa Hough

Francesca Loiodice, Student, Barnard College

Arthur Barbosa Câmara, PhD candidate TU Delft

Christina Dunbar-Hester, Associate Professor, Communication, University of Southern California

Connor P. Jackson, PhD Student, Agricultural and Resource Economics, UC Berkeley

Kerry Magruder, History of Science, University of Oklahoma

Herald Guinto

Djoerd Hiemstra (Radboud University)

Nushin Yazdani, Interaction Designer and Artist

Carla Soto, student, Universidad de Concepción

Dante O’Hara, Ph.D., postdoctoral researcher, US Naval Research Laboratory

Aneesh Naik, machine learning developer, MIT

Gerardo Salazar

Katie Burke

Jordan T. Thevenow-Harrison

Colin Caver

Kipper Fletez-Brant, Computational Biologist

Dorsey Winchester

Lorraine Floyd

David Fussichen, CEO, Analytics8

Jacob Certain, Software Engineer

Amy Elizabeth Manlapas

Vytaute Kedyte

Shauna Gordon-McKeon

Abhijat Biswas, PhD Student, Carnegie Mellon University

Lynn Rodriguez

Armen Enikolopov, PhD

Elan Simon Parsons, Data Manager, Center for Open Science

Roshan Pokharel

Christopher Vergara, BME Student, Universidad de Concepción

Edward Hilfstein

Ari Edmundson, PhD, UC Berkeley

Devin Johnson, PhD Student, McMaster University

Donald E. Goodman-Wilson, PhD, Katsudon.tech

Sheena MacRae, PhD student, University of Hull

Ashleigh Poe

Cesar Clavijo

Ofer Idan, CTO, Carbon Relay

Zachary Talis, Undergraduate, Rochester Institute of Technology

Dan Herrera

L Balstad, Student

Glenn Ellis, MPH, CHCE

Joshua Zane Weiss, UC Davis PhD Candidate in Cultural Anthropology/Science and Technology Studies

Phillip Gara, MIT Alum

Pallavi Mishra-Kalyani

Mat Leonard, Senior AI Research Engineer

Şerife Wong, Icarus Salon

Sophie Burstein

Morgan

Romi Ron Morrison, PhD candidate, USC

David Lyttle, Computational Biologist

Wells Lucas Santo, former Education Manager at AI4ALL & advisory board member at AI4K12

Bruno Perreau, Cynthia L. Reed Professor of French Studies, Massachusetts Institute of Technology

Sarah Rico

Kate Sim, PhD Researcher, Oxford Internet Institute
