Co-Opting AI: Conflict

11/19 Tuesday | 6pm

NYU’s Institute for Public Knowledge, NYU Tandon’s Department of Technology, Culture and Society, and the 370 Jay Project invite you to a discussion on conflict as part of the “Co-Opting AI” series. Featuring Peter Asaro, Liz O’Sullivan, Meredith Whittaker, and Mona Sloane in conversation, this event examines the intersection of AI, politics, human rights, and conflict. In what ways are technological innovation and military technology entangled? What role does AI play in modern combat? How does this affect human rights? What are the politics behind AI and conflict, and how do they relate to inequality? The speakers will bring their expertise on AI, data, technology, the military, human rights, and inequality to bear on these questions.

Peter Asaro is a philosopher of science, technology, and media. He is Associate Professor and Director of Graduate Studies in the School of Media Studies at The New School, and an Affiliate Scholar at Stanford Law School’s Center for Internet and Society. His research focuses on the social, cultural, political, legal, and ethical dimensions of automation and autonomous technologies, from a perspective that combines media theory with science and technology studies. He is the co-editor of Machine Ethics and Robot Ethics (2017), and has written widely cited papers on autonomous weapons from the perspective of just war theory and human rights, as well as on the legal and moral issues raised by law enforcement robots and predictive policing. Prof. Asaro has been Visiting Faculty at the Munich Center for Technology and Society at TU Munich, and has held research positions at the Center for Information Technology Policy at Princeton University, the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, computer vision, and robotics at the National Center for Supercomputing Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine at Wolfram Research. In 2009, Prof. Asaro co-founded the International Committee for Robot Arms Control (ICRAC), which in 2012 joined a coalition of NGOs in the Campaign to Stop Killer Robots, where he serves on the steering committee.
The Campaign has been successful in initiating discussions on lethal autonomous weapons at the United Nations’ Convention on Conventional Weapons (CCW), and seeks to advance those talks to treaty negotiations. Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.

Liz O’Sullivan is a co-founder and Vice President of Commercial Operations at Arthur AI, a startup focused on AI explainability and algorithmic bias monitoring. She has spent ten years in the tech industry, mainly in the AI space. Most recently, she served as head of image annotations at the computer vision startup Clarifai, where she first encountered the intersection of AI and the military-industrial complex through the controversial Project Maven; she resigned in objection to selling autonomous targeting software to the government. Since then, she has supported the Campaign to Stop Killer Robots as a technical advisor and a member of the International Committee for Robot Arms Control (ICRAC), contributing expertise at the United Nations in pursuit of a new treaty mandating meaningful human control over the application of force. Liz also serves as Technology Director for the Surveillance Technology Oversight Project (STOP), where she focuses on New York state and local tech and privacy policy, doing battle with the ever-growing law-enforcement surveillance state. She has been featured in articles on ethical AI in the NY Times, The Intercept, and NPR, has written about AI for NBC THINK and the ACLU, and has advised New York City and federal policymakers. Her passion for ethics springs from her degree in Philosophy from UNC Chapel Hill.

Mona Sloane is a sociologist whose work examines the intersection of design and social inequality. Her current research focuses on AI design and policy in the context of inequality, valuation practice, data epistemology, and ethics. At IPK, Mona founded and convenes the “Co-Opting AI” series. She completed her Ph.D. at the London School of Economics and Political Science (LSE scholarship) with a thesis on commercial spatial design practices. She is also a co-founder and former member of the LSE research programme Configuring Light/Staging the Social, which explores the socio-technical role of public lighting in cities. Mona has published on design inequalities, interior design and atmospheres, material culture in design practice, social justice and lighting design, social research in/for design, aesthetics, design thinking, the politics of design, practitioner-academic collaboration for societal impact, and AI ethics. She has completed fellowships at UC Berkeley and the University of Cape Town. Follow her on Twitter @mona_sloane.

Meredith Whittaker is a Distinguished Research Scientist at New York University, co-founder and co-director of the AI Now Institute, a leading university institute dedicated to researching the social implications of artificial intelligence and related technologies, and the founder of Google’s Open Research group. She has over a decade of experience working in industry, leading product and engineering teams. She co-founded M-Lab, a globally distributed network measurement system that provides the world’s largest source of open data on internet performance, and has worked extensively on issues of data validation and privacy. She has advised the White House, the FCC, the City of New York, the European Parliament, and many other governments and civil society organizations on artificial intelligence, internet policy, measurement, privacy, and security.


Image credit: Philipp N. Hertel
