Co-Opting AI: GDPR
You can watch the event here.
This event is a virtual panel discussion that will be live-streamed via YouTube.
Guests can pose questions via Twitter.
NYU’s Institute for Public Knowledge, NYU Tandon’s Department of Technology, Culture and Society, the 370 Jay Project, the NYU Information Law Institute, and NYU Law Guarini Global Law & Tech invite you to a discussion on the GDPR in the “Co-Opting AI” series. Featuring Ira Rubinstein, Sandra Wachter and Mona Sloane in conversation, this event examines the European Union’s General Data Protection Regulation (GDPR). Since it took effect in May 2018, the GDPR has often been hailed on this side of the Atlantic as a landmark framework for data privacy regulation. However, the increasing proliferation of AI technologies, and the often invasive data collection practices that underpin them, have routinely put the GDPR to the test. In addition, the current pandemic has created a new situation that may require extensive contact tracing, which involves forms of data collection that the GDPR sets out to strictly regulate. At this event, Ira Rubinstein, Sandra Wachter and Mona Sloane will come together to critically discuss the strengths and weaknesses of the GDPR, how the framework relates to newly emerging concerns arising in the context of AI, and what kind of data privacy policy conversations are likely to follow the pandemic.
Ira Rubinstein is a Senior Fellow at the Information Law Institute of New York University’s School of Law and a Senior Fellow at the Future of Privacy Forum. Ira’s research interests include Internet privacy, electronic surveillance law, voters’ privacy, local privacy regulation, EU data protection law, and privacy engineering. Ira lectures and publishes widely on issues of privacy and security and has testified before Congress on these topics on numerous occasions. Before coming to NYU in 2008, Ira spent seventeen years in Microsoft’s Legal and Corporate Affairs department, most recently as Associate General Counsel in charge of the Regulatory Affairs and Public Policy group. Prior to joining Microsoft, he was in private practice in Seattle, specializing in immigration law. Ira graduated from Yale Law School in 1985. He has served on the President’s Export Council, Subcommittee on Encryption (1998-2001); the Editorial Board of IEEE Security & Privacy magazine (2003); the Board of Directors of the Center for Democracy and Technology (2010-2016); as Rapporteur for the EU-US Privacy Bridges Project, which was presented at the International Conference of Privacy and Data Protection Commissioners in Amsterdam, 28-29 October 2015; and on the Organizing Committee of the Privacy by Design Workshops, Computing Research Association (2015-2016).
Sandra Wachter is an Associate Professor and Senior Research Fellow in Law and Ethics of AI, Big Data, Robotics and Internet Regulation at the Oxford Internet Institute (OII) at the University of Oxford. She is a Visiting Professor at Harvard Law School, where she works on her current British Academy project “AI and the Right to Reasonable Algorithmic Inferences”, which aims to find mechanisms that better protect the rights to privacy and identity and guard against algorithmic discrimination. Sandra is also a Fellow at the Alan Turing Institute in London, a Fellow of the World Economic Forum’s Global Futures Council on Values, Ethics and Innovation, a Member of the European Commission’s Expert Group on Autonomous Cars, an Academic Affiliate at the Bonavero Institute of Human Rights at Oxford’s Law Faculty and a member of the Law Committee of the IEEE. Previously, Sandra worked at the Royal Academy of Engineering and at the Austrian Ministry of Health. Sandra specialises in technology, IP, and data protection law, as well as European, international, human rights (online) and medical law. Her current research focuses on the legal and ethical implications of AI, Big Data, and robotics, as well as profiling, inferential analytics, explainable AI, algorithmic bias, governmental surveillance, predictive policing, and human rights online. Sandra works on the governance and ethical design of algorithms, including the development of standards to open up the ‘AI black box’ and to enhance algorithmic accountability, transparency, and explainability. Sandra also works on ethical auditing methods for AI to combat bias and discrimination and to ensure fairness and diversity, with a focus on non-discrimination law. Group privacy, autonomy, and identity protection in profiling and inferential analytics are also on her research agenda.
Mona Sloane is a sociologist working on inequality in the context of AI design and policy. She frequently publishes and speaks about AI, ethics, equitability and policy in a global context. Mona is a Fellow at the Institute for Public Knowledge (IPK), a Fellow with NYU’s Alliance for Public Interest Technology and a Future Imagination Collaboratory (FIC) Fellow at NYU’s Tisch School of the Arts. She also works with The GovLab in New York and teaches at NYU’s Tandon School of Engineering. At IPK, Mona founded and convenes the ‘Co-Opting AI’ series. She also curates the Technology Section for Public Books. Mona holds a PhD from the London School of Economics and Political Science and has completed fellowships at the University of California, Berkeley, and at the University of Cape Town. Follow her on Twitter: @mona_sloane.
The Co-Opting AI event series is convened by Mona Sloane. It is hosted at IPK and co-sponsored by the 370 Jay Project and the NYU Tandon Department of Technology, Culture, and Society. The “Co-Opting AI: GDPR” event is co-sponsored by the NYU Information Law Institute and NYU Law Guarini Global Law & Tech.