Co-Opting AI: Intimacy
To watch the event, please click here.
RSVP is required; please RSVP here.
NYU’s Institute for Public Knowledge, the NYU Center for Responsible AI, and the 370 Jay Project invite you to a discussion on games in the series “Co-Opting AI.”
This event will focus on how our intimate lives are mediated by technology, and by artificial intelligence in particular, and, in turn, how intimacy shapes these technologies. Featuring Gabriella Garcia, Hannah Zeavin, and Mona Sloane, the discussion will explore the themes of agency, care, connectedness, sexuality, (non-)humanness, and inequity.
Gabriella Garcia is a writer, performer, and poetic technologist. Her research focuses on the protection of radical self-expression, networked subcultures, and cybernetic intimacy. As a performance artist, Gabriella works to create spaces ruled by vulnerability. She has performed in work curated by the New York Restoration Project in partnership with the Brooklyn Academy of Music (LEIMAY), The Watermill Center (LEIMAY), SPRING/BREAK Art Show, MANA Contemporary, and Otion Front Studio. Her work has appeared at CultureHub, Pioneer Works, the Museum of Sex, and Secret Project Robot. Garcia is currently developing Decoding Stigma, a cross-institutional thinking group bridging the gap between sex workers, academics, and technologists. She is the Managing Editor of ADJACENT, NYU ITP’s online journal of emerging media.
Hannah Zeavin is a Lecturer in the Departments of English and History at UC Berkeley, and a faculty affiliate of the University of California at Berkeley Center for Science, Technology, Medicine, and Society. Her research focuses on the coordinated histories of technology and medicine. Zeavin is the author of The Distance Cure: A History of Teletherapy (MIT Press, August 2021) and at work on her second book, Mother’s Little Helpers: Technology in the American Family (MIT Press, 2023). Other work has appeared or is forthcoming in differences: A Journal of Feminist Cultural Studies, Logic Magazine, the Los Angeles Review of Books, Slate, and beyond.
Mona Sloane is a sociologist working on inequality in the context of AI design and policy. She publishes and speaks frequently about AI, ethics, equitability, and policy in a global context. Mona is a Fellow with NYU’s Institute for Public Knowledge (IPK), where she convenes the Co-Opting AI series and co-curates The Shift series. She is also an Adjunct Professor at NYU’s Tandon School of Engineering, an Affiliate of the Center for Responsible AI, and part of the inaugural cohort of the Future Imagination Collaboratory (FIC) Fellows at NYU’s Tisch School of the Arts. She is affiliated with The GovLab in New York and edits the Technology section of Public Books.

Her most recent project is Terra Incognita: Mapping NYC’s New Digital Public Spaces in the COVID-19 Outbreak, which she leads as principal investigator. She also serves as principal investigator of the Procurement Roundtables project, a collaboration with Dr. Rumman Chowdhury (Director of Machine Learning Ethics, Transparency & Accountability at Twitter, Founder of Parity) and John C. Havens (IEEE Standards Association) focused on innovating AI procurement to center equity and justice. With Emmy Award-winning journalist and NYU journalism professor Hilke Schellmann, she works on hiring algorithms, auditing, and new tools for investigative journalism and research on AI. With Dr. Matt Statler (NYU Stern), she leads the PIT-UN Career Fair project, which brings together students and organizations building up the public interest technology space. She is also affiliated with the Tübingen AI Center in Germany, where she leads research on the operationalization of ethics in German AI startups.

Mona holds a PhD from the London School of Economics and Political Science and has completed fellowships at the University of California, Berkeley, and the University of Cape Town. Follow her on Twitter @mona_sloane.
The Co-Opting AI event series is convened by Mona Sloane. The events are hosted at IPK and co-sponsored by the 370 Jay Project and the NYU Center for Responsible AI.
Image: (c) Dante Busquets