Tansy Xiao: I’ve noticed that the way you designed the title of the show “The Question of Intelligence — AI and the Future of Humanity” was to list it equally among a series, “the question of being, the question of classification, the question of labor, of race, of seeing” etc. Please share some insights on that idea.
Christiane Paul: The cluster of questions that surrounds the title “The Question of Intelligence” in the exhibition signage brings together crucial issues explored by works in the exhibition. Projects by Mimi Onuoha and Stephanie Dinkins address race; Brett Wallace and LarbitsSisters investigate the impact of AI on labor; and Memo Akten, Lior Zalmanson, David Rokeby, and Mimi Onuoha examine AI as it affects vision, the process of seeing. All of these ancillary questions are also crucial to assessing intelligence: what does the automation of the senses, such as vision and speech, and of the tasks we perform as part of a job, mean for the ability to acquire and apply knowledge and skills? What biases do datasets introduce and perpetuate when it comes to racial and ethnic representation and cultural context? At the core of all of these issues lies “the question of being”: how we define ourselves as humans in the face of rising machine intelligence.
Video: Opening night for “The Question of Intelligence — AI and the Future of Humanity”
TX: Over the years, the art world has relied on various techniques to withdraw from subjectivity: the vulnerable, almost flawed nature of being human. David Rokeby’s The Giver of Names points out, in a very playful way, the absurd randomness of rulemaking, while Mary Flanagan’s [Grace:AI] attempts not so much to achieve ultimate neutrality as to use those very tendencies as a vessel for addressing the existing bias in artificial intelligence. Could you talk a bit about her approach? In observing the process of machine learning, do we look back and introspect on the structure of our own history, who was writing it, and in what system?
CP: Your question makes a very important point. We indeed need to take a close look at the structure of our technological and cultural history and ask ourselves who has been in control of its language and is “writing” this history. Not coincidentally, the book that is part of Mary Flanagan’s project [Grace:AI] includes definitions of intelligence over the centuries and highlights that most of them have been written by men. To create [Grace:AI], Mary Flanagan used a Generative Adversarial Network (GAN) trained only on works by female painters, a perspective that no human, only an algorithm exposed to a particular slice of art history, could have. A GAN uses generative algorithms trained on a specific data set to produce new original images with the same characteristics as the original set; these images are then evaluated by discriminative algorithms that, based on their own training, judge whether the newly produced data looks authentic. After having been trained on the history of women painters, [Grace:AI] was tasked with painting a portrait of Frankenstein’s monster, an implicit critique of the artist’s role in conceiving a machinic entity. The use of GANs in creating artwork has recently emerged as a trend, which has even led to the coining of the term GANism. Many of these GAN-based projects use a training data set to make an AI that paints like a Renaissance artist or an abstract expressionist or you name it. [Grace:AI] intentionally stays away from a seemingly ‘neutral’ perspective by presenting a deliberately feminist take on machine creativity.
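The adversarial dynamic described above, a generator producing candidates and a discriminator judging their authenticity, can be illustrated with a deliberately tiny sketch. This is not the model behind [Grace:AI] (which works on images); it is a minimal one-dimensional toy GAN, with all parameters and numbers chosen for illustration, where a linear generator learns to mimic samples from a “real” distribution that a logistic discriminator tries to tell apart from the fakes:

```python
import numpy as np

# Toy 1-D GAN (illustrative only, not the exhibition's model):
# the generator g(z) = a*z + b tries to mimic "real" samples
# drawn from N(4, 1); the discriminator d(x) = sigmoid(w*x + c)
# judges whether a sample looks authentic. Both are trained in
# alternating gradient steps, as in the standard GAN setup.

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def train_gan(steps=500, batch=64, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    a, b = 1.0, 0.0      # generator parameters
    w, c = 0.0, 0.0      # discriminator parameters
    for _ in range(steps):
        z = rng.standard_normal(batch)         # noise input
        x_real = rng.normal(4.0, 1.0, batch)   # "authentic" data
        x_fake = a * z + b                     # generated data

        d_real = sigmoid(w * x_real + c)
        d_fake = sigmoid(w * x_fake + c)

        # Discriminator: gradient ascent on log d(real) + log(1 - d(fake))
        w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
        c += lr * np.mean((1 - d_real) - d_fake)

        # Generator: gradient ascent on log d(fake) (non-saturating loss)
        d_fake = sigmoid(w * x_fake + c)
        g = (1 - d_fake) * w                   # d/dx_fake of log d(fake)
        a += lr * np.mean(g * z)
        b += lr * np.mean(g)
    return a, b

a, b = train_gan()
print("mean of generated samples:", b)  # drifts toward the real mean of 4
```

Over training, the generator's output distribution is pushed toward the real one precisely because the discriminator keeps learning to expose the difference; image GANs replace these scalar maps with deep convolutional networks, but the adversarial loop is the same.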
I would also argue that David Rokeby’s Giver of Names is not random in its rule sets but rather provides both a very logical and a subjective reading of the world it perceives. After performing contour and image analysis on the objects presented to it, the system links this analysis to ideas and words in its database, which is populated with older novels in the public domain. The Giver of Names has a specific understanding of the world that is very much informed by 19th-century novels, so it presents an AI’s quite subjective state of mind.
TX: Both Tega Brain’s Deep Swamp and the AlphaGarden by Ken Goldberg’s team have explored the possibilities of guiding, if not intervening in, the development of natural environments with AI. Are any of these techniques used in agricultural or ecological practice, or are they more of a utopian vision?
CP: Autonomous robots are increasingly used on farms, and The New York Times recently devoted an article to this development. The TerraSentia robot, for example, has been designed to generate a detailed portrait of a cornfield, measuring the size and health of the plants and the number and quality of ears each corn plant will produce by the end of the season, in order to help agronomists breed better crops. In different ways, Deep Swamp and AlphaGarden explore the questions surrounding the automated optimization of our environment. Deep Swamp, a triptych of wetlands governed by three artificially intelligent software agents with different programmatic goals, playfully asks questions about optimization at a time when ecological calamity meets environmental engineering. AlphaGarden explores the potentials and limitations of artificial intelligence in the context of 21st-century ecology, diversity, and sustainability: deep AI policies learn from simulation and human demonstrations in order to control a three-axis robot that tends a polyculture garden, including invasive species.
TX: LarbitsSisters’ work uses data from Twitter to influence the outcome of their AI. That’s almost a model of the reality we live in, as both governments and dissidents actively express their opinions and affect public views on social media. Democracy or not, mob mentality has always been a part of politics. Could you talk about the potentials of data collected from social media, the ethics, and the purposes?
CP: There are many layers to this discussion. As you mention, social media are a platform for democratic engagement and activism, as well as for social manipulation and propaganda. We need both higher standards and technological mechanisms for truth filtering and fact-checking. Another layer of this conversation is data collection and mining through social media sites for commercial purposes, and we are still only at the beginning of developing requirements for protecting people’s privacy and creating ethical frameworks for commerce. Social media corporations make money off user-generated content, and BitSoil Popup Tax & Hack Campaign by the Belgian duo LarbitsSisters playfully develops a model for a fairer digital economy. The project understands user-generated data as “bitsoil,” the new oil of the digital economy, and deploys an army of tax collector bots, trained with IBM’s Watson Natural Language Classifier, to detect, collect, and mine bitsoil from the data users produce on Twitter. The campaign’s online platform invites participants to mine bitsoil or to generate their own tax collector bots equipped with a set of actions to perform. During the campaign, each of their actions on Twitter randomly assigns a micro amount of bitsoil to a virtual wallet of a campaign participant. While the project isn’t a functioning economic model, it effectively invites us to think about frameworks for a digital economy in which users would be compensated for the data they produce.
TX: It looks like you collaborated with several academic institutions for this show. Is AI art primarily a subject explored within academia and the sciences, or do you see the medium practiced in the broader visual art world as well?
CP: The Question of Intelligence took place at The New School’s Sheila C. Johnson Design Center, which is devoted to generating an active dialogue on the role of innovative art and design in responding to the environmental and social challenges of our contemporary world. We didn’t collaborate with other institutions in the organization of the exhibition per se, but a couple of the projects in the show are located at or generated from within academic institutions. The actual AlphaGarden is in the greenhouse of the University of California at Berkeley, and the AI Mappa Mundi project was developed by a team of artists and researchers (Baoyang Chen, Zhije Qiu, Ruixue Liu, Xiaoyu Guo, Yan Dai, Meng Chen, Xiadong He) at the Central Academy of Fine Arts in Beijing. There is no doubt that academia has played a crucial role in nurturing the environment of digital art by providing technological support through labs and enabling research, as well as discussion. Many of the most established artists in the field of digital art work at universities. There definitely have been more digital art exhibitions at university galleries and science museums than in the traditional art world. That being said, AI-focused exhibitions seem to be a bit of an exception, since this topic has entered mainstream discourse and therefore more easily gained a presence in museums. There have been exhibitions such as AI: More than Human at the Barbican in London (May 16 – Aug 26, 2019); Uncanny Values. Artificial Intelligence & You at the MAK-Museum of Applied Arts, Vienna (May 29 – October 6, 2019); and Uncanny Valley: Being Human in the Age of AI at the de Young museum in San Francisco (February 22 – October 25, 2020).
Participating Artists: Memo Akten, Tega Brain, Baoyang Chen, Zhije Qiu, Ruixue Liu, Xiaoyu Guo, Yan Dai, Meng Chen, Xiadong He, Harold Cohen, Stephanie Dinkins, Mary Flanagan, Ken Goldberg, the AlphaGarden Collective, University of California at Berkeley, Lynn Hershman Leeson, LarbitsSisters, Mimi Onuoha, David Rokeby, Brett Wallace, and Lior Zalmanson
The Question of Intelligence — AI and the Future of Humanity
At the Sheila C. Johnson Design Center
February 7 – April 8, 2020 (suspended due to the COVID-19 pandemic)
Interview by Tansy Xiao