Building on What We Have: Stephanie Dinkins’s If We Don’t, Who Will?

Installation view of Stephanie Dinkins’s If We Don’t, Who Will? in the Plaza at 300 Ashland Place. Photo by Avery J. Savage.

Nam June Paik’s TV Buddha (1972) is often cited as one of the seminal works of early video art. Descended from his earlier experiments with magnets and cathode ray tubes, Paik’s interest in how we interact with a medium designed only to dictate culminated in his most recognized piece. The original iteration featured an 18th-century Buddha statue, purchased by Paik on Canal Street in New York City, seated on a stupa-like mound and watching its own image on a live feed. It is contemplation, interaction, creation, and death: the black mirror conundrum. By challenging TV’s power, Paik posed questions about how we exist within technology designed to lull us into a state of vapid entertainment. The machinery masquerades as neutral, but we are all susceptible to its ability to pacify and enrage.

Nam June Paik’s TV Buddha, 1972

While the technology has advanced greatly since then, the information it relies on has largely stood still. Biases, conscious and unconscious, are maintained by the same demographic of cis white males, while people of the global majority remain woefully underrepresented in the datasets used by artificial intelligence.[1] Paik’s revolution of breaking the one-way communication of the screen and infusing it with non-European cultural references is still relevant some 50 years later.

Installation view of Stephanie Dinkins’s If We Don’t, Who Will? in the Plaza at 300 Ashland Place. Photo by Avery J. Savage.

Artificial intelligence identifies patterns in datasets and makes predictions based on them.[2] AI researchers have identified three categories of bias: algorithmic prejudice, negative legacy, and underestimation.[3] Algorithmic prejudice occurs when a model, lacking sufficient racial data, falls back on proxies such as geographic data shaped by decades of segregation and redlining. Negative legacy refers to bias inherited from training data, as when language translations associate female names with words like “weddings” and “family” while male names are associated with “salary” and “professional.” Underestimation happens when there is too little data to make confident predictions, so the model defaults to the cis white male data at hand, ignoring more than half the population.

Stephanie Dinkins’s Public Art Commission piece If We Don’t, Who Will? asks viewers to volunteer their own stories, anecdotes, and lore as a form of resistance to discriminatory AI. Housed in a shipping container upcycled by the renowned architects LOT-EK, this interactive multimedia piece challenges the notion that we should continue to placidly accept the conclusions of a biased system, inviting us instead to become active participants in shaping the data that nourishes the machine.

The AI Laboratory’s outside screens display AI-generated images of African American people, and its entrance is marked by a ramp leading into the container. Dinkins programmed the algorithm to prioritize Black and brown perspectives by training it to recognize AAVE and feeding it images of the Black experience taken by the Black photographer Roy DeCarava. Once inside, the viewer is greeted by screens displaying the latest inputs from one of the many QR codes placed around the pavilion, each of which leads to an app where people can share personal perspectives on subjects like perceived privilege and personal freedom, or simply offer personal anecdotes. Dinkins addresses concerns about privacy by prioritizing anonymity, and she maintains diversity through the boundless nature of an app: easily accessible from anywhere and open to everyone.

Installation view of Stephanie Dinkins’s If We Don’t, Who Will? in the Plaza at 300 Ashland Place. Photo by Avery J. Savage.

The pavilion itself is inscribed with the semaphoric language of the Underground Railroad, once used to guide enslaved people to safety and to preserve unwritten histories. The North Star at the entrance is a symbol of guidance, the Flying Geese on the north wall reference migration, and the Log Cabin on the south wall indicates a safe space. Dinkins’s choice to infuse this work with specific cultural references underlines her aim to integrate multicultural narratives, references, and data into an AI system that operates through a lens of inclusivity and universality instead of perpetuating the biases of a Eurocentric past.

However we feel about machines, we have to make peace with the fact that we’ve doomed ourselves to live alongside them. Stephanie Dinkins is an artist who has embraced the direction we’ve taken in this area by ensuring that her voice and the voices of others are heard in equal measure.

“Our attention, understanding, advocacy, and participation in nurturing AI are crucial to creating a world that supports all of us—not just a select few. AI runs the risk of perpetuating harmful biases and stereotypes embedded in our society. However, when our stories are self-determined, nuanced, and culturally specific, they can foster AI systems that reflect and uplift the experiences of everyday people.”

By inserting our unique perspectives into the machine, we can break the loop of static information and help shape how AI understands human history, culture, and humanity in general. This could lead to a kind of commonality that connects us to technology and to each other.

[1] S.M. West, M. Whittaker, and K. Crawford, Discriminating Systems: Gender, Race and Power in AI (AI Now Institute, 2019), p. 6.

[2] UNESCO, Artificial Intelligence and Gender Equality: Key Findings of UNESCO’s Global Dialogue (August 2020), p. 4.

[3] Anupam Datta, “3 Kinds of Bias in AI Models — and How We Can Address Them,” InfoWorld, February 24, 2021.

Cindy Rucker is an independent curator, writer, and arts professional with decades of experience in exhibition development, artist mentorship, and nonprofit arts initiatives. Formerly the owner of Cindy Rucker Gallery, she has collaborated with various institutions in both curatorial and fundraising capacities. She lives and works in Brooklyn, New York.