Words: Linda Dounia Rebeiz
Date: 1 November 2023
AI tools paint a blurry picture of our current reality – so what do these biases mean for our future?
For artist Linda Dounia Rebeiz, it was AI’s shortcomings that drew her to use it as a tool in her practice. In this essay, the artist, who “investigates the philosophical implications of technocapitalism and its role in furthering systems of inequity”, explains how she discovered the biases of text-to-image models first-hand, and how her practice offers an artistic form of resistance to AI’s evolution.
I recently visited the neighbourhood where I grew up, Quartier Escale. It was a heavy shock to my system. I wished that memories would flood back to me as I walked through the streets I used to run along, but the new roar of the place was deafening.
Quartier Escale is now a market, or rather, a market has overtaken it, spreading its rows of stalls like tentacles. In the last decade, there has been a population explosion in my hometown of Mbour. A once quiet coastal town is now one of the most populated areas in Senegal.
I had a hard time finding my childhood home. On the lot where it stood, there are now two large wholesale stores. I had known for a while that my childhood home was gone, but staring at the stores brought a different, more painful reminder – my hometown is changing beyond recognition. What it once was only survives in stories, memories, and the occasional photograph in a picture book.
“What are we willing to lose to reach the edges of our desires? My work in AI centres around this question.”
Linda Dounia Rebeiz
To justify the scale and damage of our imprint on the world, we have to believe that progress is good. I once believed it too. Having worked in technology and design for the past decade though, I have become dubious about the true value of progress, or more pertinently, its cost. Quartier Escale was once a wide avenue lined with cailcedrat trees, hibiscus bushes, homes, and a few family businesses. Today, it’s a narrowing street where stalls spill onto each other and shoppers are jostled by carts carrying stuff. There is stuff everywhere. My home was torn down to make room for stuff.
By all accounts, Mbour is more prosperous – more people, more money, more stuff. From my perspective though, Mbour has faded under the pressure of all of it, its spirit snuffed out by consumerism and the seductive momentum of progress. There is a lesson here, and more importantly a call for reflection: what are we willing to lose to reach the edges of our desires? My work in AI centres around this question.
We’re figuring out a way to engineer ourselves into AI, a sentient constellation of data points about us. Its task is to ingest our humanity – all that we know, are, and do – and reflect it back to us. It promises that it will at least augment us, if not replace (some of) us. Though we’ve never been closer to seeing it succeed, AI is falling short. At best, it reflects a blurry picture of humanity. At worst, that picture is riddled with prejudice and hallucinations – entirely fabricated answers presented as fact.
“Though we’ve never been closer to seeing it succeed, AI is falling short.”
Linda Dounia Rebeiz
These shortcomings are what initially drew me to AI. At the time, I was curious about why facial recognition software was misidentifying Black people as criminals. This search led me to interrogate how AI was trained and to explore whether I could train a model myself. As an artist, I wondered whether anyone was training AI with art, and found a host of experiments using famous painters’ work to train Generative Adversarial Networks (GANs) to emulate their style.
I learned that to train a GAN, I first needed to build a database of images to train it with. The larger and more homogeneous the database, the better the outcomes. I took pictures of my existing paintings and made many more (smaller ones) to have at least 1,000 images to train with. The results weren’t satisfactory, so I painted more. A few months into this process, I had 2,000 images in my database, increased the number of training steps, and then – magic! I had never seen anything like it. Some of the outputs looked like I could have painted them, some looked like I might paint them in the future, and some felt completely alien.
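For readers curious about what this looks like in practice, below is a minimal sketch of a GAN training loop of the kind described here, assuming PyTorch and torchvision and a local folder of digitised paintings. The folder layout, network architecture, and hyperparameters are illustrative assumptions, not a record of the artist’s actual setup.

```python
# Minimal GAN training sketch (PyTorch). Illustrative only: paths,
# architecture, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

IMG, Z, BATCH = 64, 100, 32  # image size, noise dimension, batch size

# A homogeneous database of images, e.g. photos of paintings.
# ImageFolder expects subfolders, e.g. data/paintings/*.jpg
dataset = datasets.ImageFolder(
    "data",
    transform=transforms.Compose([
        transforms.Resize(IMG), transforms.CenterCrop(IMG),
        transforms.ToTensor(), transforms.Normalize([0.5] * 3, [0.5] * 3),
    ]),
)
loader = DataLoader(dataset, batch_size=BATCH, shuffle=True)

# Generator: a noise vector becomes a 64x64 RGB image.
G = nn.Sequential(
    nn.ConvTranspose2d(Z, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
)

# Discriminator: an image becomes a probability it came from the real database.
D = nn.Sequential(
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),
    nn.Conv2d(32, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2, True),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
    nn.Conv2d(128, 1, 8, 1, 0), nn.Sigmoid(), nn.Flatten(0),
)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

for epoch in range(100):  # more "training steps" = more passes like this one
    for real, _ in loader:
        b = real.size(0)
        fake = G(torch.randn(b, Z, 1, 1))
        # Train D: push real images toward 1, generated ones toward 0.
        d_loss = loss(D(real), torch.ones(b)) + loss(D(fake.detach()), torch.zeros(b))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()
        # Train G: push D's verdict on generated images toward 1.
        g_loss = loss(D(fake), torch.ones(b))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()
```

The two levers the essay mentions map directly onto this sketch: enlarging and homogenising the image folder improves what the discriminator learns, and raising the number of passes over it gives the generator more training steps.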
I downloaded 10,000 outputs from that first model and was positively overwhelmed by the scale GANs could reach. My hands alone could never produce anything close to a single model’s output, even over a lifetime. I was charmed. I was intrigued. I wanted to keep digging.
I spent a few months with my model, learning the intricacies of GANs. Eventually, the initial excitement and flattery of having the essence of my painting practice reflected back to me at an unprecedented scale wore off. I began speculating about more interesting ways to put GANs to use and landed on the idea of using them as predictive archives. Reinhart Koselleck’s Futures Past was on my mind, and I wondered what futures GANs could imagine based on data available today.
The second GAN I trained, titled Once Upon A Garden, started with compiling a database of flora native to the Sahel region of West Africa. I used the IUCN Red List of Threatened Species as a starting point and then scoured the web for images of the flowers to build the database. I found a few of the plants searching gardens around Dakar, where I now live. I found a few others online. Most were hard to track down, if they could be found at all. It’s as if they had already disappeared between the time the IUCN list was compiled and when I started searching for them. So I turned to national, and eventually colonial, archives, and learned that a number of those flowers were preserved in herbarium pages.
“What we don’t include in training today contributes to a much hazier and more incomplete understanding of our world.”
Linda Dounia Rebeiz
During my research, it dawned on me that a majority of the plants present in my database were completely new to me. The truth about why was harrowing: I grew up seeing less than half of the plant species my grandmother grew up seeing. There was another, more insidious, truth: while some of the plant species I worked with survive in their digital embodiment on the internet, the ones that weren't recorded or digitised are now entirely lost to both human and digital consciousness.
Once Upon A Garden became a dystopian projection of a likely outcome of global warming where all plants and flowers from the West Africa region have disappeared from Earth. It depicted a reality where humans have to live with simulated images of these plants and flowers, based on a patchy digital memory of them. It’s no surprise that the flowers generated during this project were amorphous and hazy. They tended to look like each other. They were not flowers but spectral remains of what could have been flowers.
This body of work also became an allegory for AI’s lossy reflection of our humanity. What we don’t include in training today contributes to a much hazier and more incomplete understanding of our world. Fittingly, while I was working on Once Upon A Garden, generative AI tools like DALL-E and Midjourney grew in popularity. Both mediated a deeply flawed and exclusionary understanding of the world. Against the vastness and richness of the world’s artistic expressions, they were criminally limited. They were also unequivocally biased.
Most generative AI tools do not disclose their training data. These companies do share that their algorithms, while not explicitly trained on art, use “a wide range of images from the internet”, including the documentation of art history that can be accessed online. This means that when we interrogate the biases present in generative AI training data, we have to search for answers in both the internet’s data and the documentation of art history.
We know that the majority of content on the internet is produced by a minority of its users, with a significant portion coming from Western, English-speaking users. That’s our first layer of bias. Secondly, we know that the Western canon of art (predominantly white, male, and European) has long been the focus of most academic and critical attention and, therefore, is most widely documented. Contributions by individuals from diverse racial and ethnic backgrounds, genders, and non-European cultures have often been marginalised or ignored. That’s our second layer of bias.
“People from the Global South, especially people of colour, using AI today can confidently attest that it doesn’t know them and that its conceptualisation of their reality is fragmentary.”
Linda Dounia Rebeiz
We can’t dig into the training data just yet, so we don’t know whether there are additional layers of bias in the way, for example, the data these companies use is sourced or codified. There could be bias in the framework for handling this data: Is some data prioritised over other data? Is there a weighting mechanism for how the algorithms draw from the data? Is there some type of sorting, labelling, or censoring as guardrails around the data?
We can only infer bias based on the outputs of AI – what generic prompts are more likely to produce and which contextual prompts are least likely to return something accurate or satisfactory. As an example, I have queried the words “Mbour” and “New York” in DALL-E and Midjourney.
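This kind of probe is straightforward to reproduce for the DALL-E half of the comparison, since OpenAI exposes an image-generation API; Midjourney, by contrast, is accessed through Discord rather than a public API. Below is a minimal sketch, assuming the OpenAI Python SDK and an API key; the model name, image count, and size are chosen for illustration and are not necessarily what was used for the images in this essay.

```python
# Hypothetical bias probe: generate images for a generic Western prompt and a
# West African one, then compare detail and fidelity by eye.
# Requires `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

for prompt in ["Mbour", "New York"]:
    result = client.images.generate(
        model="dall-e-2",  # assumed model choice for this sketch
        prompt=prompt,
        n=4,               # four samples per prompt for a rough comparison
        size="512x512",
    )
    for i, image in enumerate(result.data):
        print(f"{prompt} [{i}]: {image.url}")
```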
Midjourney isn’t sure whether Mbour is a place or a person. From these results, it seems to think it’s either a painting of a Black woman (not the kind you’d find in a reputable gallery) or a cartoonish fantastical town with only palm trees for vegetation. In the first output, the word “mobb” is inscribed. While “mobb” is not an English word, it is dangerously close to “mob”. On the other hand, Midjourney is more confident about New York. It thinks it’s a real city with bright lights shining on sleek cars.
(Copyright © Linda Dounia Rebeiz)
(Copyright © Linda Dounia Rebeiz)
At least DALL-E seems to be sure Mbour is a place. It gets the coastal part right, showing us what appear to be fishing boats. Its depiction of New York, though, is highly detailed. It doesn’t glorify the city quite like Midjourney does, but the results are far more flattering than the blobby suggestions of fishing boats on a murky seashore it offers for Mbour.
From just this query, we can tangibly see the results of generative AI’s biased databases. Images of places it doesn’t have much data on have lower definition and less detail. Stereotypes fill the gaps in its understanding of these places (Mbour is just a long stretch of beach), and it also hallucinates quite a bit (paintings of Black women instead of the place).
People from the Global South, especially people of colour, using AI today can confidently attest that it doesn’t know them and that its conceptualisation of their reality is fragmentary. To me, the issue is not that there isn’t enough data about us, our cultures, and our environments to train AI. It’s that most commercial AI tools don’t care enough to know this data is missing from their models, let alone work to remedy it.
I don’t believe technology exists in an apolitical, post-racial, and post-colonial vacuum. Though autonomous in its workings, AI is constructed by humans. The data used to train it is gathered by humans. Humans are political entities with the ability to make decisions about what’s right or wrong and take action. Humans are also vulnerable to the legacies of history. It is therefore not difficult to believe that the technology we create inherits those legacies and inevitably reflects their impact on our model of the world. Bias and prejudice, whether conscious or unconscious, are among those impacts. We are all vulnerable to them.
This is why I primarily use GANs in my practice. They offer some level of control. I can gather data on my own, about things I care about, in places that are unimportant to the zeitgeist. The good news is that anyone can do this. There are many tools, Runway ML for example, that make it easy to train models at a relatively accessible cost. Costs drop further when people organise in groups to collect data and train together.
“I continue to incorporate AI in my practice because I am afraid that if I don’t, everything about me and where I come from will be erased from the digital memory of the world.”
Linda Dounia Rebeiz
(Copyright © Linda Dounia Rebeiz)
(Copyright © Linda Dounia Rebeiz)
I am dubious that, at this stage of AI’s evolution and profitability, it is possible to hold generative AI companies accountable for being transparent about what they train their models with and how. I hope policy eventually catches up, but capitalism will not make it easy. I am more interested in how to exert pressure from the ground up.
I dream about data collection for AI operating like a mycelium – a decentralised network of people collecting data they care about and sharing it to train models. It’s not enough to have data equity. We should have agency over data – knowledge of, and power over, what is collected about us and how. We should also have the tools to collect this data ourselves and contribute it to training better AI.
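As a toy illustration of that mycelium idea, here is how image folders collected by different contributors could be pooled into a single training set, assuming PyTorch and torchvision. The contributor paths are hypothetical; each contributor keeps control of what their own folder contains.

```python
# Toy sketch of decentralised data pooling: several contributors' image
# folders are concatenated into one dataset for shared training.
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

tf = transforms.Compose([
    transforms.Resize(64), transforms.CenterCrop(64), transforms.ToTensor(),
])

# Hypothetical folders, each gathered and curated by a different person.
contributors = ["data/dakar_flora", "data/mbour_streets", "data/family_archive"]
pooled = ConcatDataset([datasets.ImageFolder(p, transform=tf) for p in contributors])
loader = DataLoader(pooled, batch_size=32, shuffle=True)
# `loader` can now feed a training loop like the GAN sketch earlier.
```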
While I buy into some of the promises of AI, I understand that it tends to be an echo chamber of our world order. The idea that generative AI companies source data from us, but don’t give us the means to interrogate this data, dangerously mirrors the kind of power relations we’ve been rejecting everywhere – the ones that dismiss injustice as a mere byproduct of progress. This complicates the relationship marginalised identities have with AI, but it also makes their participation a critical form of resistance.
I continue to incorporate AI in my practice because I am afraid that if I don’t, everything about me and where I come from will be erased from the digital memory of the world. Responding to the biases of AI in representing people and cultures from the Global South feels urgent to me. When I encourage artists from the Global South to experiment with AI and brute-force it to tell their stories, I do it because I believe it’s a necessary form of resistance at this stage of AI’s evolution.
Hero Header
Linda Dounia Rebeiz: Season A-3, 2023 (Copyright © Linda Dounia Rebeiz)
Moving image one: Linda Dounia Rebeiz: The Garden Under The Sun, Linda Dounia, 2022 (Copyright © Linda Dounia Rebeiz)
Moving image two: Linda Dounia Rebeiz: Electric Blossom X-3, Linda Dounia, 2022 (Copyright © Linda Dounia Rebeiz)
About the Author
Linda Dounia Rebeiz is a transdisciplinary artist and designer based in Dakar who investigates the philosophical implications of technocapitalism and its role in furthering systems of inequity. She also writes about and curates art exploring these ideas. In 2023, her work was recognised on the TIME100 AI list of the most influential people in AI. You can view more of her work here.