A researcher on digital misogynoir explains how AI-generated images and online trolls are amplifying racist harassment against Black nonbinary people.
By Dr. KáLyn Coghill
Essence
https://www.essence.com/
Shortly after Roe v. Wade was overturned in 2022, I notified my followers on Instagram that I was speaking to Yahoo! News about what such a monumental attack on reproductive rights would mean for millions of people who could no longer access reproductive care.
In less than three minutes, the online trolls descended on my account. They didn’t just attack my identity as a Black nonbinary person or my reproductive justice expertise — they also challenged the legitimacy of my decades-long research on digital misogynoir: anti-Black racism and sexism that Black nonbinary, agender, and gender-variant people experience online. I decided to reply with a clap-back — something I had seen other Black folks employ online. And although I silenced most of them, I was not oblivious to the fact that this kind of bigoted hate would certainly resurface online again.
In the four years since I endured that onslaught of hateful comments, a new threat did emerge for many Black nonbinary, agender, and gender-variant folks, and it’s one that has only worsened the racist attacks many of us experience online: artificial intelligence. With the advent of AI, it has been even easier for people to create openly racist images that make the digital world even more volatile for Black nonbinary, agender, and gender-variant folks.
This isn’t hypothetical. Just last month, the administration used AI to distort the public image of a Black woman. After civil rights attorney and activist Nekima Levy Armstrong was arrested while protesting at a church service, the White House circulated an AI‑altered photo of the moment. Instead of showing the real photo — Levy Armstrong calmly being escorted by federal agents — the AI-manipulated image portrayed her as frightened, crying, and with noticeably darkened skin. The photo has been viewed over six million times, and the administration has used it to portray Levy Armstrong — and by extension, other Black nonbinary, agender, and gender-variant folks — as weak and hysterical.
False and harmful narratives about us, like the one of Levy Armstrong, have long been perpetuated — and they have an unmistakable impact on how we are treated, especially the dark-skinned among us. Offensive images and racist tropes about our level of education mean that we are more likely than others to hear people express surprise when we demonstrate strong language skills in the workplace. Black people also have some of the highest maternal mortality rates because doctors and nurses are more likely to dismiss their pain due to tropes about “toughness” or “thick skin.” I, myself, have experienced the way that racist tropes show up in rooms with white academics who have dismissed my research as lacking rigor because it focuses on harm leveled against people who look like me. This is not just hurtful; it’s disabling.
But artificial intelligence adds a sinister layer to this long-standing form of violence. Anyone can access these generative applications and alter any image they want without guardrails, moderation, or even consequences. Where online trolls have long perpetuated racist narratives through words, now they can do the same via images churned out in seconds and spread online even faster. But despite its name, AI can’t shake the fact that it has human DNA: it mimics what real people, not robots, program it to do, using the data those people feed it.
To stem the deluge of online misogynoir, we need to reflect on our built-in biases so that AI doesn’t continually replicate them and reproduce harm. Scholars, organizers, and scientists like Timnit Gebru, Joy Buolamwini, and Ruha Benjamin have shown how these biases replicate power dynamics and harms in digital spaces.
But we also need to pre-empt the harmful ways that AI perpetuates racist stereotypes by educating people so that they know to call it out when they see it, especially young people who are growing up with this new technology. And when we talk about educating people on AI, we need to be explicit about how racism plays a key role in what we are witnessing in real time. We can do this by providing examples of the ways harmful racist tropes have had a dangerous impact on Black nonbinary, agender, and gender-variant people throughout history — even teaching this to students as early as the primary school level. We need curricula that give young people the foundation for identifying and combating these harms. And we can’t forget that our own digital hygiene is monumental in combating harmful AI, especially when it targets Black nonbinary, agender, and gender-variant folks in particular. Interrogating images and videos you see before reposting or resharing can help mitigate the virality of harm.
As a disabled person, I often wrestle with the ways that AI can be used as a tool for accessibility. I’ve seen it help people with cognitive impairments who use AI to help them brainstorm or work through brain fog. But I am all too aware that it hurts me as a Black person, which is also a part of my identity. We need to call a thing a thing, and what we witnessed with Nekima Levy Armstrong, what I have experienced and observed online, and what is becoming even more normalized is the racist use of AI to harm this group.
We must ensure that our education system can teach young people the harmful narratives that AI perpetuates, and then we must hold people accountable for them — even when that’s our own government. Because if AI can be turned into a tool that makes the world more dangerous for Black nonbinary, agender, and gender-variant folks, the next shiny innovation won’t be far behind.