Let me start by saying I am probably not the best person to write this article. I’m a professor who teaches one introductory Gender Studies course at a small Midwestern college, not a working activist or power player in feminist thought. But I can’t seem to find an article like it, and I am a person (100% human, no bot parts). So that’s going to have to be good enough.
We sometimes hear about AI as a women’s issue when discussing image manipulation, privacy, and pornography. But in all the conversations around the role that generative AI (ChatGPT, Meta AI, Grok, Grammarly, and the rest) should play within American colleges and universities, one perspective feels noticeably lacking to me, and that’s an intersectional, anti-sexist, justice-based perspective. As a shorthand, I’m going to call that feminism, which the great bell hooks defined as “a movement to end sexism, sexist exploitation, and oppression.”
How should that movement treat the ascendance of generative AI in our halls of higher learning? Here are the serious and important concerns that stand out to me as feminist ones.
Generative AI Destroys the Environment
I will treat this one briefly because it is the most widely discussed argument against generative AI, at least in my circles. If you don’t know about the decimation of natural spaces for the building of larger and larger data centers, or the enormous water usage needed for cooling said data centers, or how this negatively impacts communities – humans as well as natural environments – you can find plenty of reputable sources beyond the ones I’ve linked. All of our modern technologies have environmental impacts, yes, but generative AI’s is absolutely, unfathomably enormous.
This is an intersectional feminist issue because environmental destruction disproportionately impacts disadvantaged groups and poor communities in both urban and rural settings. American corporations are draining the world’s water and power to fuel services the average American is using to create funny memes or do their homework for them. This isn’t a future concern; people’s safety, access to medical care, and access to drinking water are already being impacted. However you may frame your ethics, if you’re someone “committed to caring for the poor,” or “a believer in the sacred dignity of life,” or “an activist against the oppression of marginalized communities,” you can’t look the other way when it comes to the impact of generative AI technology.
Generative AI Isolates Us
Community is the core tenet of intersectional feminism. Call it sisterhood, call it solidarity, call it coalition-building: it’s at the heart of what feminists believe in and strive for, even if it’s often hard to achieve.
I’ve written before about the ways that generative AI usage in academics erodes our academic community and isolates students. A learner who would once have asked a friend or a teacher a complicated question is now asking AI. A writer who would once have gotten a friend’s feedback is now asking AI. People who would once have relied on a tutor, or study buddy, or a trained counselor for academic support are now asking AI. And it goes way beyond academics. People are using AI in place of therapists, in place of romantic partners, in place of friends.
Feminists should be concerned about anything that benefits massive, billion-dollar corporations and appears to be significantly harming mental health, wellbeing, and community-building. Feminists should be concerned about anything that further isolates us, at a time when many are already feeling divided, siloed, frightened, and disempowered. Isolation, while harmful to everyone, disproportionately harms groups with less power who need to stay in coalition and community to keep from losing their rights.
Generative AI Replicates Preexisting Biases and Erasures
You may already be aware that algorithms tend to replicate cultural hierarchies, privileging the same things that are often privileged in Western countries: whiteness, maleness, heterosexuality, etc. Here’s a simple, real example of algorithmic bias: when Snapchat rolled out filters years ago, the filters trained to recognize faces often failed to recognize Black faces. Whiteness was thus “privileged” through ease of use without having to ever think about it. Caroline Criado Perez wrote a whole, well-researched book about gender bias in data and algorithms if you want more examples.
From the outset, generative AI has had a racism and sexism problem. While AI chatbots are coded not to say certain things – you may find them servile and polite to an inane degree – the data sets they have been trained on contain biases, and those biases are thus reflected in the results they serve up. You can correct for it if you know what to look for. But does the average user? And do the tech company workers? It’s not always as blatant as praising Hitler. It can be the answer that isn’t given, the story not told, the person or event left out of the historical record, the fundamental assumptions underlying what is stated. AI “summaries” are everywhere, but how you summarize something depends on what you think is important, and these algorithms are not neutral when they are selecting “what matters.” They are also not qualified to judge what is true.
Why is this a generative AI problem, if it’s present throughout culture? Mainly because AI presents itself as a neutral thing, a machine, a talking encyclopedia. As we become more and more reliant on AI chatbots to answer all our questions and treat them as sources of knowledge and truth, we will less and less often consume thoughtful cultural products created by experts who do know how to look for bias or the erasure of minorities, who are making an effort to present complete information and complex reasoning.
Generative AI is Trained on Stolen Work
A capitalist, imperialist mindset operates on the principle that profit is self-justifying. If an enterprise or new technology makes money, then whatever ends are necessary to promote that enterprise or advance that technology are not only justified, but inevitable and inarguable.
Feminists don’t agree. When imperial powers take over other nations to coopt their resources, intersectional feminists call this exploitation. When a man takes the work of a woman – whether it’s her scientific research as his lab assistant, or her “editing” of his novel that amounts to co-writing – and calls it his own without giving her credit, intersectional feminists call this exploitation.
So when Meta or OpenAI or other large language model (LLM) generative AI software companies pirate the works of women and minorities and, yes, white men too, in order to “train” their software, that is exploitation. And intersectional feminists need to call it that.
Because think this through. Imagine a woman named Nina. Nina has done all the hard work of researching something, writing it, and getting it to publication. It may have taken years of data collection, or conducting interviews across the country, or sifting through archives, or rewriting her characters again and again until they popped from the page. Nina has had to contend with juggling many responsibilities to find time for her intellectual and creative labor. Nina has done all the labor of finding a platform, publishing house, or journal that would accept her work, potentially going through cycles of rejection, possibly navigating some of the biases that still provably exist in many publishing settings. Nina may even have risked personal or professional repercussions for publishing work that was innovative or boundary-pushing.
But she did it. Now her work is available for sale to reward her efforts, provide her income to keep writing, and potentially build a fan base. Or if it was an academic publication, she profits from it not by money but because it’s associated with her name, boosting her reputation, which is everything in academia.
Then Meta comes along and scrapes up her work and dumps it into their “training” data set. Which is another way of saying that their LLM repackages Nina’s writing, takes her voice and her ideas, and gives them away for free to Meta’s users, benefitting Meta, stripped of any association with Nina. She gets nothing. She’s not even associated with that work anymore. It’s all just been taken, like a big kid on the playground pushing a little kid down and taking their favorite doll and saying “It’s mine now.” Nothing happens to the big kid. They are never punished. The doll is never returned.
“But,” you are thinking, “that is silly. Nina’s creative work or research is still out there. Just because it was taken for a data set, who cares? She can still sell copies. She can still build a reputation.” Maybe. But what if Nina is struggling to break through, and someone else with a more prominent position or more privilege takes the results of her years of hard work and research and publishes something very similar, written by generative AI, on a big, splashy, prominent platform? What can Nina possibly do about that?
And honestly, even if that doesn’t happen, the principle still operates. It was Nina’s work. It was her intellectual property. The fruit of her labor. Why should feminists support companies who steal and profit from it, any more than we support any other kind of exploitation?
Generative AI Is Blossoming While Research Withers on the Vine
The salt in the wound is that all this is happening at a time when it’s become more difficult and risky than ever to do academic labor. Grants have been cut. Budgets have been slashed. Centers for research are closing. This has impacted everything from developing cancer treatments to putting on a new play. And academics are seeing drastic pushback, even losing jobs, over innocent, once-uncontroversial parts of academic life, like hosting an MLK Day speaker or teaching Plato. Researchers are not only having their work stolen by corporations, they are also finding it harder and harder to do new work. As if the world is saying: “Okay, we’ve extracted your product and found out how to monetize it, so we’re done with you.” Meanwhile, money and resources are cascading into the for-profit, male-dominated AI tech field.
Generative AI Is a Threat to the Liberating Power of Education
When college students embrace generative AI to do their work for them, and faculty and administrators fail to push back on it, they are tacitly saying that a real education doesn’t matter. It becomes a transaction: professors set assignments, students turn in “work” created by bots, students get diplomas, the college gets paid. What hasn’t happened in that sequence is learning. What hasn’t happened is consciousness-raising. What hasn’t happened is the liberating, empowering process of higher education (in the truest sense of those words).
This is an intersectional feminist issue because women worked damn hard for access to education. There are plenty of American colleges that have admitted women for less than a century. Education is still, in many places in the world, a privilege unavailable to girls and women.
And it’s an intersectional feminist issue because literacy is a privilege once afforded – not that long ago – primarily to wealthy white men in Western culture. It’s a privilege that people of color and women fought for, one that was a barrier to voting and citizenship for racial minorities within living memory. It’s a privilege that, between the nineteenth and twentieth centuries, made the difference in America and Western Europe between an oppressed economic underclass comprising most of the population and a large, thriving middle class. And the rise of generative AI correlates conspicuously with plummeting literacy rates in America: the number of adults reading at the lowest proficiency level has risen to 28%, up from 19% less than a decade ago.
And if we give up the value of our education, how long before we lose our access to it? How long before we lose the benefits that it gave our feminist foremothers, the ones who rose up after frustrated centuries spent battling for the right to learn and go intellectually toe-to-toe with men? It’s no coincidence the feminist movement started one generation after the share of women receiving bachelor’s degrees jumped up in the mid-twentieth century. Now, with the widespread use of generative AI, psychologists and education specialists predict a widespread decline in cognitive activity – like problem-solving, decision-making, and memory-encoding. A preliminary study of students using ChatGPT to write essays found they “consistently underperformed at neural, linguistic, and behavioral levels” compared to those in the study not using AI. Another study has shown that students who use these tools begin to rely on them and actually perform worse than before when the tools are taken away.
Generative AI is eating away at the foundation of our best pathway out of subjugation and second-class citizenship. Because, if we step aside and let bots do our critical thinking, who will be left in power? No one gains from this collective rejection of intellectual growth but an elite few. And I can’t think of an elite few that ever had intersectional feminist principles at heart.
Who am I to give up the privilege of thinking for myself when women have shed tears and blood to give me that privilege?