Recent research shows that women are adopting generative AI tools at a lower rate than men, and some headlines are quick to point to the negative impact this could have on women’s careers. However, it’s important to ask why women are less enthusiastic about using AI than men. So today, let’s take a closer look at the gender gap in AI usage.
The gender gap in AI
According to a recent Harvard Business School working paper, women are about 25% less likely than men to use generative AI. Even when women were given the same opportunities and training to use AI tools, the study found they were still less likely to engage with the technology than their male counterparts. While the researchers found this to be “pretty shocking,” many women are pointing out that this gap in adoption isn’t surprising at all.
To start, this pattern of adoption follows past technology adoption trends. As a reporter for the Financial Times explained, “the same pattern of earlier male adoption showed up for the internet, and men have long held more enthusiastic opinions on emerging technologies from nuclear energy to driverless cars.” Those gaps have also tended to close fairly quickly, and evidence is already showing that the AI usage gap is practically nonexistent among teens.
While many companies are quick to push an “act now or get left behind” mentality when it comes to generative AI, there are real advantages to being a “second-mover.” As one reporter argues, “being a second-mover is particularly useful when the benefits of a new technology are uncertain and the risks are high, because what you lose in speed you gain in information.” Right now, generative AI is evolving much more quickly than its impact can be studied, leaving a lot of room for uncertainty about its real risks.
Do the benefits outweigh the harms?
AI’s risks and missteps are legitimate reasons for women to approach generative AI with caution. In addition to concerns about “transparency, safety, data privacy, fairness, inclusivity, [and] sustainability,” some of the ways AI has been deployed are also troubling.
Not only has AI been used to amplify existing forms of violence against women, it has created entirely new forms of abuse that overwhelmingly target women. AI has helped create more sophisticated doxing and sexual harassment campaigns, enhanced impersonation and sextortion schemes, and made deepfakes and image-based abuse all too common on social media. A very clear example of this made national headlines when X’s AI chatbot, Grok, was widely used to generate sexualized images of women. When a researcher tested the limits of the chatbot, they were unnerved not only by how easy it was, but by how Grok offered suggestions and turned some images into short sexually explicit clips unprompted.
Many women have stated that these generative AI systems simply give them the “ick,” and that they aren’t comfortable using a system that has repeatedly been used to harass them, nor one that reinforces gender stereotypes, as many studies have found.
Amplifying biases
Although ChatGPT and similar programs are commonly marketed as impartial and objective, they are not immune to bias and may actually amplify it. Gender bias can be built into the system itself, from the data it’s trained on and through decisions (conscious or otherwise) made by developers in the creation process. Bias can also creep in as the technology learns from user prompts. As researchers noted, “If primarily white men are setting AI agendas, it follows that the supposedly ‘neutral’ technology is bound to be inscribed with masculine preferences.”
One such example was a recruitment tool created by Amazon that had to be scrapped after the system “effectively taught itself that male candidates were preferable.” The AI system was trained on resumes from past applicants, which mostly came from men, and even when the algorithm was changed in an attempt to correct for the bias, the tool still showed clear preferences for male applicants. This led the team to abandon the project as it was too unreliable to implement.
Additionally, as fewer women engage with these programs, there is real concern that a feedback loop will emerge, amplifying biases based on prompts from predominantly male users. While tech enthusiasts are quick to encourage women to enter STEM fields, work on AI models, or simply use the products to help fix those biases (often for the sake of their own careers), it’s important to stop and ask whether it is a woman’s job to ensure a company’s products are free from bias against her. Or is it a company’s job to ensure its products are not harming consumers?