According to research by Zahra Chatou, a former Meta strategist and founder of the think tank Code for Good Now, women are significantly penalized for using artificial intelligence to create job application materials, while men who use similar AI assistance receive forgiveness and understanding.
Chatou distributed identical AI-generated resumes under the names of two candidates: Emily Clark and James Clark. Reviewers were told that both resumes were AI-assisted. The results revealed a clear gender double standard.
Reviewers questioned Emily’s credibility 22% more often than James’s. Her competence was doubted twice as often, with feedback stating she “can’t even write a CV herself” and questioning whether she had the skills for the job.
James received a completely different response. Reviewers noted, “He needed a little help preparing it.”
This reveals implicit bias in hiring. Chatou explained, “When men use AI, we question their effort. When women use AI, we question their integrity. This difference changes the perceived risk of using AI.”
The generational gap was stark. Generation Z men, who grew up around AI, viewed Emily’s CV negatively 3.5 times more often than James’s. James’s CV received a 97% approval rating, while Emily’s identical CV received only 76%.
Rembrandt Koning, associate professor at Harvard Business School, has documented a 25% gap between men and women in adopting AI for work. Women, concerned about being perceived as cheating and facing possible accusations, became more risk-averse.
According to a Caltech survey conducted in January involving 3,000 respondents, women were significantly less confident that the benefits of AI would outweigh the disadvantages, and they were also less confident that AI would help advance their careers.
The findings point to one of the key barriers to closing the AI adoption gap. As Chatou noted, “If people feel they will be judged more harshly for using AI, they are less likely to adopt it, regardless of their ability. Closing the AI adoption gap means examining not only how people use AI but how that use is evaluated.”
