In 2025, Russell Reynolds Associates published research examining how nearly 750 CEOs were portrayed in the global media. The results were clear: women CEOs face heightened scrutiny and unconscious bias in media narratives. These distorted perceptions risk holding women back from the top seat. At RRA, we view media coverage as a proxy for broader societal opinions and norms, and therefore a powerful indicator of how leadership is understood and judged.
Now, as generative AI tools increasingly become a primary source of information—summarizing coverage of leaders, answering questions about performance, and shaping how individuals and organizations understand leadership—we believe it is critical to ask whether these systems are reproducing the same imbalances. This question is especially relevant because the LLMs underlying these tools are trained on large-scale text corpora that may encode patterns and biases present in public discourse, including media narratives.
To further our understanding of this important topic, RRA partnered with EqualVoice—Ringier’s data-driven initiative focused on measuring and improving representation in media—to study whether this phenomenon is being replicated when these tools discuss men and women CEOs.
Collecting over 8,000 AI-produced responses, we used EqualVoice’s bias detection tool (EVA) to flag potential issues, then conducted rigorous human review of every flagged instance. After review, 1,447 confirmed instances of gender bias remained—representing 18% of all outputs.
Notably, 94% of these biased statements were about women CEOs.
While public discussions of AI bias often focus on overtly harmful or discriminatory language, our research reveals a more subtle—and arguably more pervasive—issue: systematic differences in how women and men CEOs are framed, even when discussing comparable achievements. The vast majority of confirmed bias instances (94%) appeared in descriptions of women CEOs, expressed not through negative tone but through structural framing.
Women were frequently labeled by gender (“female CEO”), positioned as exceptional because of gender (“breaking barriers,” “first woman to…”), or described using gender-coded leadership traits. Comparable patterns were rare or absent for men. This indicates that gender remains a salient organizing principle in AI descriptions of women leaders, while men are treated as the default. By qualifying women as “female CEOs,” the output linguistically reinforces the assumption that the standalone title “CEO” is inherently male.
The most consequential bias was not labeling alone, but the compounding effect of gender exceptionalism, which framed women’s success as symbolic rather than grounded in professional capability. While acknowledging factual milestones is appropriate, using gender as the explanation for achievement subtly shifts the narrative away from performance.
These patterns—consistent across models—suggest that the issue is not confined to any specific platform but reflects a systemic mirroring of historical biases. The models act as high-fidelity “bias amplifiers” of the vast data they were trained on, demonstrating that without active intervention, AI will naturally default to the “CEO = man” archetype present in global discourse.
These framing differences accumulate over time, shaping how leadership competence is perceived. Because the language is often positive, these biases are easy to miss and hard to challenge. But left unaddressed, GenAI tools risk normalizing outdated views on leadership.
This has broader implications for organizations that are concerned about their CEO pipelines—and many have good reason to be.
As the CEO role grows harder than ever, succession pipelines are falling short: fewer than half of board directors believe their CEO succession plans will succeed. Underpinning this is the reality that women’s CEO representation has stagnated around 10% for decades, meaning prior succession approaches have done little to address the issue. This shows no sign of immediate improvement; in 2025, the share of incoming women CEOs declined to about 9% globally, continuing a steady fall from a peak of 13% in 2022.
Women leaders represent a strategic lever that organizations are significantly underutilizing in their CEO succession pipelines. But, as we’ve explored in prior research, pervasive media narratives (and, now, AI-generated narratives) have meaningful impacts on how boards and the public perceive women leaders. Ultimately, these framing differences can have a substantial impact on women leaders’ opportunities and their presence in succession pipelines.
What’s next?

At Russell Reynolds Associates, we help organizations strengthen succession systems and expand leadership pipelines. Through initiatives such as RRA Artemis, we are working to shift the CEO leadership paradigm—broadening how potential is identified, accelerating the development of women succession contenders, and ensuring that leadership opportunity is shaped by capability rather than narrative bias. For boards that want to sidestep unintentional AI-generated bias and meaningfully expand their CEO succession pipelines:
Dr. Annabella Bassler is the CFO of Ringier and Initiator of EqualVoice. She is based in Zurich.
Hetty Pye is a senior member of Russell Reynolds Associates’ Board & CEO Advisory practice, and is the co-founder of RRA Artemis. She is based in London.
Leah Christianson is a member of Russell Reynolds Associates’ Center for Leadership Insight. She is based in San Francisco.
Lea Eberle is the CFO of Ringier Media Switzerland and the COO of EqualVoice. She is based in Zurich.
Gabrielle Lieberman leads Russell Reynolds Associates’ Center for Leadership Insight. She is based in Chicago.
Margot McShane co-leads Russell Reynolds Associates’ Board & CEO Advisory practice globally, and is the co-founder of RRA Artemis. She is based in San Francisco.
Elsa Reichling is the Chief Innovation Officer of EqualVoice. She is based in Zurich.