The Math of Matilda

In 1993, science historian Margaret Rossiter introduced the term "the Matilda Effect." Writing in the journal Social Studies of Science, Rossiter described a recurring pattern in scientific history where women’s intellectual contributions were systematically under-recognized or credited to men. She named the effect after nineteenth-century suffragist Matilda Joslyn Gage, who had already observed that women’s achievements were routinely minimized or erased.
Over the years, the Matilda Effect has often been told as a story of theft. A woman makes a discovery, then a man takes the credit, and history remembers the wrong name.

That story is not false, but it is incomplete.
Because when we look carefully at how recognition actually moves through science, mathematics, and historical memory, something more unsettling emerges. Most of the time, no one needs to steal anything at all.
Instead, small biases accumulate, tiny advantages compound, and institutional rules quietly amplify early recognition until history itself hardens around those outcomes.
The Matilda Effect is not primarily about bad actors; it is about systems that behave mathematically.
And once those systems start compounding, the past becomes very difficult to correct.
Why the Theft Narrative Falls Short
The theft narrative is emotionally satisfying because it offers clarity. There is a villain, a victim, and a moral resolution. But historians and sociologists of science have long warned that this framing obscures how scientific credit actually works.
Scientific recognition is not a single event. It is not granted once, at the moment of discovery. It unfolds over time through citations, authorship order, invitations, awards, textbooks, and archives.
Margaret Rossiter emphasized that the Matilda Effect operated through systematic under-recognition, not just overt appropriation. In other words, even when no one consciously took credit away from women, the structure of science still delivered recognition unevenly.
This matters because it changes the question.
Instead of asking, “Who stole this idea?” we need to ask, “How did the system move recognition?”
And that question leads us directly into mathematics.
Recognition as a Cumulative Advantage System
In the sociology of science, the concept of cumulative advantage predates the Matilda Effect. In 1968, sociologist Robert K. Merton described what he called the Matthew Effect, named after a verse in the Gospel of Matthew: “For to everyone who has, more will be given.”
Merton observed that well-known scientists tended to receive disproportionate credit, even when their contributions were similar to those of lesser-known colleagues. As a result, early recognition created future recognition.
This insight has since been formalized mathematically through preferential attachment models, which describe how networks grow. These models are used to explain phenomena ranging from the structure of the internet to citation networks in academic publishing.
Let’s use the concept of nodes in this example. Nodes are connection points in a network or system. A node could represent a computer, a printer, a grandparent in a family tree, or that one person who knows everybody. And that’s exactly where this fits in with the Matthew Effect as well as the Matilda Effect. In preferential attachment models, nodes that already have many connections are the ones most likely to receive new ones.
Translated into scientific circles, papers that are already cited are more likely to be cited again, and researchers who are already known are more likely to be invited, funded, and referenced. It’s a lot like the social media algorithms that boost influencers: early visibility creates future visibility.
Physicist Albert-László Barabási, whose work on network theory is foundational, showed that such systems naturally produce power-law distributions, where a small number of nodes accumulate most of the attention. These outcomes do not require bias to begin. They emerge from the rules of the system itself.
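To make that concrete, here is a minimal Python sketch of the preferential-attachment rule described above. It is an illustration, not anyone’s actual citation data: each new node links to a few existing nodes with probability proportional to how many connections they already have, and we then check how concentrated the resulting network is. The network size and link count are arbitrary choices for the demo.

```python
import random

def preferential_attachment(n_nodes, m=2, seed=42):
    """Grow a network where each new node links to m existing nodes,
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    # Start with a small fully connected core of m + 1 nodes.
    degrees = {i: m for i in range(m + 1)}
    # Each node appears in this list once per connection it holds, so
    # uniform sampling from it IS degree-proportional sampling.
    pool = [i for i in range(m + 1) for _ in range(m)]
    for new in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:           # pick m distinct targets
            targets.add(rng.choice(pool))
        degrees[new] = m
        for t in targets:
            degrees[t] += 1
            pool.extend([new, t])         # both ends of the new link
    return degrees

degrees = preferential_attachment(5000)
# Share of all connections held by the top 1% of nodes.
top_share = sum(sorted(degrees.values(), reverse=True)[:50]) / sum(degrees.values())
print(f"Top 1% of nodes hold {top_share:.0%} of all connections")
```

Run it and a small fraction of nodes ends up holding a wildly disproportionate share of the links, even though every node entered the network under identical rules: the concentration comes from the attachment rule alone.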
Now let’s add bias.
If women start with even a slight disadvantage in visibility or credibility, preferential attachment guarantees that the gap will widen over time.
This is the Matilda Effect as a mathematical process. Cumulative advantage is not abstract or metaphorical. It begins with a simple assumption that appears throughout the history of mathematics: growth depends on what already exists. In proportional growth models, the rate at which something increases is tied directly to its current size, the same logic that governs compound interest and early population equations. When recognition follows this rule, even a slight early difference matters, because the system amplifies what is already present.
Network scientists later formalized this behavior through preferential attachment, showing that new attention flows toward nodes that already have attention. In citation networks, this produces power-law distributions, where a small number of names accumulate a disproportionate share of recognition while most contributions remain at the margins. Merton described this empirically as the Matthew Effect, and later statistical models made the mechanism explicit by expressing recognition at each step as a function of prior recognition plus randomness. Even when chance is included, early advantage dominates. Once these dynamics are in motion, history itself becomes subject to survivorship bias, because only the work that accumulated enough recognition early on remains visible to be cited, archived, and taught.
The system is also path dependent. Early recognition decisions, even if partially accidental, lock in trajectories that are difficult to reverse. Fairness applied later does not reset the process. In cumulative systems, initial conditions shape outcomes so strongly that inequality becomes structural, not because anyone intended it, but because the math guarantees it.
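The “recognition at each step as a function of prior recognition plus randomness” model can be sketched in a few lines of Python. The growth rate, noise level, number of rounds, and the 5% head start are illustrative assumptions, not fitted values; the point is only that the same proportional rule, applied to slightly different starting points, produces a large absolute divergence even with chance included.

```python
import random

def career(initial, rounds=40, growth=0.10, noise=0.05, rng=random):
    """One simulated career: at each step, recognition grows in
    proportion to what already exists, plus independent chance."""
    r = initial
    for _ in range(rounds):
        r *= 1 + growth + rng.uniform(-noise, noise)
    return r

rng = random.Random(1)
trials = 2000
total_gap = 0.0
for _ in range(trials):
    a = career(100.0, rng=rng)  # a 5% head start in early recognition...
    b = career(95.0, rng=rng)
    total_gap += a - b

# ...compounds into a large absolute gap, though the rule was identical.
print(f"Average final gap: {total_gap / trials:.0f} (initial gap was 5.0)")
```

Averaged over many noisy trials, the small starting difference is multiplied by every subsequent round of growth; fairness in the later rounds never claws it back.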
And speaking of math, I’m going to post the mathematics at my website at Mathsciencehistory.com for my math nerds. And, while you are there, consider clicking on that coffee cup and making a donation through PayPal to Math! Science! History! Because every donation you make keeps us going. It truly does make a difference. Every penny helps pay for additional services such as editing, social media services, you name it. So that being said, I genuinely appreciate every donation that has already come our way. Thank you so much for your support.
Small Biases, Large Historical Consequences
One of the most important features of cumulative systems is that small initial differences matter enormously.
A five percent reduction in citations at the start of a career does not remain five percent; it cascades.
A paper that is cited less frequently is less likely to be read, and a researcher who is read less is invited less often to speak. Fewer talks lead to fewer collaborations, and fewer collaborations lead to fewer high-profile publications.
By the time prizes or textbooks enter the picture, the divergence appears dramatic. However, the cause was incremental.
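Here is a back-of-the-envelope Python sketch of that cascade. The stage names and the uniform 5% penalty per stage are assumptions made for illustration, not measured rates; the arithmetic is the point.

```python
# Illustrative stages through which early recognition propagates;
# applying the SAME small 5% penalty at each stage is an assumption.
stages = ["readers", "talks", "collaborations", "publications", "prizes"]

standing = 0.95  # relative standing after an initial 5% citation deficit
for stage in stages:
    standing *= 0.95  # the penalty is inherited again at each stage
    print(f"after {stage:>14}: cumulative gap {1 - standing:.1%}")
```

By the last stage the cumulative gap has grown past a quarter, from an input of five percent, without any single stage ever imposing more than a five percent penalty.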
This is why the Matilda Effect cannot be fixed simply by “adding women back” later. Once cumulative advantage has reshaped the landscape, recognition is already locked in.
This phenomenon is visible in citation data. Large-scale analyses summarized by Science, published by the American Association for the Advancement of Science, have shown persistent gender citation gaps across multiple fields. Women’s papers are cited less often than men’s, even when controlling for journal prestige, subfield, and publication year.
The key point is that citations are not just markers of impact; they are inputs into future recognition.
Bias at the citation level becomes bias everywhere else.
Why Peer Review Does Not Neutralize the System
It is often assumed that peer review acts as a corrective mechanism. If papers are evaluated anonymously, the argument goes, then merit should prevail.
But peer review does not operate in a vacuum.
A landmark study published in Nature in 1997, titled “Nepotism and Sexism in Peer-Review,” examined fellowship evaluations and found that women had to be significantly more productive to receive the same competence scores as men. The reviewers were not consciously discriminating; they were responding to reputational cues.
Later studies replicated this effect in hiring, grant funding, and recommendation letters. As summarized by research reviewed by the National Academies and others, women were more often described as diligent or cooperative, while men were described as brilliant or visionary.
Peer review works as a filter, but it does not reset reputation; it inherits the system’s prior conditions.
And this matters because cumulative advantage does not only operate downstream, it shapes who gets evaluated favorably in the first place.
Invisible Mathematical Labor and Non-Creditable Work
The Matilda Effect becomes even more pronounced when we look at what kinds of labor generate recognition.
In mathematics and science, certain activities reliably produce authorship and prestige, such as proposing a new theory, leading a lab, or writing the final paper. Others rarely do: error checking, calculation, data cleaning, proof verification, replication, and code maintenance.
As historians of computing and mathematics have shown, women were disproportionately concentrated in these roles throughout the twentieth century. Human computers, for example, performed the calculations that made astronomical and physical discoveries possible, yet their names rarely appeared on publications.
The historian Marie Hicks, writing about computing labor, has shown how technical work done by women was often reclassified as clerical once it became feminized, stripping it of prestige without changing its substance.
From a systems perspective, this becomes crucial. If a contribution does not generate a node in the recognition network, it cannot accumulate advantage. As a result, it disappears mathematically.
Thus, the Matilda Effect is not only about women being denied credit; it is about entire categories of work being structurally unable to receive it.
Archives as Amplifiers of Early Advantage
History does not remember everything equally; it remembers what survives.
Archivists and historians rely on published papers, institutional records, correspondence deemed significant, and materials that were preserved, catalogued, and digitized.
Women’s scientific work was less likely to enter these channels. It was more likely to appear in letters, internal reports, or collaborative contexts that were not preserved with the same care.
And this creates what historians call archival bias: what we know about the past is shaped by what was saved.
As scholars of archival science have emphasized, including those writing in The American Archivist, archives are not neutral containers. Archives are products of institutional priorities.
Once again, small biases compound. What is preserved is cited; what is cited is taught; what is taught becomes canonical.
The Matilda Effect persists because memory itself is cumulative.
Why Recognition Arrives Too Late
Another modern reframing of the Matilda Effect treats it as a lag problem.
In many cases, women’s work was eventually validated, but only after a male authority endorsed it, the field itself shifted, or new tools made the contribution legible.
By the time recognition arrived, the opportunity for cumulative advantage had passed.
This pattern has been discussed by philosophers of science examining how legitimacy operates. Legitimacy often precedes recognition, but legitimacy itself is socially mediated.
The historian Londa Schiebinger has argued that women’s ideas were frequently dismissed not because they were wrong, but because they did not fit prevailing frameworks. When those frameworks changed, the ideas appeared prescient. However, history had already moved on.
Delayed recognition is not neutral, and in cumulative systems, timing is everything.
Why Institutions Produce These Outcomes Predictably
It is tempting to treat the Matilda Effect as a cultural failure. But the deeper issue is institutional design. Scientific institutions reward single authorship, seniority, network centrality, and prestige markers, but they largely do not reward maintenance, replication, teaching, or collaborative depth.
Because women were, and are, historically steered into the latter categories, the system reliably under-credited them.
This outcome does not require malicious intent. It requires only that institutions optimize for narrow signals of success.
Economists and sociologists studying organizational behavior have shown that institutions tend to reproduce their own metrics, and what they measure becomes what matters.
If recognition metrics are biased, outcomes will be biased, even under conditions of formal equality.
What Does This Mean for History Right Now?
The most uncomfortable implication of this analysis is that the Matilda Effect is not finished.
If recognition behaves mathematically, then future historians will inherit the distortions we are creating today.
Citation gaps documented in recent studies published in Proceedings of the National Academy of Sciences and summarized in Science suggest that women’s work continues to be under-referenced. Patent data analyzed by researchers and reported in outlets like STAT show that women are less likely to be listed as inventors, even when they contribute to patentable work.
These are not moral failures waiting to be corrected. Instead, they are trajectories already in motion.
History will remember what we amplify.
The Matilda Effect is often described as a failure of fairness. But it is more accurately a failure of systems compounded by unfairness. There are still systems in place that favor the scientist with the most recognizable name, and systems that gravitate toward the male scientist even when he is less qualified than his female counterpart. And when recognition compounds, small biases do not stay small. They become structure, memory, and history.
Margaret Rossiter gave us a name for this pattern and mathematics explains why it persists.
If we want a different history, we cannot rely on correction alone. Instead, we must pay attention to how credit moves, accumulates, and hardens in real time, because the most dangerous biases are not the loud ones. The most treacherous prejudices are the quiet ones that let math do what math always does.
If the Matilda Effect were simply a matter of individual wrongdoing, it could be solved by better intentions. But because it emerges from cumulative systems, it requires something more difficult: deliberate interruption. What do these deliberate interruptions look like? Systems that amplify early gains can be redesigned to distribute advantage. Citation practices can be audited rather than inherited. Authorship can be tied to documented contribution rather than position alone. Evaluation can rely on structured criteria instead of reputation shortcuts. Invitations and nominations can be treated as inputs that shape history, not neutral honors that merely reflect it. And archives can preserve the labor that sustains discovery, not only the moments of apparent brilliance.
None of these changes require perfect fairness or moral purity. Instead, they require attention at the points where recognition enters the system. The Matilda Effect persists because small distortions are allowed to compound unchecked. However, it diminishes when those distortions are noticed early, named clearly, and corrected repeatedly. History does not change all at once. It changes when we stop letting mathematics run on autopilot. It changes when we begin choosing, consciously and collectively, how credit moves through time.
Every listener who reads more carefully, cites more deliberately, and credits more precisely alters the trajectory of what survives. The future history of science is not written only by discoveries, but by the choices we make about whose work we amplify, remember, and teach. In cumulative systems, even small acts of recognition matter, because they are the ones that compound.
And that is where each of us still has agency, even inside systems that compound.
Until next time, carpe diem.
SOURCES:
1. Margaret Rossiter & the Matilda Effect (1993)
Rossiter, M.W. (1993). “The Matthew Matilda Effect in Science.” Social Studies of Science, 23(2), 325–341.
- DOI: https://doi.org/10.1177/030631293023002004
- Free PDF summary via AWIS: https://awis.org/wp-content/uploads/SSS-matilda-effect.pdf
2. Robert K. Merton & the Matthew Effect (1968)
Merton, R.K. (1968). “The Matthew Effect in Science.” Science, 159(3810), 56–63.
- DOI: https://doi.org/10.1126/science.159.3810.56
- Link: https://www.science.org/doi/10.1126/science.159.3810.56
- Free PDF: https://garfield.library.upenn.edu/merton/matthew1.pdf
3. Barabási & Preferential Attachment / Power-Law Networks
Barabási, A.-L. & Albert, R. (1999). “Emergence of Scaling in Random Networks.” Science, 286(5439), 509–512.
- Overview via Scholarpedia (by Barabási): http://www.scholarpedia.org/article/Scale-free_networks
- Wikipedia summary of the Barabási–Albert model: https://en.wikipedia.org/wiki/Barab%C3%A1si%E2%80%93Albert_model
4. Wennerås & Wold — Peer Review Sexism Study (1997)
Wennerås, C. & Wold, A. (1997). “Nepotism and sexism in peer-review.” Nature, 387(6631), 341–343.
- DOI: https://doi.org/10.1038/387341a0
- Nature link: https://www.nature.com/articles/387341a0
- PubMed: https://pubmed.ncbi.nlm.nih.gov/9163412/
- Free PDF: https://www.cs.utexas.edu/~mckinley/notes/ww-nature-1997.pdf
5. Marie Hicks — Computing Labor & Reclassification as Clerical
Hicks, M. (2017). Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing. MIT Press.
- MIT Press page: https://mitpress.mit.edu/9780262535182/programmed-inequality/
- H-Net review: https://networks.h-net.org/node/9782/reviews/3682972/penn-hicks-programmed-inequality-how-britain-discarded-women
6. Londa Schiebinger — Women’s Ideas Dismissed Due to Frameworks
Schiebinger, L. (1989). The Mind Has No Sex? Women in the Origins of Modern Science. Harvard University Press.
- Harvard University Press: https://www.hup.harvard.edu/books/9780674576254
Schiebinger, L. (1999). Has Feminism Changed Science? Harvard University Press.
- Harvard University Press: https://www.hup.harvard.edu/books/9780674005440
7. Gender Citation Gap (PNAS & Science)
Huang, J. et al. (2020). “Historical comparison of gender inequality in scientific careers across countries and disciplines.” PNAS, 117(9).
Lerman, K. et al. (2022). “Gendered citation patterns among the scientific elite.” PNAS, 119(40).
Science/AAAS article summarizing the research:
- “Women researchers are cited less than men. Here’s why—and what can be done about it.” Science, AAAS: https://www.science.org/content/article/women-researchers-cited-less-men-heres-why-what-can-done
8. Women & Patents (STAT News reference)
Koning, R. et al. (2021). Gender bias toward men in biomedical patent awards.
- STAT News coverage: https://www.statnews.com/2021/06/17/gender-bias-toward-men-patents-less-biomedical-innovation-women-study/
USPTO “Progress and Potential” reports.
9. Archival Bias
The script references general scholarship on archival theory. The most relevant journal is The American Archivist, published by the Society of American Archivists.
10. National Academies — Recommendation Letter Language Study
National Academies of Sciences, Engineering, and Medicine. (2020). Promising Practices for Addressing the Underrepresentation of Women in Science, Engineering, and Medicine.