AI is rewriting the Holocaust — and history at large.
A flood of AI-generated “memory” is replacing lived memory, expertise, and historical fact.
This is a guest essay by Jason Steinhauer, a Global Fellow at The Wilson Center and a Senior Fellow at the Foreign Policy Research Institute.
Fazal Rahman lives in Pakistan and, up until recently, had never heard of the Holocaust. When asked by the BBC about the event, he confessed he did not know what the term meant. Yet he was familiar with images of it — AI-generated images, that is.
Rahman is part of a network of Pakistani content creators who use artificial intelligence applications to generate fictitious images of historical events, including the Holocaust. They principally post on Facebook, where their pages have amassed hundreds of thousands of followers.
By producing content that generates a high number of interactions, such as AI-generated Holocaust imagery, creators can earn $1,000 per month through Facebook’s content monetization program. The more clicks, views, and shares their content gets, the more they get paid — particularly if their content is consumed by higher-income audiences in the United States, United Kingdom, and Europe. For Rahman, it has become his sole source of income.
How would someone such as Rahman, who’d never heard of the Holocaust, get the idea to post AI-generated Holocaust imagery?
They ask ChatGPT. According to the BBC investigation, creators inside these content groups use AI chatbots to determine which historical events create high-performing social media content. The Holocaust was one of the answers.
Since 2022, there has been growing discussion about how artificial intelligence applications such as large language models, image generators, and chatbots will affect “history” — both the professional discipline of history and public understandings of the past. In 2025, the contours of that influence are now visible. AI applications are being used around the world in dozens of history-related contexts. A shortlist includes:
An exhibition at the White House about America’s “Founding Fathers” and “Founding Mothers,” curated by PragerU, which includes historical portraits that morph into AI-generated video clips, activated by scanning a QR code
Game designers, including some in the Netherlands, incorporating AI-generated historical imagery into their games in an attempt, in their words, to “leverage the potential of AI to bring history to life”
Historians and researchers in multiple countries using tools such as Google’s NotebookLM (a research and note-taking tool) to summarize and synthesize scholarly literature, as well as take notes and arrange book chapters
Reviewers for peer-reviewed journals, in some instances, using ChatGPT or other large language models to author their assessments of scholarly articles
Holocaust museums, and other sites of conscience, using hologram technology coupled with AI to simulate conversations with deceased witnesses and survivors of past events
Conservators and historic preservationists in Italy, Greece, India, and China using AI to reconstruct destroyed archaeological sites or damaged cultural artifacts such as paintings, sculptures, and mosaics
Hitler’s speeches and writings being translated by AI, remixed, set to background music, and propagated on social media
The current U.S. presidential administration using AI to generate a list of history books whose titles contain keywords perceived as offensive, then removing those books from library shelves at the U.S. Naval Academy
University presses licensing their books to train large language models
Museums and universities establishing policies and procedures for using AI in the workplace
Approximately one-third of professors across disciplines, including history, describing themselves as frequent users of generative AI tools for tasks such as developing lesson plans, making lecture slides, and designing custom chatbots that answer student questions
Students in classrooms worldwide using ChatGPT and other large language models to author homework assignments, historical essays, and, in at least one case, a PhD dissertation
The proliferation of uses of AI in historical or history-adjacent spaces prompted the New York Times to publish an article in June, titled “A.I. Is Poised to Rewrite History. Literally.” Presumably in response, at least in part, the American Historical Association released its “Guiding Principles for Artificial Intelligence in History Education” shortly after.
Though only two months old, the Association’s document, while laudable, already feels like an artifact, staking a claim to land that has experienced a seismic continental drift. It asserts several times that AI cannot replace history teachers or professors. The wider world seems to disagree. Over the summer, Microsoft released its list of the 40 occupations with the highest “AI applicability score.” Historians ranked second, behind only interpreters and translators, with a “coverage” score of 0.91 out of 1. According to the second-largest corporation in the world (by market capitalization), most of what historians do can be replicated or outsourced to machines, either now or in the future.
History.AI, then, is here — and with it, the likely alteration of the history profession as it has been practiced for decades. What it becomes is still to be determined. While some observers and traditionalists had held out hope that AI might be a passing fad, akin to the laserdisc or the unicycle, that appears unlikely. Much as AI applications and advancements will alter medicine, law, marketing, and science, so, too, will they alter “history” in all its manifestations.
So far, no one has quite been able to articulate what that altered future looks like. The punditry around History.AI has largely mirrored conversations from prior decades when new platforms and technologies emerged. In the 2000s, crowd-sourced historical knowledge on Wikipedia was perceived as a threat to expert-centric models of historical knowledge. The debate was whether Wikipedia would ever be accurate enough to be considered reliable history, or whether it could address the harder intellectual questions that historians tackled. Today, Wikipedia is the fourth-most-visited website in the world; its content informs everything from YouTube videos and Amazon Alexa (a virtual assistant technology) to journalism and legal research.
With the advent of social media and “e-history” in the 2010s, historians again criticized the diluted and inaccurate historical memes circulating online, arguing that the viral past would never substitute for accurate and rigorous historical knowledge. Today, YouTube, Reddit, Instagram, Facebook, and X are all among the 10 most-visited websites in the world, with some history-related accounts boasting millions of followers and subscribers.
While Wikipedia and social media have captured worldwide attention, professional history has clung to a rapidly diminishing airspace. As has been well documented in this newsletter and many other places, history departments in North America and Europe are being shuttered; retiring history professors are not being replaced; fewer history degrees are being awarded; history museums are closing, some due to lack of visitors and donor support, others due to political pressures; funding for historical research is disappearing; and full-time job openings for professional historians in universities, museums, or historical societies are few and far between, often with paltry salaries.
Anecdotally, I know many historians and history-degree-holders working in jobs (or pursuing jobs) outside of the profession. It is all well and good for advocacy organizations in Washington, D.C., to plant a flag and say they are not moving. The problem is that everyone else has.
For a period, it seemed that so-called “hallucinations” and inaccuracies produced by generative AI might be the kryptonite that would protect expert-centric disciplines such as history. Surely the need for accuracy would always ensure the need for professional historians. Alas, the tech companies were playing chess while scholars were playing checkers. The more that everyone uses AI technologies, the better they become. We are all training the machines in real time, every day.
Now armed with access to scholarly databases and thousands of university books, the AI applications are far more accurate and less “hallucinative” than they were even one year ago, and rapidly improving. In the case of historic preservation and recreation of bygone relics, AI is likely to become more accurate than a human ever could be.
In prior decades, there were arguments, too, that the proliferation of these technologies might help professional history — and, by extension, public understandings of history. If only professional historians could leverage Wikipedia and social media to elevate their own voices, the technologies would boost the relevancy of history and historians in people’s everyday lives, and the web would be overflowing with accurate historical analysis.
Some public intellectuals and journalists continue to hold out hope that AI will do the same: that its transformative power will “bring history to life.” These scenarios failed to account for the realities of how people behave within incentive structures shaped by modern technologies. If the ecosystem rewards rapid-fire posting on social media with financial, reputational, and political gains regardless of accuracy, that is what people will do. Other forms of cultural production won’t disappear overnight; they’ll simply become novelty items that people admire but don’t feel are necessary in order to succeed in their everyday lives.
That is precisely what has happened to history, even as we are surrounded by billions of pieces of e-history online every day — some created by humans, most generated or circulated by AI and algorithms. Despite more than 20,000 history institutions in the U.S. alone, Americans struggle to name a single historian, identify a historian they know or have met, or even recite basic historical facts. (According to one survey, 48 percent of Americans could not name a single concentration camp or ghetto from World War II.)
While historians continue to publish books, Americans read fewer and fewer of them. Amid a deluge of information, misinformation, and disinformation — as well as unrelenting demands on people’s time, energy, and attention simply to make ends meet — rigorously researched information about the past is among the first items to be jettisoned.
Where is all this headed? Futurology is a tricky business, akin to batting in professional baseball; being right 30 percent of the time makes you eligible for a lucrative payday and consideration for the Hall of Fame.
The evolution of AI and the decline of “History” have not happened in a vacuum. Their stories are intertwined with broader political and geopolitical agendas. In the United States, the race to lead the world in artificial intelligence has become a national imperative too consequential to finish runner-up.
Across multiple administrations and Congresses led by both political parties, that race has resulted in a massive infusion of funding into S.T.E.M. (science, technology, engineering, and math) at the expense of the humanities, as well as a deregulatory environment that has balked at imposing any serious restraints on the technologies, lest China, Russia, Saudi Arabia, or India surpass us. It is a powerful political statement to be winning the global race for the future. It is less impactful to say we are winning the global race for the past.
At the college and university level, the allure of high-paying S.T.E.M. careers that would justify the high cost of tuition lured students (and parents) out of the liberal arts buildings and into the science and business halls. AI has upended that story, too — coders and engineers now face bleak job prospects — but the students have not returned. With no history students, administrators can justify fewer history professors and, eventually, no history departments.
Why pay salaries plus benefits for professors to teach to empty classrooms?
Students can learn the history they need for free from a variety of high-quality YouTube videos or podcasts. And while history museums and historic sites still retain high trust among American audiences, maintaining such sites is expensive (people, facilities, maintenance, programming, climate-controlled storage, etc.). Donors are giving less; some have passed away; and others are scared to take a stand on matters of history for fear that elected officials will come after them and their bank accounts.
So, while AI applications such as NotebookLM and artifact-reconstruction software will help those professional historians who remain employed write their syllabi, refine their classroom slides, or do museum work, AI is unlikely to improve the state of professional history in North America, Europe, or beyond. At least as currently constructed, AI will not help students do better research or become better writers; will not improve the quality or accuracy of e-history in the public sphere or on social media; will not boost funding to history institutions that sorely need it; and will likely aid propagandists and repressive forces as they continue to pressure independent thought and scholarship on difficult topics.
But AI will continue to be used, because the discourse around AI taps into one of humanity’s biggest fears: the fear of being left behind. If everyone else is using it, surely I need to as well. The more these tools are used, the better the applications become, and the more legacy professions fade into obscurity.
What AI really offers is proving too alluring to pass up — and, indeed, this has been the genius of how AI has been marketed to the public. AI tools offer the promise of being ahead of the curve: saving time, earning money, processing more information at greater scale, parsing data to reach a plausible explanation, and getting to an outcome faster, even if that answer is not the best one or even the correct one. All technologies have politics, and AI’s politics are engineered for speed, automation, profit and predictability.
The best history, on the other hand, emerges from a deliberate and tortuous human exploration of what is often unpredictable and unexpected, usually without a clear profit motive. That experience, itself, could soon become a thing of the past.
What will replace it? For inspiration, I asked Google Gemini. It produced a lengthy AI-generated report that claimed, “instead of facing obsolescence, the profession of history is in the midst of a profound and necessary evolution.” It cited 24 websites, more than half of which were university webpages marketing history classes to prospective students, with three additional citations from Wikipedia. It was eerily similar to the AI-generated Holocaust images produced in Pakistan; it had the illusion of reality, while simultaneously being completely divorced from it.
A metaphor for our current times, perhaps.