Simulating Real-World Conflict: This article lifts the curtain on a fast-growing, $15-billion global ecosystem of AI-powered war-gaming software and predictive-battlefield platforms. Built largely by private defence contractors (Palantir, Booz Allen Hamilton, Israeli start-ups, and Gulf-backed tech labs), these simulations draw on real-time satellite imagery, social-media feeds, and past combat records to stage future wars on demand. What began as harmless training aids now sits at the core of live targeting cycles: algorithmic predictions inform which neighbourhoods are bombed, which drone corridors remain open, and whose weapons programmes receive funding.
By selling worst-case scenarios as digital products, vendors turn fear into recurring revenue, nudging generals and politicians towards war rather than peace. The article poses a difficult question: when combat choices are priced, packaged, and sold as software subscriptions, whose interests decide when war is waged? And whose lives carry the algorithm’s margin of error?
Military simulation has shaped strategy and doctrine for decades, beginning with the Cold War-era models developed by the RAND Corporation. These simulations were initially analogue or mathematical games that sought to forecast nuclear escalation or battlefield outcomes. Military war-gaming in the 21st century, however, is a drastically changed beast. No longer confined to classified think tanks, the field has grown into a $15 billion worldwide industry driven by private defence contractors, AI integration, and interactive digital platforms.
Simulating Real-World Conflict
Defence companies such as Palantir and Booz Allen Hamilton now provide turnkey platforms for “predictive warfare”, allowing commanders to see threats and consequences in real time. Even commercial game engines such as Unity are being leveraged to create affordable, immersive war-gaming environments for military training. Such “serious games” let personnel rehearse air operations, estimate collateral damage, and map out strategic outcomes without ever engaging in combat.
Digitised warfare has produced a competitive market in which companies make money by anticipating worst-case scenarios. Palantir, for example, defines its mission as defending liberal democracies, but its software is integrated into drone targeting and battlefield analytics that obscure the distinction between ethics and efficiency.
In a high-profile partnership with Booz Allen Hamilton, Palantir is now developing integrated systems to modernise warfighting infrastructures. According to Booz Allen’s CEO, Horacio Rozanski, these collaborations are about accelerating mission results and pushing boundaries in defence innovation. Palantir’s CEO, Alex Karp, adds: “We’re creating a future where AI-infused hardware keeps our allies safe and our enemies scared.”
This kind of rhetoric expresses a logic of profit: war as a subscription service in perpetual renewal, fuelled by algorithms and software updates. Vendors offer not solutions but cycles of escalation, urging governments towards militarisation.
U.S. National Research Council
The convergence of the entertainment sector and military simulation is not new. As far back as 1996, the U.S. National Research Council was calling for cooperation between the Department of Defense (DoD) and video game makers. The outcome was games such as America’s Army, a free online game that served as a recruitment advertisement and propaganda vehicle, painting a sanitised image of American troops.
Such games normalised war for younger audiences and quietly promoted interventionism. They typically cast American troops as the heroes and “the enemy” as evil or faceless, without examining deeper political implications. This framing subtly reinforced the notion that U.S. foreign military intervention is justified, effective, and necessary. As media researchers such as Løvlie and Nohrstedt & Ottosen noted, the simulations amounted to a one-sided account, frequently hailing military action as the sole means of ending conflict.
One of the clearest examples of predictive warfare in action is Project Maven. Launched by the U.S. Department of Defense in 2017, this initiative contracted tech firms, initially including Google and Palantir, to develop AI systems capable of analysing drone footage in real time. These tools could identify potential targets, classify military assets, and aid commanders in making strike decisions. However, reliance on incomplete or biased datasets created ethical dilemmas and operational risks. Google ultimately withdrew from the project after internal protests by employees objecting to the militarisation of their technology. Despite the controversy, the project marked a turning point: simulations were no longer just training tools; they were active participants in warfare.
Israel’s Unit 8200
Israel’s Unit 8200, its elite signals-intelligence unit, represents another frontier of predictive warfare enabled by AI. Working in partnership with domestic and American technology companies, the unit built a large language model trained on intercepted Arabic communications from the occupied territories. The system, reminiscent of a wartime version of ChatGPT, was intended to answer questions about individuals under surveillance and flag future targets for drone strikes.
According to a joint investigation by +972 Magazine and The Guardian, the model was trained on enormous datasets, some of which allegedly consisted of intimate conversations with no military value. Former Israeli intelligence officials said the technology enables blanket surveillance and the predictive labelling of “troublemakers”, raising serious human-rights concerns. As Zach Campbell of Human Rights Watch said: “It’s a guessing machine. And in the end these guesses can wind up being used to convict people.” The danger is not merely error but bias, built into life-or-death algorithmic judgements.
As predictive warfare spreads, it collides with long-standing ethical paradigms of military conduct: jus ad bellum (the just resort to war) and jus in bello (just conduct in war). The U.S. Department of Defense has made some effort to address these tensions through a succession of guidelines, including the adoption of its AI Ethics Principles in 2020, which emphasise responsibility, reliability, and traceability. Palantir asserts that its systems are attuned to these ethical requirements and that effective military tools and ethical responsibility can go hand in hand.
Nevertheless, as pointed out in the DoD’s Responsible AI Strategy (2022), implementation remains the actual challenge. When machine learning models and opaque AI tools become ingrained in military workflows, it becomes difficult to hold anyone accountable — particularly when civilians are harmed based on algorithmic determinations.
The convergence of AI
The convergence of AI, game technology, and military tactics has bred a new form of war — one that’s simulated, forecast, and marketed by software. This $15 billion market isn’t merely peddling tools; it’s influencing political choices, rewriting moral standards, and normalising war as a product.
Even though companies such as Palantir and Booz Allen claim that their products increase security and uphold democratic values, the evidence suggests that predictive warfare tends to spur escalation, mechanise suspicion, and desensitise societies to surveillance. The real issue is no longer whether these technologies are effective, but whether they are compatible with the world we wish to create.
As simulations edge closer to actual conflict, we need to ask: who profits when war is predicted, virtualised, and commodified? And who suffers when the simulation ends but the destruction is all too real?