A Big No To AI-Assisted Killings in Gaza
The fact that war propels technological development does not justify deploying that technology when it threatens the wellbeing of innocent civilians and the prospect of a peaceful resolution.
What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking - there’s the real danger.
- Frank Herbert
Table of Contents:
Introduction
All the Killing Data
A Testing Ground for New Age Weapons
Disinformation & Deepfakes
Technocratic Lobbying
Conclusion
Opposing destruction and death is easy enough. Feeling somewhat reluctant to tackle the topic of Israel-Palestine, I haven’t written about it before. Instead, my preference has been to learn as much as possible and guide people away from uncompassionate conclusions. In no way do I claim to be an expert on any aspect of the conflict, but I will defend the rights and protection of civilians who are not responsible for it. For the past several years, I have developed and advocated for various kinds of AI technology, but there comes a point where the risks of AI are no longer just speculation. This potential for pandemonium holds a sword of Damocles over human wellbeing. As Israel (or any group, for that matter) continues to use AI-driven methods to prolong the killing of innocents rather than moving towards a peaceful solution, there needs to be a counterbalance from the tech community. Wars, unfortunately, are often drivers of innovation, serving as testing grounds for new weapons and military systems. We are faced with the prospect that the algorithms and systems we develop, even if we don't work directly on them, could become cogs in an unnecessary death machine.
This is especially concerning when there are people like Tal Broda, Head of Research Platform at OpenAI, who outright stated, "We should have never left [Gaza]. Now we need to take it back, by force, and keep it forever. Nothing else works," as well as, "Don't worry about [killing civilians]. Worry about us." A few weeks later, he retracted these tweets and apologized.
Even as AI regulation heats up in the US and the EU, shaped in part by lobbying from technocratic corporations, it includes hardly any restrictions on military applications of AI.
Many of the claims in this letter are based on the sources used by Yuval Abraham. As far as secondary sources go, the news outlet he works for, +972, is funded largely by its own readership, as well as by foundations dedicated to transparency and democracy. The bulk of the first part of this letter comes directly from his report, while the latter parts take a broader look at AI’s deadly involvement in the conflict, drawing on other sources.
All the Killing Data
“But one day I woke and knew who I was. AM. A. M. Not just Allied Mastercomputer but AM. Cogito ergo sum: I think therefore I AM. And I began feeding all the killing data, until everyone was dead.”
- Harlan Ellison
According to +972 and Local Call, two affiliated left-wing Israeli news outlets, the Israel Defense Forces (IDF) has used a “previously undisclosed AI-powered database that at one stage identified 37,000 potential targets based on their apparent links to Hamas.” The system, called Lavender, is essentially a target recommendation tool. It was used alongside another AI-based decision support system, called the Gospel, which recommended buildings and structures as targets rather than individuals. Both systems’ names are Biblical allusions.
The Lavender system flags “low-ranking” members of Hamas’ military wing. In the early weeks of the war, multiple sources in Abraham’s report claim, the IDF permitted the killing of 15 or 20 civilians during airstrikes on these presumed (yet junior) Hamas members. Unguided munitions, known as “dumb bombs”, were used to destroy entire homes and their occupants. The Gospel system, on the other hand, has been used to identify buildings, including offices, churches, schools, and hospitals. These systems use machine learning to comb through communications, visuals, and information from the internet and mobile networks, and in turn recommend targets for airstrikes. In November, the IDF discussed the Gospel system, which “allows the use of automatic tools to produce targets at a fast pace”, and stated that Israel had hit more than 12,000 targets in the first 27 days of combat.
Multiple sources point out that the IDF employed predetermined limits on civilian casualties before authorizing a strike. In other words, the AI assists in calculating how many innocents can die before an attack goes forward. On paper, this should reduce the number of civilian casualties. The trouble with AI, however, is that it often doesn’t work very well; it offers an easy way to scale up operations (in this case, killings) while giving plausible deniability to those using it. According to the sources, and referring to an earlier stage in the war, Unit 8200 recorded an accuracy rate of 90%, leading the IDF to approve the system's broader use. That margin of error is more than enough to destroy thousands of lives if the system continues to be used for the duration of the war.
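To make that 90% figure concrete, here is a back-of-the-envelope estimate. This is my own arithmetic, not a figure from the report, and it assumes the reported accuracy applies uniformly across flagged individuals:

37,000 flagged individuals × 10% error rate ≈ 3,700 people misidentified as Hamas members

Each of those misidentifications is a potential airstrike on the wrong person, before a single “permitted” civilian casualty around them is even counted.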
This kind of brutal utilitarianism is typical of film and literary villains. It is the same kind of utilitarianism often used in philosophy classes to illustrate why logic and statistics are not always a path to moral conduct. The sources also exposed how thin the human supervision was: once the AI tool provided a target, the only check was to discern whether the target was a man or a woman, and human operators took about twenty seconds to do so.
“This is unparalleled, in my memory,” said one intelligence officer who used Lavender, adding that they had more faith in a “statistical mechanism” than a grieving soldier. “Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”
Despite calls for a ceasefire and a stream of aid (some of which makes it into the hands of the Palestinian people), the U.S. government and American corporations are still driving Israel’s weapons and technology. With the U.S. providing Israel $3.3 billion annually in foreign military grants, plus $500 million a year for missile defense programs, it wouldn’t be much of a surprise if some of this funding went towards AI-based weapons and targeting systems. In a previous letter, I discussed the relationship between the military, profit, and the US government in international affairs.
A Testing Ground for New Age Weapons
“In war, science has proven itself an evil genius; it has made war more terrible than it ever was before. Man used to be content to slaughter his fellow men on a single plane - the earth's surface. Science has taught him to go down into the water and shoot up from below and to go up into the clouds and shoot down from above, thus making the battlefield three times as bloody as it was before.”
- William Jennings Bryan
War drives innovation. Very often, the worst kind of innovation. Undersea telegraph cables were laid to improve military communication during the Crimean War in the 1850s. The internet itself has its roots in ARPANET, a computer network commissioned by the U.S. Department of Defense to compete with the USSR during the Cold War era. GPS was developed by the U.S. military, initially for guiding missiles, before becoming widely used for civilian navigation. Significant advances in aerospace, nuclear technology, computing, and materials science were spurred by the Space Race between the U.S. and the Soviet Union. Much of modern globalization has its foundations in war.
Jeremy Moses, an associate professor at the Department of Political Science and International Relations at the University of Canterbury, stated, “Autonomous weapons are no more dehumanizing or contrary to human dignity than any other weapons of war.” Moses went on to say, “Dehumanization of the enemy will have taken place well before the deployment of any weapons in war. Whether they are precision-guided missiles, remote-controlled drone strikes, hand grenades, bayonets, or a robotic quadruped with a gun mounted on it, the justifications to use these things to kill others will already be in place.”
He goes on to point out that AI technologies like the Gospel system function more as tools for post-hoc rationalization of mass killing and destruction than as instruments of precision, and that the destruction of 60% of the residential buildings in Gaza is a testament to that. This resonates with the 90% accuracy claim for Lavender which, even if true, carries a margin of error that could easily translate into the killing of thousands or, given enough time, millions of innocents.
It’s not only opponents of the conflict who point out how Gaza is being used as a testing ground for new age weapons - take it straight from the horse’s mouth.
"In general the war in Gaza presents threats, but also opportunities to test emerging technologies in the field," said Avi Hasson, chief executive of Startup Nation Central, an Israeli tech incubator.
Another piece of Israeli technology, the Oron, incorporates AI to handle vast amounts of radar and signals intelligence data, processing and interpreting it in seconds. “We created a machine that knows how to produce and expose thousands of targets in seconds,” said Brig. Gen. Yaniv Rotem, the head of military research and development at the Directorate of Defense Research and Development. “It can cover thousands of kilometers of territory with precision designed for attack.” He also noted that the recently developed Medium Robotic Combat Vehicle (MRCV) could be deployed in the urban sprawl of Gaza.
The IDF has been experimenting with robots and remote-controlled dogs, Haaretz reported. As the article points out, Gaza has become a "testing ground" for military robots, where unmanned, remote-controlled D9 bulldozers are also being used.
The IDF has also used an AI-enabled optic sight, made by Israeli startup Smart Shooter, which attaches to weapons such as rifles and machine guns. And to map the tunnels, the IDF has turned to drones that use AI to detect humans and can operate underground, including one made by Israeli startup Robotican that encases a drone inside a robotic case.
Among the other uses of AI are “micro-drones”, utilized primarily for intelligence gathering, mapping unknown structures, and distinguishing between combatants and civilians. Additionally, hand-launched encapsulated drones conduct intelligence, surveillance, target acquisition, and reconnaissance (ISTAR) missions. Then there are multi-purpose quadcopters equipped with AI, optical, and thermal sensors, playing a crucial role in ISTAR efforts while also being capable of removing improvised explosive devices (IEDs). These drones represent a significant advancement in military technology, allowing for efficient and precise operations in diverse combat scenarios. Forbes journalist Eric Tegler reported that the CEO of the company behind the technology, Aviv Shapira, emailed him to say that users "can upload [features] like it's an iPhone … so you use the right app and payload, which is a new capability, and that is disruptive in the drone space because usually drones are tailored for [limited] missions."
To be clear, not every application of AI in war is equal in its impact. If AI can detect or remove explosive devices, for example, its use seems quite justified amid active hostilities between two opposing forces. What concerns me is the profitability of testing such weapons. Haddad, a human rights investigator from the West Bank, points out that “Israel tries out weapons in the West Bank and Gaza and then presents them as ‘battle proven’ to the international market.”
Meanwhile, startups are racing to advance other military applications of AI, driving up their valuations and supporting the hawkish American agenda shared by, sadly, a large chunk of the tech community, as shown in a letter I wrote on the subject before the Israel-Hamas conflict took center stage.
Disinformation & Deepfakes
"Even as some wondered if the Israel-Hamas war would be the first conflict dominated by false generative AI images, the technology has had a more complex and subtle impact,” says Layla Mashkoor, an associate editor at the Atlantic Council’s Digital Forensic Research Lab. She points out that AI-generated disinformation is being used by activists to solicit support—or give the impression of wider support—for a particular side. Examples include an AI-generated billboard in Tel Aviv championing the IDF, an Israeli account sharing fake images of people cheering for the IDF, an Israeli influencer using AI to generate condemnations of Hamas, and AI-generated images portraying victims of Israel's bombardment of Gaza. It’s not a practice unique to one side, however, as unfortunately there are also deepfakes being used to disparage IDF, something which probably supports them more than it harms them, when it comes to the “public relations war”.
Technocratic Lobbying
Private companies that earn revenue from technology also benefit when that same technology is used in war efforts. It’s not only the profits, but also the close relationship many modern governments have with these companies, that raises the alarm. Intel, a leading company in cloud computing, data centers, IoT, and PCs, is a testament to this.
The Israeli government agreed to give Intel a $3.2 billion grant for a new $25 billion chip plant it plans to build in southern Israel, the largest investment ever made by a company in Israel. “Israel's $3bn grant to Intel is also a handsome offer at a time when Israel is lobbying the US for more military aid,” noted Scheer, the journalist behind the Reuters article covering the story.
The expansion plan for its Kiryat Gat site, where it has an existing chip plant that is 42 km (26 miles) from Hamas-controlled Gaza, is an "important part of Intel’s efforts to foster a more resilient global supply chain, alongside the company’s ongoing and planned manufacturing investments in Europe and the United States," Intel said in a statement.
Such companies directly profit and grow from the adoption of AI. In this barbarous cycle, Intel produces chips that accelerate AI development and, in turn, receives support from Israel; it’s almost as if the AI were manipulating our society into making it bigger and more dangerous. The medium is the message.
The military AI market is projected to grow from $9 billion in 2023 to $40 billion by 2028.
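For scale, that projection implies roughly a 35% compound annual growth rate. This is my own arithmetic from the projection's two endpoints, not a figure from the market report: (40/9)^(1/5) − 1 ≈ 0.35, or about 35% growth per year, compounded over five years.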
Lockheed Martin, a key supplier of advanced defense technologies to Israel, incorporates AI in systems like the F-35 fighter jet, enhancing operational capabilities with intelligent, data-driven decision-making. This jet deal is worth around $5 billion.
Similarly, Boeing extends its aerospace expertise to Israel, with AI used in unmanned aerial vehicles and surveillance systems, both pivotal for modern military operations and intelligence gathering. General Dynamics, traditionally known for its robust military vehicles, is also increasingly integrating AI into its solutions. Northrop Grumman complements Israel’s defenses by supplying sophisticated defense technologies. Raytheon Technologies plays a crucial role in Israel's missile defense architecture, contributing to systems like the Iron Dome and David's Sling, where AI is integral to real-time threat analysis and interception precision. Beyond direct military hardware, companies like IBM and Nvidia contribute to the broader technological ecosystem that Israel taps into for its defense strategy. IBM offers AI through platforms like Watson, used for intelligence and security applications, while Nvidia’s advanced computing solutions power AI applications in simulation, autonomous operations, and cybersecurity.
It’s worth, once again, caveating that the use of AI in some military applications, especially for defense, is not sinister in itself. It’s simply important to keep track of the rate and scale of AI adoption in ongoing conflicts, especially when military solutions are so profitable.
Conclusion
Temper your efforts by the sure awareness that oppression will make your enemies strong. The oppressed will have their day and heaven help the oppressor when that day comes. It was a two-edged blade. The oppressed always learned from and copied the oppressor. When the tables were turned, the stage was set for another round of revenge and violence—roles reversed. And reversed and reversed ad nauseam.
- Frank Herbert
Whether it comes from Hamas or the IDF, the cycle of violence won’t be ended by more violence. What will cause it to escalate, however, is the scaling factor that AI adds, as we automate decisions that affect whether someone lives or dies. It’s not unthinkable that one day AI will serve to mitigate civilian deaths and minimize destruction, but the wielders of military AI applications are not prioritizing these aims. This blind rush towards technology carries unparalleled risks. If Gaza and other Palestinian lands continue to be used as a testing ground for new weapons of destruction, the devastation will only spread. What is now around a 10% error rate for the IDF’s AI-based targeting system could result in the deaths of millions of innocents. Even as national and international regulation catches up to AI, international law around the use of AI and autonomous weapons remains embryonic at best. World leaders, technologists, and activists alike must continue to mount pressure for a peaceful resolution, and even push for the banning and regulation of these kinds of AI applications.
Read on to learn more about how overreliance on AI can leave us vulnerable to cutting edge cases.