This is a contributed article by Manouk (Mark) Termaaten, Founder and CEO at Vertical Studio AI.
AI weapons are being used on the battlefield when they should be outlawed. The time has come for nations to sign treaties prohibiting the use of these weapons in war.
U.S. tech giants in particular are developing AI and computing tools used to surveil and kill alleged militants in Gaza and Lebanon. Despite the sophistication of the technology, the number of civilians killed has fueled concerns that the tools are causing the deaths of innocent people, possibly due to faulty data and flawed algorithms.
For instance, in October 2023, the Israel Defense Forces (IDF) reportedly combined AI with older audio technology to locate Ibrahim Biari, the commander of Hamas’s Central Jabalia Battalion. The strike killed dozens of civilians, according to reports. Reportedly, all targets over the past three years have been found with the help of AI.
The IDF calls AI a “game changer” for warfare, and officials describe it as a “force multiplier” that improves their strike capabilities. The Israeli military fought its so-called “First AI War,” an 11-day bombing campaign against Hamas, in May 2021.
The IDF’s employment of AI in its wars was made possible by soldiers and reservists who work in the technology industry, according to a report in The New York Times.
The IDF’s use of these tools is multifaceted. It has, for instance, incorporated a chatbot trained in Arabic dialects as a means of monitoring public sentiment.
Its tools also include an AI audio program that can find targets based on sounds “such as sonic bombs and airstrikes.”
Furthermore, the IDF has deployed facial recognition technology designed to identify obscured or wounded faces. That equipment has misidentified people.
The IDF’s virtual reality system aids soldiers scanning urban zones.
The AI, however, has demonstrated flaws. The Times reports that certain officials have ethical qualms about the technology’s use in the war. Their concerns revolve around increased surveillance, civilian deaths, and the arrest of innocent people.
As part of its Lavender program, the IDF has reportedly compiled a 37,000-person target list based on connections to Hamas. The military denies that the list exists. Lavender ranks people on a scale of 0 to 100 according to the likelihood that they are militants, drawing on various factors, including a person’s family ties.
The Studio, an innovation hub inside the IDF’s Unit 8200, developed an AI language model spanning different Arabic dialects, trained on decades of intercepted texts, phone calls, and social media posts. The system is crude enough that intelligence officers must double-check its output.
Google, a major contractor for the Israeli government since 2021, has provided Israel’s Defense Ministry and the Israel Defense Forces with white-glove service, including access to its latest technology under Project Nimbus, a cloud computing contract aimed at making sweeping upgrades to Israeli government technology.
Gaby Portnoy, director general of the Israeli government’s National Cyber Directorate, has confirmed that the program is used for combat. “Thanks to the Nimbus public cloud, phenomenal things are happening during the fighting, these things play a significant part in the victory — I will not elaborate,” Portnoy said. Google also offers clients Vertex, a platform they can use to sort their own data with AI algorithms.
In November 2024, a Google employee requested access to Gemini AI technology on behalf of the IDF, which wanted an AI tool to process documents and audio. The Israeli government also contracts with Amazon.
Since the IDF opened its front in Gaza, it has made use of Habsora (Hebrew for “the Gospel”), an internally developed tool that has provided commanders with thousands of human and infrastructure targets for bombing, contributing to the violence in Gaza. Habsora is made up of hundreds of algorithms.
These algorithms analyze data, including intercepted communications and satellite imagery, to determine the whereabouts of military targets, although some Israeli commanders have raised alarms about the system’s accuracy. Some express concern that the military is placing too much trust in the technology’s recommendations.
Errors can still happen for many reasons when AI is involved, according to Israeli military officers who have worked with the targeting systems and other tech experts. One intelligence officer said he had seen targeting mistakes that stemmed from incorrect machine translations from Arabic to Hebrew.
The IDF has long been at the forefront of deploying artificial intelligence for military use. In early 2021, it launched the Gospel, the Habsora tool described above, which sorts through Israel’s vast array of digitized information to suggest targets for potential strikes. It also developed Lavender, which uses machine learning to filter intelligence databases against requested criteria and narrow down lists of potential targets, including people.
Commercial AI models are being used in warfare. The cloud and AI are being used to deliver bombs and bullets. Tech companies are empowering the Israeli military with digital weapons.
These AI weapons are repugnant, and they should be prohibited. AI-powered weapons raise clear ethical concerns: AI cannot reliably distinguish between combatants and civilians, does not understand proportionality, and places civilians at greater risk. AI could also undermine global security, spark a new arms race, and put these tools in the hands of terrorist groups and others. Nations need to sign a treaty prohibiting the use of these weapons.