By Sarina Tareen
In the rapidly evolving realm of military innovation, one of the greatest threats to human rights, international security, and international humanitarian law is the emergence of lethal autonomous weapons systems (LAWS), commonly known as killer robots.

These systems can identify, select, and engage targets without human intervention. Advances in the military technology of countries such as Russia and Israel show that the age of killer robots is no longer science fiction but reality.
Perhaps the most notable example of this trend is Israel's use of AI-driven warfare in Gaza. For several years, Israel has used AI-driven facial recognition software at checkpoints in the West Bank. Furthermore, Israel's deployment of systems like Lavender, which reportedly marked 37,000 people as targets in the early stages of the Gaza war, illustrates how AI is now making life-or-death decisions. Thousands of innocent civilians are reported to have been wrongfully targeted because of Lavender's roughly 10% error rate, in direct violation of the principles of proportionality and distinction under international humanitarian law.
Additionally, Israel's warfare technology, including booby-trapped robots, remote-controlled dogs, and AI-guided quadcopters, has been used in bombing civilian areas in Gaza. This further illustrates how autonomous weapons systems haphazardly destroy civilian infrastructure and lives. The destruction of 59% of buildings and 87% of schools in Gaza is not only a humanitarian catastrophe but also a direct outcome of brutal, dehumanized warfare.
Russia has also adopted autonomous weapons systems in its military operations, particularly in Ukraine. Its extensive use of Iranian-made Shahed drones, launched in massive waves over Kyiv, shows how such systems threaten civilians and destroy infrastructure. These drones are pre-programmed and can select and attack targets without any human monitoring. The shift from just war to algorithmic aggression represents a dangerous escalation in destruction.
During its recent cross-border operations, India deployed Israeli-made Harop loitering munitions, often described as suicide drones, alongside low-cost miniature swarm drones. These drones patrol over targeted locations and autonomously identify and strike targets. The Indian defence establishment praises these systems for enabling calculated strikes; in reality, they remove individual human judgment from warfare, leaving no space for ethical accountability. A swarm drone follows its pre-programmed instructions without weighing the proportionality of harm to nearby civilian infrastructure.
This concern is not theoretical; it is grounded in reality. When autonomous weapons systems are deployed in complex conflict zones, where the distinction between combatants and non-combatants is often blurred, the threat becomes obvious. Critics argue that this trend eliminates critical human oversight in life-and-death decisions during warfare. Machines lack empathy and ethical judgment, and the principle of proportionality cannot be programmed: a machine cannot determine whether the anticipated military advantage of a strike outweighs the potential civilian harm. As Human Rights Watch argues, such automated systems dehumanize war, reducing people to data points.
Proponents, however, argue that killer robots can lower the risks to soldiers and eliminate irrational behaviour such as trigger-happy shooting or killings driven by fear and revenge. They further argue that LAWS can process information faster than humans, potentially minimizing collateral damage. Nevertheless, these arguments neglect the complexity and uncertainty of warfare. Article 6 of the International Covenant on Civil and Political Rights (ICCPR) enshrines the inherent right to life, which must be protected by law. Allowing machines to make decisions that can extinguish the right to life, without accountability, is a breach of Article 6.
One of the most concerning dimensions of LAWS lies in their accessibility to non-state actors. ISIS conducted its first effective drone attack in Iraq in 2016, targeting two Kurdish fighters, and subsequently established a drone unit known as the "Unmanned Aircraft of the Mujahideen". In 2018, Syrian rebel forces launched a swarm of 13 homemade drones to strike Russian installations. Such examples are striking evidence of the weaponization of AI-driven technologies by terrorists.
Noel Sharkey, an expert in AI and robotics, has warned of a future in which cheap, lethal swarm drones with no built-in safety mechanisms are easily accessible to terrorist organizations. This concern is vividly demonstrated in the viral video Slaughterbots, which depicts AI mini-drones targeting people based on social media data. Terrorist organizations could eventually launch drones to hunt down and eliminate targets using GPS, facial recognition, and online data, making attacks more precise, easier to mount, and harder to trace.
This alarming threat extends to conflict-affected countries. In Africa, there is fear that autonomous weapons lost in counterterrorism operations may be recovered and repurposed by insurgents. The U.S. has reportedly already lost Reaper and Predator drones in fragile states such as Yemen, Libya, and Niger. If hacked or modified, these systems could enable non-state actors to conduct attacks at full capacity.
Accountability for these autonomous weapons systems is another critical concern. Who is to blame when a killer robot acts unlawfully? The programmer? The manufacturer? The person who approves the deployment? Without clear lines of responsibility, war crimes committed by autonomous systems may go unpunished, challenging the fundamental concept of command responsibility in international law.
Despite the growing danger, efforts at international regulation remain stalled. Countries such as India, Russia, Iran, Türkiye, and Israel resist binding treaties, fearing that such restrictions could undermine their military superiority. This opposition has contributed to the deadlock at forums such as the UN Convention on Certain Conventional Weapons (CCW). An important milestone was nevertheless achieved in November 2023, when the UN General Assembly passed a resolution, with 164 votes in favour, calling for action on the regulation of autonomous weapons. Civil society organizations, including the Stop Killer Robots campaign, Human Rights Watch, and Amnesty International, continue to press for a comprehensive treaty.
To prevent a chaotic future, the international community must act. States should negotiate a binding treaty that prohibits lethal autonomous weapons, guarantees accountability, and codifies the principle of meaningful human control. Without it, we will soon live in a world where machines decide who lives and dies, without ethical judgment.
Author: Sarina Tareen – Research Intern at Balochistan Think Tank Network (BTTN), International Relations graduate from BUITEMS, Quetta, Pakistan.
(The views expressed in this article belong only to the author and do not necessarily reflect the editorial policy or views of World Geostrategic Insights).