From Pager Explosions to Targeted Drone Attacks in Lebanon: Assessing Israel’s Use of AI in Warfare

Abstract

Against the backdrop of the 2024–2025 escalation between Israel and Hezbollah, this thesis examines Israel’s use of artificial intelligence in military operations through the Lebanese case, beginning with the coordinated pager and radio explosions of September 2024 and moving to the sustained pattern of drone strikes and aerial surveillance that came to structure everyday life in southern Lebanon and parts of Beirut. The thesis does not approach AI warfare as a sudden or purely technical rupture, but as the outcome of a longer historical trajectory in which remote targeting, predictive surveillance, and algorithmically assisted decision making gradually became normalized, reshaping how violence is planned, justified, and experienced, particularly by the civilians who live through it. The analysis proceeds from the lived exposure of Lebanese civilian spaces and moves outward toward legal doctrine and ethical reasoning, asking how International Humanitarian Law and Just War Theory operate when life-and-death decisions are increasingly shaped by automated systems and distant operators rather than by individuals directly embedded in the environment of harm. The thesis adopts a doctrinal legal analysis of the Geneva Conventions, customary international humanitarian law, and Amended Protocol II to the Convention on Certain Conventional Weapons, alongside ethical reasoning grounded in Just War Theory, in order to assess how existing legal and moral frameworks respond to the operational realities revealed by the Lebanese case.
The pager and radio explosions demonstrate how embedding lethal force in ordinary communication devices disperses risk through civilian life and undermines feasible verification at the decisive moment of harm, while the continued reliance on drones shows how surveillance and strike capacities collapse into a condition of persistent vulnerability even when framed through the language of precision, warnings, and ceasefire. In this sense, the thesis argues that Israel’s use of AI-driven military technologies in Lebanon does not suspend the applicability of international law, but it does reorganize the conditions under which legal and moral judgment are exercised, marking a shift in which targeting increasingly functions as a data-driven pipeline rather than a contextual human assessment. What ultimately emerges is that the challenge posed by AI warfare is not the absence of law, but the growing difficulty of preserving substantive human judgment, accountability, and civilian protection within contemporary forms of remote and algorithmically mediated conflict.
