Quagmire of the Killer Robot: Drones & Loops

February 2023

What are the moral conundrums of deploying controversial A.I.-powered weaponry to carry out strategic intelligence objectives?

The confluence of intelligence and technology has set us into uncharted waters. The notion of artificial intelligence was once a chimerical, caricatured staple of science fiction and pop culture; now its use is very much manifest. Intelligence collection is facilitated by cutting-edge, streamlined capabilities that deliver results almost instantaneously, and automation and machine learning have eased long-standing frustrations over delays and miscommunication.

One particularly lethal type of technology has come to dominate the geopolitical and intelligence rhetoric surrounding automated weaponry: drones. Drones, alternatively referred to as unmanned aerial vehicles (UAVs), have redefined the military theatre thanks to a value proposition unique to warfare: the absence of a human pilot in the machine. Their surveillance and offensive capabilities have reshaped the command-and-control sequence and raise intriguing dilemmas regarding attribution and how “involved” humans truly are. As hegemonic powers across the globe increasingly deploy UAVs to accomplish their strategic objectives, questions emerge about the extent to which humans remain “in the loop”: in control of the autonomous weaponry.

Artificial intelligence-powered systems are not new to the realm of intelligence collection: numerous agencies at the national and local level, including the FBI and the NYPD, utilize various types of software for facial recognition, pattern detection, geospatial mapping, and the parsing of social media websites. Yet their application in what the British agency GCHQ refers to as “effects” – manifest actions in the field (GCHQ.gov) – becomes a morally ambiguous, sordid affair.

The following sections shall outline and juxtapose the benefits and disadvantages of deploying unmanned weaponry on a practical and ethical level, with particular attention to the notion of attribution: is a person, a state, or a machine to blame when matters go awry? Subsequently, a brief case study shall examine President Barack Obama and the United States’ prolific usage of drones across the Middle East, and what precedent this could set for the future of intelligence and AI-powered warfare.

Drones are being utilized for their numerous surveillance-based, tactical advantages across a wide range of intelligence collection types. A key element of drone capability is rooted in their nature as a remote technology: commanding officers positioned in Virginia can operate unmanned vehicles executing a mission over Afghanistan with the ease of a video game.

Confining humans to the command center and away from the battlefield is a rather revolutionary notion. The usage of such technology innately precludes the physical risk of injury or death. Drones do not have families that military generals must write to upon their passing. Drones do not cower, waver, or shrink in the presence of an enemy. They can be replicated 1:1 if a device is shot down or taken out of commission, and they possess no seditious, rebellious streak that might lead them to stray from orders (Netivist). They simply have an objective, and they embrace a Terminator-like zeal to carry it out. In the same manner that a web page is loaded in a browser, a drone “executes” a script that it has been programmed to carry out. The aforementioned rationales thus highlight drones as monumentally strategic in terms of efficiency and convenience for one surreal, overarching reason: they are not human.

Drones can leverage their GEOINT capabilities to keep their commanders safe while simultaneously mapping out the entire terrain of interest with little to no error (Chandele). A drone flying over even the most remote adversarial territory can do so in a variety of conditions and provide satellite-quality visual intelligence of the operating theatre; its reconnaissance capabilities are second to none. To this end, drones are capable of executing multiple operations in a manner that humans could not – it is not possible for even the most seasoned pilot to focus on a belligerent aircraft, map out the terrain, and intercept enemy command-base signals electro-optically (MASINT), all while maneuvering an extremely sophisticated aircraft. Most significantly, drones offer a potent option in a military’s arsenal for carrying out targeted operations. The idea of assembling a team to carry out an assassination over enemy terrain now seems almost anachronistic: drone strikes offer precision in homing in on their targets, even in nuanced, chaotic environments, thanks to what is known as the Multi-Spectral Targeting System, a powerful graphical user interface that provides a 360-degree “dashboard” of the operating environment, consolidating open source intelligence (OSINT), GEOINT, and MASINT, as well as live feedback from fellow intelligence colleagues, or human intelligence (HUMINT) (Kreps & Lushenko).
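To make the fusion idea concrete, the sketch below is a toy illustration – emphatically not the actual Multi-Spectral Targeting System – of how reports from different disciplines (OSINT, GEOINT, MASINT, HUMINT) might be consolidated into a single target picture. The class names, fields, and confidence scheme are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List


# Hypothetical report from any single intelligence discipline (OSINT, GEOINT, MASINT, HUMINT).
@dataclass
class IntReport:
    source: str        # e.g. "GEOINT"
    location: tuple    # (latitude, longitude) of the observation
    confidence: float  # analyst- or sensor-assigned confidence, 0.0-1.0
    summary: str


# Toy "dashboard" that consolidates reports on one target into a single picture,
# loosely mirroring the multi-source fusion described above.
@dataclass
class TargetPicture:
    target_id: str
    reports: List[IntReport] = field(default_factory=list)

    def add_report(self, report: IntReport) -> None:
        self.reports.append(report)

    def coverage(self) -> Dict[str, int]:
        """Count how many reports each discipline has contributed."""
        counts: Dict[str, int] = {}
        for r in self.reports:
            counts[r.source] = counts.get(r.source, 0) + 1
        return counts

    def aggregate_confidence(self) -> float:
        """Naive fusion: average confidence across reports (a placeholder, not a real algorithm)."""
        if not self.reports:
            return 0.0
        return sum(r.confidence for r in self.reports) / len(self.reports)


picture = TargetPicture("target-001")
picture.add_report(IntReport("GEOINT", (34.5, 69.2), 0.8, "Vehicle observed at compound"))
picture.add_report(IntReport("HUMINT", (34.5, 69.2), 0.6, "Source reports target present this evening"))
print(picture.coverage(), round(picture.aggregate_confidence(), 2))
```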

The U.S. has leveraged drone technology to enhance its campaign against insurgent leaders: major Al-Qaeda figures, including Ayman al-Zawahiri, have been taken out by drone strikes. The al-Zawahiri operation was a masterclass in targeted intelligence – a culmination of surveillance, geospatial tracking, and flawless execution, with no civilian casualties whatsoever (Miller). The United States’ usage of drones to coerce the enemy into yielding is explored in a later section, but the point stands: drones provide a strategic advantage to their operator within the realms of intelligence collection, interception, and above all intelligence “effects” – action and combat in the field.

Any nascent technology arrives with its own pitfalls and criticisms. Ethically, moral conundrums emerge, the most notable of which is the question of attribution. When drone strikes go awry, as they often have in this relatively early stage of deployment, who is to blame? Is the controller sitting at the military base to assume responsibility? Does the American flag insignia on a drone place the nation at fault? Is it perhaps the culpability of a specific agency that gathered the intelligence for the mission? Or, setting frivolity aside for a second, could the machine itself be blamed for mission failure? While the incumbent administration and its defense apparatus ultimately accept the blame – or the credit – for drone-based operations, the fog of war only thickens going forward. Take the Bayraktar TB2, a formidable killing machine that allies of Ukraine have supplied in the effort to defend against Russia’s Sisyphean siege (Perrigo). Bayraktar is a Turkish manufacturer – does its home country become an active belligerent in the war? Is it possible, or palatable, to exclude a “third” country from the conflict when its machinery could be instrumental in changing the course of a military invasion? As with the onslaught of cyber attacks plaguing the world order on a new plane beyond land, air, and sea, the question of attribution with drones complicates their relationship with normative intelligence effects.

On a more practical level, drones do not possess human nuance and instinct. Their rigid programming renders them cognitively static, especially because A.I. at this stage does not empower them to make situationally sensitive decisions. In this regard, the need for active intelligence officers is more imperative than ever: a human needs to be “in the loop” at all times during drone operations.
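As a rough sketch of what “in the loop” means in software terms – purely illustrative, with hypothetical names and thresholds rather than any real targeting doctrine – the gate below lets the automated system veto a strike recommendation on its own, but never authorize one; final release authority always rests with a human operator.

```python
from dataclasses import dataclass
from typing import Callable


# Hypothetical strike recommendation produced by an autonomous targeting pipeline.
@dataclass
class StrikeRecommendation:
    target_id: str
    confidence: float         # model/sensor confidence, 0.0-1.0 (illustrative)
    collateral_estimate: int  # estimated civilians near the target (illustrative)


def human_in_the_loop_gate(
    rec: StrikeRecommendation,
    operator_approves: Callable[[StrikeRecommendation], bool],
) -> bool:
    """Approve release only if BOTH the automated checks and a human operator agree.

    `operator_approves` stands in for the human decision-maker; the thresholds
    here are invented for illustration, not drawn from any real system.
    """
    automated_checks_pass = rec.confidence >= 0.95 and rec.collateral_estimate == 0
    if not automated_checks_pass:
        return False  # the machine alone can only veto, never authorize
    return operator_approves(rec)  # final authority stays with the human


# Usage: even a "perfect" recommendation is held until a person signs off.
rec = StrikeRecommendation("target-001", confidence=0.97, collateral_estimate=0)
print(human_in_the_loop_gate(rec, operator_approves=lambda r: False))  # -> False
```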

Artificial intelligence, if Ray Kurzweil’s theory of the singularity holds, will eventually far exceed human decision-making, but that day lies in the distant future; until then, humans need to firmly position themselves within drone command and control. How will intelligence officers and manufacturers alike program moral tenets into drones? If a drone is programmed to vanquish any moving personnel within a certain area, and launches missiles at a mother and her children – as drones already have – said moral conundrums arise. With regards to the command-and-control loop, it can also be argued that controlling drones sadistically gamifies killing: drone deployment’s remote nature keeps intelligence officers safe at their control panels, but it reduces one of the most integral and high-pressure environments to the aforementioned video game.

Lastly, because drone command links operate across the electromagnetic spectrum, drones share a quality with smartphones, computers, grid infrastructure, cars, and much more: they are hackable, and their MASINT electromagnetic capabilities can be manipulated within the cyber domain. Adversaries can mislead a drone by generating “fake” signals, causing it to deviate from its original mission and rendering it useless (Ahmed & Haskell-Dowland). More threateningly, adversaries could theoretically seize or reprogram drones to carry out their own bidding – something not as easily accomplished with captured human infantry. Surveillance drones fielded by the U.S. were hacked by Iraqi insurgents as early as 2009 (MacAskill), and while encryption within the military sphere has been bolstered, so too have the cyber-disruptive capabilities of China, Russia, Iran, and others.

The most manifest use of drones by a U.S. presidency and its attendant intelligence agencies has been that of President Barack Obama. According to data compiled by the Council on Foreign Relations, a President with a seemingly conciliatory, handshake-first approach to foreign policy can now be heralded as the first head of state to embrace weaponized drones at such scale. Across two terms, the Obama administration, in conjunction with the Central Intelligence Agency (CIA), carried out over 540 strikes across Pakistan, Yemen, Afghanistan, and Somalia (Zenko).

As a President retrenching from the criticisms his predecessor George W. Bush faced over massive boots-on-the-ground operations in Iraq and Afghanistan as part of the War on Terror, Obama embraced an out-of-sight, out-of-American-mind policy of drone strikes, one that aligned with new technological capabilities. Obama’s record with regards to civilian casualties may be the nadir of his presidency: sources indicate that embracing this form of intelligence effect resulted in the deaths of between 384 and 807 civilians between 2008 and 2016 (Purkiss & Serle). Obama and the CIA’s rationale for drone operations? The “surgical, precise nature” of the strikes, which this author believes was a panacea for the never-ending wars the Bush administration thrust the United States into. Furthermore, the proclaimed “surgical” nature of drones seemed necessary, given that insurgent opponents like Al-Qaeda and affiliates such as Al-Shabaab across Yemen and Somalia operated in remote, hostile, open environments that would surely be a death trap for conventional forces.

Another reason drone strikes were appealing is the strategic locations from which they could be launched. While the Middle East is a maelstrom of alliances and rivalries, with varying sentiment towards the United States, Obama and co. had bases in Qatar, Bahrain, and Kuwait (Lushenko & Kreps) from which drones could be deployed – this is crucial because, as with any remotely operated aircraft, drones have finite range and endurance.

The third and most intriguing rationale outlined by the Brookings Institution as to why the Obama administration upped the ante on drone usage – and faced fatal flaws – is one that deals directly with intelligence collection. In Sarah Kreps and Paul Lushenko’s view, the beginning of the Obama-decreed withdrawal of U.S. forces from Afghanistan depleted the United States’ capacity for human intelligence, draining its clandestine experts and local resources. Kreps and Lushenko believe the core ingredients of an effective drone strike are a synthesis of HUMINT and SIGINT – and being devoid of the former has led to “botched” drone strikes before and since. As HUMINT on the ground was dialed down, reliance on drone feeds was dialed up. But the nuanced, conversational nature of HUMINT cannot be replaced: drones may be able to map out an adversary’s bunker, but do they know what restaurant he went to that morning, which wedding he is attending tonight, and how long he takes to eat his dinner?

The struggle over who was accountable for the drones also led to a clandestine, rarely discussed “turf war” between the Pentagon – the metonym for the Department of Defense – and the CIA. A major shift during the Obama administration was the reallocation of drone control away from the CIA (though still in conjunction with it) and towards the military. The CIA had generally been responsible for all “covert war” activities in the Middle East from 9/11 onwards (Scahill), but as drones proliferated as Obama’s saber of choice, they increasingly fell under the command of the Pentagon. The clash came to a head in 2011, when leaks obtained by the New York Times and Washington Post revealed that neither side was a fan of the other’s approach: the Pentagon felt that the CIA was too timid and restrained in its utilization of drones, and military advocates felt that they, not the CIA, were the ones who had to answer to Congress with pie on their faces when matters went awry (Scahill). Chris Woods of Foreign Policy calls this futile, as “generals aren’t any better than spooks at hiding” statistics. The infighting continued into the twilight of the Obama administration and into the Trump one, with experts like Jeremy Scahill arguing that this “dual command” system for drones, split between the military and the chief foreign intelligence agency, is a double-edged sword. Mike Pompeo, Donald J. Trump’s CIA director, even went so far as to say that the CIA should be authorized to launch its own drones. The bureaucratic question of whether one agency should have full autonomy over drone operations continues today.

Drones pose an intriguing dilemma for intelligence agencies, governments, and militaries alike. They have several advantages with regards to their technical MASINT, GEOINT, and SIGINT capabilities: they offer their commanders the guarantee of remote safety and instant access to all realms of the operative theatre, they can be clandestine in nature, and they can tread where no human could fathomably go. Moreover, drones do not experience fear, cowardice, or trepidation, and instead execute exactly what they are programmed to.

On the contrary, that lack of humanity is exactly what makes drones problematic – their inability to adapt to rapidly evolving situations, their hackable, spectrum-dependent communication links, and their tunnel-vision mentality are not always conducive to the strategic environment. Moreover, ethical and attribution-based conundrums emerge over whether drone strikes are a function of human, machine, or state. For these reasons, keeping a human “in the loop” – in firm control of the drone – is imperative. Drones ought to be used in conjunction with human intelligence and military forces, not in place of them. If we envision drones to be intelligent enough to think like a human, we must also entertain the possibility that they could go rogue or turn on their commanders in pursuit of a self-fulfilling reward.

A number of conclusions can be drawn from briefly examining the Obama administration’s deployment of targeted drone strikes amid the retrenchment of the Iraq and Afghan wars. For the benefits mentioned in this paper, targeted drone technology was appealing for political reasons – there would be no boots on the ground in these adversarial environments at a time when public sentiment towards the War on Terror was becoming increasingly acerbic. The perfect storm of capable technology, conveniently located military bases from which to launch drones, and the presence of Al-Qaeda and Al-Shabaab, as well as the rise of other extremist groups like the Islamic State (ISIS), fuelled Obama’s drone-based trajectory, though his administration is not without fault – the proliferation of drone strikes came at the expense of several hundred civilian lives. This paper also subscribes to the belief that as the United States’ HUMINT sources withdrew from Afghanistan, a gaping void grew – and not one that drone surveillance, no matter how sophisticated, could ever fill. Finally, with more explicit focus on the intelligence backrooms, tension between the Department of Defense and the CIA inevitably manifested over who should be responsible – and directly accountable – for drone strikes and missions.

As the cyber domain proliferates and drone usage becomes more universal, the very future of intelligence collection and effects will be reshaped. The rocket-trajectory advent of artificial intelligence will soon render drones increasingly independent, perhaps to the extent that a human in the loop becomes nothing more than a possible liability. As the U.S., Russia, China, and other global hegemons and aggressors develop – or spend billions purchasing – drones, a nuclear-proliferation-esque world order could be at play. As this surfaces, the role of intelligence will become more integral, as HUMINT and SIGINT officers grapple with a nebulous future of automation and a new form of adversary that, regardless of its flag, is programmatically designed to carry out its function as a result of the very aspect it lacks – humanity.
