Christopher Soelistyo

Militarised AI: A New Frontier of Warfare?

This article was originally published on UCL Pi Media in April 2019 under the title "Militarised AI: A Sinister Deus Ex Machina": https://uclpimedia.com/online/militarised-ai-a-sinister-deux-ex-machina


Source: US National Archives

March 2019 marks the expiry of a controversial contract between Google and the US military, concluding an internal revolt that led to 13 resignations and the signing of a cancellation petition by more than 4,600 employees. The contract saw Google use its dominance in Artificial Intelligence (AI) to develop algorithms that identify objects in the military’s video footage. Object recognition might not seem like the kind of issue that would cause such an incident, spurring Google to publicly disavow any future work in weapons technologies – that is, if the cameras collecting said video footage weren’t being deployed on drones, and if the objects being identified weren’t potential targets for said drones. It turns out that the combination of AI and targeted killing can be quite concerning.


On 22nd February 2008, Eric Schmitt and David Sanger reported in the New York Times that the Bush Administration had agreed upon a new targeting principle that would allow drone operators to eliminate targets without having to confirm their identities in advance, with strikes instead based on suspicious behaviour or characteristics. In Schmitt and Sanger’s words: “this shift allowed American operators to strike convoys of vehicles that bear the characteristics of Al-Qaeda or Taliban leaders on the run”. Since then, these so-called ‘signature strikes’ have been used to target individuals on the basis of long-term video surveillance, which is used to build a ‘pattern-of-life’ analysis and estimate the likelihood that a given individual is a legitimate target. In this scheme, factors such as frequent close contact with a known terrorist would increase one’s likelihood of being marked for elimination.


Unsurprisingly, the use of ‘signature strikes’ is not without its critics. Is it possible to differentiate a ‘terrorist’ from a ‘non-terrorist’ based solely on behaviour observed through drone footage? How can we justify the potential collateral damage caused by an attack if we don’t know the target’s value within the organisational structure? How prominently do human biases factor into this judgement? ‘Signature strikes’ clearly lie in an extremely murky area.


I’m dwelling on this specific aspect of US targeted killing because it seems like the natural conclusion of any integration between unmanned drone technology and recognition AI, but it is not the only aspect. For example, facial recognition AI could significantly aid the use of ‘personality strikes’, where the target’s identity is known. This may be helpful in situations where the target’s face has been altered, as the result of injury or ageing. However, the signature strike example is particularly interesting from a technical standpoint: it involves the recognition of not only a still image, but a collection of time-adjacent images – what we would call a ‘video’. By inspecting each image individually and then integrating the results, a well-built artificial neural network could classify aspects of the entire video, rather than just single images. Developing such powerful AI is no trivial task, but the precedent is already there. Video recognition networks have been used to classify short clips containing various actions (jumping, running and so on), and in the biological realm to analyse cell behaviour. Japanese start-up Vaak has even developed a network to weed out ‘likely shoplifters’ from CCTV footage based on their behaviour. If such a network can be applied to drone footage, who is to say it couldn’t be used to separate ‘terrorist’ from ‘non-terrorist’?
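
To make the idea concrete, here is a minimal, illustrative sketch of the inspect-each-frame-then-integrate approach described above. It is not any real military or commercial system; the use of PyTorch, the architecture, the input sizes and the two-class output are all assumptions made purely for illustration.

```python
# Minimal sketch: classify a short video clip by encoding each frame
# independently, then pooling the per-frame features over time.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Encodes a single RGB frame into a feature vector."""
    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global average pool -> (B, 64, 1, 1)
        )
        self.fc = nn.Linear(64, feature_dim)

    def forward(self, x):                       # x: (B, 3, H, W)
        return self.fc(self.conv(x).flatten(1))

class ClipClassifier(nn.Module):
    """Classifies a whole clip by averaging per-frame features over time."""
    def __init__(self, num_classes: int = 2, feature_dim: int = 128):
        super().__init__()
        self.encoder = FrameEncoder(feature_dim)
        self.head = nn.Linear(feature_dim, num_classes)

    def forward(self, clip):                    # clip: (B, T, 3, H, W)
        b, t, c, h, w = clip.shape
        feats = self.encoder(clip.view(b * t, c, h, w))   # encode every frame
        feats = feats.view(b, t, -1).mean(dim=1)          # integrate across time
        return self.head(feats)                           # per-clip class scores

# Example: classify a batch of two 16-frame clips at 64x64 resolution.
logits = ClipClassifier(num_classes=2)(torch.randn(2, 16, 3, 64, 64))
print(logits.shape)                             # torch.Size([2, 2])
```

Averaging features over time is the simplest possible form of temporal integration; real video-recognition systems typically use recurrent or 3D-convolutional architectures so that motion itself, not just frame content, informs the classification.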


To be sure, this is very far from the kind of work that Google took on when it joined the Department of Defense’s Project Maven in late 2017. Its initial contribution to Maven, which aims more broadly to integrate ‘big data’ into the DoD’s activities, was to develop software that would allow a drone’s camera to identify common objects such as cars, buildings and people. The US operates thousands of drones around the world (as of January 2014, more than 10,000), and to reserve the manpower to sift through all that footage would be highly costly and, in all likelihood, quite soul-destroying for those involved. It would be far easier to train an artificial neural network to carry out the task and flag any images that might be worthy of human inspection (e.g. those that contain the face of a known terrorist, or the presence of AK-47s or IEDs). All decisions and final judgements would still remain in human hands.
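
As a rough sketch of the triage step described above, in which frames are merely flagged for human review rather than acted upon automatically, the logic might look something like the following. This is not Project Maven’s actual pipeline; the classifier, the classes of interest and the confidence threshold are placeholders chosen for illustration.

```python
# Illustrative triage sketch: run a frame-level classifier over footage and
# return only the frames whose scores warrant a human analyst's attention.
import torch
import torch.nn as nn

def flag_frames(frames, model, classes_of_interest, threshold=0.9):
    """Return indices of frames where any class of interest exceeds the threshold.

    frames: tensor of shape (N, 3, H, W); model: any per-frame classifier
    producing raw class scores of shape (N, num_classes).
    """
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(frames), dim=1)        # (N, num_classes)
    hits = probs[:, classes_of_interest].max(dim=1).values > threshold
    return hits.nonzero(as_tuple=True)[0].tolist()         # frame indices for review

# Usage with a stand-in classifier (a real system would use a trained detector).
dummy_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 5))
frames = torch.randn(8, 3, 64, 64)                         # 8 frames of footage
print(flag_frames(frames, dummy_model, classes_of_interest=[2, 4]))
```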


However, the potential for more advanced tasks – such as behaviour recognition – is there. To assume that this potential will never be tapped, I think, is to ignore the general trend of technological advancement in warfare. Not to mention the fact that Google was only one of several large tech companies, including Amazon, gunning for the DoD contract. Google may have publicly disavowed weapons contracts, but will other corporations do the same?


All this points to the growing potential, and reality, of military use of AI – but is it inevitable? And should we try to stop it? The growing appreciation of, and investment in, military AI by the governments of Russia and China make it inevitable that AI will be a competitive sector for the great powers. In late 2017, Vladimir Putin famously remarked that “whoever becomes the leader in [the AI] sphere will become the ruler of the world”. His statement may very well have alluded to China, whose rise in the field of AI has been nothing short of meteoric. The government of Xi Jinping has set a target for China to become the world’s dominant AI player by 2030 – a commitment unlikely to be jeopardised by changes of leadership, given Xi’s effectively indefinite hold on power.


As I alluded to in an earlier article on nuclear weapons, there is very little chance that the world’s dominant military power – the US – will sit idly by while a competitor gains an advantage, so we won’t be seeing the end of military AI anytime soon. Should we try, however, to prevent it?


The concept of militarised AI can be disconcerting, but we should recognise that the tasks we employ AI to do are born of human decisions. Instead of fighting against the use of AI in drone strikes, perhaps we should fight against the use of drone strikes at all or, more specifically, of signature strikes. If we take drone strikes as a given, however, we should not be blind to the potential benefits that AI can bring. Leaving human drone operators to stare at a screen for hours can inflict huge psychological stress, the kind of stress that makes for bad judgement and poor decisions. Moreover, the greater classification accuracy of image-analysis AI could even help prevent civilian casualties. AI only turns malevolent when we so choose; perhaps it is human decisions we should be more concerned about.
