Autonomous weapons systems and meaningful human control of cyber

In cyber, borders, states, agencies – the traditional ways of organising international cooperation and communication no longer count. In cyber, everybody is a potential adversary.

Image: Flickr/U.S. Army CERDEC. Some rights reserved.

Talking about autonomous weapons systems, meaningful human control and cyber at the same time creates a peculiar situation: we know we need this discussion. At the same time, the concepts that form the basis of the discussion are still so intangible that we do not quite know what we are actually talking about.

When I was asked if I could say a few things about cyber with reference to the above-mentioned concepts at the Weapons, Technology and Human Control conference organized by the United Nations Institute for Disarmament Research (UNIDIR) in New York on October 16, I at first thought, “yes, of course!” It is both interesting and necessary. As a research professor at an academy where we train and educate cyber professionals, I witness daily how our future military cyber personnel grapple with the fundamental concepts of their profession. I also often find myself in discussions about what type of warfare these young men and women are likely to engage in.

What will be the rules of the game in, say, 5 years? 10 years? 25 years? Where will today’s tech-savvy young cyber professionals take the military? How will they shape it? And not only that: how do we train them, how do we give them the necessary skill-set to make them robust and equipped for future warfare?

These are questions that we grapple with on a daily basis, as any education by definition is about forming the future, and any training implies a balancing act between, on the one hand, what we want the future to look like, and what we think the future will look like, on the other. In a way, we want to both equip and shape.

And so, given the inevitable technological development of literally ‘everything military’, we must address questions of how officers and technology interact.

A problem when discussing “cyber” is that cyber is in many ways a collective term that refers to computers, information technology, computer networks and a virtual reality. Cyber is about networks. It is about communication. It is about information.

Cyber defence, then, is (to quote from the US “Memorandum for chiefs of the military services”, the document “Joint terminology for cyberspace operations”):

 “the integrated application of … cyberspace capabilities and processes to synchronize in real-time the ability to detect, analyse and mitigate threats and vulnerabilities, and outmanoeuvre adversaries, in order to defend designated networks, protect critical missions, and enable freedom of action.”

It follows that cyber warfare, then, involves the actions by a nation-state or international organization to attack and attempt to damage another nation’s computers or information networks through, for example, computer viruses or denial-of-service attacks.

Thus, cyber is about technology. But of course it is not this easy, as cyber is about a lot more.

One challenge is the fact that cyber means different things depending on who you ask and in which context, whether it is talked about on a tactical, operational or strategic level, and whether it is about defence or offence.

For example, on a tactical level, cyber offers the trigger puller an unlimited supply of bullets. Unlike with an M16, where the number of bullets is finite, in cyber the supply is endless. Some will point out that a drone is in a way to be regarded as a sniper rifle: only instead of looking into a focusing lens, you look at a screen. You don’t pull a trigger, you push a button. And though the line of reasoning may be the same – you find a target and eliminate it – one could argue for the increased precision of the use of drones. But the trigger puller may be in charge of, say, 100 drones instead of just one gun. What then? If the quantity is big enough, it has a qualitative impact. Quantity thus may have an effect on quality. Or as Depeche Mode sang in the 1980s: everything counts in large amounts.

But cyber is not essentially about quantity, nor are drones necessarily part of what we label “cyber”. Drones obviously trigger profound discussions about meaningful human control and weapons technology.[1] But cyber is not about robots; it is about communication, data, the collection of data. As such, nations dominating the cyber arena, such as China and the US, make extensive use of cyber intelligence. That is to say that cyber is part of the military paradigm as a way of collecting and sharing information. They also include cyber in the military arena primarily as an asset to kinetic warfare, not necessarily as a separate battlefield in and of itself.

Thus, in most comprehensions of cyber, despite the fact that the focus of cyber is a virtual reality, we have not yet reached a state where people are out of the picture. Maybe we have cognitive problems visualising a non-visual world in which the human species is redundant. We need ways to analyse, make judgements and explore utility, and we organize our worlds in ways where people are still needed. Thus, we need people.

Besides, a world where people are redundant in warfare is also a frightening prospect. To state the obvious: you don’t need analysis if what you want is to eliminate, not utilize. One only has to contemplate the consequences of killer robots in the hands of ISIS. That would be a living hell. And so it is the fear of technology taking lives that is at stake here.

Regardless: cyber changes both warfare and soldiering. In other words, incorporating cyber into the military is a double-edged sword. It gives you more opportunities to attack, more ways to deter and to fight your adversary. But at the same time, you offer the same to your adversary. And not only that: your adversary may no longer be a nation, a state. In cyber, borders, states, agencies – the traditional ways of organising international cooperation and communication – no longer count. In cyber, everybody is a potential adversary.

Thus, “cyber” and “soldiering” are increasingly viewed as going hand in hand in at least two ways:

First, cyber is changing what it means to be human and how humans are social. Technology changes society. It changes not only how we organize our societies and how we live; it affects our minds, our cultures, our ways of being. Technology has changed what it means to be human – and therein, to be humane.

Second, being part of a war changes who you are. Whether you are on the front line or inside operations rooms, being part of the war theatre makes a long-lasting imprint on soldiers’ thoughts, their way of looking at the world, their way of relating to others and their ability to function in society afterwards. For this reason, all serious national armies take the training of their soldiers seriously, in order to make them robust and as fit as possible for the tasks ahead.

As a token of how cyber is increasingly viewed as an integral part of all military activity, US Cyber Command was established in 2009 and the Norwegian Cyber Defence in 2012. We equip, we prepare, we defend. We look ahead, look forward.

The biggest challenges

But, if we are to discuss meaningful human control and cyber – what are we actually discussing?

It is important to keep in mind that cyber is non-physical. Scholars even argue that cyber war will not take place – to paraphrase Thomas Rid of King’s College London – as cyber in itself occurs in a non-physical domain. That is to say, the direct consequence of cyber is not physical. The second-hand or third-hand effects may be physical, but the immediate consequences of a cyber attack are not about the physical domain.

But, if cyber is not physical, what is then to be viewed as a hostile cyber operation? Is planting malware that can stall a power plant or water supply an act of war? And if you do not witness the consequences of what you do – how do you manage to relate to it? How should you even relate to it?

Here comes perhaps one of the biggest challenges:

Cyber empowers. One cannot isolate cyber as a separate domain, or treat it as a tool that you make use of whenever you see fit and can, conversely, shuffle away into a drawer. Cyber is always there, both as a separate domain where things happen and as an integral asset to all aspects of the military. Our societies are increasingly dependent on cyber networks. As a student of mine said: “It is after all about the security of our state, of our people. That’s what cyber is about in the military, really. I’d even go so far as saying that it’s about survival.”

If we let machines influence our future, that future may well look quite bright and shiny. Yet that is only the case if we do not let the machines kill. Soldiers are trained to break the ethical boundaries of the civil sphere. They are taught to make judgement calls, to have situational awareness, to evaluate. Computers cannot do that. Computers do not have to think, and that is obviously one of the biggest problems, and why you are having this conference.

Attacking a power plant or a banking system is not necessarily the same as a robot on a killing spree. But it can be.

It should be clear that cyber is essentially about programming: about how you create software to do the work. To be more specific, through an example: what if there is malware embedded in your systems that alters your intended programs? What if your software, your viruses, your malware is hampered or altered, so that the outcome changes? With embedded malware, you can get hacked and your intended outcome compromised.

What then? According to Jim Young, US Army Account Executive for Google Enterprise Transformation, most of what cyber soldiers deal with is malware living in a system that can be exploited by an enemy. In an article posted in Defensetech a year ago, on October 24, 2013, Jim Young spoke to a crowd of cyber officers “in the making”, claiming that a major headache for cyber security in the military domain is the challenge of embedded malware. He said: “This notion that persistent malware can stay on your machine should not happen. The technology is out there today to erase it, or not make it an attack factor. So I encourage you … to start looking at opportunities that fundamentally change how you probe cyber security. Do not do incremental. It will not get you where you need to be.”[2]

Now, he is certainly not alone in being worried about embedded malware. In military circles, this is, if not a buzzword, at least an issue of major concern. But how is it solved?

The solution here takes us back to the starting point, namely how autonomous weapons systems and meaningful human control relate to cyber. The challenge of fighting embedded malware can largely be met through automation. Charles Croom, vice president of Cyber Security Solutions for Lockheed Martin Information Systems & Global Services, called it the “80/20 cyber rule”:

 “It’s a rule of thumb that says, ‘hey, if I implemented everything I knew how to do today [to stop the malware] I could take 80 percent of my threats off the table, and then I could focus on this advance persistent threat of 20 percent.’”

“When you know there’s an issue on your network you ought to be able to close most of them with machines,” he continued.

“These are repetitive things that have to be done and most of it can be done by machines. And then you save the manpower for the high-end intellectual issues, the threat you’ve never seen before, that is unique and requires some intelligence.”

No one has developed such an all-in-one package yet, but the Defense Advanced Research Projects Agency (DARPA) has issued proposals intended to find solutions. The only way to do it is to automate these solutions, he said, whether they are patching, vulnerability assessment, or remediation. These steps now are all done successfully by individual soldiers, but are done again and again as they keep cropping up, he said. And he continued:

“The only way we’re going to [fix it] is through automation. We’ve got to get people out of the loop and automate what we know how to do.”

The problem is that it is a multi-platform, multi-device world across “monstrous enterprises that are globally connected”. Networks should be automatically and constantly scanned to identify exactly what and who is on them at any time, looking for changes to software and hardware; it can be done at the speed of light. And when an unauthorized change is found, or a weakness or intrusion is detected, the solution should be instant and automatic as well.
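To make the idea of automated change detection concrete, here is a minimal sketch in Python. It assumes nothing beyond the standard library and a simple hash-baseline approach; the function names are illustrative, not any vendor’s API, and a real enterprise system would of course also cover hardware, network state and remediation.

```python
import hashlib
from pathlib import Path


def build_baseline(root: Path) -> dict[str, str]:
    """Record a SHA-256 fingerprint for every file under root."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }


def scan_for_changes(root: Path, baseline: dict[str, str]) -> dict[str, list[str]]:
    """Re-scan root and report files added, removed, or modified since the baseline."""
    current = build_baseline(root)
    return {
        "added": [f for f in current if f not in baseline],
        "removed": [f for f in baseline if f not in current],
        "modified": [f for f in current if f in baseline and current[f] != baseline[f]],
    }
```

The point of the sketch is Croom’s division of labour: a machine can run this comparison continuously and close the routine 80 percent automatically, while people are reserved for interpreting the unfamiliar findings.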

Michael N. Schmitt points out how “fully autonomous weapon systems must be distinguished from those that are semi-autonomous, which are commonplace in contemporary warfare”.[3] By claiming that semi-autonomous weapons are commonplace, he refers to how a large defence industry has established automated systems, primarily in the context of missile defence, pointing at the Israeli “Iron Dome” and the US Aegis at sea and the Patriot on land. Thus, in the defence industry, automation of military activities is already in place. However, Schmitt also states the following:

“Of course, a fully autonomous system is never completely human-free. Either the system designer or an operator would at least have to program the system to function pursuant to specified parameters.”[4]

In other words, in the context of cyber defence, we have already reached a state where people increasingly are defined out of the picture. We encourage automated systems; we encourage a development in which people are redundant. We want less personnel, we want to need less personnel, and we want to outsource the “repetitive” jobs to machines, as has been done in other industries, such as factories producing cars or milk.

Yet, what happens when we remove the people? What if the malware has already “been at work”, so to speak? That can of course imply that software you created for one purpose has been altered or adjusted. The point is that you may have created software with an entirely peaceful or legal intention, but the outcome may still be something else. With increased automation, people will not know.

We cannot predict the future. But with such critical questions at stake, we are required at least to try to take control of the future, before technology takes control for us.

[1] The Campaign to Stop Killer Robots provides a telling example.

[2] See “Google to Soldiers: Malware is the Enemy” by Bryant Jordan, Defensetech, October 24, 2013.

[3] Michael N. Schmitt, “Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics”, Harvard Law School National Security Journal, February 5, 2013.

[4] Ibid.