In this three-part series, Alan Brill and Elaine Wood look at the evolution of artificial intelligence, machine learning, and autonomous decision making, and at how the skills of the General Counsel are likely to be critical in protecting the organization from avoidable risks.
Machine learning and autonomous decision making are the hallmarks of artificial intelligence (AI). We see autonomous decision making at work in today’s cars with automated crash avoidance systems. Using a combination of sensors and computers, the system “learns” and evolves to be better and better at solving the problems it is programmed to respond to.
The system evolves fast and can respond and adapt more quickly than a human. AI, it is argued, can respond to challenges such as cyber attacks faster and more effectively than humans. In certain circumstances, human operators would be too slow to react to the velocity of attacks that characterize modern hacking and state-sponsored cyber warfare. This raises the specter of a machine deciding on its own to launch a weapon, a scenario straight out of "Dr. Strangelove."
Worrying About Artificial Intelligence is Not New
For a long time, we have recognized the need to put limits on what an AI system can do. The most famous statement of these limitations was written by Isaac Asimov for his short story "Runaround," published in 1942. These have become known as Asimov's Three Laws of Robotics, which can be stated simply as:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In a later novel, Robots and Empire, Asimov added an additional and more basic “Zeroth Law”—a robot may not harm humanity, or, by inaction, allow humanity to be harmed.
Consider an AI system dedicated to carrying out the cyber defense of an organization. It is likely that the system would, as part of its processing, attempt to determine the source of an attack, or, in other words, to seek attribution. But what if the system determines (based on inquiries from managers following prior attacks) that the way to stop the attack, or perhaps to better determine exactly what happened or what data was stolen, is to take action against what it believes to be the perpetrator of the attack? Over time, the AI system could optimize this offensive capability as part of its machine-learning-based evolution to better carry out its cybersecurity mission.
Cyberspace is Not Real
So what’s the problem? The AI system seems to be following Asimov’s laws:
- It is working in cyberspace and dealing with other systems and data, not human beings, so apparently there's no problem with the First Law.
- It seems to be following the Second Law, because it is following its developer's instruction to protect the system.
- And it seems to be following the Third and Zeroth Laws, because the AI system is protecting itself without a perceived risk to humans or humanity.
Unfortunately, the three (or four, if you count the Zeroth) laws of robotics aren't the only laws that apply. We talk about "cyberspace" as if it were a physical reality. It isn't! It is, at best, a way of thinking about where the interactions between systems take place. But as widely accepted and understood as the concept of cyberspace is, it doesn't exist. The fact is that nothing actually happens in cyberspace; it happens in the real world. There are no computers in cyberspace. They are all in the real world. Signals travel through wires and cables or over the air in the real world. They pass through real nation-states. And those nation-states have real laws.
The laws of the country where the AI is operating or is controlled are in force. And the laws of countries that the AI’s communication passes through or where an attacker is located may also be relevant. There is no “free pass” because of the notion of cyberspace. Artificial intelligence is not above the law.
For example, while it would be legal to identify the Internet Protocol (IP) address associated with an attack, going further is another matter. Breaking into the attacker's system, trying to take that system offline, running software designed to defeat the attacker's security in order to see what is stored on its servers, planting any form of software, or doing anything else that could be interpreted as an act of cyber offense could be a crime under national laws.
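The line drawn above, passively examining an attacker's IP address without ever touching the attacker's systems, can be sketched in a few lines of Python. The function name and sample addresses here are illustrative assumptions, not part of the original text; the sketch only inspects the address itself and sends nothing to the remote host.

```python
import ipaddress

def classify_source_ip(ip_string):
    """Record facts derivable from an attacking IP address alone.

    This is pure inspection of the address. It contacts no remote
    system, which keeps it on the legal side of the boundary
    described in the text.
    """
    ip = ipaddress.ip_address(ip_string)  # raises ValueError if malformed
    return {
        "ip": str(ip),
        "version": ip.version,
        "is_private": ip.is_private,  # e.g., RFC 1918 space; not attributable externally
        "is_global": ip.is_global,    # routable on the public internet
    }

# A well-known public resolver address and a private address,
# used here purely as illustrations, not as real attackers.
print(classify_source_ip("8.8.8.8"))
print(classify_source_ip("192.168.1.10"))
```

Anything beyond this kind of passive lookup, such as port scanning or probing the remote host, starts to shade toward the active measures the paragraph above warns about.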
While striking back against a cyberspace attacker seems like a natural reaction to a cyber attack, it doesn’t nullify real-world laws.
An action that seems reasonable but actually causes harm, ranging from damage to or loss of data, financial loss, or reputational damage to criminal violations, can occur without any intent to do wrong. This is known as "The Law of Unintended Consequences," and forgetting about this "law" can have significant consequences, as we will see in Part 2 of this series.
Alan Brill is a Senior Managing Director in Kroll’s Cyber Risk unit and an Adjunct Professor at Texas A&M Law School. Elaine Wood is a Managing Director at Duff & Phelps specializing in compliance and regulatory consulting and a former federal prosecutor.