At the dawn of the age of autonomous robots, there has been a sudden surge in concern over the ability of intelligent machines to make ethical choices. Ethics can be challenging, in theory and in practice. But from a technical perspective, things might not be so bad.
The Pentagon recently hired a British scientist to help build robot soldiers who “won’t commit war crimes.” As one report put it: “A British robotics expert has been recruited by the US Navy to advise them on building robots that do not violate the Geneva Conventions.”
Excellent. My hope is that he is an engineer. What is needed is an encoding of the Geneva Convention that engineers can easily use as design requirements. Better still would be a version that computer programs can understand. If the product of the work is not specifically geared toward technical development activities, it is unlikely to be any more useful than the original Convention documents. Getting robots to understand the rules of war is a useful idea, though not by itself a complete capability for an ethical robot.
Ronald Arkin, a computer scientist at Georgia Tech who is designing software for battlefield robots under contract with the U.S. Army, is among those who believe that ethical robots can be built. He has said that “as the robot gains the ability to be more and more aware of its situation,” more decisions might be delegated to robots. “We are moving up this curve.” He said that was why he saw provoking discussion about the technology as the most important part of his work. And if autonomous battlefield robots are banned, he said, “I would not be uncomfortable with that at all.”
Given that robots can serve to move humans out of harm’s way, the suggestion to ban them may seem a little disconcerting, particularly if he means to suggest that other AI scientists and engineers around the world haven’t been thinking about the issue. It might give the wrong impression. If the goal of intelligent robotics were to create indiscriminately destructive machines, it would have been reached long ago: just push something round and heavy down a steep hill and it will keep going until it hits something. We don’t think of intelligent behavior as something that, once put in motion, is out of control. From an engineering perspective, building robots for end use is still about building robots that perform properly, no matter how intelligent they are.
In Arkin’s robotic system, the robot pilot would have what he calls a “governor.” Just as the governor on a steam engine shuts it down when it runs too hot, the ethical governor would stop the robot from acting unethically.
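To make the analogy concrete, here is a minimal sketch of the governor idea: a veto filter that sits between the robot’s planner and its actuators and suppresses any proposed action that violates a constraint. This is an illustration only, not Arkin’s actual system; the names, fields, and thresholds are all invented for the example.

```python
# A minimal sketch of an "ethical governor" as a veto filter.
# All names and thresholds here are hypothetical, for illustration.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str
    target_is_combatant: bool
    collateral_risk: float  # estimated risk, 0.0 - 1.0

# Each constraint returns True if the action is permissible.
Constraint = Callable[[Action], bool]

CONSTRAINTS: List[Constraint] = [
    lambda a: a.target_is_combatant,    # never target non-combatants
    lambda a: a.collateral_risk < 0.1,  # bound expected collateral harm
]

def governor(action: Action) -> bool:
    """Approve the action only if every constraint is satisfied."""
    return all(check(action) for check in CONSTRAINTS)

proposed = Action("engage", target_is_combatant=True, collateral_risk=0.4)
if not governor(proposed):
    print(f"Governor suppressed action: {proposed.name}")
```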
An alternative developed during the 1980s, known rather generically as HLL (High Level Logic), was initially envisioned as a concept for building more powerful expert systems. More recently it has been suggested to a large number of AI scientists and roboticists as a potential standard component for many AI systems, including autonomous robots.
HLL includes experts, manager(s), and at least one executive. They are related in a hierarchy similar to many human organizations. Executives set goals and assign them to managers. Managers formulate plans and have the authority to approve or disapprove actions. Both executives and managers have specific responsibilities in formulating and controlling acceptable behavior. Experts with specialized knowledge can play a supportive role involving details, such as whether an action would violate the Geneva Convention.
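As a rough illustration of that hierarchy, and not the actual HLL implementation, the relationships might be sketched like this; every class and method name is an assumption made for the example.

```python
# A toy sketch of the HLL-style hierarchy described above: an executive
# sets a goal, a manager plans and approves actions, and an expert
# answers detailed questions (here, a stand-in legality check).

class Expert:
    """Specialized knowledge, e.g. whether an action violates a rule set."""
    def violates_convention(self, action: str) -> bool:
        return action in {"target_civilians"}  # stand-in rule base

class Manager:
    """Formulates plans and holds approval authority over actions."""
    def __init__(self, expert: Expert):
        self.expert = expert

    def plan(self, goal: str) -> list[str]:
        return [f"step toward {goal}"]  # placeholder planner

    def approve(self, action: str) -> bool:
        return not self.expert.violates_convention(action)

class Executive:
    """Sets goals and delegates them to managers."""
    def __init__(self, manager: Manager):
        self.manager = manager

    def pursue(self, goal: str) -> list[str]:
        return [a for a in self.manager.plan(goal) if self.manager.approve(a)]

executive = Executive(Manager(Expert()))
print(executive.pursue("secure the perimeter"))
```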
HLL was included in a proposal to the Swedish Defense Department that led to the recently announced commercial release of Brainstorm® from Institute of Robotics in Scandinavia (iRobis), “the first complete cognitive software system for robots.”
iRobis is not currently involved in the development of autonomous robot soldiers, nor any armed platform. A more general question arises out of the robot ethics discussions, however. Is the technology on offer today sufficient for creating robots that can carry out complex missions successfully? Can we expect highly evolved autonomous robots to behave well?
Brainstorm provides the most advanced adaptive learning and problem-solving capability yet seen, on a level that can accurately be described as a thinking machine. Robots can be trained to perform complex tasks much as humans are trained, and can “think on their feet” while carrying out a mission. The software can be used to build any kind of robot, and it will be used to create autonomous robots that adapt to new circumstances and make decisions. Nonetheless, the need for ethical regulator mechanisms in HLL layers has not been shown to be as great as one might expect from reading discussions on robot ethics.
In the short term at least, the process of developing robots with Brainstorm, although likely faster and cheaper, will follow a familiar path. Product requirements will be defined. Specifications will be developed, defining in some detail how the requirements will be met. Robots will be tested before a product release. Assuring that the end product behaves acceptably is still part of the development process. At present, a machine-readable version of the Geneva Convention might be more useful as an integrated part of design systems and in the process of intelligent testing than in the robots themselves.
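To illustrate that last point, here is a hypothetical sketch of encoded rules used in a test harness to check simulated mission logs, rather than running on board the robot. The rule names and log format are invented for the example.

```python
# A sketch of machine-readable rules applied during testing: each
# simulated mission log is checked against every encoded rule.
# Rules and log fields are hypothetical, for illustration only.

RULES = {
    "no_attacks_on_medical_units":
        lambda log: "medical_unit" not in log["targets"],
    "no_use_of_prohibited_weapons":
        lambda log: not log["prohibited_weapon_used"],
}

def run_compliance_tests(mission_logs):
    """Return (mission_id, rule) pairs for every rule violation found."""
    failures = []
    for log in mission_logs:
        for rule_name, rule in RULES.items():
            if not rule(log):
                failures.append((log["mission_id"], rule_name))
    return failures

logs = [
    {"mission_id": 1, "targets": ["vehicle"], "prohibited_weapon_used": False},
    {"mission_id": 2, "targets": ["medical_unit"], "prohibited_weapon_used": False},
]
print(run_compliance_tests(logs))  # [(2, 'no_attacks_on_medical_units')]
```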
A machine-readable copy of the Geneva Convention would be a valuable piece of technology, but there is plenty of wisdom suggesting that simply loading it into intelligent weapons systems to govern their behavior is not an acceptable design concept. Should the decision to go to war be left to a robot simply because the Geneva Convention allows a military response to an act of war? There have been decades of discussion solidly against putting machines in charge of ultimate decisions. Many of the ideas made popular in literature and film are taken seriously by roboticists as well. Autonomous capability is good, but we do not in fact want machines to take over the world.
A proper decision model already exists in the use of military technology under human command. Submarine commanders do not fire their missiles into foreign countries simply because they can. Fighter pilots do not drop bombs simply because they can. Each has an assigned role and level of authority defined within the context of an organization. They respect not only rules of engagement but also decision-making hierarchy – the chain of command.
As the level of autonomous capability grows in robots in field service, there may be an increased role for the type of control proposed in HLL. Applying the concept would endow robots with a natural connection to organizational structure and thinking. An executive would be assigned a specific role and level of authority. From that information, it would know the limits of its autonomous decision-making authority and when permission is needed to carry out actions that it is capable of performing. Designing robots to operate within a familiar command structure also greatly simplifies autonomous machine-human interaction.
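A toy sketch of such an authority model might look like the following; the authority levels, action names, and thresholds are assumptions made for illustration, not any particular system’s design.

```python
# A sketch of role-based authority: the robot's executive knows its
# granted authority level and requests permission up the chain of
# command for any capable-but-unauthorized action.

from enum import IntEnum

class Authority(IntEnum):
    OBSERVE = 1
    NAVIGATE = 2
    MANIPULATE = 3
    ENGAGE = 4

# Minimum authority required for each action (hypothetical).
ACTION_AUTHORITY = {
    "scan_area": Authority.OBSERVE,
    "move_to_waypoint": Authority.NAVIGATE,
    "engage_target": Authority.ENGAGE,
}

def decide(action: str, granted: Authority) -> str:
    """Execute within granted authority; otherwise ask the commander."""
    required = ACTION_AUTHORITY[action]
    if required <= granted:
        return f"execute {action}"
    return f"request permission for {action} from commander"

# A robot granted only NAVIGATE authority:
print(decide("move_to_waypoint", Authority.NAVIGATE))  # execute
print(decide("engage_target", Authority.NAVIGATE))     # request permission
```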
Brainstorm is aimed not only at the military market but at the consumer market as well. It would be absurd to leave the general public fearing that manufacturers don’t care about the quality of robot behavior. It may be that home service and autonomous industrial robots should come before autonomous robot soldiers. Once a robot understands how not to be destructive, it can be systematically endowed with specific decision-making instructions on what destruction is acceptable and under what circumstances, otherwise remaining non-destructive. At least in the short term, creating highly trained specialists rather than 007 robots with a “license to kill” greatly simplifies the problem of robot ethics. But military robotics also has something general to offer: an owner of a home service robot should equally expect to have command authority over robot servants.
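As an illustration of that default-deny idea, a robot might remain non-destructive unless an action matches an explicit, circumstance-bound exception. The actions and contexts below are invented examples, not any product’s actual policy.

```python
# A sketch of default-deny behavior: destructive actions are forbidden
# unless explicitly whitelisted for a given circumstance.

DESTRUCTIVE_EXCEPTIONS = {
    # action: circumstances under which it is explicitly permitted
    "cut_material": {"workshop_mode"},
    "demolish_wall": {"authorized_demolition_site"},
}

def permit_destructive(action: str, circumstances: set[str]) -> bool:
    """Default deny: allow only whitelisted actions in whitelisted contexts."""
    allowed_contexts = DESTRUCTIVE_EXCEPTIONS.get(action, set())
    return bool(allowed_contexts & circumstances)

print(permit_destructive("cut_material", {"workshop_mode"}))  # True
print(permit_destructive("cut_material", {"living_room"}))    # False
print(permit_destructive("smash_vase", {"workshop_mode"}))    # False
```

Anything not explicitly permitted stays forbidden, which is exactly the posture one would want in both a home service robot and a highly trained military specialist.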