Yoshimi Battles the Pink Robots by The Flaming Lips
Joshua Foust, the former defense contractor who now serves as a non-senior analyst at John Kerry's American Security Project, has yet another piece published on the liberal PBS site about why the Obama Administration should continue to feel good about drones.
It's in response to a report by Human Rights Watch appropriately called *Losing Humanity*.
The premise is basically -- oh, you civilians and peaceniks filled with FUD (Fear, Uncertainty and Doubt, as the geeks call it)! You just don't get it and you just aren't cool, because you can't see how efficient drones are and don't realize how much we already use robotic stuff. Silly you, and stupid you! And smart me!
My answer:
The reason a lot of science fiction is scary is not because it's about machines, but because it's about people -- people against other people, and not under a shared sense of the rule of law.
The computer code that runs the killing machines is made by humans and is a concretization of their will, not something uncontrollable or entirely automatic that has escaped their control.
More importantly, the decisions about where to deploy the machines that involve targets drawn from Palantir data-dredging, or heat-seeking missiles, or counter-mortar systems are made by humans. Before drones are deployed, humans sometimes have to do things like call up the leaders of countries and seek their intelligence and their clearance. So nothing portrayed here as automatic is in fact as automatic as Foust strangely makes it seem, because it operates in a context and a system where humans make decisions about the very theaters of war in the first place.
Yet precisely because the weapons of our time are far more automated, and because drones in particular bring greater speed and precision -- and therefore ease and seeming moral comfort -- to their use, we have to look at the moral dimension. Foust seems content if drones just don't miss very often or don't cause much collateral damage. But if they become so easy to use, won't the temptation be to do more killing with them and make them more automatic? Where will it stop, and who will be authorized to make the judgment call?
There's also the question of whether drones really *are* so precise, given how many reports there are from human rights groups and local lawyers about non-combatants, including children, who are hit. These victims can't seek compensation, as their counterparts killed by regular US or NATO actions with more traditional weapons can, because the drones operate under a secret program run by the CIA, not the military. This is apparently because of the need to keep the strikes secret, particularly from the governments of Pakistan and Afghanistan.
So this raises questions of governance: whether we can morally keep these weapons secret and unaccountable, and whether we should put them under the command of the regular armed forces.
More automation can in fact decouple the moral imperative from the consequences of a weapon's use, particularly because of the acceleration and the capacity for devastation.
Foust has a curious coda to yet another unconscionable piece in defense of drones as efficient war machines -- he posits the idea that a less active role by people -- i.e., less compunction about the use and nature of targets and consequences -- could somehow be a goal, and that more automation need not diminish our values. How?
In fact, if these programs are to reflect our values, they will have to become less secret, and the attacks less common. Foust has already stripped away the moral context by pretending to find all kinds of "good" uses of "automation" that in fact a) aren't automation as he claims, because of the prior human choices about going to war in the first place, about theaters of war, and about targets, and b) have more unintended consequences than he is prepared to admit.
As Foust notes, the Pentagon released a directive on "appropriate levels of human judgment," but Foust seems to think the radiant future can contain more automated processes if we can all just agree on our priorities.
There's nothing wrong with a cultural heritage that sees autonomous robots as deadly; they are. Pentagon planners and the CIA don't wish to kill civilians who are not combatants. Yet they do. They do because the targets often have their families around them and the military can't wait until they are in the clear. That's the crux of the problem.
There's a strange notion that raising any moral questions about killing machines, as Human Rights Watch has done, is motivated by "fear." It seems instead to be motivated by morality, and by the practical sense that machines don't have consciences, and that code never renders human interaction as perfectly as real life does.