Have you seen the recent Bud Light commercials, in particular the one in which a man shows his friend how he trained his dog? (This is not an advertisement for Bud Light or any associate of Bud Light.) The message is artistically conveyed: Bud Light is "just right," or, as beer specialists often describe fine brew, "well-balanced."
Columnist Roger Cohen, in his op-ed piece in the NY Times this past weekend, "Of Fruit Flies and Drones," raises the question of how "video-game-like" international drone killings are fundamentally changing the way we "go to war." He notes that President Obama has authorized as many drone strikes in Pakistan in nine and a half months as George W. Bush did in his last three years in office: at least 41 in total.
I assume Cohen is asking this question on an ethical level as well, as he wonders, "when robots are tomorrow's veterans, does war become more likely and more endless?" The conclusion he draws amid the "dark side of the war on terror" is that a public debate is needed to highlight "how targets are selected, what the grounds are in the laws of war, and what agencies are involved" in order to establish and maintain accountability for wartime decisions.
While I appreciate his concern for these questions, I think the public is limited in how a topic of this nature can be debated: what should be known, when it should be known, and so on. The public has elected people to be knowledgeable about these matters, and we need to trust them to do their job. As JD remarked in his recent blog, "how do we achieve security through transparency?" These decisions ought to remain the responsibility of officials; the public will always debate without the full picture, and that might be "just right," as the Bud Light commercial portrays.
Overall, Cohen does raise an important topic for further discussion: the promise robotic warfare holds in the war on terror. While we were discussing morality and terrorism, my undergraduate advisor asked me: If Osama bin Laden had captured you and your group of 24, and you had a gun loaded with one bullet but could not kill yourself or him, and he said to you, 'If you kill one of your people I will let you all go free,' what would you do? To this day my answer remains: "But Osama and AQ have changed the rules of engagement, and I can never trust him to let us go even if I kill one of my own."
Even though the example is bounded by artificial constraints, the point stands: AQ and their associates, operating in linked networks that are ever-morphing and increasingly individualized and separatist in nature, have changed warfare. It is now irregular warfare; we do not even define it as guerrilla warfare: no uniforms, no intention to found or run a state-sanctioned government or sovereign entity. Yet it draws on intelligent people motivated by a global vision and highly specialized in systematic training techniques, and it both supports and praises any ounce of extremist activity related to martyrdom. This type of warfare expects and accepts only one end, and supports any means to that end.
Robotic warfare is able to meet these challenges on an ever-morphing war landscape. It is a controlled and precise way of targeting specific coordinates where "suspected terrorists" seek safe haven; it is "just right" and "well-balanced" because it simultaneously limits the collateral harm done to civilians. The technology being developed now offers a more promising means of a) obtaining more intelligence, which leads to b) the execution of successful operations.
The process is not simply, as Cohen puts it, "watch[ing] people get vaporized on a screen in Virginia and then driv[ing] home for dinner with the kids." Going to war with robotic warfare also requires human capital - expertly trained human resources, for that matter - to integrate intelligence and communication tactics. It is a tool, like the internet, that can be used virtuously or co-opted. Dogs, for example, are "man's best friend," but the Red Army trained them to sit under enemy tanks with bombs attached during the Second World War.
Just as bio-mechanics is a booming field, so too is the newly forming science of bio-mimicry, which holds up nature as our "model, measure, and mentor." I think robotic warfare can grow more precise, more accurate, and more intelligent by integrating bio-mimicry into military operations.
For example, humpback whales eat the most krill when they employ a tactic known as "bubble-net feeding." The process: a group forms and roles are understood; all dive slowly to a specified depth; one whale then swims horizontally past its companions while blowing bubbles at a rapid rate; when this whale reaches the end of the line, a second whale swims horizontally across while sounding a high-pitched call. The call scares the krill swimming above the humpbacks and drives them toward the surface, while the bubbles trap them in a net-like enclosure. What promises the whales' success is, above all, teamwork. What enables them to feast is the confusion of the krill.
Imagine, now, that bio-mechanics and bio-mimicry could reproduce this tactic in robotics. Could we deploy objects that mimic it in their mechanics? Scientists need to continue working with a) intelligence analysts to understand threats, b) special forces to understand strategic opportunities, and c) military leaders to understand objectives, so that the proper tools can be integrated to achieve operational victories against AQ and its associates.