
Bradley Jay Strawser (2010) Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles, Journal of Military Ethics, 9:4, 342-368.


Rapid increase in UAV deployment.

Counterarguments:

- asymmetrical form of warfare - ignoble/dishonourable?
- impede jus in bello principles?
- psychological conflicts in operators?
- targeted killings by non-military government agencies?
- autonomous lethal weapons platforms?
- lowers risk threshold too far - too easy to go to war.

Strawser - no, ethical obligation to use UAVs.

- If agent pursuing a morally justified yet inherently risky action, then there is a moral
imperative to protect the agent if possible to do so, unless there is a countervailing good
that outweighs protection of agent.

ergo UAV use obligatory.



UAVs used for some time, but only recently become capable of lethal engagement. Not
talking about autonomous weapon systems - instead, only ones that are under human
control in carrying out lethal actions.

Principle of unnecessary risk:

If X orders Y to accomplish good goal G, then X has an obligation to choose a means to
accomplish G that does not violate demands of justice, make the world worse, or expose Y
to a potentially lethal risk unless incurring such risk aids in the accomplishment of G in
some way that cannot be gained via less risky means.

any potentially lethal risk incurred must be justified by some strong
countervailing reason.
uncontroversial?
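A rough first-order sketch of PUR (my own notation, not Strawser's; **O** is a deontic obligation operator, m ranges over means):

```latex
\text{Orders}(X, Y, G) \;\rightarrow\;
\mathbf{O}\Big[\, X \text{ chooses } m \text{ such that }
\text{Achieves}(m, G) \,\wedge\, \neg\text{Unjust}(m) \,\wedge\, \neg\text{Worsens}(m)
\;\wedge \\
\big( \text{LethalRisk}(m, Y) \rightarrow
\neg\exists m' \,[\, \text{Achieves}(m', G) \wedge \text{Risk}(m', Y) < \text{Risk}(m, Y) \,] \big) \Big]
```

i.e. lethal risk to Y is permitted only when no less risky means would also accomplish G.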

Principle of unnecessary waste of scarce resources? - cheaper to produce, deploy, and
operate UAVs than inhabited aircraft for similar missions. Could be spent on social/
egalitarian welfare? (is Strawser trying to throw a bone to liberals/leftists here?)

PUR stronger though.

Morally impermissible to incur lethal risk in cases where you could avoid it. The honour/
adrenaline/valour argument isn't applicable here.


Claim: For any just action taken by a given military, if it is possible for the military to use UAV
platforms in place of inhabited aerial vehicles without a significant loss of capability, then that
military has an ethical obligation to do so.

- first claim (antecedent) false?
- technologically infeasible to transition some military aircraft into remotely piloted vehicles.
- too expensive to do so?

Remote weapon systems - can't discriminate targets as effectively as inhabited vehicles - lose
effectiveness.

[EOD argument. Logically convincing, but not appropriate? Drones aren't analogous to bomb-disposal
robots, which save lives rather than take them - the main objections are about their likelihood of killing non-combatants]

If you can use remote-controlled weapon systems to carry out missions with no loss of combat
effectiveness, then by PUR you must. No compelling reason to expose a soldier to danger.

Assumes use of remotely controlled weapons as part of a fully justified war effort - jus ad bellum,
jus in bello, etc.

Objection - move towards independent/autonomous weapons as a result of UAVs? Needs to be a
human in the loop.

Response - UAV development won't necessarily lead to IAWs - ignores middle-ground stopping
point. Important distinction between UAV and IAW.

Autonomous weapons - it's OK to put soldiers at risk, because there's a genuine, compelling reason
not to use them.

Objection - UAV limitations lead to jus in bello violations.

Certainly - poor video feed; if can't abide by jus in bello, then shouldn't use. Martin Cook - 1999
NATO air campaign in Kosovo - by conducting missions at a minimum of 15,000
feet, NATO was more concerned with force-protection than noncombatant
discrimination.

Warfighters should take on additional risk to protect people.

UAVs actually more reliable though?



- is this at all reliable?

Data - suggests positive?

17 militant deaths : 1 civilian death - UAVs operated by US.
4 : 1 - Pakistan Special Weapons and Tactics Teams.
3 : 1 - Pakistan Army.
0.125 : 1 - estimated across the conflict as a whole.
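Back-of-envelope arithmetic on the ratios above (my own calculation, not in the paper): if a force kills r militants per civilian, civilians make up 1 / (r + 1) of total deaths.

```python
# Civilian share of deaths implied by the quoted militant-to-civilian kill ratios.
ratios = {
    "US-operated UAVs": 17.0,
    "Pakistan SWAT teams": 4.0,
    "Pakistan Army": 3.0,
    "All-conflict estimate": 0.125,
}

# civilians / (militants + civilians) = 1 / (r + 1)
civilian_share = {force: 1.0 / (r + 1.0) for force, r in ratios.items()}

for force, share in civilian_share.items():
    print(f"{force}: ~{share:.0%} of deaths are civilians")
```

On these figures UAV strikes imply roughly 6% civilian deaths, versus 20-25% for Pakistani ground forces and ~89% for the all-conflict estimate - which is the gap Strawser's reliability response leans on.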

No reduction in ability of UAV pilots - noncombatant discrimination.

Objection - cognitive dissonance for UAV operators - treat like video game?

No - lessen temptation to commit violations of jus in bello if warfighter not at risk?

New types of stress on warfighters - don't know about this yet. There are ways to overcome this
though, and it's less bad than actual combat harm.

Oversight by external sources, authorities - lawyers, human rights experts, officers. Can overcome
objections by introducing external scrutiny - operators would be more judicious than crews of
inhabited military aircraft.

Objection - targeted killings by UAVs, esp. CIA. UAVs contribute to this trend particularly/are
particularly suited to extrajudicial killing?

Response: Certainly dodgy - but are UAVs the issue here?

UAVs don't create a special class of weapons that escape concerns about sovereignty/airspace etc.

Hurt - hearts and minds - hit civilian targets (but this is lessened by greater accuracy?). Sense of
wronged honour at being attacked by an uninhabited weapons platform? This might reduce overall warfighting capability.

UAV technology - makes actions easier to carry out? Loitering, accessing areas they wouldn't
otherwise reach. Doesn't make them inherently harmful - it's the people who use them that should be
focused on [very weak argument - glosses over temptation to use weapons, etc].

Objection - UAVs create unjust/unfair asymmetry in combat.

Response - does this affect UAVs in particular? - crossed that threshold long ago.

Here I am following Jeff McMahan's recent work rejecting the moral equality
of combatants (see McMahan 2009). That is, the warrior fighting for a just
cause is morally justified to take the life of the enemy combatant, whereas the
unjust fighter is not justified, even if they follow the traditional principles of jus
in bello such as only targeting combatants and the like, to kill the justified
fighter. Thus, there is no chivalrous reason for a just combatant to equal the
playing field or fight fair. If combatant A fights under a just cause, while
combatant B fights for an unjust cause, combatant A owes nothing to combatant
B by way of exposing his/herself to some minimal threshold of risk. Thus, it is
right for combatant A to reduce the risk in an engagement with the unjust
enemy.
(nb - under this, the drone operators would be a valid target)
No compelling normative reason to have a fair fight.
Are remote-controlled attacks disturbing to targets? Becomes pest control? (???) Disturbing, but
not strong argument against it - have to articulate this argument properly.
Steinhoff is certainly right in this. I would add that a crucial element in how one
feels about imagining such warfare depends on whether or not the precision
missile strike in the picture envisioned is justified or not. Is it a military strike as
part of a fully justified defense against an aggressing, unjustified, destructive
enemy force? Is the strike hitting a legitimate and morally culpable target? If it
is, such factors temper our view of the strike considerably and move us away
from the pest control picture. In such a case, we should desire that the just
warrior be well protected from any possible threat that this enemy might proffer
- protection that the UAV affords.

Objection - reduces jus ad bellum threshold, makes UAV use too easy.
Response - easier to go to war? Intuitively plausible, but fails the PUR test - foregoing UAVs leads to
deaths of soldiers now, which outweighs a speculative reduction in the number of wars. Applies to any military technology.
Focus on epistemically dubious calculations that are predictive about themselves
doing something wrong in the future (we might be more likely to do wrong
action X down the road) over epistemically solid calculations to protect their
own just warfighters presently (our soldiers will be safer today if they wear the
vests.)

because we will most likely behave unjustly in the future, we should behave
unjustly in the present (by violating PUR in choosing not to protect our warriors
as best we can) in order to try to prevent ourselves from acting unjustly in the
future. If that holds, we have a strange account of moral epistemology at work,
to say the least. We should forego taking presently morally correct action A in
order to help restrain our future selves from the likelihood of committing
morally wrong action B. In other words, we should do something wrong now in
order to (hopefully) better stop ourselves from doing something wrong in the
future.

Is this so odd - lesser of two evils?
Author grants that it's possible that the moral weight of the present failure to develop UAVs might
be outweighed by the prospect of preventing future wars, but - epistemic uncertainty, tough to calculate.
Not enough certainty about future affairs of states.

FINALLY

Question more about the justification of the broader military action than the actions themselves.
