12/21/07

Kershnar, Stephen - For Interrogational Torture

12/21/2007

International Journal of Applied Philosophy, Vol 12 No 2 2005

This is a relatively quick and semi-technical piece that considers 'interrogational torture' of an 'attacker' from both a consequentialist and a deontological position. The author first has to define the term 'interrogational torture':

Interrogational torture: the imposition of great suffering in a short amount of time that is neither willingly accepted nor validly consented to, in order to gain information, usually from the person tortured. (IT)

The next definition is that of an 'attacker':

Attacker: a person who performs a gross injustice and is morally responsible for doing so.

With these two matters cleared up, author first quickly considers cases where, from a consequentialist perspective, IT is permissible. (pg 225) Author then moves to the deontological perspective, and argues that IT of an attacker doesn't violate any right of the attacker, either. Also, it is unclear or a toss-up whether IT of an attacker is a 'free-floating' wrong (i.e. a wrong that doesn't attach to a particular person). Thus, because IT isn't a wrong to a person and isn't shown to be a free-floating wrong, it isn't wrong. The argument goes as follows:

P1 The only wrong to a person is one that infringes on one of her moral rights
P2 Moral rights are either natural rights or non-natural rights
C1 If IT wrongs a person, it must infringe on either a natural right or a non-natural right
P3 IT doesn't infringe on a natural right
P4 IT doesn't infringe on a non-natural right
C2 IT doesn't wrong a person
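[To fix the form for myself: a minimal Lean sketch of the argument's deductive shape only. The proposition names are my own labels, not the author's, and P1-P2 are folded into the single conditional that C1 states.]

```lean
-- Sketch of the argument's form; proposition names are my labels.
variable (WrongsPerson InfringesNatural InfringesNonNatural : Prop)

-- c1 packages P1 and P2: wronging a person requires infringing either a
-- natural right or a non-natural right. p3 and p4 deny each disjunct,
-- so C2 (IT doesn't wrong a person) follows by cases.
example
    (c1 : WrongsPerson → InfringesNatural ∨ InfringesNonNatural)
    (p3 : ¬ InfringesNatural)
    (p4 : ¬ InfringesNonNatural) :
    ¬ WrongsPerson := fun h =>
  match c1 h with
  | Or.inl hn  => p3 hn
  | Or.inr hnn => p4 hnn
```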

Author first wants to cast moral rights as powers, in a way that provides support for P1. (pg 225-8) Author discusses three wrongs that might be done to an attacker: 1- infringing of rights, 2- exploitation, 3- contemptuous treatment. Author calls 1 object-centered and 2 subject-centered, and says 3 falls under either the object-centered or the subject-centered category. Author argues that the object-centered account is the best to go with, since it grounds the respect owed to others in powers they have. Author argues that this properly captures what rights are. (226)

The next major move in the paper involves saying that an attacker gave up her rights against IT when she became an attacker. Author construes this as a case of self-defense. (pg 228-31) The objections are as follows:

Objection 1: Waiving a right against extreme suffering is invalid-- you can't waive such a right. Author's reply: the attacker is actually consenting to IT, though involuntarily, since the attacker lacks alternatives-- but this is not the fault of the torturer. (pg 232)

Objection 2: Autonomy-based rights can't be waived. Author: nonsense-- we don't want maximal autonomy but narrative autonomy. What is important in autonomy is 'reflexive autonomy', which is something like [I guess]: 'all things considered, how much autonomy do I want over the upcoming events in my life?' (pg 233)

Objection 3: the reply to objection 2 is insufficient-- you shouldn't be allowed to take away someone's narrative autonomy. Author: remember, this is self-defense!

Objection 4: IT will most likely be imposed using an unreliable procedure, and is therefore unjustified. Author: though it might be 'wrong', it isn't 'morally wrong' in the sense of infringing on anyone's rights. This is a reply to Nozick. (233-4)

The next major discussion has author arguing that IT isn't a free-floating wrong. Three candidate free-floating wrongs are considered: 1- exploitation, 2- indecency, 3- a failure to satisfy a consequentialist duty. (pg 235)
1- Author claims it isn't clear that the attacker is being exploited by IT: "It is not clear that the attacker receives an unfair share of the transactional surplus. This depends on the relative magnitude of the two parties' gains..." [what?!]
2- Author claims a reasonable person would find this self-defense not indecent.
3- Is IT optimizing good consequences? Author claims this is a tough empirical question that is most likely to be a toss-up.
So, since it isn't conclusive that IT is a free-floating wrong, and since it doesn't infringe on an attacker's rights, it isn't morally wrong.

12/14/07

Burgess-Jackson, Keith - The Logic of Torture

12/14/2007

Wall Street Journal, Dec 5 2007

This is a rather short piece that lays out some of the issues in moral philosophy: how there are rule and act consequentialists, and absolute and moderate deontologists. Author also lays out the types of questions that arise around any moral issue (torture is not unique in this regard)-- factual questions, conceptual questions, and evaluative questions.

Author's position is that philosophers can only help with conceptual questions-- clarifying ideas and correcting conceptual errors. No one should look to philosophers for evaluative expertise, author claims. "Philosophers, as such, have neither factual nor evaluative expertise. (I would argue that nobody has evaluative expertise.)"

Author also distinguishes between what is permissible by law and what is permissible morally-- how the two are different, specifically that the law has to be practical and apply in a rule-like manner.

12/7/07

Lurz, Robert - In Defense of Wordless Thoughts About Thoughts

12/07/2007

Mind & Language Vol 22 No 3 June 2007

This is a paper almost exclusively aimed at refuting Bermudez's theory of nonlinguistic creatures and their capabilities. On Bermudez's account, nonlinguistic creatures can think about the world and have 'protocausal' reasoning, but cannot think thoughts about thoughts-- that is, cannot understand that their thoughts stand in relation to themselves, cannot have 'higher-order propositional attitudes' or higher-order PAs. Author wants to deny this a priori theory.

First, author attacks the a priori aspect of Bermudez's theory by pointing to some empirical work that underlies the position that nonlinguistic animals can entertain higher-order PAs. (pg 272-3) Important to note that Bermudez does not deny that nonlinguistic animals can have thoughts about mental states, just not PAs. (pg 273)

Author quotes Bermudez and describes his theory:
P1) Ascribing PAs involves higher-order thinking (intentional ascent)
P2) Higher-order thinking (intentional ascent) can only be done by using words for the thoughts-- by using public-language sentences. Intentional ascent involves semantic ascent.
Conclusion: PA ascriptions involve public language. (pg 275)
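[Again just bookkeeping: the argument is a hypothetical syllogism, sketched in Lean below with my own labels for the propositions.]

```lean
-- Bermudez's argument as a chain of conditionals; names are my labels.
variable (AscribesPA IntentionalAscent UsesPublicLanguage : Prop)

-- P1: ascribing PAs requires intentional ascent (higher-order thinking).
-- P2: intentional ascent requires public-language sentences (semantic ascent).
-- Conclusion: PA ascription requires public language.
example
    (p1 : AscribesPA → IntentionalAscent)
    (p2 : IntentionalAscent → UsesPublicLanguage) :
    AscribesPA → UsesPublicLanguage :=
  fun h => p2 (p1 h)
```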

Higher-order thinking is considered by Bermudez to be 'consciously considering thoughts and how they relate to each other', which he names 'second-order cognitive dynamics' (pg 276). A point of interest is that this cannot be a sub-personal representation, since then it would conflict with the language of thought hypothesis. Author denies that ascribing PAs needs to be done consciously, since there are many studies that seem to show that when children over 4 ascribe PAs to others, they aren't doing it in a consciously accessible manner. (pg 277-9) Author offers a way out for Bermudez by saying that nonlinguistic animals can't explicitly engage in PA ascriptions, but author considers this a much weaker conclusion. (pg 280-2)

Author considers an interpretation of Bermudez's theory: call 'first-order cognitive dynamics' the ability to explicitly, reflectively reason about states of affairs (not thoughts). It seems that Bermudez is committed to nonlinguistic animals doing some form of reasoning, but is it 'first-order cognitive dynamics'? If Bermudez says 'yes', then author tries to trap him into admitting that 'first-order' requires language just as much as 'second-order' does. If Bermudez says 'no'-- this happens at the subpersonal level-- then why wouldn't the same be true for 'second-order'? (pg 282-4)

Author considers a possible story that suggests that nonlinguistic animals could reason about PAs. (pg 286-7) The story is about a nonlinguistic animal 'tricking' another, or letting it keep a false belief, in order to secure a means of amusement for itself. If the story is possible, then it seems that language isn't required for reasoning about PAs. This goes directly against Bermudez, who thinks that for a PA to be thought about, it must first be represented, and thus requires a 'vehicle' at the 'personal level' (not sub-personal). If these representations weren't conscious (personal), then we wouldn't be able to 'regulate and police' our thoughts and what we're entitled to believe. The only viable systems available for all of this are analogue representational models (maps, images, models) and language. But Bermudez says that analogue is out (pg 288), so language is the only contender left standing.

Author denies that these two options are the only way to go, and furthermore argues that Bermudez is confused. He confuses what is consciously considered ('the thought') with whatever does the representational work ('the vehicle'). What is at the personal level are the thoughts; the vehicles that represent them could easily be at the subpersonal level. (pg 288-9) Bermudez's apparent reply is that this seems unlikely, since whenever you go and check your thoughts, you get words. (pg 289 bottom) Author denies this: there are thoughts that don't come in words, even though you can put them into words. (pg 290-1)

Author concludes that it is an empirical issue, not a conceptual one, whether nonlinguistic animals can ascribe PAs.