6/29/07

Blankenhorn, David - Defining Marriage Down... is no way to save it

The Weekly Standard, Vol. 12, Issue 28, 04/02/2007

This article is a condensed version of the thesis of the author's book The Future of Marriage. The claim is that marriage is a pro-child institution, perhaps the best pro-child institution humans have ever created, and that this institution is declining. The problem for the author is that growing acceptance of the decline of marriage appears to be correlated with the belief that Same Sex Marriage (SSM) is acceptable as well. Author uses cross-cultural surveys that ask questions like the following:

1-People who want children ought to get married
2-One parent can bring up a child as well as two parents together
3-Married people are generally happier than unmarried people
4-It is all right for a couple to live together without intending to get married
5-Divorce is usually the best solution when a couple can't seem to work out their marriage problems
6-The main purpose of marriage these days is to have children
7-A child needs a home with both a father and a mother to grow up happily
8-It is all right for a woman to want a child but not a stable relationship with a man
9-Marriage is an outdated institution

The author wants to consider these questions as, generally, addressing the decline (or strengthening) of the institution of marriage. The issue for the author is that the countries whose populations generally agree with items 2, 4, 5, 8, and 9 also show support for SSM. This means, the author concludes, that these ideas all go together, much like teenage drinking goes with teenage smoking, though perhaps neither causes the other-- they 'come in a bundle'.

The second argument the author uses to support the claim that SSM is related to the decline of the institution of marriage is that various leftist-socialist-poststructuralist thinkers who are generally against the institution of marriage are all for SSM, since they think that SSM will push traditional marriage off its pedestal and open up a multiplicity of possible relationships.

REPLIES:
Rauch, Jonathan - Family Reunion

Democracy: A Journal of Ideas, Issue 5, Summer 2007

Rauch reviews Blankenhorn's book The Future of Marriage and agrees with much of what the author tries to prove about the history and meaning of marriage as an institution. Rauch claims that Blankenhorn might appear to view marriage as a multidimensional personal, sexual, public, and child-bearing relationship, but Blankenhorn's main objection to SSM is that it hurts the child-bearing part. Rauch claims Blankenhorn needs to keep biological parents raising their biological child as the most central feature of marriage in order for his argument to have any teeth at all. Since this is clearly not the sole, and perhaps not even the most central, aspect of marriage, the argument falls.

Secondly, Rauch paints Blankenhorn as telling us we have only two choices: go toward the 'bundle' of ideas that reinforce traditional marriage, or go toward the bundle of ideas (including the permissibility of SSM) that deinstitutionalize it. Rauch claims this is a false dilemma: why can't we blend and mix policies? Rauch predicts that we will be able to do this.

Carpenter, Dale - Blog
The Volokh Conspiracy
March 27th posting and subsequent posts

Carpenter argues in a number of ways. He claims that while Blankenhorn tries to avoid talk of causation-- namely, whether SSM causes the other beliefs about marriage to rise-- Blankenhorn subtly sneaks causation into the mix. Of course, claiming causation would be fallacious, since correlation does not imply causation. Blankenhorn, in a side blog (The Family Scholars Blog), seems to agree that he is in fact talking about causation, not correlation. [why?!?] Carpenter rightly notes that correlation cannot prove causation, and that SSM came after the rise of the other beliefs detrimental to marriage, so causation, unless it somehow runs backwards, is impossible.

Secondly, a major argument of Blankenhorn's is that several liberal thinkers are all for SSM because they expect it to deinstitutionalize marriage. Carpenter replies with several liberal thinkers who worry that SSM will re-institutionalize marriage. Carpenter thinks that, probably, neither result will actually occur.

6/22/07

Sunstein, Cass & Vermeule, Adrian - Is Capital Punishment Morally Required? The Relevance of Life-Life Tradeoffs

AEI-Brookings Joint Center For Regulatory Studies, March 2005

Authors argue that government legislation always traffics in courses of action that encourage and discourage behavior, presumably so that social goods may be obtained. If the death penalty is a deterrent for murder, then instituting the death penalty involves a life-life tradeoff: a system of laws without the death penalty spares a few mostly guilty lives at the cost of numerous innocent ones, while a system of laws with the death penalty trades those few mostly guilty lives to save numerous innocent ones.

The major thrust of this paper lies in recent studies concluding that the death penalty does act as a deterrent, one such study claiming that roughly 18 murders were deterred per legal execution. There is a 'threshold' effect: if too few executions take place per year, or if they are too capricious, the deterrent effect doesn't work. Furthermore, even more murders are avoided as the time between trial and execution is shortened. The authors report this evidence and do not argue with the studies. They are interested in the moral issue that arises if the studies are true, so they simply grant the evidence and start from there.

Authors want to avoid the reply that this only applies to a consequentialist and not to a deontologist. They argue that it applies to a consequentialist in a straightforward manner. For the deontologist, there may be a way out, but it is unpalatable. First off, authors frame the issue as a trade between numbers of lives: 1 vs 18. Under this trade, it seems that even the deontologist must hit a 'threshold override'. (pg 13-14) Authors claim that other objections from the deontologist are attributable to the acts/omissions distinction.
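
To make the arithmetic behind the 'threshold override' explicit (my gloss, taking the studies' 18-to-1 estimate at face value; the authors do not write it out this way):

    \text{net lives saved per execution} \approx 18 - 1 = 17

So a regime carrying out $n$ executions per year saves, on this estimate, roughly $17n$ lives on net, and it is a tradeoff of that scale that the deontologist is being asked to let override the constraint against state killing.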

The biggest part of the paper involves replying to what they take to be the central principled objection to the death-penalty-as-deterrent argument. This argument goes as follows:
Central Objection: The death penalty is an affirmative action on the part of the state to kill another human being, while the deaths its absence allows are affirmative actions of citizens, not endorsed by the state at all. (This can be broken down into two separate objections: intentional and causal) (pg 15)
Intentional: under death penalty, the state intends to kill, while without death penalty, the state doesn't intend to kill, the citizenry does.
Causal: under death penalty, the state causes a death, while without death penalty, the causal chain is much longer, and ultimately the cause isn't the state, it is its citizens.
Authors claim these objections, and a few like them, rest on a hidden appeal to the distinction between actions and omissions. Authors argue this distinction does not apply to governments. Governments are always choosing which package of legal policies will optimize the goals of the state & citizenry. (pg 16-7) Imagine two packages that are entirely equal except that one includes capital punishment and the other doesn't. The evidence suggests the capital punishment package will result in far fewer murders-- isn't that the one to go with?

Other objections to capital punishment are then addressed. Mainly, authors reject them as failing to take seriously the idea that a regime of capital punishment, no matter how flawed, will still save numerous entirely innocent lives. Three additional objections: the innocent convict, the randomly assigned execution, and the racially motivated execution. Authors reply that analogous scenarios also befall the innocent lives lost because would-be killers aren't being deterred (as they would be if capital punishment were used). (pg 20-24)

Arguments against capital punishment as a tool for deterrence now roll in: there are other ways to deter murder, do those instead. Authors say: fine, fine; but don't forget that these other ways actually have to be committed to, and have to be practical, feasible, and proven. So far, it seems capital punishment is a good policy, at least until other policies arrive. There is no reason to be against it, since the evidence (ex hypothesi) supports that it works. Authors peg this argument, like most of their previous ones, to the evidence. If the evidence turns out to be faulty, or not attributable to the capital punishment regime itself, the moral conclusion would change. But as long as the evidence suggests a life-life tradeoff of 1 to 18, capital punishment is morally obligatory.

Lastly, because this is a life-life tradeoff, they can't be accused of a slippery slope (pg 27) or of extending their arguments to other domains (pg 40). Executing someone for rape may deter, but it isn't a life-life tradeoff; the moral calculus is therefore different, and execution may not be morally required there.

Authors suggest that failure to take the life-life tradeoff seriously might be a cognitive error of not taking 'statistical lives' as real ones. (pg 32-35)

6/15/07

Kent, Bonnie - Virtue Theory

The Cambridge Companion to Medieval Philosophy, A. McGrade ed.

Author begins the paper with a discussion of ancient virtue theories and the changes they underwent in the Middle Ages. The difficulty with virtue theories is that they appear circular: in order to get virtue you need to perform right actions and acquire the right 'habitus', but in order for an action to be right you need to have virtue. This circularity was not lost on the medievals, who also made virtue theory far more complex by adding religious virtues, ones granted only by God's grace, and by adding the concept of the Will, which could do whatever it pleased, regardless of virtuous 'habitus'.

Author gives a summary of the history of medieval thinking. Virtues became classified as a 'habitus' in the 12th century, a word for which there may not be a good English translation; it roughly approximates 'habit' or 'disposition'. This may have helped the medievals distinguish between the habitus and the will. (pg 8)

Aquinas talked about two sets of virtues, 'civic' ones and 'divinely infused' ones. These were challenged. (pg 10) Divinely infused virtues were in many ways similar to civic ones, differing chiefly in being divinely given, gifts of God's charity. But even these divinely given virtues could be countermanded by an act of the free will, so what good were they anyway? The medievals began to question even this. Author believes that John Buridan rightly avoided the circularity that threatened virtue theory in his own commentary on Aristotle's Ethics. (pg 15)

6/8/07

Aune, Bruce - An Empiricist Epistemology Ch 6 "Memory And A Priori Inference"

Unpublished Manuscript

Author starts with a discussion of memory and how it is essential for any theory of knowledge, since what is presently observed quickly becomes the past, and the past is usually accessible only through memory. The issue with memory is more complex than with observational knowledge, since memory involves inference, e.g. the inference that what I'm remembering now is what I observed then (a fact about the past is inferred from the present). To be justified in making this inference we need some sort of backing for fallible inference in general, and that is the thrust of the chapter.

Author starts with Hume (pg 204), who considered this type of inference 'experimental', meaning that it is reasoning about cause and effect, specifically an assumed similarity between examined causes and effects and unexamined ones. This is general inductive reasoning, the kind of reasoning we now need support for.

The difficulty with giving a rational justification for induction lies first in giving the proper account of what needs to be justified. Generally, we take induction to be inferring that all As are Bs from the observation that As are Bs in the examined cases. This is obviously open to critique, and various fixes have been offered, prior to attempting the justification.
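
Schematically, the inference at issue is this (standard notation; the chapter itself does not formalize it quite this way):

    \forall x\,((Ax \land Ex) \to Bx) \;\therefore\; \forall x\,(Ax \to Bx)

where $Ex$ reads 'x has been examined'. Nothing in the premise by itself guarantees that unexamined As behave like examined ones, which is why the schema invites the fixes mentioned above.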

Lycan & Russell have attempted justifications, but each time it appears there needs to be some priorly known 'representative sample', or some general way of specifying how to get one. Neither is available, author claims, without further empirical considerations. Bonjour proposes an a priori answer that makes use of 'metaphysically robust' regularities that we observe and are therefore confident using induction on. Author considers this naive, since there are many cases where we use induction without assuming 'metaphysically robust' regularities (e.g. pollsters). Also, the discussion of 'robustness' assumes a bias toward our predicates rather than 'grue-like' ones; the author brings up the infamous 'grue' counterexample and argues that there is no principled way to exclude these cases from Bonjour's account: no good way to presume that the future will be like the past.
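
For reference, Goodman's predicate in its standard form (my notation; the chapter may phrase it differently): an object is grue iff it is examined before some time $t$ and green, or not examined before $t$ and blue:

    \text{Grue}(x) \equiv (E_t x \land \text{Green}(x)) \lor (\lnot E_t x \land \text{Blue}(x))

Every emerald examined so far is both green and grue, so bare induction licenses 'all emeralds are grue' just as well as 'all emeralds are green', even though the two disagree about unexamined emeralds.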

The next proposed justification for induction comes from Inference to the Best Explanation, which holds that the best explanatory account of the evidence is the one most likely to be true. Author disagrees, since we neither consider nor even know all the relevant alternative explanations before we pick the one we want to go with.

Author instead proposes using probability theory and Bayes' theorem as a way to justify induction (pg 221-4). Under this theory, we need to have prior probabilities assigned to background beliefs before we can assign a probability to new hypotheses; however, the system should ultimately be self-correcting, so improperly assigned probabilities will eventually be changed to reflect newly acquired evidence and supported hypotheses. The worry, though, is what initial beliefs to accept to get the whole system running. Here author considers an attempt by Chisholm (pg 230), which he amends and then accepts: initial beliefs must be given 'weak acceptance on a trial basis'. Author offers Bayesian theory as the alternative to Inference to the Best Explanation (pg 234).
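
The updating rule doing the work here, for reference (textbook Bayes; the manuscript's own notation may differ):

    P(H \mid E) = \dfrac{P(E \mid H)\,P(H)}{P(E)}

Repeated conditionalization on incoming evidence is what makes the system 'self-correcting': under fairly mild conditions, agents who begin with different non-extreme priors and share evidence tend to converge in their posteriors, so a poorly chosen starting point gets washed out over time.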

With a new Bayesian backbone for induction, we can revisit the skeptical BIV (brain-in-a-vat) argument and assign a low probability to it, since no evidence counts directly in favor of the 'surplus content' that BIV asserts (pg 237).

6/1/07

Aune, Bruce - An Empiricist Epistemology Ch 5 "Observational Knowledge"

Unpublished Manuscript

This chapter is mainly focused on the foundations of knowledge and on dealing with brain-in-a-vat problems. Author eventually concludes that there is no empiricist proof against brain-in-a-vat (BIV) skepticism, but that the most recent answer to BIV problems, given by Putnam, fails as well, so BIV cannot be used as an argument against empiricism in particular.

When dealing with prototypical empiricist foundations of knowledge, author claims there are two sources: observation and memory. This chapter deals with observation. The problem with observation is its fallibility: we have optical illusions, phantom sounds, hot things sometimes feel cold, etc. The commonplace reply to this is that we must be careful to question our observations, not to take them at face value, to examine their context, their sources, and so on (Locke). The philosophical problem with this reply is that it tries to correct empirical evidence with more empirical evidence-- a regress or vicious circle.

The answer proposed by people like Russell, Moore, and Carnap was that there were immediately known 'sense-data' that we were infallible about, which served as the inferential foundations of the rest of our knowledge. This response eventually failed because it put a 'sensuous curtain' between perceivers and the real world: the real world becomes a Kantian 'thing-in-itself' that is unknowable. (pg 171) Arguments requiring the basis of empirical knowledge to be a non-inferential foundation go back to Aristotle, who showed that an infinite regress results if all our knowledge is inferential. (pg 173)

However, author argues, we do not need sense-data for a non-inferential basis for our empirical knowledge. Author has developed a framework for what he calls 'imperfect' empirical knowledge, which is knowledge that could be false (ch 1). Working from this, we can accept that there is a non-inferential base from which empirical knowledge is made possible, but that base can simply be (fallible) observational beliefs. Author tells a story of how, when we were young, we took most of our observations at face value, particularly when they didn't conflict with the observations of others. As we became more critical, we began to form generalizations and theories, so our observations became subject to our theories. So the foundation for empirical knowledge is other empirical knowledge. No regress or vicious circle looms, since in all this 'knowledge' talk we are talking about 'imperfect' knowledge.

Author considers alternatives to the Russell/Carnap 'Foundationalism' that are also alternatives to his own theory. One such alternative is Bonjour's 'Coherentism', which asserts that knowledge is justified as a whole, and that particular beliefs are justified by whether they fit into that coherent whole. Author claims this is too tough a standard to hold anyone to: everybody has gaps, even the scientific community does!

The problem with empirical knowledge as Hume sketched it is that it is susceptible to external-world skepticism, or its more modern version, BIV. If there were a good answer to this, we would like to hear it. Putnam suggests the answer of Semantic Externalism: the reference relation involves a connection to the actual things referenced ('meanings just ain't in the head!'). Putnam got grist for this argument by successfully arguing that a Turing machine might be thinking, but it would never be referring if it couldn't sense the objects it was talking about. So the anti-BIV argument runs as follows:
1) if someone S is a BIV, the reference of his words would be electrical impulses, not actual things
2) therefore S's claim 'I am a BIV' is not referring to actual things like brains and vats, so he cannot be describing himself as a BIV

Author argues this fails, since it doesn't alleviate our concern: either S has an 'exotic' meaning, or S is saying something false about a real person. Author considers other problems with Semantic Externalism. The major problem is that there is no clear story about how reference relations come about. Putnam describes 'language entry and exit rules' that are supposed to correspond to behavior and experience, but these fail to capture the full extent of our ability to refer. Author claims these simplistic rules sound a lot like the verificationist's claim that everything with meaning must be verifiable. If there is room for unobservables to be referenced (like H2O), then it seems there is still no good story in Semantic Externalism about cases of genuine reference. Since there is no clear-cut winner here, BIV is still an issue for empiricists and non-empiricists alike.