8/24/07

Ismael, Jenann - Saving the Baby: Dennett on Autobiography, Agency and the Self

08/24/2007

Philosophical Psychology Vol 19 No 3 June 2006

Author uses Dennett's arguments against the Cartesian Theatre as a starting point for a discussion of the self and other concepts of a centralized self-identity. Dennett is hostile to the idea of a unified location or 'brain pearl' that has all systems of the brain in front of it. He uses the analogy of self-organizing systems that give the appearance of centralized intelligence but in fact have none (e.g. termite colonies). The origin of our thinking that we have a centralized 'theatre' lies in our use of words to represent our actions to others-- a useful fiction. (pg 346-7)

Author agrees that there isn't a Cartesian Theatre, but thinks that doesn't mean we end up as termite colonies. Author uses an example of a ship that guides itself by using an internal map. Sensors receive input from the environment. The input is processed using various modules, and a program is run that takes the results of this processed information from all the sources and 'deliberates' about where it is on the map, and what course to set. This could all be displayed graphically, or it could simply be an internal, distributed program. The point is that there is a 'stream' that runs through the 'Joycean Machine': a program that constructs a self-representation, locates itself on the map, and 'deliberates' about what course to set. Author considers this the alternative to both the purely self-organizing model and the Cartesian Theatre model. (pg 349-51)
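
A toy sketch of the architecture being described (my own illustration in Python, not anything from Ismael's paper; all names and numbers are hypothetical):

    # Toy model of the self-guiding ship: distributed sensor modules feed an
    # integrating process that locates the ship on its own map and sets a
    # course. There is no inner 'viewer' watching a display; the integration
    # step just is the deliberation.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        source: str
        bearing: float  # degrees, in map coordinates

    def sonar_module(raw: float) -> Reading:
        return Reading("sonar", raw * 0.9)   # module-specific processing

    def compass_module(raw: float) -> Reading:
        return Reading("compass", raw)

    def integrate(readings: list[Reading]) -> float:
        """Fuse the modules' outputs into one self-location estimate."""
        return sum(r.bearing for r in readings) / len(readings)

    def deliberate(self_location: float, goal: float = 90.0) -> float:
        """Set a course given where the ship takes itself to be."""
        return goal - self_location

    position = integrate([sonar_module(100.0), compass_module(85.0)])
    print(f"self-locates at {position:.1f}, corrects course by {deliberate(position):.1f}")

On this picture the 'stream' is just the succession of integrate-and-deliberate cycles; the self-representation is a functional role in the program, not a place where everything is displayed.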

Author concedes that Dennett does not always talk as eliminatively as that. The tension within Dennett-- when he sometimes seems to endorse a limited 'Joycean Stream' but at other times insists on only distributed self-organizing systems-- is reconciled, Author claims, if we take 'language as rooted in the development of explicit self-representation ... representation of ourselves and our states in a causally structured world'. (pg 353) For Author, it is this need in social life for self-representation that led to the Joycean Stream.

Author talks about three types of unity that the Joycean Machine enables:

-Synthetic Unity: the integrating of various disparate information sources.

-Univocity: when the disparate sources are integrated into a coherent stream, they are given a 'collective voice'. Here author leans on analogy: like a state-wide referendum that takes all the different perspectives and distills them into a 'yes' or 'no', the Joycean Machine is the mouthpiece for a group that has a distributed identity. There is no 'commander' other than the reporter. (pg 356-7)

-Dynamical Unity: the Joycean Machine mediates the system's interactions with other systems as changes occur over time.

The important point of all of this is that the reporting of a self doesn't mean there is an entity inside the brain, 'the self'. The 'reporting' is more like asserting-- a performative that makes itself true by concluding what is going on within the system.

8/17/07

Menary, Richard - Attacking the Bounds of Cognition

08/17/2007

Philosophical Psychology Vol 19 No 3 June 2006

Author is undertaking to defend the hypothesis of extended cognition (HEC) and also what author considers a more radical project that he calls 'cognitive integration', which takes internal (biological) and external vehicles to be integrated into a whole that is properly considered cognition. The aim of this paper isn't to establish HEC or cognitive integration, but to defend them from the attacks of Adams & Aizawa (A&A) and Rupert.

Author lays out what the cognitive integrationist is committed to:

1) Manipulation thesis: place the 'cognizer' into an environment; agents often complete cognitive tasks by manipulating features of the environment. There are three types of manipulation:
A) Biological cases of coupling (pg 331)
B) Using the environment directly, without representing
C) Manipulation of the external representational system in accordance with cognitive norms

2) Hybrid Mind thesis: cognition is understood as a hybrid process of internal and external systems.

3) Transformation thesis: our cognitive capacities have grown, been transformed, or otherwise augmented by our ability to manipulate, use hybrid processes, and so on.

4) Cognitive Norm thesis: we are able to manipulate external vehicles of cognition because we learn norms that govern how to manipulate those vehicles. (These norms of external vehicle manipulation are just as cognitive as internal ones.)

A&A, as 'traditional cognitive internalists', do not deny that we use e.g. mathematical symbols to complete cognitive tasks; they just deny that such use constitutes a cognitive process. Author claims their objections misconstrue the manipulation thesis and attack a 'weak' parity principle.

The Parity Principle: if an external process were located in the skull, we'd call it cognitive. (pg 333) This is supposed to be intuitive, not necessarily an argument for HEC.

A&A's first argument says that if a cognitive process uses/is coupled to object X, it doesn't follow that X is part of the cognition. Author replies that this misunderstands where and how the cognition is being done. The cognitive integrationist instead has it that cognition happens with internal processes and objects together making up cognition. Thus: X (the manipulation of, e.g., the notebook) is reciprocally coupled to Y (the brain process), and together they constitute the cognitive process (e.g. remembering). If this seems question-begging, author claims that HEC has been independently established, and that establishing it is beyond the scope of this paper (pg 334).

A&A have an 'intrinsic content' condition that author next attacks. The condition seems to be that a process counts as cognitive only if it involves at least some intrinsic/non-derived content; thus a process that involves no intrinsic content is non-cognitive. Mental representations of 'natural objects' somehow have their content fixed by 'naturalistic conditions on meaning' (Fodor or Millikan or Dretske), and A&A argue that representations of artificial objects can be fixed the same way. The problem here, author claims, is that in refusing to say that content concerning artificial objects is fixed conventionally, you bar yourself from appealing to the conventional norms that govern the use of those artificial objects in cognition. But we plainly do use these norms when manipulating such objects. So either the objection takes us to be less competent than we are, or it posits intrinsic content that is suspiciously similar to conventional content. (For an in-depth review of the dialectic, see pg 334-7)

A&A object that we have no good way of making a science out of the combination of brains and external tools, since external tools are all so disparate. A related objection from Rupert is that notebooks and other external tools can't be consulted quickly enough to keep up in conversation, so conversational memory can't work if it is external. (pg 339) Author replies to A&A that they miss the entire point: it isn't that cognitive integrationists say that what happens externally is just like what happens internally! (pg 340) It is, instead, that the external vehicles take part in a hybrid process of cognition. Author replies to Rupert that he may be right, but other sorts of memory work differently.

8/10/07

Fisher, Justin - Why Nothing Mental Is Just In The Head

08/10/2007

Nous, Vol 41 No 2 2007

This paper uses a counter-example to 'mental internalism' to show that it isn't just what happens 'in the head' that influences mental events. Author defines a 'mental internalist' early:
A Mental Internalist believes that an individual's mental features supervene on what is in that individual's head at that time. Likewise for two individuals with the same mechanical layout: same things inside the head = same mental features. Author explains how some of this has been challenged by 'classical' externalist arguments (Putnam, Kripke, Burge), particularly regarding the content of mental features (for instance, the content of my thought that 'Water is wet') and what justifies a belief. Of course, externalism of this sort has been open to challenge from a 'narrow content' view of the content of beliefs-- but author tries to get away from this. Classical externalist arguments haven't touched many of the hallmark mental features: phenomenal experiences, rationality, moral character, emotions, propositional-attitude-types. Author constructs an example meant to disprove mental internalism:
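
Schematically, the internalist supervenience claim being targeted can be put as follows (my formulation of the standard notion, not a quotation from Fisher):

    \[
    \mathrm{Head}(x, t) = \mathrm{Head}(y, t') \;\rightarrow\; \mathrm{Mental}(x, t) = \mathrm{Mental}(y, t')
    \]

where Head(x, t) is the complete in-the-head mechanical state of x at time t. The coming counterexample is a pair with identical Head values but, intuitively, different Mental values.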

Imagine a world where there are 100 radiation 'pulses' per second shooting around. They are so disruptive to human physiology that our mental mechanics would go haywire if we were on that world, 'Pulse-world': we would go quite mad. However, there are 'Pulselings', who have evolved to be just like humans except that their mental mechanics do just fine with (maybe even need) these pulses going through their heads 100x/second. Now, one Pulseling, Paula, is having the experience of driving, and an Earthling, Edna, is having the experience of playing the saxophone. At some point in time t (in between pulses), author stipulates, these two people's mechanical/physical/inside-the-head properties are identical. (pg 321-2) If this is possible, mental features come apart from inside-the-head mechanical features: identical heads, different experiences. Thus mental internalism is false.

The next section deals with whether this example is possible. Author claims his example rests on three assumptions:
1) Our mental features are produced as the consequence of relatively simple interactions between many elements in our heads
2) These pulses 'coax' the elements in our heads to change the mechanics of how they operate
3) If these pulses change small elements in our heads, they can change large ones too
The moral of the story is that all cognitive systems depend deeply on the appropriate support (or at least non-interference) from their surroundings. (pg 324)

Author next considers replies to his example. The first is the other-minds skeptic. Since nobody can say much to him, author can't either. Nobody can convince the other-minds skeptic that other humans have mental features, let alone Pulselings. Another defense might be that a Pulseling who receives these pulses is disqualified from having mental features attributed to her, because of these pulses. This is ad hoc, sacrifices explanatory power (since it certainly looks as though Pulselings are intelligent, have feelings, and so on), and might disqualify us as well (since we might be dependent on some sort of environmental factor).

Author considers two possible alternatives to mental internalism. The first is 'wide functionalism', which expands the base of mental features to include some of the subject's current surroundings. Author dislikes this in favor of a 'teleo-functionalist' historical perspective, which takes into account the history of the subject in order to determine what the normal mechanics are for 'in the head' mental features. Author espouses the Principle of Mental Inertia:

--Altering things outside a creature's head won't significantly change the progression of mental states that that creature will undergo, unless those external alterations also bring about change within the creature's head. (pg 329) [What? Things won't be different unless they're different?!?]

Author briefly describes why his teleo-functionalist account is superior to the wide functionalist account by suggesting that both Edna (Earthling) and Paula (Pulseling) are de-brained and their brains are thrust into identical vats: each would then have the same surroundings, but their mental features would differ. Thus wide functionalism fails here, while the Principle of Mental Inertia is consistent with this result.

8/2/07

Montero, Barbara - Physicalism Could Be True Even If Mary Learns Something New

08/03/2007

The Philosophical Quarterly, Vol 57 No 227 April 2007

In this paper the thesis is that Mary would lack the concept of 'what it is like to see red', even if she knew what happened on the lower-level physical level, and could deduce what would happen on the higher-level physical level. Author dubs this the 'missing-concept' reply to the knowledge argument.

Author starts by discussing a 'less than ideal' knowledge argument that is open to flaws. She uses this as a starting point for some of her claims as replies. The less than ideal argument starts with 'Mary knows all the facts of physics, chemistry and neurophysiology...'. This is open to problems because there may be other physical facts that aren't included in these fields. There could be 'higher-level' physical facts (those that constitute/determine the experience of red) that aren't, strictly speaking, included in physics, chemistry or neurophysiology. This is consistent with what author calls the 'non-reductive' physicalist position, with its conception of the 'broadly physical': mental facts are physical facts, whatever those facts may turn out to be (pg 179).

This leads to a discussion of what it is to be physical at all. Author begins by saying that as long as a property is either fundamental and physical or determined by fundamental physical properties, it is broadly physical. Much talk in the sciences involves deducing higher-level physical facts from lower-level ones, and there should be no reason why, in principle, this can't be done. This is the case, author points out, only if all fundamental physical facts are taken to be 'structural/relational' facts. If we construe the physical as the 'non-mental', then we won't have this necessary connection. (pg 181-2) [Doesn't this beg the question?] Only on a certain understanding of the physical as being ultimately accessible to physics via structure, position, charge, etc. can higher-level properties be deduced from lower-level physical ones.

The fixed Mary argument takes Mary to know all the fundamental lower-level physical facts and to have perfect reasoning and deduction skills. Author abandons the previous argument she used (above) and agrees that all higher-level facts are deducible from lower-level ones. Presumably, this can be done a priori. However, can it be done without the relevant concepts? One might think that this is just what a priori means. However, author claims that a priori means that the truth of the conclusion is justified from the truth of the premises without reference to empirical studies. This doesn't mean the conclusion can be reached by simply looking at the premises-- sometimes you'd also need the relevant concepts to employ. (pg 183-87) Presumably, Mary could infer "Ahh, seeing red would look like this", except that she wouldn't understand what 'this' refers to, since she lacks the relevant concept of 'the experience of seeing red'.

The last bit of the paper tries to show that author's reply to the Mary argument is different from the 'old fact, new presentation' reply. The 'old fact, new presentation' argument uses identity between (brain-state B) and (seeing red). The 'non-reductive physicalist', however, need not hold this identity-- in the sense that the two propositions have the same truth-value. (pg 188) [WHAT?!?]

7/27/07

MacDonald, Cynthia & Graham - The Metaphysics of Mental Causation

07/27/2007

The Journal of Philosophy, Vol CII, No 11, November 2006

This is a difficult (and long) paper about the causal efficacy and causal relevance of mental events. The causal efficacy of an event is a necessary condition for the causal relevance of one of the event's properties. The issue here is that there seem to be two causes that are causally efficacious for the same effect, e.g. turning on a light 'because you noticed it was cold' or 'because of some neuro-physical explanation'. Here is the 'qua problem' of non-reductive monism. Notice that if you can/want to reduce the mental to the physical, this isn't a concern. But if you believe the mental can't be reduced, then you face the case, pressed especially by Kim, that mental properties have 'too little' relevance for effects. This calls for a defense of the mental in conjunction with 'minimal physicalism', which makes the case for the irreducibility of the mental. The problem goes as follows:

PCR: physical properties of physical events are causally relevant to the physical effects of those events

MCR: Mental properties of physical events are causally relevant to some of the mental and physical effects of those events

EXCL: If P is causally sufficient for an effect, there is no other property Q that is distinct from and independent of P, that is causally relevant for that same effect

CLOS: If a physical event has a cause, it has a sufficient physical cause, where physical Ps are causally sufficient for the effect

Put all these together and it seems we have physical properties being causally relevant for physical effects, and mental properties being 'too little' to be included. (pg546) Yet a defense of the causal efficacy of mental events should preserve all four principles.
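
A compressed way to see the squeeze (my own ad hoc notation, not the authors' formalism): let p be a physical property and m a distinct, independent mental property of the same event, with physical effect f.

    \[
    \underbrace{\mathrm{Suff}(p,f)}_{\text{PCR, CLOS}} \;\wedge\; \underbrace{\forall q\,[\,q \neq p \wedge \mathrm{Indep}(q,p) \rightarrow \neg\mathrm{Rel}(q,f)\,]}_{\text{EXCL}} \;\Rightarrow\; \neg\mathrm{Rel}(m,f)
    \]

So MCR can survive only if the mental property is identical to, or somehow dependent on, the causally sufficient physical one-- which is just where the authors' dependence strategy (below) will aim.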

One possible fix for the causal efficacy of events is a trope theory. Tropes are abstract and not concrete, but this distinction doesn't map onto a universal/particular distinction. (pg547) Instead, a trope of red is the unique red of a certain robin (at a certain time and place), an abstraction gained from attending to just one aspect of the robin. The concrete robin is all the tropes taken together. Under the trope theory, there are two conceptions of what it is to be a property. The first considered is the 'class of tropes' theory, where a property is all the tropes of red taken together. (pg548) The second is that a property is just another trope. Authors consider the 'class of tropes' conception of properties first in their analysis of whether the trope theory solves the problem of mental efficacy.

The setup is that physical tropes (that are causally relevant) fit into a class of similar tropes to form a homogeneous property. A mental property is a higher-level (not 'higher-order') property that contains classes of these same physical tropes, along with other physical tropes that instantiate the same trope-functional mental trope, e.g. pain is c-fibers and/or h-fibers and/or o-fibers... (pg550-1) Since the physical trope that is causally relevant falls into both a physical property and a mental property, the problem looks solvable. However, authors throw up the following objections. First: if a physical trope is causally relevant, in virtue of what? Prima facie, it seems relevant because it falls into a physical property, not because it also falls into a mental one. (pg552) Secondly, just because you call a higher-level property 'mental' doesn't make it mental-- there are lots of higher-level properties that are also physical. (pg553) Finally, authors claim that there is no logical connection between a physical trope's causal relevance and the mental class it inhabits, even if the causally relevant physical trope does inhabit that mental class. After all, there are several other heterogeneous classes which that physical trope will also inhabit that should not be considered causally relevant. (pg553-4)

A way out for the trope theorist is to claim not that properties are classes of tropes but instead just the tropes within the classes. (pg554) Authors attribute this view to Heil & Robb. Here is where authors level their biggest objection: the trope theorist misses the point of causal relevance. It isn't that a P property and an M property are identical and therefore both relevant; causal relevance is the question: what is effected in virtue of which property? (pg555) This question, authors claim, the trope theorist fails to address. Instead, authors offer up the Property Exemplification Account (PEA). PEA says that events like having a pain right now not only have the property of 'being a pain event' but also are exemplifyings of a property like 'has-pain'. (pg556) Here's how it works: objects are the subjects of events. In objects, property exemplifyings occur. When a property is an exemplifying in an event, it is actually exemplifying in the subject. A property exemplifying in a subject at a time is constitutive of an event, (pg556-7) though it does not 'constitute' the event the way e.g. a chair's parts constitute a chair. (pg559)

Authors then posit that not only do events have constitutive properties (the properties of the objects), but events also have 'characterizing properties' as well. Characterizing properties have exemplifyings in events, and constitutive properties have exemplifyings in objects, the subjects of those events. (pg560) This sets up two sets of properties that an event can have. Kim argues that mind-body identity in events must be between constitutive properties of events, but authors consider instead that the identity should be between the properties of the events, not of their objects (subjects).

From here authors elaborate what a property is according to the PEA, and claim that two distinct properties can have exemplifyings in the same object of an event. In this case, you can have a mental property and a physical property exemplify in the same object of an event. They claim the mental-physical co-instantiation is a supervenience relation that is similar to the metaphysical relation between 'being colored' and 'being red' (pg561). So mental properties and physical ones are both exemplified in the same subject in the same event. Authors then argue that the 'universalist understanding' of properties forces the causal efficacy of mental events, since when a mental property is exemplified in an object of a physical event, that event is constitutively a mental event as well. (pg562) The result is that all properties that are exemplified in the subject of an event become efficacious. The immediate objection arises: too many properties are efficacious! Authors argue that this isn't a problem-- the only problem is if too many properties are causally relevant. (pg563)

To save causal relevance from this objection, authors introduce another thesis that works off the 'is colored'/'is red' relation by talking about different levels of mental and physical properties. Mental properties are 'higher-level' than their lower-level physical ones, but related in that when the lower-level one is exemplified, the higher-level one automatically is. Authors call this the Property-Dependence thesis (pg564). Crucial to this is understanding that a mental property of 'thinking of Vienna' is a higher-level property of 'neuro-state x'. The causal relevance of the lower-level physical property then can become the causal relevance of the higher-level mental one of the same object in the same event. Yet not every property becomes causally relevant (though every one could be considered causally efficacious) since not every property of the object is a higher-level property of the lower-level, causally relevant physical property. Lastly, mental properties aren't considered constitutive to the event (I guess they are part of the 'characterizing properties' of the event). This is a supervenience relation that authors analyze (pg565).

The next step is to show that mental properties can be causally relevant qua mental properties, not because they supervene on causally relevant physical ones. Authors claim that Kim's framing of this problem is hostile to this possibility, so if they can show that the causal relevance of the mental is no more problematic than any other causal relevance claim, they have done enough (pg567). At this point they draw the distinction between property instances and properties themselves. Causal efficacy is about property instances; causal relevance is about the properties themselves (that are instanced in objects of events). This distinction serves to show that there can be many physical properties instanced in an event that will not be causally relevant to some of the effects. Authors claim this is a by-product of having a metaphysics that allows for multiple properties to be exemplified in the same event. In other words, some properties are relevant to some effects, other properties to other effects, and so on. So it isn't just mental properties but also other physical ones that may fail to be causally relevant (for a particular effect property exemplifying). Given that this is context-dependent and empirical, authors insist it would be 'churlish' to reject the mental. (pg568)

The last objection is one that claims that Davidson's Principle of the Nomological Character of Causality (PNCC), combined with the position that the mental is anomalous (the mental doesn't figure in causal laws), makes mental property relevance impossible. (pg571) Authors first argue that PNCC isn't the only enduring causal theory, and that proposals from the likes of Lewis have suggested co-variance as a theory of causation. (pg572) These new proposals remove the 'covering law' as necessary for causation, therefore still leaving open the possibility of mental property relevance along with physical property relevance. The lesson here is that 'overdetermination has to do with causal instances-- efficacy, not relevance' (pg574). Authors argue that two 'co-variation relationships' (causally relevant properties) can exist 'harmoniously'.


7/20/07

Haldane, John - The Breakdown of Contemporary Philosophy of Mind

07/20/2007

Mind, Metaphysics and Value, ed. J. Haldane, 2002

This is a self-contained chapter in a collection that tries to recapture some of the ancient theories (Aristotle & Aquinas) that grappled with the mind-body problem. Since the problem itself changed with Descartes, part of what it means to use the ancient theories is to change what the problem is in the first place. The first part of the paper is devoted to author comparing what the scholastic philosophy world was like just before Descartes to what our current Anglo-analytic philosophy world is like, suggesting that there is about to be a major revolution that will sweep all this work away as irrelevant, or at least outdated.

Author continues to stress how we need a different approach to the mind-body problem altogether. One example is to focus on the non-representational forms of intentionality, using quotes from Merleau-Ponty and Anscombe that talk about immediate, unmediated practical knowledge that the mind acquires from the world. (pg 57-8) To make his point that current understandings are in need of overhaul, author points to the following problems:

-- The problem of eliminativism: the unwelcome conclusion that denies mental content/experience
-- The problem of supervenience: the unwelcome conclusion that asserts some kind of connection between the mental and the physical, but fails to capture how (the problem given to the non-reductive physicalist)
--The problem of dualism: the unwelcome conclusion that separates the mind from the body in all important ways

With these three problems put in this way, it isn't clear how we're going to get out of it using our current reasoning. Author assumes that eliminativism is a bad conclusion, as is mentalism. Author focuses on the problem of supervenience/dependence. What is the nature of this relation? The Davidsonian reply is weak: you can't have a change in the mental without a change in the physical, and vice versa (then add an asymmetry that favors the physical as the primary causal source). (This doesn't go as far as type-type or token-token identity.) The problem here is that any change in the physical (in any part of the world) could account for a change in my mental state, a sort of 'global supervenience' that is absurd.

In the next discussion, we have a potential fix that tries to use perceptual externalism. The problem is that we don't have a good world-mind connection, so why don't we just say that part of what is in the mind is the things that are in the world? (pg 62) Author thinks this fails because of the problems of genuine mental causation. The first problem of genuine mental causation is basically the same as the earlier problems: are there two distinct processes, one (non-identical) process, or what? (pg 64-5) The second problem is that it seems as though the physical systems are doing all the causal 'work', with the mental as an epiphenomenon, or byproduct of the physical. (pg 65-7)

Author turns to three possible arguments from Aquinas, two of which (the second and third) he would like to see revitalized. (pg 72) These are as follows:
2) Human reasoning uses not empirical particulars but abstract universals, which don't 'exist' per se
3) Thinking is self-reflexive: when I am thinking, I know I am thinking-- but not as a second-order thought but instead as part of the original thought.

Author thinks that pursuing these lines of argument might be fruitful.

7/13/07

Arnold, Jack & Shapiro, Stewart - Where in the (World Wide) Web of Belief is the Law of Non-contradiction?

07/13/2007

Nous, Vol 41 Issue 2, 2007

This is a paper that tries to establish that there are two interpretations of Quine, or perhaps more accurately, that Quine was of two minds when it came to the status of logical truths. Authors believe there was a 'logic-friendly' Quinean empiricism, one that placed the rules of truth-preserving inference (logic) outside the possibility of revision by recalcitrant experience. There was also a 'radical' Quinean empiricism, dating back to 'Two Dogmas', that did not exclude any of the rules of logic from possible revision. This is a concern because it could mean that the law of non-contradiction and 'ex falso quodlibet' (explosion) are subject to revision. Because authors are classical logicians, this is troubling to them. Authors first lay out the possible conflict between the logic-friendly Quine and the radical Quine, but then mostly focus on the implications for the radical Quine. The thesis of the paper is that if we assess these rules of logic using the radical Quine's own principles of empirical confirmation, they will not have the robust status of universal rationality that many (most) logicians want them to have.
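
For reference, here is the classical derivation of explosion, which is why tolerating even one contradiction is so costly-- my own standard gloss, not the authors' presentation:

    \begin{align*}
    1.\ & A \wedge \neg A && \text{supposed contradiction}\\
    2.\ & A && \text{from 1, conjunction elimination}\\
    3.\ & A \vee B && \text{from 2, disjunction introduction ($B$ arbitrary)}\\
    4.\ & \neg A && \text{from 1, conjunction elimination}\\
    5.\ & B && \text{from 3, 4, disjunctive syllogism}
    \end{align*}

Paraconsistent logics block this derivation, typically by rejecting disjunctive syllogism, which is how a dialetheist can accept a contradiction without accepting everything.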

The first part of the paper has the authors going often to the original sources and trying to work out the radical Quine's theory (pg 279-281). Ultimately the authors conclude that it is part of Quine's holism that logic is included in the 'web' of beliefs. It just so happens that the 'theories' of logic sit very far toward the center of the web, so recalcitrant experience is more likely to change beliefs at the edges than those closer to the center. This is complicated by other claims made by Quine that seem to say that logic cannot be changed, since any change in logic is a 'change of subject'. This is Quine's reply to dialetheists-- 'you're just changing the subject'. So, does a change in the theory of logic change the meaning of the theory? Quine says yes. (pg 280-281) Now we might be left to wonder how theories of logic could be changed at all without changing everything altogether.

The next matter for consideration is how the rules of logic should be used within the web of belief. Should they be used everywhere? One problem here is that Quine does not like to engage with normativity when discussing belief. Talk of 'should' would mean that the rules of logic have some sort of force other than merely helping to make predictions and avoid recalcitrant experience. Authors interpret Quine as talking about using causation (or constant conjunction) for belief formation and ordering (pg 283-4), not using normative logical rules.

Lastly, we have a problem of what 'recalcitrant experience' really is. Does it mean there are contradictions between one belief and another? If so, then it appears the law of non-contradiction has some priority and is immune from alteration. The authors use the same causal talk: data cause assent to belief A where one would previously have assented to belief ~A, and assenting to both together is impossible. No real talk of contradictions, just of beliefs that cannot be held together because doing so is impossible. (pg 285-286)

The second portion of the paper involves using only a descriptive picture of the realm of science and the realm of ordinary every-day beliefs in discovering the status of the law of non-contradiction and explosion. Authors look at two 'chunks' of belief: everyday beliefs, and scientific theories. In both cases, authors argue that the law of non-contradiction applies in some areas and not in others, and that (in the case of everyday beliefs) humans have a 'knack' for intuiting where to apply it and where not to, and that (in the case of science) often we are willing to accept contradictions as long as we get the predictions right (pg 286-292). The case for explosion (ex falso quodlibet) is even worse. There is no widespread acceptance of this in everyday usage or even in science.

The last part of the paper discusses what we are left with. The only hope of establishing a robust notion of the law of non-contradiction is to assert its epistemic usefulness. Certainly it fits into a logical system (classical logic) that is disciplined, consistent and orderly. But so is a paraconsistent system! This does not save the law of non-contradiction. (pg 292-293) Lastly, we might hope that the Minimum Mutilation Thesis preserves the truths of logic. Unfortunately, this seems consistent with paraconsistent/dialetheist systems as well.

7/6/07

Blankenhorn, David - Ch 6 Deinstitutionalize Marriage?

07/06/2007

The Future of Marriage, Encounter Books, 2007

In this chapter author discusses what it means to deinstitutionalize marriage, and why he thinks same-sex marriage (SSM) is going to do that. Author starts by discussing the claims of Jonathan Rauch, who thinks that SSM will actually strengthen marriage. Author criticizes this 'dream' as resting on a puerile definition or conception of marriage, one that is essentially private. Author then moves on to citing various leftist activists who are generally against traditional marriage but very much in favor of SSM.

After reciting a bunch of leftist writers who want to transform marriage and favor SSM for that very reason, author presents his main claim about the leftist thinkers: there are those who:
1) think marriage is a good thing and gays deserve to be brought in
2) think marriage is a bad thing and why not bring gays into it
3) think marriage is a bad thing and SSM will help to transform it

Author claims that we need to get clear about the fundamentals of marriage-- what it is for, what it is essentially about-- so that we can get clear on its public meanings. The real fight here is about the public meaning of marriage, because that is what the 'institution' of marriage really is-- the public meanings it has. Author claims that the arguments often used to support SSM talk about what marriage is fundamentally about, and these miss the point (#1-5 on pg 139-140). Author claims that the definitions/conceptions of marriage used in these pro-SSM arguments are mainly about supporting 'close personal relationships', not marriage-- and there is a big difference between the two things.

Next are five claims that 'disconnect' the traditionalist view of marriage from what marriage is now. (#6-10 on pg 139-140) Author likens these to 'turning off the lights until it is dark enough to suit us' (pg 150), referring to taking out of the conception of marriage the following: monogamous sex, bridging the male-female divide, raising biological children with a mother and father, and having a 'natural parent'.

Lastly, author wants to reply to various leftist objections to marriage as a religious institution: marriage came about before religion in any modern sense of the word. Also, author points out that some claim that marriage as an institution has become weaker, making that a reason to allow SSM. Author sees this as totally backward-- if it is only a weakened institution of marriage that will accept SSM, we should strengthen marriage, and this, of course, would exclude SSM. Many of author's conclusions end with a dilemma-- choose SSM and the various ideas that support and go along with it, or go with the other package: marriage as a robust institution, and no SSM. "We must choose".

6/29/07

Blankenhorn, David - Defining Marriage Down... is no way to save it

06/29/2007

The Weekly Standard, Vol 12, Issue 28 04/02/2007

This is an article that is a shortened thesis of author's book The Future of Marriage. The claim is that marriage is a pro-child institution, perhaps the best pro-child institution that humans have ever created, and that this institution is declining. The problem for the author is that it seems that growing acceptance for the decline of marriage is correlated with beliefs that Same Sex Marriage (SSM) is acceptable as well. Author uses cross-cultural surveys that ask questions like the following:

1-People who want children ought to get married
2-One parent can bring up a child as well as two parents together
3-Married people are generally happier than unmarried people
4-It is all right for a couple to live together without intending to get married
5-Divorce is usually the best solution when a couple can't seem to work out their marriage problems
6-The main purpose of marriage these days is to have children
7-A child needs a home with both a father and a mother to grow up happily
8-It is all right for a woman to want a child but not a stable relationship with a man
9-Marriage is an outdated institution

The author wants to consider these questions as, generally, addressing the decline (or strengthening) of the institution of marriage. The issue for the author is that the countries whose populations generally agree to 2, 4, 5, 8, 9 also show support for SSM. This means, author concludes, that these ideas all go together, much like teenage drinking goes with teenage smoking, though perhaps neither causes the other-- they 'come in a bundle'.

The second argument author uses to support that SSM is generally related to the decline of the institution of marriage is that various leftist-socialists-poststructuralists who are generally against the institution of marriage are all for SSM, since they think that SSM will push traditional marriage off of its pedestal and open up a multiplicity of possible relationships.

REPLIES:
Rauch, Jonathan - Family Reunion

Democracy: A Journal of Ideas, Issue 5, Summer 2007

Rauch reviews Blankenhorn's book The Future of Marriage and agrees with much of what author tries to prove about the history of marriage as an institution, and its meaning as an institution. Rauch claims that Blankenhorn might appear to view marriage as a multidimensional personal, sexual, public, and child-bearing relationship, but Blankenhorn's main objection to SSM is that it hurts the child-bearing part. Rauch claims Blankenhorn needs to keep biological parents raising their biological child as the most central feature of marriage in order for his argument to have any teeth at all. Since this is clearly not the sole, and perhaps not even the most central, aspect of marriage, the argument falls.

Secondly, Rauch paints Blankenhorn as telling us we only have two choices. Go towards the 'bundle' of ideas that reinforce traditional marriage, or go toward the bundle of ideas (that include the permissibility of SSM) that deinstitutionalize it. Rauch claims this is a false dilemma. Why can't we blend and mix policies? Rauch predicts that we will be able to do this.

Carpenter, Dale - Blog
The Volokh Conspiracy
March 27th posting, and subsequent

Carpenter argues in a number of ways. He claims that while Blankenhorn is trying to avoid talk of causation-- namely, that SSM causes the other beliefs about marriage to rise-- Blankenhorn is subtly sneaking causation into the mix. Of course claiming causation would be fallacious, since correlation does not imply causation. Blankenhorn, in a side blog (The Family Scholars Blog), seems to agree with him that he is in fact talking about causation, not correlation. [why?!?] Carpenter rightly points out that correlation cannot prove causation, and that SSM comes after the rise in the other beliefs detrimental to marriage, so causation, unless it is somehow backwards, is impossible.

Secondly, a major argument of Blankenhorn's is that several liberal thinkers are all for SSM since it will deinstitutionalize marriage. Carpenter replies with several liberal thinkers who worry that SSM will re-institutionalize marriage. Carpenter thinks that, probably, neither result will actually occur.

6/22/07

Sunstein, Cass & Vermeule, Adrian - Is Capital Punishment Morally Required? The Relevance of Life-Life Tradeoffs

06/22/2007

AEI-Brookings Joint Center For Regulatory Studies, March 2005

Authors argue that government legislation always traffics in encouraging and discouraging courses of action, presumably so that social goods are obtained. If it is the case that the death penalty is a deterrent for murder, then there is a life-life tradeoff in instituting the death penalty: a system of laws without the death penalty spares a few (mostly guilty) lives at the cost of numerous innocent lives, while a system of laws with the death penalty trades those few (mostly guilty) lives for numerous innocent lives saved.

The major thrust of this paper lies in recent studies concluding that the death penalty does act as a deterrent, one such study claiming that roughly 18 murders are deterred per legal execution. There is a 'threshold' effect: if too few executions take place per year, or they are too capricious, the deterrent effect doesn't work. Furthermore, even more murders are avoided as the time between the trial and the execution is shortened. The authors report this evidence and do not argue with the studies. They are interested in the moral issue that arises if the studies are true, so they simply grant the evidence as true and start from there.
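
Taking the granted figures at face value, the arithmetic behind the life-life framing is trivial but worth making explicit (a toy illustration; the 18:1 ratio is just the stipulated study result, and the execution count here is invented):

    # Toy illustration of the life-life tradeoff, granting the paper's
    # stipulated evidence of ~18 murders deterred per execution.
    DETERRED_PER_EXECUTION = 18

    def murders_deterred(executions_per_year: int) -> int:
        """Innocent lives spared per year under the stipulated evidence."""
        return executions_per_year * DETERRED_PER_EXECUTION

    # e.g. a regime of 50 executions/year would, ex hypothesi, prevent ~900 murders
    print(murders_deterred(50))  # 900

The entire moral argument is pegged to that multiplier; if the evidence collapses, so does the conclusion.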

Authors want to avoid the reply that this only applies to a consequentialist and not to a deontologist. They argue that it applies to a consequentialist in a straightforward manner. For the deontologist, there may be a way out, but it is unpalatable. First off, authors frame the issue as a trade between numbers of lives: 1 vs 18. Under this trade, it seems that even the deontologist must hit a 'threshold override'. (pg 13-14) Authors claim that other objections from the deontologist are attributable to the acts/omissions distinction.

The biggest part of the paper involves replying to what they take to be the central principled objection to the death-penalty-as-deterrent argument. The objection goes as follows:
Central Objection: The death penalty is an affirmative action on the part of the state to kill another human being, while the deaths its absence allows are affirmative actions of citizens, not endorsed by the state at all. (This can be broken down into two separate objections: intentional and causal) (pg 15)
Intentional: under death penalty, the state intends to kill, while without death penalty, the state doesn't intend to kill, the citizenry does.
Causal: under death penalty, the state causes a death, while without death penalty, the causal chain is much longer, and ultimately the cause isn't the state, it is its citizens.
Authors claim these objections, and a few like them, operate on a hidden distinction between actions and omissions. Authors argue this distinction does not apply to governments. Governments are always choosing what package of legal policies will optimize the goals of the state & citizenry. (pg 16-7) In this case, imagine two packages that are entirely equal except one includes capital punishment and the other doesn't. The evidence suggests the capital punishment package will result in far fewer murders-- isn't that the one to go with?

Other objections about capital punishment are then addressed; mainly authors reject them as failing to take seriously the idea that a regime of capital punishment, no matter how flawed, will still save numerous entirely innocent lives. Three additional objections: the innocent convict, the randomly assigned execution, the racially motivated execution. Authors reply that these scenarios also apply to the innocent lives being lost because would-be killers aren't being deterred (as they would be if capital punishment were used). (pg 20-24)

Arguments against capital punishment as a tool for deterrence now roll in: there are other ways to deter murder, do those instead. Authors say: fine, fine; but don't forget that these other ways actually have to be committed to, practical, feasible, and proven. So far, it seems capital punishment is a good policy, at least until other policies arrive. There is no reason to be against it, since the evidence (ex hypothesi) supports that it works. Authors peg this argument, like most of their previous ones, to the evidence. If the evidence turns out faulty or not attributable to the regime of capital punishment, the moral conclusion would change. But as long as the evidence suggests that the life-life tradeoff is 1 to 18, capital punishment is morally obligatory.

Lastly, because this is a life-life tradeoff, they can't be accused of a slippery slope (pg 27) or extending their arguments to other domains (pg 40). Executing someone for rape may deter, but it isn't a life-life tradeoff. Therefore the moral calculus is different, and it may not be morally requisite.

Authors suggest that failure to take the life-life tradeoff seriously might be a cognitive error of not taking 'statistical lives' as real ones. (pg 32-35)

6/15/07

Kent, Bonnie - Virtue Theory

06/15/2007

The Cambridge Companion to Medieval Philosophy, A. McGrade ed.

Author begins paper with discussion of ancient virtue theories and the changes they underwent in the middle ages. The difficulty with virtue theory is that it appears circular-- in order to get virtue you need to perform right actions and acquire the right 'habitus', but in order for an action to be right you need to have virtue. This circularity was not lost on the medievals, who also made the entire virtue theory far more complex by adding religious virtues, ones only granted by God's grace, and by adding the concept of the Will, which could do whatever it pleased, regardless of virtuous 'habitus'.

Author gives a summary of the history of Medieval thinking. Virtues became classified as a 'habitus' in the 12th century, a word for which there may not be a good English translation; it roughly approximates 'habit' or 'disposition'. This may have helped the Medievals distinguish between the habitus and the will. (pg 8)

Aquinas talked about two sets of virtues, 'civic' ones and 'divinely infused' ones. These were challenged. (pg 10) Divinely infused virtues were in many ways similar to civic ones, except for being divinely given, gifts of God's charity. But even these divinely given virtues could be countermanded by an act of the free will, so what good were they anyway? Medievals began to question even this. Author believes that John Buridan rightly avoided the circularity that threatened virtue theory in his own commentary on Aristotle's Ethics. (pg 15)

6/8/07

Aune, Bruce - An Empiricist Epistemology Ch 6 "Memory And A Priori Inference"

06/08/2007

Unpublished Manuscript

Author starts with a discussion of memory and how it is essential for any theory of knowledge, since what is presently observed quickly becomes the past, and the past is usually only accessible through memory. The issue with memory is more complex than it is with observational knowledge, since memory involves inference, e.g. the inference that what I'm remembering now is what I observed then (a fact about the past is inferred using the present). To be justified in making this inference we need some sort of backing for fallible inference in general, and this is the thrust of the chapter.

Author starts with Hume (pg 204), who considered this type of inference 'experimental', meaning that it is reasoning about cause and effect-- specifically, an assumed similarity between examined causes and effects and unexamined ones. This is general inductive reasoning, the kind of reasoning we now need support for.

The difficulty with giving a rational justification for induction first lies in giving the proper account of what needs to be justified. Generally, we want to think that induction is assuming that As are Bs in all cases based on seeing that As are Bs in the examined cases. This is obviously open to critique, and various fixes have been offered, prior to attempting the justification.

Lycan & Russell have attempted justifications, but each time it appears there needs to be some prior known 'representative sample' or some general way of specifying how to get one; neither is available, author claims, without further empirical considerations. Bonjour proposes an a priori answer that makes use of 'metaphysically robust' regularities that we observe and therefore are confident applying induction to. Author considers this naive, since there are many cases where we use induction without assuming 'metaphysically robust' regularities (e.g. pollsters). Also, the talk of 'robustness' assumes a bias toward our predicates rather than 'grue-like' ones: author brings up the infamous 'grue' counterexample and argues that there is no principled way to exclude these cases from Bonjour's account-- no good way to presume that the future will be like the past.

The next proposed justification for induction comes from Inference to the Best Explanation, which argues that the best explanatory account of the evidence is the one most likely to be true. Author disagrees, since we neither consider nor even know all the relevant alternative explanations before we pick the one we want to go with.

Author instead proposes using probability theory and Bayes' theorem as a way to justify induction (pg 221-4). Under this theory, we need to have prior probabilities assigned to background beliefs before we can assign a probability to new hypotheses; however, the system should, ultimately, be self-correcting, so improperly assigned probabilities will eventually be changed to reflect newly acquired evidence and supported hypotheses. The worry, though, is what initial beliefs to accept to get the whole system running. Here author considers an attempt by Chisholm (pg 230), which he amends and accepts: initial beliefs must be given 'weak acceptance on a trial basis'. Author offers Bayesian theory as the alternative to Inference to the Best Explanation (pg 234).
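
The machinery presumably in play is the standard Bayesian update rule (my gloss; the chapter's own formulation may differ):

    \[
    P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
    \]

where P(H) is the prior probability of hypothesis H and P(H|E) is its probability after conditionalizing on evidence E. Repeated conditionalization is what makes the system 'self-correcting': a badly chosen prior gets progressively washed out by accumulating evidence, provided it wasn't set to 0 or 1.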

With a new Bayesian backbone for induction, we can revisit the skeptical BIV (Brain-in-vat) argument and assign a low probability to it, since no evidence counts directly in favor of the 'surplus content' that BIV asserts (pg 237).

6/1/07

Aune, Bruce - An Empiricist Epistemology Ch 5 "Observational Knowledge"

06/01/2007

Unpublished Manuscript

This is a chapter mainly focused on the foundations of knowledge, and dealing with brain-in-the-vat problems. Author eventually concludes that there is no empiricist proof against brain-in-vat (BIV) problems, and that the most recent answer to BIV problems given by Putnam fails as well, so this cannot be used as an argument against empiricism.

When dealing with prototypical empiricist foundations of knowledge, author claims there are two sources: observation and memory. This chapter deals with observation. The problem with observation is its fallibility: we have optical illusions, phantom sounds, hot things sometimes feel cold, etc. The commonplace reply to this is that we must be careful to question our observations, not to take them at face value, to examine their context, their sources, and so on (Locke). The philosophical problem with this reply is that it tries to correct empirical evidence with more empirical evidence-- a regress or vicious circle.

The answer proposed by people like Russell, Moore and Carnap was that there were immediately known 'sense-data' that we were infallible about, and that these served as the inferential foundations of the rest of our knowledge. This response eventually failed because it put a 'sensuous curtain' between perceivers and the real world; the real world becomes a Kantian 'thing-in-itself' that is unknowable. (pg 171) Arguments requiring the basis of empirical knowledge to be a non-inferential foundation go back to Aristotle, who showed that an infinite regress looms if all our knowledge is inferential. (pg 173)

However, author argues, we do not need sense-data for a non-inferential basis for our empirical knowledge. Author has developed a framework for what he calls 'imperfect' empirical knowledge, which is knowledge that could be false (ch 1). Working from this, we can accept that there is a non-inferential base from which empirical knowledge is made possible, but that non-inferential base can simply be (fallible) observational beliefs. Author tells a story of how, when we were young, we took most of our observations at face value, particularly when they didn't conflict with the observations of others. As we became more critical, we began to form generalizations and theories, so our observations became subject to our theories. So the foundation for empirical knowledge is other empirical knowledge. No regress or vicious circle looms, since in all this 'knowledge' talk we are talking about 'imperfect' knowledge.

Author considers alternatives to the Russell/Carnap 'Foundationalism', that are also alternatives to his own theory. One such alternative is Bonjour's 'Coherentism', which asserts that knowledge is justified on the whole, and that particular beliefs are justified by their being able to fit (or not) into that coherent whole. Author claims this is too tough a standard to hold anyone to: everybody has gaps, even the scientific community does!

The problem with empirical knowledge as Hume has sketched it is that it is susceptible to external world skepticism, or a more modern version, BIV. If there were a good answer to this, we would like to hear it. Putnam suggests the answer of Semantic Externalism: the reference relation involves a connection to the actual things referenced ('meanings just aren't in the head!'). Putnam got grist for this argument by successfully arguing that a Turing machine might be thinking, but it would never be referencing if it couldn't sense the objects it was talking about. So the anti-BIV argument runs as follows:
1) if someone S is a BIV, the reference of his words would be electrical impulses, not actual things
2) therefore S's claim 'I am BIV' is not referring to actual things like brains and vats, so he cannot be describing himself as a BIV

Author argues this fails, since it doesn't alleviate our concern: either S has an 'exotic' meaning, or S is saying something false about a real person. Author considers other problems with Semantic Externalism. The major problem is that there is no clear story about how reference relations come about. Putnam describes 'language entry and exit rules' that should correspond to behavior and experience, but this fails to capture the full extent of our ability to reference. Author claims that these simplistic rules sound a lot like the verificationist's claim that everything with meaning must be verifiable. If there is room for unobservables to be referenced (like H2O), then it seems there is still no good story about cases of genuine reference in Semantic Externalism. Since there is no clear-cut winner in this, BIV is still an issue for both empiricists and non-empiricists alike.

5/25/07

Aune, Bruce - An Empiricist Epistemology Ch 4 "Properties And Concepts"

05/25/2007

Unpublished Manuscript

Author begins chapter with a discussion of the three leading theories of properties. Properties can be considered features of an object, such as 'red' or 'spherical'. There are theories promoted by David Armstrong that claim that properties exist as universals and that objects partake in them-- similarity between objects with similar properties thus exists by virtue of their sharing the same universal. There are 'trope' theorists like Donald Williams or Keith Campbell (and possibly Aristotle) who claim that each property is unique, but tropes share a similarity that is more-or-less close, depending on the context, object, etc. The final theory, which author prefers, is a conceptual theory (Kant, Frege): the theory that a property of an object is a concept that the object falls under.

Author has two main criticisms of the first two theories. The first theory (Armstrong's A-theory) fails because it requires the object itself, stripped of all its properties-- a 'bare particular'. If that wasn't bad enough, it seems that if properties exist, they too must have properties that distinguish them, which means that properties have 'bare particulars' too.

The second theory (trope theory) fails because we need to distinguish between all the different tropes out there, which means that each trope itself is made up of further tropes, and so on (pg 132-3).

Both theories fail, author suggests, because they have an incorrect view of predication as ascribing a property to an object. Author returns to the theory of properties he likes, the Fregean F-theory. In this theory, objects 'fall under' concepts. Thus an object is red because the object itself is red, not because it has the property of redness. One upshot of this theory is that the concepts an object falls under can be used in propositions without much mutation. What is a concept? A concept is something associated with the thing it conceptualizes, and someone has a concept when she can use it at the right times and in the right places. (pg 145)

One problem for the Fregean theory is that we are unclear what the 'falling under' relation actually is. For this, author uses the Sellarsian suggestion of distributive singular terms as a way to sort objects under concepts-- a token of a type that is distributed. There are some criticisms of Distributive Singular Terms (DSTs) that author deals with:

1) Not all lions are tawny. Response: that's fine, just restrict the DSTs to typical or ideal examples

2) DSTs sometimes are true due to their distribution, not due to the properties the objects have: 'the grizzly is found in North America'-- no one grizzly is all over North America. Response: those aren't DSTs!

Author makes sure to agree with Sellars that the concept 'red' and the word 'red' express the same thing too. 'The concept red' is used as a DST that distributes over all the proper occasions on which the concept red is employed.

Author revises the Fregean version, on which concepts are the connectors between predicates and objects. Now, predicates directly describe objects, or directly classify them, without 'conceptual mediation'-- much like demonstratives or names. The problem now is where predicates get their conceptual function. Author: from usage (pg 154-5).

The next step is dealing with propositions. Author uses a Sellarsian 'distributive' treatment for propositions: propositions are similar because they distribute to, ultimately, beliefs about their proper usage. (pg 158-9)

This is an immensely fast-paced and difficult chapter that is the bulk of the sorting out between concepts, predicates, properties and propositions.

5/18/07

Aune, Bruce - An Empiricist Epistemology Ch 3 "Empiricism and the A Priori"

05/18/2007

Unpublished Manuscript

In this chapter author gives a positive account of analyticity and how it might work in relation to so-called propositional attitudes. The primary aim is to show that cases of supposed a priori knowledge are just cases of analyticity in language or in concepts.

The chapter begins with a brief survey of the origins of analytic truths as conceived by Kant. For Kant (similarly for Leibniz), an analytic truth is a truth where the predicate being ascribed to the object is already contained in the concept of the object itself. Author considers Kant's account acceptable, but only applicable to a limited class of judgments. (pg 82) Frege attempted to build on Kant by claiming that an analytic truth is a truth derivable from general logical laws and definitions alone.

Frege and other empiricist philosophers were generally considered to have been refuted by Quine in his paper "Two Dogmas of Empiricism". Author describes the problem: we cannot define synonymy without using 'analytic', and vice versa. The two are involved in a vicious circle of definition. Further, any attempt at lining up all the sentences containing supposedly synonymous terms will itself employ the term 'analytic'. Quine later retreated from his earlier attack and admitted that there was some common-sense appeal to analyticity. He gives a rough definition: a sentence P is analytic for a native speaker S if S learns the truth of P by learning the usage of one or more of the words in P. This truth must be deductively closed-- the steps to analyticity must 'count as analytic in turn'. (pg 88) But still, as with Kant, Quine relegates analytic truths to logical truths and linguistic tautologies.

Author describes his version of analytic truth as one developed from Carnap's, which consists in the specification of a formal language-system with semantic rules and definitions. Author uses the example of specifying how 'if...then' applies, a usage that is separate from common usage. Once these specifications are in view, the supposed counterexamples to modus ponens and modus tollens are clarified and dismissed (pg 97-98).
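
For reference (standard material, my addition rather than the chapter's own text): the semantic rule for the material 'if...then' can be specified by a truth table, and modus ponens and modus tollens come out valid under it.

\[
\begin{array}{cc|c}
P & Q & P \rightarrow Q \\
\hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T
\end{array}
\]

In the only row where $P$ and $P \rightarrow Q$ are both true, $Q$ is also true (modus ponens); in the only row where $\neg Q$ and $P \rightarrow Q$ are both true, $P$ is false (modus tollens).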

Author lastly turns to problems with the necessary truths that Kripke has identified. One example is the origin claim: an item cannot have its origins in a different hunk of matter than it actually does. (pg 112-) Author goes about showing what the proof for this would be. Assuming that 'distinctness' is defined properly, author claims this Kripkean claim can be considered analytic.

Lastly, author considers psychological states, which are not the same as propositions. If propositions can be analytic, can psychological states like beliefs be? Author reviews the classical notion of propositions as expressing the sense of a sentence, with words having conceptual content. With the theory of names as rigid designators, this classical notion is undone. Author discusses the ramifications of this failure and also discusses conceptualism, which says that propositional attitudes have 'contents' rather than 'objects'. On this approach, at least, the empiricist can have analyticity in psychological states.

5/11/07

Aune, Bruce - An Empiricist Epistemology Ch 2 "A Priori Knowledge and the Claims of Rationalism"

05/11/2007

Unpublished Manuscript

This chapter is devoted to raising doubts about the anti-empiricist, rationalist philosophies that claim a priori knowledge of truths like 'nothing can be green and yellow all over at the same time' or 'not (P and not P)'.

Author wants to distinguish between a priori knowledge and analytic knowledge. At issue here is whether commonly cited a priori claims (setting aside Kripke's necessary identity and contingent a priori claims, by the way) can be proved or verified. The proof would be possible given a combination of axioms and inference. But this raises another question: which axioms do we use? Here the rationalist believes the axioms used are knowable by direct intuition. Author complicates this picture by showing that axioms are superfluous, because they are derivable from rules, and further that the claim that anything is knowable by direct intuition is dubious (pg 46-). The contrary view is the empiricist's, whose standard claim is that the rules of inference are underwritten by convention that aims at preserving truth using 'semantical rules' (pg 42, top).
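
As a quick illustration of how an axiom can be superfluous given rules (my example, not author's): the 'axiom' $P \rightarrow P$ need not be assumed, since it is derivable from the rule of conditional proof alone:

\[
\begin{array}{lll}
1. & P & \text{assumption}\\
2. & P \rightarrow P & \text{conditional proof, discharging the assumption in line 1}
\end{array}
\]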

Author grants that some things are immediately apprehensible, but these are things like recognizing faces, or one's own hand. Author does not believe that recognizing that things are of a given kind is similarly intuitive. For instance, we need to recognize the appropriate application conditions of modus ponens before we can use it in inference. Without taking a situation (or set of propositions) to be an instance where modus ponens properly applies, we can easily go wrong. Author gives two examples, one moral and one geometrical.

Author then takes on the standard cases that make up the rationalist backbone. E.g., the law of non-contradiction is complicated by non-self-referential liar-like paradoxes. The law of the excluded middle is complicated by vague predicates: 'is bald', 'has a beard', 'is a tree/sapling', 'is a child/adult'. Author also discusses 'nothing can be both red and green all over at the same time' and indicates that this happens to be more a matter of physiology than a property of the world: as an exercise, change 'red/green' to 'yellow/green' and you can have 'yellowish-green' and 'greenish-yellow'.

Lastly, author reviews the more modern cases of Kripkean necessary identity and contingent a priori. He claims these can possibly be shown to be analytic, and he will try to do so in the next chapter.

5/3/07

Aune, Bruce - An Empiricist Epistemology Ch 1 "Knowledge and Analysis"

05/04/2007

Unpublished Manuscript

The author has introduced in the preface how he intends to do some work of defending the empiricist notion of the analytic (or at least Carnap's version of it) against Quine's assaults.

The first chapter is devoted to sorting out some of the different senses of knowledge, how the different senses sometimes work in everyday language, and then to give a 'rational reconstruction' of two senses of knowledge.

Author first points out the various disagreements regarding the actual method to employ when trying to answer the question 'what is knowledge?' Is it an analysis that points to necessary and sufficient conditions? Is it conceptual analysis that tries to capture all (or most of) our intuitions? Is it trying to identify a property that is true of us when we know something? This last approach seems to be taken by Chisholm's followers, who assume that such a property exists even before one has been found. Author reviews the rejection of 'essential properties' talk, e.g. the famous critique given by Wittgenstein regarding a 'game', and so on. Given the difficulty, ever since the Gettier examples, of exhibiting a property of knowledge, such a property is unlikely to be located.

Author reviews the Gettier cases and points out that if we have two senses of knowledge, one being the ancient traditional sense of 'rational certainty' and the other a sense based on inconclusive evidence, the Gettier cases only refute the latter, not the former. One reply to the Gettier cases for inconclusive knowledge is Lewis' contextualism. Contextualism is a positive account that says that a subject S knows P when S has eliminated all relevant alternatives to P that are reasonable to consider given the context that S is in. Author's major problem with this theory is that it requires relevant alternatives to P to be eliminated, but sometimes there are none.

Author proposes to avoid Gettier-like counterexamples by requiring access to the evidence that makes the proposition true. Author wants to keep this 'making-true' concept elementary enough to avoid endorsing Armstrong's elaborate notion of 'truth-makers'.

At the end of the chapter, author gives his rational reconstruction of inconclusive or imperfect knowledge: S has imperfect knowledge of P in a context C only when (i) P is true, (ii) S has the information that P, (iii) The evidence for P is high enough in C to be considered adequate, (iv) S has evidential access to a sufficient truth-maker for P.
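
Put schematically (the notation is mine; the four conditions are author's):

\[
\text{ImpK}(S, P, C) \iff
\begin{cases}
\text{(i)} & P \text{ is true} \\
\text{(ii)} & S \text{ has the information that } P \\
\text{(iii)} & S\text{'s evidence for } P \text{ meets the adequacy threshold set by } C \\
\text{(iv)} & S \text{ has evidential access to a sufficient truth-maker for } P
\end{cases}
\]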

4/27/07

Kennedy, Anthony - Gonzales v. Carhart

04/27/2007

No. 05-380, April 18, 2007

This is the opinion of the Supreme Court of the USA regarding a federal 'partial birth' abortion ban that was legislated and then challenged 'facially' (before anyone was even arrested for violating the statute). The finding of the court was a 5-4 decision that the act was not facially unconstitutional.

Much has been said and already discussed on this opinion, so I shall stick to the outline.

Author first lays out the history of the case and previous rulings related to it, mentioning Stenberg v. Carhart in particular. Author also gives the history of the act and background information on the two methods of second-trimester abortion that are commonly performed: D&E (dilation and evacuation) and D&X (dilation and extraction). Of the two, D&X involves delivering most of the fetus, then piercing its skull and shrinking its head to get it out of the birth canal. Though D&X is far less utilized than other practices, many doctors use it because they consider it safer (health-wise) for the woman. The act seeks to ban variations of D&X, and gives no health exemption for the woman. Congress performed a finding of fact, which was mostly mixed and somewhat erroneous, but concluded that there is no medical necessity to perform D&X.

Author first claims that the act is not vague, and lists the specific conditions that are punishable and the alternatives that are not. (III A, B) Author then considers whether vague language in the act creates an undue burden on a woman seeking an abortion, since it could cover D&E; he rejects this (III C1). Author also considers the objection that some abortions that begin as D&E end by accident as D&X, which would chill other abortion procedures and so create an undue burden; he rejects this too, noting that the act requires the D&X to be intentional, not unintentional.


Author further affirms two claims: 1) that the state has a legitimate interest in preserving the integrity and ethics of physicians, and 2) that partial birth abortions are 'laden with the power to devalue human life'. The decision to have a D&X is so fraught with emotional consequences that some doctors do not disclose the details of the procedure; but how the procedure takes place is precisely what the state finds abhorrent. Author thinks it possible that abortion doctors will develop new, less shocking methods of aborting late-term fetuses. The state has an interest in preserving a bright line between good medicine and bad: it is tough to tell the difference between a D&X and infanticide.

The next issue is the lack of an exception for preserving the health of the mother. (IV B) Author claims there is medical testimony on both sides of the question whether a D&X is ever medically necessary. According to the testimony of some doctors, a D&E is always just as safe. Author claims that because there is medical uncertainty, there is no requirement to err on the side of caution in a facial challenge. The state is permitted to pass a wide range of legislation where there is medical or scientific uncertainty. Author leaves open that an 'as-applied' challenge can be used here instead of a facial challenge.

Scalia & Thomas concur, also claiming that there is no basis in the US Constitution for Roe v Wade.

Ginsburg delivers the dissent

Author first points out that the claim of medical uncertainty is specious, since Congress largely ignored the testimony of experienced abortion doctors and listened to many doctors who did not perform abortions, all of whom claimed that D&X was not medically necessary. There is a strong body of evidence that for a small portion of women, D&X is less dangerous to the health of the mother than any other option.

Author claims Kennedy et al. have not drawn the line for abortions at viability/non-viability but instead at an idea of resemblance between abortion and infanticide.

The only saving factor here, author claims, is that as-applied challenges are still possible.

4/20/07

Hauser, Marc et al - A Dissociation Between Moral Judgments and Justifications

04/20/2007

Mind & Language, Feb, 2007

Though this is a recent paper, it revisits the same studies that have been going on regarding 'trolley cases' where subjects are asked about the morally permissible action given a range of moral scenarios. There are four basic scenarios:

1) Redirect a trolley (bearing down on 5 people) onto a different track that kills one person instead.
2) Push one person in front of the trolley in order to stop it (successfully)
3) Redirect that same trolley so that it kills one person, but gives everyone else enough time to get out of the way
4) Redirect the trolley so that it is stopped by a heavy weight, but in the process of getting to that heavy weight, kills one person in the way.

There was a large number of respondents, since the internet was the distribution method. The study here examined whether moral judgments are made without reference to explicit, conscious, reasoned rules. The contrast was drawn between three theories of moral psychology:

1) Cognitive: moral judgments are made with reference to explicit rules that are consciously considered
2) Emotive: moral judgments are made due to unconscious emotional responses to situations and also consciously using an 'intuitive' standard
3) Rule-following but still intuitive: this is the view of the author. He wants it to be the case that we do follow rules that can in principle be made explicit, but that we don't usually have conscious access to these rules when making moral judgments. This is made easier to understand by analogy to the linguistic case, where rules are followed without needing to be made explicit in order to be followed.

In this study, the third theory was supported against the first by showing that a significant number of respondents adhered to particular moral principles but were unable to provide a coherent explanation for their judgment upon questioning. Furthermore, these judgments were made across all demographic groups identified. In this case the particular moral principle was the Principle of Double Effect, which holds that foreseen harm to another as a side-effect of securing some (greater) good is more permissible than (the same) harm used as a necessary means to secure that good (see the sketch after the data below).

Here was how the data broke down in response to the question 'is it morally permissible to...' for each scenario above:
1) Redirect onto a side track: Yes 85%
2) Push one person in front: Yes 12%
3) Redirect, others can escape: Yes 56%
4) Redirect into a heavy weight, killing one en route: Yes 72%
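
A toy sketch in Python of how the principle bears on these numbers (the side-effect/means coding of each scenario is my own assumption, not the paper's):

# Toy sketch: classify each trolley scenario by whether the one death is a
# necessary means to saving the five or a foreseen side-effect, then check
# the Principle of Double Effect's prediction against the reported data.

scenarios = {
    1: {"harm": "side-effect", "yes_pct": 85},  # redirect onto side track
    2: {"harm": "means",       "yes_pct": 12},  # push one person in front
    3: {"harm": "side-effect", "yes_pct": 56},  # redirect; others can escape
    4: {"harm": "side-effect", "yes_pct": 72},  # weight stops trolley; one killed en route
}

# Double effect predicts that every side-effect case should be judged
# permissible more often than the harm-as-means case.
means_pct = scenarios[2]["yes_pct"]
prediction_holds = all(
    s["yes_pct"] > means_pct
    for s in scenarios.values()
    if s["harm"] == "side-effect"
)
print(prediction_holds)  # True: the response pattern fits the principle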

Author claims that this data shows that in these cases of moral judgment there aren't explicit moral principles that the subjects are consciously reasoning about, even though they may be following principles like the Principle of Double Effect. There are two possible objections to the results of this study: a) the subjects weren't given enough time, or b) if the subjects were given a set of principles to choose from, they would pick the relevant one. Author replies that a) the subjects had plenty of time, and b) picking from a list might just be post-hoc rationalization, not an identification of the principles behind their reasoning.

4/13/07

Priest, Graham - Truth And Contradiction

04/13/2007

The Philosophical Quarterly, Vol 50 Num 200 July 2000

Author begins by announcing that he is a dialetheist and believes that some contradictions can be true [or at least that not all contradictions are false?]. This is a claim about how to put together logic-- or what 'kind' of logic to endorse. [Logic is supposed to be a system of inferences that, if applied, are truth-preserving.] He doesn't argue directly for the appropriateness of dialetheism; instead he is intent on showing that the six front-running theories of truth (what makes something true if it is true) don't rule out dialetheism. If none of the theories of truth particularly excludes dialetheism, then those who oppose such a logic will have to find some other resource with which to combat it. There are three traditional theories of truth, and three more modern ones. Author doesn't enumerate all the fine points of any of these theories, or even compare or contrast them; he simply gives minimal information about each and tries to show how it doesn't rule out dialetheism.

One caveat the author lays out is that there is a problem about what we're applying logic to: sentences, propositions, beliefs, statements, worldviews... what? Author says that nothing he talks about hinges on a specific restriction-- whatever the 'truth-bearers' are, that is what he will use-- and he will use the symbol α for this category.

(1) Deflationism
Deflationism says that saying what is true is the same thing as saying what is. This theory, author claims, has a tendency toward dialetheism, since certain paradoxes (the liar paradox) can lead us to affirm a contradiction.

(2) Semantic Theory of Truth
This theory might have a problem in general if it uses a logic that allows anything to follow from a contradiction. For this, author suggests a paraconsistent logic that does not allow explosion. Once explosion is blocked, author claims, this theory is consistent with dialetheism.
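
A minimal sketch of what blocking explosion looks like (my illustration, using Priest's well-known Logic of Paradox (LP), not anything specific to this paper):

# Truth tables for LP, a paraconsistent logic with three values: true,
# false, and 'both'. An argument is LP-valid iff every assignment that
# designates all premises also designates the conclusion. Explosion
# (A, not-A, therefore B) fails: A can be 'both' while B is plain false.

T, B, F = 1.0, 0.5, 0.0          # true, both, false
DESIGNATED = {T, B}              # values that count as 'true enough'

def neg(a):
    return 1.0 - a               # swaps true/false, fixes 'both'

def conj(a, b):
    return min(a, b)

def valid(premises, conclusion, assignments):
    return all(
        conclusion(v) in DESIGNATED
        for v in assignments
        if all(p(v) in DESIGNATED for p in premises)
    )

assignments = [{"A": a, "B": b} for a in (T, B, F) for b in (T, B, F)]

explosion = valid(
    premises=[lambda v: conj(v["A"], neg(v["A"]))],  # A and not-A
    conclusion=lambda v: v["B"],
    assignments=assignments,
)
print(explosion)  # False: with A = 'both' and B = false, the premise is
                  # designated but the conclusion is not.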

(3) Teleological Theory
We normally say 'x is true' to prove a point-- we aim at something when we say it. Author claims this is neutral when it comes to dialetheism.

(4) Pragmatist Theory
This theory says that something is true if it 'works', meaning perhaps it is verified in practice. This pragmatism can allow for inconsistent theories-- in fact ones that contain contradictions! Thus dialetheism is acceptable according to this theory as well.

(5) Coherence Theory
This one appears the most difficult, since 'consistency' in a theory of truth seems to be immediately valuable. But here the author argues that there might be some virtues of a theory that are greater than consistency, and if so, the coherence theory must allow for an inconsistent but otherwise virtuous theory to be the true one.

(6) Correspondence Theory
Considered the most traditional theory, it says that something is true if it corresponds to reality. How exactly it can 'correspond' is the trouble here. Author sketches 'situation semantics', a system that allows for context-specific truth values that might be contradictory if placed into a larger context, but that avoid the difficulty by being contextual. Author wants to avoid the idea that the correspondence theory has to deal with a world or other maximally consistent set of objects to correspond to, rather than a smaller subset of things. The problem with this situation semantics is that it allows for 'negative facts', or facts about what is not the case. This flies in the face of traditional correspondence theories like Wittgenstein's or Russell's, since such theories intuitively correspond to what is, not what isn't. But author argues that this is an arbitrary distinction.

4/6/07

Koenigs, Michael et al - Damage to the Prefrontal Cortex Increases Utilitarian Moral Judgments

04/06/2007

Nature, 03/27/2007

This is a short study done by moral psychologists who studied subjects with damage to the ventromedial prefrontal cortex (VMPC), a part of the brain considered important for generating emotional responses and for encoding the emotional value of sensory stimuli. There has been previous work connecting emotions to moral judgments, but it is unclear whether the connection is cause, effect, or mere correlation. This study tries to show that the emotive value of certain situations exerts influence on the moral judgments of most non-VMPC-damaged subjects.

In this study three groups were examined: VMPC-damaged subjects, other brain-damaged subjects whose damage isn't considered relevant to emotion generation or moral judgment, and non-brain-damaged subjects. Each group was given a number of scenarios, each ending in a yes/no question. Some of the scenarios were non-moral in nature, and two sets were moral. Of the moral scenarios, some were judged (by an independent group of non-damaged subjects) to be more or less 'emotionally salient', corresponding to 'personal' and 'impersonal' divisions.

The hypothesis is: If emotions have a role in influencing moral judgments, those who have difficulty generating emotions (VMPC) will not have their moral judgments influenced.

In the testing, the VMPC subjects were far more apt than the other groups to judge the personal and impersonal moral scenarios alike. Thus the 'personal/impersonal' distinction is absent in VMPC subjects. The ability to apply explicit maximizing/minimizing rules in moral scenarios is still retained by VMPC subjects (pg 3, top right side), suggesting that their judgments are 'utilitarian'.


Wade, Nicholas - Scientists Finds the Beginnings of Morality in Primate Behavior, New York Times, 03/20/2007

This is a popular article about primatologists, biologists and others finding elementary 'morality' in other animals. The central figure is Dr. de Waal, who has recently taken a few tough positions. The grandest is that human moral decisions derive 'above all' from 'fast, automated, emotional judgments'. De Waal also favors the claim that this is a group-level adaptation, primarily to deal with in-group and out-group situations (e.g. warfare).

The evidence points to other primates' ability to learn social rules, their reciprocity and peacemaking, and their capacity for empathy. The next step is to claim this is the bedrock of moral judgments [see article above this one].

The debate is poorly framed as one between rationalists and scientists.

3/30/07

Mosley, Albert - Witchcraft, Science, and the Paranormal in Contemporary African Philosophy

03/30/2007

African Philosophy: New and Traditional Perspectives, L. Brown, ed., Oxford University Press, 2004

Author is concerned with the study of the paranormal, and with the widespread refusal by some western philosophers to countenance it as a legitimate fact in the world. Author takes some time enumerating the various supposed phenomena: telepathy, clairvoyance, psychokinesis, and precognition. Author points out that there is some problem in distinguishing these, given that one might be a disguised instance of another. For example, is precognition just subconscious psychokinesis? Is telepathy the same as clairvoyance? Given different manners of describing the same phenomena, it may be possible to reduce some of these categories to others.

The debate about whether such paranormal abilities can be a source of knowledge was taken on by Bodunrin. His claim was that this might be a way to get a belief, but certainly not a justified one. Author first says that this approach conflates being able to justify a belief and the belief's being justified-- a 'knowing how/knowing that' distinction. Further, if you're a reliabilist, then perhaps that belief can be justified, if the relationship between the belief and the paranormal ability is of the kind that tends to produce true beliefs.

The reliabilist position is attacked by Bonjour, who offers four counterexamples, all designed to show that even if paranormal traits could produce true beliefs, nobody would be justified in believing them. But in each case, author replies, the example is set in a culture where nobody believes in the paranormal anyway, so it 'intuitively' seems unjustified to believe in them.

Another problem for the paranormal in the general atmosphere of the west is that most believe all claims about the paranormal have been debunked 'in the laboratory'. Author gives examples of higher-than-chance results that have been produced in the lab. Author also suggests that it might be more useful to study these processes 'in the field', as a field biologist would. (pg 145-6)

Horton argues that the spiritual beliefs in Africa take the place of naturalistic explanations that the west has given over to germs, molecules, and a modern scientific worldview. The rest of the paper is a survey of various different beliefs about the paranormal found in Africa.

3/23/07

Henig, Robin - Darwin’s God

03/23/2007

New York Times Magazine, March 4, 2007

This is an article about the two sides in the debate about the science of belief, specifically about the possible biological origin of belief in god and other religiosity. The argument proceeds roughly as follows: belief in spirits, supernatural forces, and omnipresent, omniscient, or omnipotent beings is a universal component of human culture; this is prima facie evidence that it has a biological component. To start off, this article misses a few major distinctions at the outset that are important: religion vs. belief in god, and old-time religion vs. modern-day theologically souped-up religions.

The big debate is between 'spandrelists' (Atran) and 'adaptationists' (Wilson). The spandrelists claim that belief in god is a by-product of other adaptive traits that, working in conjunction, make it very easy to believe in a god. The three main candidates are the aspects of our brains that deal with 'agent detection', 'causal reasoning' and 'theory of mind'.

'Agent Detection': the default assumption of agency (a creature with beliefs/desires) when dealing with events or things.

'Causal Reasoning': the belief that things happen because of previous causes, rather than at random.

'Theory of Mind': another term used is 'folk psychology', but it is the intuition that other individuals have beliefs and desires much as we do.

The argument is that the conjunction of these adaptive biases 'primes' us to believe in god-- a causal force with agency behind the occurrences in the world. The adaptationists claim that belief in god is itself adaptive. This claim immediately finds objections, since acting religiously (when there is no basis in reality) would likely hurt an individual agent's survival prospects. Yet this view is championed by Wilson, who claims that this might be the best example of group selection. Group selection is an out-of-favor theory that claims that some adaptations can take place at the level of the group, or, perhaps more appropriately, that genetic adaptations that take place across generations of individuals will be responsive to the relative fitness of the group those genes evolved in, not the individual's fitness.

3/16/07

Gaukroger, Stephen - Home Alone: Cognitive Solipsism in the Early-Modern Era

03/16/2007

APA Proceedings and Addresses, Vol. 80, No. 2, 2006

Author makes a distinction between cognitive solipsism and epistemological or skeptical solipsism. Cognitive solipsism involves not realizing that there is an external world at all: it is true of such an animal that sensations are presented to it as though they are modifications/changes in its mind. Of course we are using 'mind' loosely here-- it could be just a place where perceptions come together, the 'sensus communis', a medieval hold-over into early modern philosophy where it was assumed the five senses came together to form a full representation.

Author discusses two neurological disorders that pull apart the cognitive and the affective: Capgras syndrome and Cotard syndrome.
Capgras: patient recognizes faces but feels no affective connection, often therefore thinking they are impostors
Cotard: patient doesn't think they live in the external world, thinking instead that they are dead (thinking there is no external world?)

Author's main point is that we might be looking for answers to the skeptical arguments in the wrong place: don't just start with the epistemological, also look at the affective. Author traces this dual approach back to the early moderns, who had a dialectic about the affective aspects of cognitive solipsism.

Once the science of perception developed, thinkers began to shrug off the old Aristotelian claim that perception just takes in a resemblance, and realized that perceptions are re-presentations, or representations. But once we know perception is a re-presentation, the threat of skepticism and solipsism arises.

Descartes claimed that in order for us to be free of cognitive solipsism, we had to have both sensation and conscious judgment, which humans have; but he claimed that animals lack the judgment part, so they were, for the most part, cognitive solipsists. They see, he thought, 'as we do when our mind is elsewhere'. [Blindsight?] The interesting thing here is that the judgment capacity that overcomes all forms of solipsism is also the feature that gives us our moral agency and personhood. It is that self-reflective capacity, Descartes holds, that unifies our cognitive life and gives us the ability to reason morally (pg 70).

Locke claimed instead that perception is just successful sensation, not sensation plus judgment. The difficulty here is that personal unification cannot then be tied to the perceptual capacity, as Descartes had tied it. But Locke is an empiricist-- any moral agency in humans will likely come from experience one way or another.

Diderot enters the picture and realizes that if the empiricist picture is right, then someone with impaired sensation might have impaired moral agency. This was the gist of his Letter on the Blind. Diderot began to try to create a basis for morality based on the senses (pg 72). So here is an example of cognitive solipsism (or a leaning toward it) affecting our affections and moral sensibilities. [But we have two skepticisms!? One about the external world, one about other minds!]

So one of the issues is how we can have morality and ethics once we eliminate the rationalist picture and are empiricist. The other issue is whether skeptical/empirical solipsism represents a sort of bad moral fiber, or someone who is intellectually dishonest with himself.

3/9/07

Wolf-Devine, Celia - Preferential Policies Have Become Toxic

03/09/2007

Cohen and Wellman, eds., "Contemporary Debates in Applied Ethics," Blackwell, 2005

Author's major struggle is against Albert Mosley and 'preferential' or 'strong' affirmative action. This can be contrasted to 'procedural' affirmative action, where members of the targeted groups are encouraged to apply and receive fair consideration for jobs. Author claims that preferential affirmative action is in play when you get a 'yes' answer to the following question: "If another black person had applied whose credentials matched those of the rejected white candidate, would that person have gotten the job over the black candidate who was in fact chosen?"

The author sets the stage by noting the difficulty of discussing this issue. Calling it the 'politics of inclusion' is misleading because the job market is a zero-sum game-- if one is included, necessarily another is excluded. Furthermore, there is an entanglement of race and sex in these policies; author spends some discussion trying to show that many of the arguments that work for race don't work for sex (women).

Author evaluates the various affirmative action arguments:
1) Compensatory argument
2) Corrective argument
3) Consequentialist argument

1) Compensatory: one party has harmed another, and the injured party needs to recover damages. There could be material damage, or cultural damage.

First considered is material damage. This argument is hard to apply to entry-level workers, most of whom were born in 1970-1980 and therefore missed much of the overtly racist Jim Crow era and other overtly racial institutions. Also, in the specific instance of one white person turned down and one black person hired, it is hard to say whether that white person ever had the advantages and that black person the disadvantages. Also, there is a difficult moral question of whether an innocent, unaware beneficiary of an unjust action is obliged to give the advantage back. Author also objects to "projecting moral intuitions that concern one-on-one interactions onto a large and complex society".

Next is cultural damage. Whatever black culture is, it is tough to say that it is 'worse' than white culture. Second, if you claim that your culture leaves you less able to do a certain job, why is it reasonable that you should have that job?

Two final problems with the compensatory project:
Origin problem: 'you can't blame your mother', because if it weren't for her, you wouldn't have been born. If we hadn't brought you over, you would have had to immigrate.
Completion problem: 'when will this be over?' Without a good answer (and author thinks there is none), you'll have an 'endless turf war'. Mosley assumes proportional representation is the fair endpoint, but author denies this, since proportional representation may never obtain, for cultural reasons.

2) Corrective: stop existing bias in hiring practices. Author replies to the argument that there are existing biases against blacks in the hiring process: she concedes that bias could be shown in particular cases, and that would allow for corrective action. But when applied on a grand scale, bias is assumed, not shown.

3) Consequentialist: there are two such arguments:
A) Role Models: we need blacks as role models. Author: you already have them; and you don't need the 'mixed message' of affirmative action.
B) Diversity and representation: diversity is good! Author: not all diversity is good, and don't assume that more blacks means more diversity. As for representation, author claims that people just represent themselves, not anything else, so this argument rests on a false premise.

Positive account
Author argues that we should instead focus on programs to lift the poor out of the cycle of poverty; this would disproportionately help blacks, since the poor are disproportionately black. Author also claims that affirmative action has led to more black drop-outs and more failed bar and medical exams. Author ultimately wants racial categories eliminated, as opposed to Mosley, who wants to preserve them.