10/12/07

Davidson, Donald - Problems in the Explanation of Action

10/12/2007

Metaphysics and Morality: Essays in Honour of J.J.C. Smart, Pettit, Sylvan & Norman eds., Blackwell, 1987

Author starts by discussing his views on intentional actions and actions explained by reference to intentions. Specifically, author investigates the question posed by Wittgenstein: 'What must be added to my arm going up to make it my raising my arm?' Author claims that there is nothing that must be added. First is a discussion of the worry that if the raising of an arm is an effect of a previous act of the agent, it seems that previous act must itself have a previous act, ad infinitum. The case is simple when I raise my arm directly. But what if a rope tied to a pulley raises a paralyzed arm (I pull the rope with the other arm)? It seems here there are two events: the pulling of the rope and the rising of the arm. Author rejects this (pg 37), saying the 'two events' are identical. In a sense, the rising of my arm is not part of my raising my arm.

The next difficulty has to do with possible objections to this 'identity' thesis that point to a gap between cause and effect, or at least a period of time and/or space that separates the two, thereby plausibly questioning the identity of the two 'actions' (e.g. my sending a thank-you message and the recipient not getting it until later) (pg 38). Author replies that causal verbs often have two (or more) parts, where x causes y and then y causes z. This can fix certain space/time problems.

Author moves to the concern of giving explanations of actions. Often the explanation takes the form of describing the intention for the consequence. Author takes back a previous claim he had made in "Actions, Reasons and Causes", where he claimed that 'there were no such states as intending, there were just intentional actions.' Author claims that 'it is not enough to ensure that an action was performed with a certain intention that it was caused by that intention' (pg 39) and gives examples of deviant causal chains, which leads author to conclude that 'concepts of event, cause and intention are inadequate to account for intentional action'.

Forming an intention requires a belief and a 'pro-attitude' or desire. Desires aren't the same as intentions, since, according to the author, a desire is a conditional, dispositional state that can be countermanded, while an intention is 'sandwiched between cause and effect' (pg 41). The worry now is that a reason-explanation seems only to cite the desires and beliefs that formed the intention, which plays the part of a cause. The problem here is that the explanation of the cause depends on how the events are described, and in this way can be considered fragile compared to the hard sciences. Perhaps what is desirable are psychological laws such that, once you specify the belief and desire, the action lawfully follows. Since these haven't been found, author claims this is not a reason to say reasons don't cause actions; instead, author attempts to fix the problem by saying that causal powers may be mentioned in different explanatory contexts but in principle instantiate laws (pg 42, bottom). This has come under criticism.

Author considers various attempts to specify reason-explanation laws, most of which are inadequate (pg 43-4). His conclusion is that 'laws relating the mental and the physical are not like the laws of physics, therefore are not reducible to them' (pg 45). The big complaint (relevant to our recent readings) is that events cause each other by virtue of the lawlike behavior of properties, but the only real properties are physical; thus there can't be mental-physical causation. Author then says that there are all sorts of kinds of laws, and with them different kinds of explanatory schemes that apply according to our interests. This makes different properties causally efficacious. In the physical universe alone (free of our interests), all properties cause the effect. (pg 45-6)

Author finishes by saying that there is a further distinction between physics, the special sciences (which may in principle ultimately be reducible to physics), and reason-explanation in the explanation of action. What distinguishes any scientific explanation from reason-explanation is the normative element. Here author identifies semantic content as vital to explanation, and semantic content is subject to our interests in consistency, correctness, etc. Author ends by arguing against a behaviorist 'black box' psychology.

10/5/07

Polger, Thomas - Realization and the Metaphysics of Mind

10/05/2007

Australasian Journal of Philosophy, Vol 85 No 2 June 2007

Author's main target is Gillett's account of Realization and Reduction. Author has numerous objections to the family of realization accounts (Kim, Shoemaker, Gillett), as well as to Gillett's specifically, which is called the 'dimensional' view. Author claims that these accounts of realization will destroy the distinction between Realization Physicalism (RP) and the view RP was supposed to be an alternative to, the identity theory. Author does not want to defend RP, but instead to make sense of Realization so that RP can be properly evaluated.

Realization is not the same as 'instantiation'. Author gives multiple examples of use of the term 'realization' (pg 235), and suggests that the paradigm 'textbook' case is something like:
My computer currently realizes Microsoft Word; or
Memory fixation is realized in humans by long term potentiation of neurons.

"Certain electrical states of the device realize computational states such as, say, storing the contents of the last copy operation. The electrical activity of the device is not identical to any program state of Microsoft Word, but it implements or realizes such program states" (pg 236)

Author takes the view of Gillett et al. to be that the causal powers of the properties of objects individuate the realization of the function in question. Author considers this the 'causal view' of Realization. Gillett's view differs from Kim's & Shoemaker's in that theirs is a 'flat' causal theory, where realizer and realized properties are in the same object, at the same level, simply in virtue of causal powers. On Gillett's view, the realizer properties can be instantiated at a lower level, or a horizontal one, or as part of the structure of the object (or at the object level). This is considered the 'dimensional' causal view. (pg 238)

Author's major reply to the entire causal view is that it fails to capture objects that realize abstract processes, like machines that realize 'addition' or 'Microsoft Word' (pg 240). The claim is that an abstract function like addition is a formal and not a causal relation, so you can't use causal powers to individuate it. Thus the causal approach fails to capture textbook cases of abstract, formal, or algorithmic realization, since these things get realized but the objects/properties that realize them do so formally rather than causally. Author extends this to intentional and etiological (historical) realized properties too (e.g. a US dollar is whatever the US Gov't says is a dollar). Author predicts there will be numerous objections to his attack, which he considers: (pg 243-6)
1) The computational/functional model of cognition (RP) is over
Author: so? We should still try to get the Realization relation right

2) The project of abstract realization in general is defunct
Author: wrong! 'We cannot dismiss abstract realization out of hand' (pg 245) [important!]

3) Ok, maybe a machine can't cause 'addition', but it can cause 'adding things', and that's all you need for Realization. Thus the causal view is saved.
Author: A) there are other realization relations that author hopes this move won't work for; B) we can still say that an 'adder' that 'adds things' stands in a particular relation to addition, and name that relation Realization. (pg 246) [this makes no sense]

4) Realization of abstract functions is not Realization proper.
Author: But then we have no room for the special sciences, or for functionalism.

Author turns to the specific criticism of Gillett's dimensional causal view. Author claims (pg 248-50) it essentially destroys Realization and makes it into an identity theory. Author then proposes his own theory of Realization:

'to realize a property or state is to have a function'. (e.g. 'something realizes the property of being a heart iff it has the function of pumping blood') (pg 251)
Author leaves open what kinds of functions will be realized.

Author then considers a final attack on Gillett. This is a discussion about Multiple Realization (MR). Gillett thinks that before we have an idea of MR we need an account of R. But author denies this. Author says that MR is an argument for R, so we can't have R figuring in the explanation of MR-- that would beg the question. Author claims that MR is a theory about explanation and explanatory kinds, whereas R is a metaphysical theory about properties. (pg 255) This leads to a 'paradox' where something might be MR across species/objects/etc. but not actually realized (R) in the special, irreducible sense, since the multiple species/objects' properties are the same across those species/objects/items. Author puts forth that we must first use MR for kinds as an explanation, then investigate the relation between the properties realized (e.g. an eye) and the physical objects doing the explaining (e.g. retina, cornea). If the relation isn't one of identity across the MRs, then maybe we have the Realization relation instead.

9/28/07

Kim, Jaegwon - Physicalism, or Something Near Enough

09/28/2007

Physicalism, or Something Near Enough, Ch 6 Princeton University Press, 2005

This book chapter isn't too different from the other recent papers by author. The major claim is that physicalism is mostly true, being able to account for everything but the non-functional aspects of qualia. Author uses the term 'ontological physicalism' as the view that material things in space-time are all there is. The first part of the chapter argues for accepting this view.

Why accept ontological physicalism?
Causal Closure:
The first point was that causation [at least as we understand it] requires a 'space-like structure' with objects that are identified within that space. Causation takes place in the physical realm-- we can't figure out how it could be otherwise. So if we are to have mental causation at all (that is, things in the mind causing things in the world), then we need to believe that those things in the mind are physical. An objection to this comes from our desire to make ourselves special, or to claim that mental properties 'emerge' from physical ones. All of this has failed, author claims (pg 152).

Causal Exclusion:
The second argument author uses is in reply to a dualist who claims that structuring causation as only physical is question-begging. So author instead gives us an explanation of what causes a finger to twitch while in pain, using only physical processes. Only now we also have mental properties that are supposed to cause the finger to twitch as well-- which sounds like overdetermination with two distinct causes! Author claims that property dualists haven't been able to resolve this problem.

One alternative to ontological physicalism might be Davidson's anomalous monism or the Putnam-Fodor functionalism, non-reductive materialism and emergentism. The upshot of these views is that either the mental can't be causal or the mental is irrelevant. Since we want to save the causal efficacy of the mental, we need to reduce it to the physical. This is an if-then:

If the mental has causal efficacy, then it needs to reduce to the physical.

Reductionism: author lays out what he considers the principles of reduction. Reduction takes place when previously named 'concepts' or functional place-holders that describe causal entities/properties are shown to have mechanisms that underlie their functions/causal powers. Of course this allows for multiple realizations-- author denies this is a problem for physicalism (pg 164).
Reduction is a three step process: (pg 164)
1) Identify/name a concept that has a function or plays a causal role
2) Begin the scientific work to find the 'realizers' of this functional property
3) Develop an explanation of how the lower-level mechanisms perform the specified causal work

Author then claims that once we have accomplished step 1, we can assume that the concept/property is reducible (pg 164). Now, what parts of the mind are reducible, and what parts aren't? Psychological states like beliefs, desires, thoughts, etc. are. Qualia aren't, at least with respect to their qualitative character. That we can tell the difference between pink and light red is a discriminatory capacity that can be reduced, but the "look of red" is just mental residue. Author's suggestion: live with the residue and we have, mostly, ontological physicalism with a mental residue. Note: throughout this paper it seems author is skeptical that total zombies could exist (pg 169).

9/21/07

Gillett, Carl - Understanding the New Reductionism: The Metaphysics of Science and Compositional Reduction

09/21/2007

Journal of Philosophy, Vol CIV No 4 April 2007

Author begins by discussing the recent move by Kim, a reductionist, from semantic reduction to metaphysical reduction. This eschews the long-standing 'Nagelian' approach of reducing entities in the 'special sciences' (basically, any science that isn't particle physics) to more elementary ones by way of 'bridge laws'. Instead, the focus is on the metaphysics and ontological relations involved in 'mechanistic explanation'. Mechanistic explanation is, basically, an explanation of how a device or entity works by description of its parts, how each part functions, and how the parts work together. One of the suspicions is toward the term 'metaphysics', but author claims that this is the 'metaphysics of science': a careful, abstract investigation of ontological issues as they arise within the sciences (i.e. not a priori). (pg 194-5)

Author discusses the main problems with previous reductionist attempts: 'bridge laws' or other descriptions of the lower-level entities were unavailable, so the higher-level 'emergent' properties couldn't be explained in terms of the lower-level entities. Thus the antireductionists argued that the entities, terms and properties of the (higher-level) special sciences were ineliminable for proper scientific understanding and experimentation.

Author launches into an example of neuroscience where many diverse lower-level entities compose higher-level ones (pg 198). What is interesting in the example is that the higher-level changes that take place aren't caused by lower-level changes-- it is part of the structure of the entities involved that the changes take place. Author argues that this isn't causation but instead noncausal determination (pg 199-200), and that a familiar kind of scientific practice is going on here: explanation by describing composition.

Focusing on the compositional nature of higher-level entities reveals the following: (1) the powers ascribed to the higher-level entity do not have lower-level analogues. This was the point made by the antireductionists. (2) The lower-level entities that make up the higher-level ones are qualitatively different in kind. This indicates a 'many-one' relation, not a one-one identity relation.

Earlier, author explained how he individuates properties, using a Shoemaker-type 'causal theory': a property is individuated by what power it gives to the objects that instantiate it. A 'power' is an entity whose possession allows an individual to enter into a certain process (pg 201). Basically, the powers of the lower-level entities comprise the powers of the higher-level entities iff the activated powers of the lower-level entities, all taken together, activate the powers of the higher-level entities (pg 202), and not vice versa. There is some weird discussion/usage of 'manifestation grounds'. Given that there is asymmetry in the 'manifestation grounds', there is room to argue for reduction on an ontological level while leaving the semantic aspects of the special sciences intact and ineliminable.

Author formulates his 'Argument from Composition': (pg 204-5)
1- Properties are individuated by what powers they grant to their instantiators
2- The properties belonging to the lower-level entities are sufficiently efficacious for manifesting higher-level powers
3- Higher-level properties grant no special powers to their higher-level entities, therefore are ontologically dispensable.

In the remainder of the paper, author shows that though higher-level entities and properties are ontologically dispensable, they are not semantically or epistemologically dispensable, being necessary for science and scientific practice. Thus he seems to grant some of the antireductionist points (and perhaps their main contentions) while remaining reductionist about what matters most: ontology.

9/14/07

Clark, Andy - Curing Cognitive Hiccups: A Defense of the Extended Mind

09/14/07

Journal of Philosophy, Vol CIV No 4 April 2007

This is written in response to an earlier paper written by Rupert proposing an alternate hypothesis to the Hypothesis of Extended Cognition (HEC), namely, the Hypothesis of Embedded Cognition (HEMC). The first part of the paper replies to Rupert's two major points:
1) The external aspects of so-called cognition for HEC look dramatically different from the internal aspects, causing a dis-analogy, and making HEC look like a stretch.
2) The proper study of cognitive science depends on taking the 'stable persisting human individual' as the subject matter. If we lose sight of this, we risk losing a lot.

Author first replies to 1 as follows: much of this objection comes from taking the parity principle too seriously. What we're trying to get at is equality of opportunity for all sorts of processes. If we encountered alien neurology, would we say it isn't cognitive because it doesn't look like ours? No. Similarly, just because the fine-grained differences are significant doesn't mean hybrid processes can't play a constitutive role. (pg 167-8)

Author replies to 2: In general, there is little to worry about. Most of the time the stable, persisting human individual's brain is the sole instantiator of cognition. But sometimes there are important 'soft-assembled' resources that also serve to instantiate parts of the cognitive process. We shouldn't be worried that science can't get this right. (pg 169-70)

Author discusses an experiment that deals with programming a VCR, with subjects sometimes seeing the screen, sometimes just trying to remember it, and other times able to remove a barrier. The goal was the fastest programming. What was found was that sometimes memory was used, sometimes visual input. The study concluded that what was important was the fastest, least wasteful method of computation, whether that used solely internal information, external information, or a mix. This led to the Hypothesis of Cognitive Impartiality: problem solving doesn't privilege in-the-head resources over external ones. (pg 174) The risk here is to think that there is a centralized processor that seeks the most expedient route to solve the problem (giving priority to the brain). Here, author tries to distinguish between two 'explanatory targets':
A) The 'recruitment of the extended organization itself' (here the brain plays a crucial role)
B) The 'flow of information and processing in the new soft-assembled extended device.' Author wants to focus on B with HEC/Cognitive impartiality and claims HEMC blurs these distinctions. (pg 175) This leads to the conclusion that cognition is organism centred, even if not organism-bound.

Author then goes into an elaborate, extended discussion of multiple studies and thinkers who have worked on gesturing. (pg 176-183) The conclusion is that gesturing is not just expression that helps make an already-formed point, not just a partial offloading of spatial tasks that are easier to do in real space, and not just a marker or crude reminder of what to think about or remember next, but is instead constitutive of the act of thinking about certain things. Author wants to avoid treating gesture merely as a causal aid to our thinking, treating it instead as constitutive of that thinking. [Interesting problem in the experimentation; see top of pg 179]

Author concedes that there is an asymmetry in that neural processes are always considered cognitive, while gesturing by itself isn't always considered cognitive. (pg 183) However, this is not to invalidate the systematic cognitive process. Take, as an alternative, a single neuron: it isn't cognitive per se either; only when placed into the larger context is it, and the same goes for gesture. Further, just because the gesturing causes a neural process doesn't mean that the gesture itself is dispensable or somehow not part of the cognition. Author argues that other ways of (in principle) bypassing neural processes to get the end-result brain state would also satisfy this condition, but would be considered cognitive (183-4) [not according to B, above!].

There is a real danger of falling into a 'merely causal' reply about external processing-- but author argues, this danger is the by-product of believing that everything external can only be causal! Author then elaborates by using the example of rain drops on the window: they cannot be part of cognition, even if they prompt me (cause me) to think certain thoughts about poetry, nature, etc. They 'are not part of ... any system either selected or maintained for the support of better cognizing' (pg 184) [teleological?] The 'mere backdrop' or 'merely causal' aspects of the rain contrast with constitutive, self-stimulating loops (like a turbo-engine, where the exhaust from the motor turns an air injector, which then makes the motor more powerful) that are part of the process of larger cognitive accomplishments.

Author discusses a simple mechanical robot that can instantiate 'exclusive or' just by using 'inclusive or' and 'and' and a few simple rules of behavior. (pg 185-8) The point of this description is that even a simple machine can instantiate more complex logical manipulation using less complex rules, and it can then use these more complex rules in further processing if it has some sort of self-feedback device that reports back to itself what it is instantiating. This is supposed to be analogous to humans' overt behavior playing a part in our further cognitive tasks.
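A minimal sketch of the kind of construction being described, under simplified assumptions (this is not Clark's actual robot, whose mechanical rules are on pg 185-8, and all names here are hypothetical): a system whose only logical primitives are inclusive-or and and, plus one behavioral rule fed by a report of its own state, ends up behaving as an exclusive-or.

```python
# Hypothetical illustration, not Clark's robot: exclusive-or built from inclusive-or,
# and, and a simple self-report that feeds back into behavior.

def inclusive_or(a: bool, b: bool) -> bool:
    return a or b

def conjunction(a: bool, b: bool) -> bool:
    return a and b

def xor_via_self_report(a: bool, b: bool) -> bool:
    fired = inclusive_or(a, b)      # first-pass response: act if either input is on
    both_on = conjunction(a, b)     # self-monitoring report: were both inputs on?
    return fired and not both_on    # behavioral rule: suppress the response in that case

# Truth-table check: the composite behavior is exclusive-or.
for a in (False, True):
    for b in (False, True):
        print(a, b, xor_via_self_report(a, b))
```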

Author concludes by saying that we should not fall into thinking HEC, HEMC or some other hypothesis is true until we have done much more experimentation-- but that we shouldn't be biased against HEC for sure.

9/7/07

Campbell, Sue - Our Faithfulness to the Past: Reconstructing Memory Value

09/07/2007

Philosophical Psychology Vol 19 No 3 June 2006

Author begins by noting the common distinction between memory and imagination, the former being concerned with accuracy or truth. However, author wants to expand what it means to remember properly: not only must memory be faithful, but remembering should also get right the significance of the past in relation to the present. Because meaning and significance are often contextual and are also brought out and interpreted in social settings, remembering can be affected and perhaps constituted by a 'varied set of human activities' (pg 362).

Author gives examples of the intricate functions of autobiographical memory: a single composite memory formed from repeated similar events, or the use of objects or talismans to hold onto a loss or grief-ridden memory whose significance changes over time as the person comes to accept and move past what happened. (pg 363-4) The main thesis is that construing memory as solely archival, with accuracy to an original scene as its only important value, misses many of the other important personal aspects of memory (pg 365). In particular, author argues for using 'accuracy' as opposed to 'truth' or even 'detail'. Adam Morton, dealing with the possibility of having accurate but not true emotions, argues that accuracy is not reducible to truth (pg 366); author takes this line of argument up and applies it to memory, though she argues that detail isn't always preferable (for instance, you can have irrelevant detail, or fail to get the overall themes right). (pg 367-8)

With the accuracy of memory as an analog of the accuracy of emotion, author focuses on two issues: how faithful memory can be considered appropriate, rather than straightforwardly true (pg 369-70), and how accurate memories can have different significance and different pieces recalled based on present contexts (pg 370). Next, author deals with the 'integrity' of memory, which she re-frames from being a solely personal characteristic to being a 'personal/social virtue' (pg 373). The idea here is that public memory can shape one's own memory, like the public memory of 9/11 versus the personal thoughts and feelings that one might have been having at the time. Thus there is a decision about how and whether to integrate your memories into the public one, changing the public sphere but also perhaps changing your own. (pg 374-5) Author's case is that the two virtues of 'reconstructive' memory are not faithfulness and truth, but accuracy and integrity. (pg 377)

8/24/07

Ismael, Jenann - Saving the Baby: Dennett on Autobiography, Agency and the Self

08/24/2007

Philosophical Psychology Vol 19 No 3 June 2006

Author uses Dennett's arguments against the Cartesian Theatre as a starting point for a discussion on the self and other concepts of a centralized self-identity. Dennett is hostile to the idea of a unified location or 'brain pearl' that has all systems of the brain in front of it. He uses the analogy of self-organizing systems that give the appearance of centralized intelligence but in fact have none (e.g. termite colonies). The origin of our thinking we have a centralized 'theatre' is our use of words to represent our actions to others-- a useful fiction. (pg 346-7)

Author agrees that there isn't a Cartesian Theatre, but thinks that doesn't mean we end up as termite colonies. Author uses an example of a ship that guides itself by using an internal map. Sensors receive input from the environment. The input is processed by various modules, and a program takes the results of this processed information from all the sources and 'deliberates' about where it is on the map and what course to set. This could all be displayed graphically, or it could simply be an internal, distributed program. The point is that there is a 'stream' that runs through the 'Joycean Machine', or a program that tries to place itself as a self-representation and 'deliberates' about what course to set. Author considers this the alternative to both the purely self-organizing model and the Cartesian Theatre model. (pg 349-51)
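A minimal sketch of that architecture, assuming hypothetical names throughout (Ismael describes the ship only in prose): distributed sensor modules are fused into a single self-locating estimate on an internal map, and a course is set from that fused estimate rather than by any one module.

```python
# Illustrative sketch only; every name here is hypothetical. The point is the shape of
# the loop: many modules, one fused self-representation, and a course set from it.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Estimate:
    x: float
    y: float
    weight: float  # how much this module's reading counts in the fusion

def fuse(estimates: List[Estimate]) -> Tuple[float, float]:
    """Integrate the modules' outputs into a single self-location on the internal map."""
    total = sum(e.weight for e in estimates)
    return (sum(e.x * e.weight for e in estimates) / total,
            sum(e.y * e.weight for e in estimates) / total)

def set_course(position: Tuple[float, float], goal: Tuple[float, float]) -> Tuple[float, float]:
    """'Deliberate': choose a heading from the fused self-location toward the goal."""
    return (goal[0] - position[0], goal[1] - position[1])

# One cycle of the loop. No single module is 'the self'; the fused estimate plays the
# role of the Joycean stream's explicit self-representation.
readings = [Estimate(10.2, 4.9, 0.5), Estimate(9.8, 5.1, 0.3), Estimate(10.0, 5.0, 0.2)]
print(set_course(fuse(readings), (20.0, 8.0)))
```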

Author concedes that Dennett does not always talk as eliminatively as that. The tension in Dennett, who sometimes seems to endorse a limited 'Joycean Stream' and at other times insists on only distributed self-organizing systems, is reconciled, Author claims, if we take 'language as rooted in the development of explicit self-representation ... representation of ourselves and our states in a causally structured world' (pg 353). For Author, it is this need in social life for self-representation that led to the Joycean Stream.

Author talks about three types of unity that the Joycean Machine enables:

-Synthetic Unity: the integrating of various disparate information sources.

-Univocity: when the information is integrated into a coherent stream, they are given a 'collective voice'. Here author uses much analogy: like a state-wide referendum that takes all different perspectives and makes them into a 'yes' or 'no', the Joycean Machine is the mouth-piece for a group that has a distributed identity. There is no 'commander' other than the reporter. (pg 356-7)

-Dynamical Unity: The Joycean Machine mediates interactions with other systems, as changes occur.

The important point of all of this is that the reporting of self doesn't mean there is an entity inside the brain 'the self'. The 'reporting' is more like asserting-- a performative that makes it true by concluding what is going on within itself.

8/17/07

Menary, Richard - Attacking the Bounds of Cognition

08/17/2007

Philosophical Psychology Vol 19 No 3 June 2006

Author is undertaking to defend the hypothesis of extended cognition (HEC) and also what author considers a more radical project that he calls 'cognitive integration', which takes internal (biological) and external vehicles to be integrated into a whole that is properly considered cognition. The aim of this paper isn't to establish HEC or cognitive integration, but to defend them from the attacks of Adams, Aizawa, and Rupert (A&A).

Author lays out what the cognitive integrationist is committed to:

1) Manipulation thesis: place the 'cognizer' into an environment; agents complete cognitive tasks often by manipulating features of the environment. There are three types of manipulation:
A) Biological cases of coupling (pg 331)
B) Using the environment directly, without representing
C) Manipulation of the external representational system in accordance with cognitive norms

2) Hybrid Mind thesis: cognition is understood as a hybrid process of internal and external systems.

3) Transformation thesis: our cognitive capacities have grown, been transformed, or otherwise augmented by our ability to manipulate, use hybrid processes, and so on.

4) Cognitive Norm thesis: we are able to manipulate external vehicles of cognition because we learn norms that operate on how to manipulate those vehicles. (These norms of external vehicle manipulation are just as cognitive as internal ones.)

A&A, as 'traditional cognitive internalists', do not deny that we use e.g. mathematical symbols to complete cognitive tasks; they just deny that such use constitutes a cognitive process. Author claims their objections misconstrue the manipulation thesis and attack a 'weak' parity principle.

The Parity Principle: if an external process were located in the skull, we'd call it cognitive (pg 333). This is supposed to be intuitive, not necessarily an argument for HEC.

A&A's first argument says that if a cognitive process uses/is coupled to object X, it doesn't follow that X is part of the cognition. Author replies that this misunderstands where/how the cognition is being done. The cognitive integrationist instead has it that cognition happens with internal processes and objects together making up cognition. Thus: X, the manipulation of an external vehicle (e.g. the notebook), is reciprocally coupled to Y (the brain process), and together they constitute the cognitive process (e.g. remembering). If this seems question-begging, author claims that HEC has been independently established, and that this is beyond the scope of this paper (pg 334).

A&A have an 'intrinsic content' condition that author attacks next. The intrinsic condition seems to be that a process can be counted as cognitive only if it involves at least some intrinsic/non-derived content. Thus a process that involves no intrinsic content is non-cognitive. Somehow mental representations of 'natural objects' are fixed by 'naturalistic conditions on meaning' (Fodor or Millikan or Dretske), and A&A argue that representations of artificial objects can be fixed the same way. The problem here, author claims, is that in avoiding the claim that the internal representation of an artificial object is fixed by conventional content, you bar yourself from using the conventional norms that govern the use of that artificial object in cognition. But we do use these norms in manipulating these artificial objects. So either the objection takes us to be less competent than we are, or the objection posits intrinsic content that is suspiciously similar to conventional content. (For an in-depth review of the dialectic, see pg 334-7)

A&A object that we have no good way of making a science out of the combination of brains and external tools, since external tools are all so disparate. A related objection from Rupert is that notebooks (and any external tool available so far) can't really be used to keep up in conversation, so conversational memory doesn't work if it is external. (pg 339) Author replies to A&A by saying that they miss the entire point. It isn't that cognitive integrationists say that what happens externally is just like what happens internally! (pg 340) It is, instead, that the external vehicles take part in a hybrid process of cognition. Author replies to Rupert that he may be right, but other sorts of memory work differently.

8/10/07

Fisher, Justin - Why Nothing Mental Is Just In The Head

08/10/2007

Nous, Vol 41 No 2 2007

This paper uses a counter-example to 'mental internalism' to show that it isn't just what happens 'in the head' that influences mental events. Author defines a 'mental internalist' early:
A Mental Internalist believes that an individual's mental features supervene on what is in that individual's head at that time. Likewise for two individuals with the same mechanical layout: same things inside the head = same mental features. Author explains how some of this has been challenged by 'classical' externalist arguments (Putnam, Kripke, Burge), particularly on the side of the content of mental features (for instance, the content of my thought that 'Water is wet'), and in what justifies a belief. Of course, externalism of this sort has been open to challenge from a 'narrow content' view of the content of beliefs-- but author tries to get away from this. Classical externalist arguments haven't touched many of the hallmark mental features: phenomenal experiences, rationality, moral character, emotions, propositional-attitude-types. Author constructs an example that disproves mental internalism:

Imagine a world where there are 100 radiation 'pulses' per second shooting around. They are disruptive to human physiology, so our mechanics/mental causation would go haywire if we were on that world, 'Pulse-world': we would go quite mad. However, there are 'Pulselings' who have evolved to be just like humans except that their mental mechanics do just fine with (and maybe even need) these pulses going through their heads 100 times per second. Now, one Pulseling, Paula, is having the experience of driving, and an Earthling, Edna, is having the experience of playing the saxophone. At some point in time t (in between pulses) author stipulates that these two people's mechanical/physical/inside-the-head properties are identical. (pg 321-2) If this is possible, there is a difference between mechanical inside-the-head properties and mental features. Thus mental internalism is false.

The next section deals with whether this example is possible. Author claims his example rests on three assumptions:
1) Our mental features are produced as the consequence of relatively simple interactions between many elements in our heads
2) These pulses 'coax' the elements in our heads to change the mechanics of how they operate
3) If these pulses change small elements in our heads, they can change large ones too
The moral of the story is that all cognitive systems depend deeply on the appropriate support (or at least non-interference) from their surroundings. (pg 324)

Author next considers replies to his example. The first is the other-minds skeptic. Since nobody can say much to him, author can't either. Nobody can convince the other-minds skeptic that other humans have mental features, let alone Pulselings. Another defense might be that a Pulseling who receives these pulses is disqualified from having mental features attributed to her, because of these pulses. This is ad hoc, denies explanatory power (since it certainly looks as though Pulselings are intelligent, have feelings, and so on), and might disqualify us as well (since we might be dependent on some sort of environmental factor).

Author considers two possible alternatives to mental internalism. The first is 'wide functionalism', which extends the basis of mental features to include some of the subject's current surroundings. Author dislikes this in favor of a 'teleo-functionalist' historical perspective, which takes into account the history of the subject in order to determine what the normal mechanics are for 'in the head' mental features. Author espouses the Principle of Mental Inertia:

--Altering things outside a creature's head won't significantly change the progression of mental states that that creature will undergo, unless those external alterations also bring about change within the creature's head. (pg 329) [What? Things won't be different unless they're different?!?]

Author briefly describes why his teleo-functionalist account is superior to the wide functionalist account, by suggesting that both Edna (Earthling) and Paula (Pulseling) are de-brained and their brains are thrust into identical vats: each would still have the same surroundings but their mental features would be different. Thus wide functionalism would fail here, but the Principle of Mental Inertia would be consistent with this result.

8/2/07

Montero, Barbara - Physicalism Could Be True Even If Mary Learns Something New

08/03/2007

The Philosophical Quarterly, Vol 57 No 227 April 2007

In this paper the thesis is that Mary would lack the concept of 'what it is like to see red', even if she knew what happened on the lower-level physical level, and could deduce what would happen on the higher-level physical level. Author dubs this the 'missing-concept' reply to the knowledge argument.

Author starts by discussing a 'less than ideal' knowledge argument that is open to flaws. She uses this as a starting point for some of her claims in reply. The less than ideal argument starts with 'Mary knows all the facts of physics, chemistry and neurophysiology...'. This is open to problems because there may be other physical facts that aren't included in these fields. There could be 'higher-level' physical facts (those that constitute/determine the experience of red) that aren't, strictly speaking, included in physics, chemistry or neurophysiology. This is consistent with what author calls the 'non-reductive' physicalist position, with its conception of the 'broadly physical'. The 'broadly physical' view is that mental facts are physical facts, whatever those facts may be (pg 179).

This leads to a discussion of what it is to be physical at all. Author begins by saying that as long as a property is either fundamental and physical or determined by fundamental physical properties, it is broadly physical. Much talk in the sciences involves deducing higher-level physical facts from lower-level ones, and there should be no reason why, in principle, this can't be done. This is the case, author points out, only if all fundamental physical facts are taken to be 'structural/relational' facts. If we construe the physical as the 'non-mental', then we won't have this necessary connection. (pg 181-2) [Doesn't this beg the question?] Only on a certain understanding of the physical as being ultimately accessible to physics via structure, position, charge, etc. can higher-level properties be deducible from lower-level physical ones.

The fixed Mary argument takes Mary to know all the fundamental lower-level physical facts and to have perfect reasoning and deduction skills. Author abandons the previous argument she used (above) and agrees that all higher-level facts are deducible from lower-level ones. Presumably, this can be done a priori. However, can it be done without the relevant concepts? One might think that this is just what a priori means. However, author claims that a priori means that the truth of the conclusion is justified from the truth of the premises without reference to empirical studies. This doesn't mean the conclusion can be reached by simply looking at the premises-- sometimes you'd also need the relevant concepts to employ. (pg 183-87) Presumably, Mary could infer "Ahh, seeing red would look like this", except that she wouldn't understand what 'this' refers to, since she lacks the relevant concept of 'the experience of seeing red'.

The last bit of the paper tries to show that author's reply to the Mary argument is different from the 'old fact, new presentation' reply. The 'old fact, new presentation' argument uses an identity between (brain-state B) and (seeing red). The 'non-reductive physicalist', however, need not hold this identity-- in the sense that the two propositions have the same truth-value. (pg 188) [WHAT?!?]

7/27/07

MacDonald, Cynthia & Graham - The Metaphysics of Mental Causation

07/27/2007

The Journal of Philosophy, Vol CIII, No 11, November 2006

This is a difficult (and long) paper about the causal efficacy and causal relevance of mental events. The causal efficacy of an event is a necessary condition for the causal relevance of one of the event's properties. The issue here is that there seem to be two causes that are causally efficacious for the same effect, e.g. turning on a light 'because you noticed it was cold' or 'because of some neuro-physical explanation'. Here is the 'qua problem' of non-reductive monism. Notice that if you can/want to reduce the mental to the physical, this isn't a concern. But if you believe the mental can't be reduced, then you have a case, made especially by Kim, that mental properties have 'too little' relevance for effects. This calls for a defense of the mental in conjunction with 'minimal physicalism', which makes the case for the irreducibility of the mental. The problem goes as follows:

PCR: physical properties of physical events are causally relevant to the physical effects of those events

MCR: Mental properties of physical events are causally relevant to some of the mental and physical effects of those events

EXCL: If P is causally sufficient for an effect, there is no other property Q that is distinct from and independent of P, that is causally relevant for that same effect

CLOS: If a physical event has a cause, it has a sufficient physical cause, where physical Ps are causally sufficient for the effect

Put all these together and it seems we have physical properties being causally relevant for physical effects, with mental properties having 'too little' relevance to be included. (pg 546) Yet a defense of the causal efficacy of mental events should preserve all four of these principles.

One possible fix for the causal efficacy of events is a trope theory. Tropes are abstract and not concrete, but this distinction doesn't map onto the universal/particular distinction. (pg 547) Instead, a trope of red is the unique red of a certain robin (at a certain time and place), an abstraction gained from attending to just one aspect of the robin. The concrete robin is all the tropes taken together. Under the trope theory, there are two conceptions of what it is to be a property. The first considered is the 'class of tropes' theory, where a property is all tropes of red taken together. (pg 548) The second is that a property is just another trope. Authors consider the 'class of tropes' conception of properties first in their analysis of whether the trope theory solves the problem of mental efficacy.

The setup is that physical tropes (that are causally relevant) fit into a class of similar tropes to form a homogeneous property. A mental property is a higher-level (not 'higher-order') property that contains classes of these same physical tropes, and other physical tropes that instantiate the same trope-functional mental trope, e.g. pain is c-fibers and/or h-fibers and/or o-fibers... (pg550-1) Since the physical trope that is causally relevant falls into both a physical property and a mental property, the problem looks solvable. However, authors throw up the following objections: if a physical trope is causally relevant, in virtue of what? Prima facie, it seems relevant because it is a physical property, not because it is also a mental one. (pg552) Secondly, just because you call a higher-level property 'mental' doesn't make it mental-- there are lots of higher-level properties that are also physical. (pg553) Finally, authors claim that logically there is no connection between a causally relevant physical trope that is a physical property and a mental property, even if that causally relevant physical trope also inhabits that mental class. After all, there are several other heterogeneous classes which that physical trope will also inhabit that should not be considered causally relevant. (pg553-4)

A way out for the trope theorist is to claim not that properties are classes of tropes but instead just the tropes within the classes. (pg 554) Authors attribute this view to Heil & Robb. Here is where authors level their biggest objection: the trope theorist misses the point of causal relevance. It isn't that a P property and an M property are identical and therefore both relevant; causal relevance is the problem of what is effected in virtue of what property (pg 555), and this problem, authors claim, the trope theorist fails to address. Instead, authors offer up the Property Exemplification Account (PEA). PEA says that events like having a pain right now not only have the property of 'being a pain event' but also are exemplifyings of a property like 'has-pain'. (pg 556) Here's how it works: objects are the subjects of events. In objects, property exemplifyings occur. When a property is an exemplifying in an event, it is actually exemplifying in the subject. A property exemplifying in a subject at a time is constitutive of an event, (pg 556-7) though it does not 'constitute' the event the way, e.g., a chair's parts constitute a chair. (pg 559)

Authors then posit that not only do events have constitutive properties (the properties of the objects), but events also have 'characterizing properties' as well. Characterizing properties have exemplifyings in events, and constitutive properties have exemplifyings in objects, the subjects of those events. (pg560) This sets up two sets of properties that an event can have. Kim argues that mind-body identity in events must be between constitutive properties of events, but authors consider instead that the identity should be between the properties of the events, not of their objects (subjects).

From here authors elaborate what a property is according to the PEA, and claim that two distinct properties can have exemplifyings in the same object of an event. In this case, you can have a mental property and a physical property exemplify in the same object of an event. They claim the mental-physical co-instantiation is a supervenience relation that is similar to the metaphysical relation between 'being colored' and 'being red' (pg561). So mental properties and physical ones are both exemplified in the same subject in the same event. Authors then argue that the 'universalist understanding' of properties forces the causal efficacy of mental events, since when a mental property is exemplified in an object of a physical event, that event is constitutively a mental event as well.(pg562) The result is that all properties that are exemplified in the subject of an event become efficacious. The immediate objection arises: too many properties are efficacious! Authors argue that this isn't a problem-- the only problem is if too many properties are causally relevant. (pg563)

To save causal relevance from this objection, authors introduce another thesis that works off the 'is colored'/'is red' relation by talking about different levels of mental and physical properties. Mental properties are 'higher-level' than their lower-level physical ones, but related in that when the lower-level one is exemplified, the higher-level one automatically is. Authors call this the Property-Dependence thesis (pg 564). Crucial to this is understanding that the mental property of 'thinking of Vienna' is a higher-level property of 'neuro-state x'. The causal relevance of the lower-level physical property can then become the causal relevance of the higher-level mental one of the same object in the same event. Yet not every property becomes causally relevant (though every one could be considered causally efficacious), since not every property of the object is a higher-level property of the lower-level, causally relevant physical property. Lastly, mental properties aren't considered constitutive of the event (I guess they are part of the 'characterizing properties' of the event). This is a supervenience relation that authors analyze (pg 565).

The next step is to show that mental properties can be causally relevant qua mental properties, not because they supervene on causally relevant physical ones. Authors claim that the framing of this problem by Kim is hostile to this possibility, so if they can show that the causal relevance of the mental is no more problematic than any other causal relevance claim, they have done enough (pg 567). At this point they draw the distinction between property instances and properties themselves. Causal efficacy is about property instances; causal relevance is about the properties themselves (which are instanced in objects of events). This distinction serves to show that there can be many physical properties instanced in an event that will not be causally relevant to some of the effects. Authors claim this is a by-product of having a metaphysics that allows multiple properties to be exemplified in the same event. In other words, some properties are relevant to some effects, other properties to other effects, and so on. So it isn't just mental properties but also other physical ones that may fail to be causally relevant (for a particular effect property exemplifying). Given that this is context-dependent and empirical, authors insist it would be 'churlish' to reject the mental. (pg 568)

The last objection is one that claims that Davidson's Principle of the Nomological Character of Causality (PNCC), combined with the position that the mental is anomalous (the mental doesn't figure in causal laws), makes mental property relevance impossible. (pg 571) Authors first argue that PNCC isn't the only enduring causal theory, and that proposals from the likes of Lewis have suggested co-variance as a theory of causation. (pg 572) These newer proposals remove the 'covering-law' requirement as necessary for causation, therefore still leaving open the possibility of mental property relevance along with physical property relevance. The lesson here is that 'overdetermination has to do with causal instances-- efficacy, not relevance' (pg 574). Authors argue that two 'co-variation relationships' (causally relevant properties) can exist 'harmoniously'.


7/20/07

Haldane, John - The Breakdown of Contemporary Philosophy of Mind

07/20/2007

Mind, Metaphysics and Value, ed. J. Haldane, 2002

This is a self-contained chapter in a collection that tries to recapture some of the ancient theories (Aristotle & Aquinas) that grappled with the mind-body problem. As the problem became different with Descartes, part of what it means to use ancient theories is to change what the problem is in the first place. The first part of the paper is devoted to author comparing what the scholastic philosophy world was like just before Descartes to what our current anglo-analytic philosophy world is like, suggesting that there is about to be a major revolution that will sweep all this work away as irrelevant, or at least outdated.

Author continues to stress how we need a different approach to the mind-body problem altogether. One example is to focus on the non-representational forms of intentionality, using quotes from Merleau-Ponty and Anscombe that talk about immediate, unmediated practical knowledge that the mind acquires from the world. (pg 57-8) To make his point that current understandings are in need of overhaul, author points to the following problems:

-- The problem of eliminativism: the unwelcome conclusion that denies mental content/experience
-- The problem of supervenience: the unwelcome conclusion that asserts some kind of connection between the mental and the physical, but fails to capture how (the problem given to the non-reductive physicalist)
--The problem of dualism: the unwelcome conclusion that separates the mind from the body in all important ways

With these three problems put in this way, it isn't clear how we're going to get out of them using our current reasoning. Author assumes that eliminativism is a bad conclusion, as is mentalism. Author focuses on the problem of supervenience/dependence. What is the nature of this relation? The Davidsonian reply is weak: you can't have a change in the mental without a change in the physical, and vice versa (then add an asymmetry that favors the physical as the primary causal source). (This doesn't go as far as type-type or token-token identity.) The problem here is that any change in the physical (in any part of the world) could account for a change in my mental state, a sort of 'global supervenience' that is absurd.

In the next discussion, we have a potential fix that tries to use perceptual externalism. The problem is that we don't have a good world-mind connection, so why don't we just say that part of what is in the mind is the things that are in the world? (pg 62) Author thinks this fails because of the problems of genuine mental causation. The first problem of genuine mental causation is basically the same as the earlier problems: are there two distinct processes, one (non-identical) process, or what? (pg 64-5) The second problem is that it seems as though the physical systems are doing all the causal 'work', with the mental being epiphenomenal, a byproduct of the physical. (pg 65-7)

Author turns to three possible arguments from Aquinas, two of which he would like to see revitalized. (pg 72) These are as follows:
2) Human reasoning uses not empirical particulars but abstract universals, which don't 'exist' per se
3) Thinking is self-reflexive: when I am thinking, I know I am thinking-- but not as a second-order thought but instead as part of the original thought.

Author thinks that pursuing these lines of argument might be fruitful.

7/13/07

Arnold, Jack & Shapiro, Stewart - Where in the (World Wide) Web of Belief is the Law of Non-contradiction?

07/13/2007

Nous, Vol 41 Issue 2, 2007

This is a paper that tries to establish that there are two interpretations of Quine, or perhaps more accurately, that Quine was of two minds when it came to the status of logical truths. Authors believe there was a 'logic-friendly' Quinean empiricism, one that placed the rules of truth-preserving inference (logic) outside the possibility of revision by recalcitrant experience. There was also a 'radical' Quinean empiricism, dating back to 'Two Dogmas', that did not exclude any of the rules of logic from possible revision. This is a concern because it could mean that the law of non-contradiction and 'ex falso quodlibet' (explosion) are subject to revision. Because authors are classical logicians, this is troubling to them. Authors first lay out the possible conflict between the logic-friendly Quine and the radical Quine, but then mostly focus on the implications for the radical Quine. The thesis of the paper is that if we take these rules of logic on board using the radical Quine's own principles of empirical confirmation, these rules will not have the robust status of universal rationality that many (most) logicians want them to have.

In the first part of the paper the authors go to the original sources often, trying to work out the radical Quine's theory (pg 279-281). Ultimately the authors conclude that it is part of Quine's holism that logic is included in the 'web' of beliefs. It just so happens that the 'theories' of logic lie very far toward the inside of the web, making recalcitrant experience more likely to change beliefs on the edges rather than those closer to the inside. This is complicated by other claims made by Quine that seem to say that logic cannot be changed, since any change in logic is a 'change of subject'. This is Quine's reply to dialetheists-- 'you're just changing the subject'. So, does a change in the theory of logic change the meaning of the theory? Quine says yes. (pg 280-281) Now we might be left to wonder how theories of logic could be changed at all without changing everything altogether.

The next matter for consideration is how the rules of logic should be used within the web of belief. Should they be used everywhere? One problem here is that Quine does not like to engage with normativity when discussing belief. Talk of 'should' would mean that the rules of logic have some sort of force other than merely helping to make predictions and avoid recalcitrant experience. Authors interpret Quine as talking about using causation (or constant conjunction) for belief formation and ordering (pg 283-4), not using normative logical rules.

Lastly, we have a problem of what 'recalcitrant experience' really is. Does it mean there are contradictions between one belief and another? If so, then it appears the law of non-contradiction has some priority and is immune from alteration. The authors use the same causal talk instead: data cause assent to belief A where the subject would previously have assented to belief ~A, and one cannot do both. No real talk of contradictions, just of beliefs that cannot be taken together because doing so is impossible. (pg 285-286)

The second portion of the paper uses only a descriptive picture of the realm of science and the realm of ordinary everyday beliefs in discovering the status of the law of non-contradiction and explosion. Authors look at two 'chunks' of belief: everyday beliefs, and scientific theories. In both cases, authors argue that the law of non-contradiction applies in some areas and not in others, and that (in the case of everyday beliefs) humans have a 'knack' for intuiting where to apply it and where not to, and that (in the case of science) often we are willing to accept contradictions as long as we get the predictions right (pg 286-292). The case for explosion (ex falso quodlibet) is even worse: there is no widespread acceptance of it in everyday usage or even in science.

The last part of the paper discusses what we are left with. The only hope of establishing a robust notion of the law of non-contradiction is to assert its epistemic usefulness. Certainly it fits into a logical system (classical logic) that is disciplined, consistent and orderly. But so is a paraconsistent system! This does not save the law of non-contradiction. (pg 292-293) Lastly, we might hope that the Minimum Mutilation Thesis preserves the truths of logic. Unfortunately, this seems consistent with paraconsistent/dialetheist approaches as well.

7/6/07

Blankenhorn, David - Ch 6 Deinstitutionalize Marriage?

07/06/2007

The Future of Marriage, Encounter Books, 2007

In this chapter author discusses what it means to deinstitutionalize marriage, and why he thinks SSM is going to do that. Author starts by discussing the claims of Jonathan Rauch, who thinks that SSM will actually strengthen marriage. Author criticizes this 'dream' as using a puerile definition or conception of marriage, one that is essentially private. Author then moves on to citing various leftist activists who are generally against traditional marriage but very much in favor of SSM.

After reciting a bunch of leftist writers who want to transform marriage and favor SSM for that very reason, author presents his main claim about the leftist thinkers: there are those who:
1) think marriage is a good thing and gays deserve to be brought in
2) think marriage is a bad thing and why not bring gays into it
3) think marriage is a bad thing and SSM will help to transform it

Author claims that we need to get clear about the fundamentals of marriage-- what it is for, what it is essentially about-- so that we can get clear on its public meanings. The real fight here is about the public meaning of marriage, because that is what the 'institution' of marriage really is-- the public meanings it has. Author claims that the arguments often used to support SSM talk about what marriage is fundamentally about, and these miss the point. (#1-5 on pg 139-140). Author claims that the definitions/conceptions of marriage used in these pro-SSM arguments are mainly about supporting 'close personal relationships', not marriage-- and there is a big difference between the two things.

Next are five claims that 'disconnect' the traditionalist view of marriage from what marriage is now. (#6-10 on pg 139-140) Author likens these to 'turning off the lights until it is dark enough to suit us' (pg 150), referring to taking out of the conception of marriage the following: monogamous sex, bridging the male-female divide, raising biological children with a mother and father, and having a 'natural parent'.

Lastly, author wants to reply to various leftist objections to marriage as a religious institution: marriage came about before religion in any modern sense of the word. Also, author points out that some claim that marriage as an institution has become weaker, making that a reason to allow SSM. Author sees this as totally backward-- if it is only a weakened institution of marriage that will accept SSM, we should strengthen marriage, and then, of course, this would exclude SSM. Many of author's conclusions end with a dilemma-- choose SSM and the various ideas that support and go along with it, or go with the other choice, a pro-marriage-as-a-robust-institution, anti-SSM package. "We must choose".

6/29/07

Blankenhorn, David - Defining Marriage Down... is no way to save it

06/29/2007

The Weekly Standard, Vol 12, Issue 28 04/02/2007

This is an article that is a shortened thesis of author's book The Future of Marriage. The claim is that marriage is a pro-child institution, perhaps the best pro-child institution that humans have ever created, and that this institution is declining. The problem for the author is that it seems that growing acceptance for the decline of marriage is correlated with beliefs that Same Sex Marriage (SSM) is acceptable as well. Author uses cross-cultural surveys that ask questions like the following:

1-People who want children ought to get married
2-One parent can bring up a child as well as two parents together
3-Married people are generally happier than unmarried people
4-It is all right for a couple to live together without intending to get married
5-Divorce is usually the best solution when a couple can't seem to work out their marriage problems
6-The main purpose of marriage these days is to have children
7-A child needs a home with both a father and a mother to grow up happily
8-It is all right for a woman to want a child but not a stable relationship with a man
9-Marriage is an outdated institution

The author wants to consider these questions as, generally, addressing the decline (or strengthening) of the institution of marriage. The issue for the author is that the countries whose populations generally agree to 2, 4, 5, 8, 9 also show support for SSM. This means, author concludes, that these ideas all go together, much like teenage drinking goes with teenage smoking, though perhaps neither causes the other-- they 'come in a bundle'.

The second argument author uses to support that SSM is generally related to the decline of the institution of marriage is that various leftist-socialists-poststructuralists who are generally against the institution of marriage are all for SSM, since they think that SSM will push traditional marriage off of its pedestal and open up a multiplicity of possible relationships.

REPLIES:
Rauch, Jonathan - Family Reunion

Democracy; A Journal of Ideas, Issue 5, Summer 2007

Rauch reviews Blankenhorn's book The Future of Marriage and agrees with much of what author tries to prove about the history of marriage as an institution, and its meaning as an institution. Rauch claims that Blankenhorn might appear to view marriage as a multidimensional personal, sexual, public, and child-bearing relationship, but Blankenhorn's main objection to SSM is that it hurts the child-bearing part. Rauch claims Blankenhorn needs to keep biological parents raising their biological child as the most central feature of marriage in order for the argument he gives to have any teeth at all. Since this is clearly not the sole, and perhaps not even the most central, aspect of marriage, the argument falls.

Secondly, Rauch paints Blankenhorn as telling us we only have two choices. Go towards the 'bundle' of ideas that reinforce traditional marriage, or go toward the bundle of ideas (that include the permissibility of SSM) that deinstitutionalize it. Rauch claims this is a false dilemma. Why can't we blend and mix policies? Rauch predicts that we will be able to do this.

Carpenter, Dale - Blog
The Volokh Conspiracy
March 27th posting, and subsequent

Carpenter argues in a number of ways. He claims that while Blankenhorn tries to avoid talk of causation (namely, the claim that SSM causes the other beliefs about marriage to rise), Blankenhorn is subtly sneaking causation into the mix. Of course claiming causation would be fallacious, since correlation does not imply causation. Blankenhorn, in a side blog (The Family Scholars Blog), seems to agree with him that he is in fact talking about causation, not correlation. [why?!?] Carpenter rightly points out that correlation cannot prove causation, and that SSM comes after the rise in the other beliefs detrimental to marriage, so causation, unless it is somehow backwards, is impossible.

Secondly, a major argument of Blankenhorn's is that several liberal thinkers are all for SSM since it will deinstitutionalize marriage. Carpenter replies with several liberal thinkers who worry that SSM will re-institutionalize marriage. Carpenter thinks that, probably, neither result will actually occur.

6/22/07

Sunstein, Cass & Vermeule, Adrian - Is Capital Punishment Morally Required? The Relevance of Life-Life Tradeoffs

06/22/2007

AEI-Brookings Joint Center For Regulatory Studies, March 2005

Authors argue that government legislation always traffics in encouraging and discouraging courses of action so that, presumably, social goods are obtained. If it is the case that the death penalty is a deterrent for murder, then there is a life-life tradeoff in instituting the death penalty. It is a life-life tradeoff because a system of laws without the death penalty spares a few (mostly guilty) lives at the cost of numerous innocent ones, while a system of laws with the death penalty trades those mostly guilty lives for numerous innocent lives saved.

The major thrust of this paper lies in recent studies that conclude that the death penalty does act as a deterrent, one such study claiming that there were, roughly, 18 murders deterred per legal execution. There is a 'threshold' effect indicating that if too few executions take place per year, or if they are too capricious, this deterrent effect doesn't work. Furthermore, even more murders are avoided as the time between the trial and the execution is shortened. The authors report this evidence and do not argue with the studies. They are interested in the moral issue that arises if the studies are true, so they simply grant the evidence as true and start from there.
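
To make the granted figure concrete (illustrative arithmetic of my own, not a calculation from the paper): on the study's numbers, a jurisdiction carrying out, say, 10 executions a year would, ex hypothesi, be deterring on the order of

\[
10 \times 18 = 180 \text{ murders per year,}
\]

so declining to carry out those executions would cost roughly 180 innocent lives annually. This is the scale the 'life-life tradeoff' label is meant to capture.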

Authors want to avoid the reply that this only applies to a consequentialist and not to a deontologist. They argue that it applies to a consequentialist in a straightforward manner. For the deontologist, there may be a way out, but it is unpalatable. First off, authors frame the issue as a trade between numbers of lives: 1 vs 18. Under this trade, it seems that even the deontologist must hit a 'threshold override'. (pg 13-14) Authors claim that other objections from the deontologist are attributable to the acts/omissions distinction.

The biggest part of the paper involves replying to what they take to be the central principled objection to the death-penalty-as-deterrent argument. This argument goes as follows:
Central Objection: The death penalty is an affirmative action on the part of the state to kill another human being, while the deaths its absence allows are affirmative actions of citizens, not endorsed by the state at all. (This can be broken down into two separate objections: intentional and causal) (pg 15)
Intentional: under death penalty, the state intends to kill, while without death penalty, the state doesn't intend to kill, the citizenry does.
Causal: under death penalty, the state causes a death, while without death penalty, the causal chain is much longer, and ultimately the cause isn't the state, it is its citizens.
Authors claim these objections, and a few like them, rest on a hidden appeal to the distinction between actions and omissions. Authors argue this distinction does not apply to governments. Governments are always choosing what package of legal policies will optimize the goals of the state & citizenry. (pg 16-7) In this case, imagine two packages that are entirely equal except that one includes capital punishment and the other doesn't. The evidence suggests that the capital punishment package will result in far fewer murders-- isn't that the one to go with?

Other objections to capital punishment are then addressed: the innocent convict, the randomly assigned execution, the racially motivated execution. Mainly authors reject these as failing to take seriously the idea that a regime of capital punishment, no matter how flawed, will still save numerous entirely innocent lives. Authors reply that these scenarios also apply to the innocent lives being lost because would-be killers aren't being deterred (as they would be if capital punishment were used). (pg 20-24)

Arguments against capital punishment as a tool for deterrence now roll in: there are other ways to deter murder, do those instead. Authors say: fine, fine; but don't forget that these other ways actually have to be committed to, and have to be practical, feasible, and proven. So far, it seems capital punishment is a good policy, at least until other policies arrive. There is no reason to be against it, since the evidence (ex hypothesi) supports that it works. Authors peg this argument, like most of their previous ones, to the evidence. If the evidence turns out faulty or not based on the regime of capital punishment, the moral conclusion would change. But as long as the evidence suggests that the life-life tradeoff is 1 to 18, capital punishment is morally obligatory.

Lastly, because this is a life-life tradeoff, they can't be accused of a slippery slope (pg 27) or extending their arguments to other domains (pg 40). Executing someone for rape may deter, but it isn't a life-life tradeoff. Therefore the moral calculus is different, and it may not be morally requisite.

Authors suggest that failure to take the life-life tradeoff seriously might be a cognitive error of not taking 'statistical lives' as real ones. (pg 32-35)

6/15/07

Kent, Bonnie - Virtue Theory

06/15/2007

The Cambridge Companion to Medieval Philosophy, A. McGrade ed.

Author begins paper with discussion of ancient virtue theories and the changes they underwent in the middle ages. The difficulty with virtue theory is that it appears circular-- in order to get virtue you need to perform right actions and acquire the right 'habitus', but in order for an action to be right you need to have virtue. This circularity was not lost on the medievals, who also made the entire virtue theory far more complex by adding religious virtues, ones only granted by God's grace, and by adding the concept of the Will, which could do whatever it pleased, regardless of virtuous 'habitus'.

Author gives a summary of the history of medieval thinking on the topic. Virtues became classified as a 'habitus' in the 12th century, a word for which there may not be a good English translation; it roughly approximates 'habit' or 'disposition'. This may have helped the Medievals distinguish between the habitus and the will. (pg 8)

Aquinas talked about two sets of virtues, 'civic' ones and 'divinely infused' ones. These were challenged. (pg 10) Divinely infused virtues were in many ways similar to civic ones, except that only the former were divinely given, gifts of God's charity. But even these divinely given virtues could be countermanded by an act of the free will, so what good were they anyway? Medievals began to question even this. Author believes that John Buridan rightly avoided the circularity that threatened virtue theory in his own commentary on Aristotle's Ethics. (pg 15)

6/8/07

Aune, Bruce - An Empiricist Epistemology Ch 6 "Memory And A Priori Inference"

06/08/2007

Unpublished Manuscript

Author starts with a discussion of memory and how it is essential for any theory of knowledge, since what is presently observed quickly becomes the past, and the past is usually only accessible through memory. The issue with memory is more complex than it is with observational knowledge, since memory involves inference, e.g. the inference that what I'm remembering now is what I observed then (a fact about the past is inferred using the present). To be justified in making this inference we need some sort of backing for fallible inference in general, and this is the thrust of the chapter.

Author starts with Hume (pg 204), who considered this type of inference 'experimental', meaning that it is reasoning about cause and effect, specifically a similarity between examined causes and effects and unexamined ones. This is general inductive reasoning, the kind of reasoning we now need support for.

The difficulty with giving a rational justification for induction first lies in giving the proper account of what needs to be justified. Generally, we want to think that induction is assuming that As are Bs in all cases based on seeing that As are Bs in the examined cases. This is obviously open to critique, and various fixes have been offered, prior to attempting the justification.

Lycan & Russell have attempted justifications, but each time it appears there needs to be some prior known 'representative sample' or some general way of specifying how to get one. Neither is available, author claims, without further empirical considerations. Bonjour proposes an a priori answer that makes use of 'metaphysically robust' regularities that we observe and are therefore confident using induction on. Author considers this naive, since there are many cases where we use induction but aren't assuming 'metaphysically robust' regularities (e.g. pollsters). Also, the discussion of 'robustness' assumes a bias toward our predicates rather than 'grue-like' ones; author brings up the infamous 'grue' counterexample and argues that there is no principled way to exclude these cases from Bonjour's account: no good way to presume that the future will be like the past.

The next proposed justification for induction comes from Inference to the Best Explanation, which argues that the best explanatory account of the evidence is the one most likely to be true. Author disagrees, since we neither consider nor even know all the relevant alternative explanations before we pick the one we want to go with.

Author instead proposes using probability theory and Bayes as a way to justify induction (pg 221-4). Under this theory, we need prior probabilities assigned to background beliefs before we can assign a probability to new hypotheses; however, the system should, ultimately, be self-correcting, so improperly assigned probabilities will eventually be changed to reflect newly acquired evidence and supported hypotheses. The worry, though, is what initial beliefs to accept to get the whole system running. Here author considers an attempt by Chisholm (pg 230), which he amends and accepts: initial beliefs must be given 'weak acceptance on a trial basis'. Author offers Bayesian theory as the alternative to Inference to the Best Explanation (pg 234).
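
For reference (my own gloss of the machinery, not the author's own exposition), the updating rule doing the work here is Bayes' theorem, with prior probabilities revised as each piece of evidence comes in:

\[
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E \mid H)\, P(H) + P(E \mid \neg H)\, P(\neg H)}
\]

For example, a hypothesis given a tentative prior of 0.5, where the evidence is three times as likely if the hypothesis is true (0.9) as if it is false (0.3), gets a posterior of 0.45/0.60 = 0.75. Repeated updating of this kind is what lets badly chosen initial probabilities wash out over time-- the 'self-correcting' feature the author appeals to.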

With a new Bayesian backbone for induction, we can revisit the skeptical BIV (Brain-in-vat) argument and assign a low probability to it, since no evidence counts directly in favor of the 'surplus content' that BIV asserts (pg 237).

6/1/07

Aune, Bruce - An Empiricist Epistemology Ch 5 "Observational Knowledge"

06/01/2007

Unpublished Manuscript

This is a chapter mainly focused on the foundations of knowledge, and dealing with brain-in-the-vat problems. Author eventually concludes that there is no empiricist proof against brain-in-vat (BIV) problems, and that the most recent answer to BIV problems given by Putnam fails as well, so this cannot be used as an argument against empiricism.

When dealing with prototypical empiricist foundations of knowledge, author claims there are two sources: observation and memory. This chapter deals with observation. The problem with observation is its fallibility: we have optical illusions, phantom sounds, hot things sometimes feel cold, etc. The commonplace reply to this is that we must be careful to question our observations, not to take them at face value, to examine their context, their sources, and so on (Locke). The philosophical problem with this reply is that it tries to correct empirical evidence with more empirical evidence-- a regress or vicious circle.

The answer proposed by people like Russell, Moore and Carnap was that there were immediately known 'sense-data' that we were infallible about and that served as the inferential foundations of the rest of our knowledge. This response eventually failed because it put a 'sensuous curtain' between perceivers and the real world: the real world becomes a Kantian 'thing-in-itself' that is unknowable. (pg 171) Arguments requiring the basis of empirical knowledge to be a non-inferential foundation go back to Aristotle, who showed that an infinite regress results if all our knowledge is inferential. (pg. 173)

However, author argues, we do not need sense-data for a non-inferential basis for our empirical knowledge. Author has developed a framework for what he calls 'imperfect' empirical knowledge, which is knowledge that could be false (ch 1). Working from this, we can accept that there is a non-inferential base from which empirical knowledge is made possible, but that non-inferential base can simply be (fallible) observational beliefs. Author tells a story of how, when we were young, we took most of our observations at face value, particularly when they didn't conflict with the observations of others. As we became more critical, we began to form generalizations and theories, so our observations became subject to our theories. So the foundation for empirical knowledge is other empirical knowledge. No regress or vicious circle looms, since in all this 'knowledge' talk we are talking about 'imperfect' knowledge.

Author considers alternatives to the Russell/Carnap 'Foundationalism' that are also alternatives to his own theory. One such alternative is Bonjour's 'Coherentism', which asserts that knowledge is justified as a whole, and that particular beliefs are justified by their being able to fit (or not) into that coherent whole. Author claims this is too tough a standard to hold anyone to: everybody has gaps, even the scientific community does!

The problem with empirical knowledge as Hume has sketched it is that it is susceptible to external world skepticism, or a more modern version, BIV. If there were a good answer to this, we would like to hear it. Putnam suggests the answer of Semantic Externalism: the reference relation involves a connection to the actual things referenced ('meanings just aren't in the head!'). Putnam got grist for this argument by successfully arguing that a Turing machine might be thinking, but it would never be referencing if it couldn't sense the objects it was talking about. So the anti-BIV argument runs as follows:
1) if someone S is a BIV, the reference of his words would be electrical impulses, not actual things
2) therefore S's claim 'I am BIV' is not referring to actual things like brains and vats, so he cannot be describing himself as a BIV

Author argues this fails, since it doesn't alleviate our concern: either S has an 'exotic' meaning, or S is saying something false about a real person. Author considers other problems with Semantic Externalism. The major problem is that there is no clear story about how reference relations come about. Putnam describes 'language entry and exit rules' that should correspond to behavior and experience, but this fails to capture the full extent of our ability to reference. Author claims that these simplistic rules sound a lot like the verificationist's claim that everything with meaning must be verifiable. If there is room for unobservables to be referenced (like H2O), then it seems there is still no good story about cases of genuine reference in Semantic Externalism. Since there is no clear-cut winner in this, BIV is still an issue for both empiricists and non-empiricists alike.

5/25/07

Aune, Bruce - An Empiricist Epistemology Ch 4 "Properties And Concepts"

05/25/2007

Unpublished Manuscript

Author begins chapter with a discussion of the three leading theories of properties. Properties can be considered features of an object, such as 'red' or 'spherical'. There are theories promoted by David Armstrong that claim that properties exist in some universal sense and that objects partake in them-- similarity between objects with similar properties therefore exists by virtue of their sharing the same property. There are 'trope' theorists like Donald Williams or Keith Campbell (and possibly Aristotle) who claim that each property instance is unique, with tropes resembling one another more or less depending on the context, object, etc. The final theory, the one author prefers, is a conceptual theory (Kant, Frege): the theory that a property of an object is a concept that the object falls under.

Author has two main criticisms of the first two theories. The first theory (Armstrong's A-theory) fails because it must include the object itself that has been stripped of all its properties, thus a 'bare particular'. If that wasn't bad enough, it seems that if properties exist, they too must have properties that distinguish them, which means that properties have 'bare particulars' too.

The second theory (trope theories) fails because we need to distinguish between all the different tropes out there, which means that each trope itself is made up of further tropes, and so on (pg 132-3).

Both theories fail, author suggests, because they have an incorrect view of predication as ascribing a property to an object. Author returns to the theory of properties he likes, the Fregean F-theory. On this theory, objects 'fall under' concepts. Thus an object is red because the object itself is red, not because it has the property of redness. One upshot of this theory is that the concepts of an object can be used in propositions without much mutation. What is a concept? A concept is something associated with the thing it conceptualizes, and someone has a concept when she can use it at the right times and in the right places. (pg 145)
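
A schematic reminder of the Fregean picture (my own gloss, standard textbook Frege rather than the author's wording): predication is analyzed as an object falling under a concept, with the concept as the 'unsaturated' predicate part:

\[
\text{'this ball is red'} \;\Longrightarrow\; \mathrm{Red}(b), \quad \text{i.e. the object } b \text{ falls under the concept } \mathrm{Red}(\,\cdot\,)
\]

On this reading there is no extra entity 'redness' that the ball possesses; the predicate simply classifies the ball.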

One problem for the Fregean theory is that we are unclear what the 'falling under' relation actually is. For this, author uses the Sellarsian suggestion of distributive singular terms as a way to sort objects under concepts-- a token of a type that is distributed. There are some criticisms of Distributive Singular Terms (DSTs) that author deals with:

1) Not all lions are tawny. Response: that's fine, just restrict the DSTs to typical or ideal examples

2) DSTs sometimes are true due to their distribution, not due to the properties the objects have: 'the grizzly is found in North America'-- no one grizzly is all over North America. Response: those aren't DSTs!

Author makes sure to agree with Sellars that the concept 'red' and the word 'red' are expressing the same thing too. 'The concept "red"' is used as a DST to distribute to all proper times when 'the concept red' is employed.

Author revises the Fregean version of concepts being the connectors between predicates and objects. Now, predicates directly describe objects, or directly classify objects, without 'conceptual mediation'. This would be much like demonstratives or names. The question now is where predicates get their conceptual function. Author: by usage (pg 154-5).

The next step is dealing with propositions. Author uses a Sellarsian 'distributive' treatment for propositions: propositions are similar because they distribute to, ultimately, beliefs about their proper usage. (pg 158-9)

This is an immensely fast-paced and difficult chapter that is the bulk of the sorting out between concepts, predicates, properties and propositions.

5/18/07

Aune, Bruce - An Empiricist Epistemology Ch 3 "Empiricism and the A Priori"

05/18/2007

Unpublished Manuscript

In this chapter author gives a positive account of analyticity and how it might work in relation to so-called propositional attitudes. The primary aim is to show that cases of supposed a priori knowledge are just cases of analyticity in language or in concepts.

The chapter begins with a brief survey of the origins of analytic truths as conceived by Kant. For Kant, an analytic truth (similar to Leibniz) is a truth where the predicate being ascribed to the object is already contained in the concept of the object itself. Author considers Kant's work to be acceptable, but only applicable to a limited class of judgments. (pg 82) Frege attempted to build on Kant by claiming that an analytic truth is a truth derivable only from general logical laws and definitions.

Frege and other empiricist philosophers were generally considered to be refuted by Quine in his paper "Two Dogmas of Empiricism". Author describes the problem: we cannot define synonymy without using 'analytic', and vice versa. The two are involved in a vicious circle of definition. Further, any attempt at lining up all the sentences containing supposedly synonymous terms will itself contain the term 'analytic'. Quine later retreated from his earlier attack and admitted that there was some common-sense appeal to analyticity. He gives a rough definition: a sentence P is analytic for a native speaker S if S learns the truth of P by learning the usage of one or more of the words in P. This truth must be deductively closed-- the steps to analyticity must 'count as analytic in turn'. (pg 88) But still, as with Kant, Quine relegates analytic truths to logical truths and linguistic tautologies.

Author describes his version of analytic truth as one developed from Carnap's, which consists in the specification of a formal language-system that has semantic rules and definitions. Author uses the example of specifying how 'if...then' applies, a usage that is separate from common usage. Once these specifications are in place, the supposed counterexamples to modus ponens and modus tollens are clarified and dismissed (pg 97-98).
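
As a reminder (my own note; the rule schemas are standard, not quoted from the book), the two rules at issue are:

\[
\text{Modus ponens: } \frac{P \rightarrow Q \qquad P}{Q} \qquad\qquad \text{Modus tollens: } \frac{P \rightarrow Q \qquad \neg Q}{\neg P}
\]

The supposed counterexamples trade on the gap between ordinary English 'if...then' and the stipulated usage; once the semantic rules of the formal language fix the latter, the counterexamples no longer get a grip.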

Author lastly turns to the problems with necessary truths that Kripke has shown. One example is the origin claim: an item cannot have its origins in a different hunk of matter than it does now. (pg 112-) Author goes about showing what the proof for this would be. Assuming that 'distinctness' is defined properly, author claims this Kripkean claim can be considered analytic.

Lastly, author considers psychological states, which are not the same as propositions. If propositions can be analytic, can psychological states like beliefs be analytic too? Author reviews the classical notion of propositions as expressing the sense of a sentence, with words having conceptual content. With the theory of names as rigid designators, this classical notion is undone. Author discusses the ramifications of this failure and also discusses conceptualism, which says that propositional attitudes have 'contents' rather than 'objects'. Using this method, at least, the empiricist can have analyticity in psychological states.

5/11/07

Aune, Bruce - An Empiricist Epistemology Ch 2 "A Priori Knowledge and the Claims of Rationalism"

05/11/2007

Unpublished Manuscript

This chapter is devoted to casting doubt on the anti-empiricist rationalist philosophies that claim support from supposed a priori knowledge like "nothing can be both green and yellow at the same time" or 'not (P and not P)'.

Author wants to distinguish between a priori knowledge and analytic knowledge. At issue here is whether claims commonly thought of as a priori (not the necessary identity and contingent a priori claims made by Kripke, by the way) can be proved or verified. The proof would be possible given a combination of axioms and inference. But now we face another question: what axioms do we use? Here the rationalist believes the axioms used are knowable by direct intuition. Author complicates this picture by showing that axioms are superfluous because they are derivable from rules, and further that the claim that anything is knowable by direct intuition is dubious (pg 46-). The contrary view is the empiricist's, whose standard claim is that the rules of inference are underwritten by convention that aims at preserving truth using 'semantical rules' (pg 42, top).
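
To illustrate the 'axioms are superfluous' point (my own example, not necessarily the author's): in a natural deduction system, a would-be axiom like $P \rightarrow P$ is simply derivable from the rules, so it needs no independent intuitive backing:

\[
\begin{aligned}
&1.\ P && \text{assumption} \\
&2.\ P \rightarrow P && \text{from 1, conditional proof (discharging the assumption)}
\end{aligned}
\]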

Author grants that it is true that some things are immediately apprehensible, but these are things like recognizing faces, or my own hand. Author does not believe that recognizing that things are of a given kind is as intuitive. For instance, we need to recognize the appropriate application conditions of modus ponens before we can use it in inference. Without taking a situation (or set of propositions) to be an instance where modus ponens properly applies, we can easily go wrong. Author gives two examples, a moral case and a geometrical case.

Author then takes on the standard cases that make up the rationalist backbone. E.g., the law of non-contradiction is complicated by non-self-referential liar-like paradoxes. The law of the excluded middle is complicated by vague predicates like 'is bald', 'has a beard', 'is a tree/sapling', 'is a child/adult'. Author also discusses 'nothing can be both red and green all over at the same time' and indicates that this happens to be more a matter of physiology than a property of the world: as an exercise, change 'red/green' to 'yellow/green' and you can have 'yellowish-green' and 'greenish-yellow'.

Lastly, author reviews the more modern cases of Kripkean necessary identity and contingent a priori. He claims these can possibly be shown to be analytic, and he will try to do so in the next chapter.

5/3/07

Aune, Bruce - An Empiricist Epistemology Ch 1 "Knowledge and Analysis"

05/04/2007

Unpublished Manuscript

The author has introduced in the preface how he intends to do some work of defending the empiricist notion of the analytic (or at least Carnap's version of it) against Quine's assaults.

The first chapter is devoted to sorting out some of the different senses of knowledge, how the different senses sometimes work in everyday language, and then to give a 'rational reconstruction' of two senses of knowledge.

Author first points out the various disagreements regarding the actual method to employ when trying to answer the question 'what is knowledge?' Is it an analysis that points to necessary and sufficient conditions? Is it conceptual analysis that tries to capture all (or most of) our intuitions? Is it trying to identify a property that is true of us when we know something? This last approach seems to be taken by Chisholm's followers, and it assumes that such a property exists even before we can find one. Author reviews the rejection of 'essential properties' talk, e.g. the famous critique given by Wittgenstein regarding a 'game', and so on. Given the difficulty, since the Gettier examples, of exhibiting a property of knowledge, it is unlikely to be located.

Author reviews the Gettier cases and points out that if we have two senses of knowledge, one being the ancient traditional approach of 'rational certainty' and the other one based on inconclusive evidence, the Gettier cases only refute the latter, not the former. One reply to the Gettier cases for inconclusive knowledge is Lewis' contextualism. Contextualism is a positive account that says that a subject S knows P when S has eliminated all relevant alternatives to P that are reasonable to consider given the context that S is in. Author points out that his major problem with this theory is that it requires relevant alternatives to P to be eliminated, but sometimes there are none.

Author proposes to avoid Gettier-like counterexamples by requiring access to the evidence that makes the proposition true. Author wants to keep this 'making-true' concept elementary enough to avoid endorsing Armstrong's elaborate notion of 'truth-makers'.

At the end of the chapter, author gives his rational reconstruction of inconclusive or imperfect knowledge: S has imperfect knowledge of P in a context C only when (i) P is true, (ii) S has the information that P, (iii) the evidence for P is high enough in C to be considered adequate, (iv) S has evidential access to a sufficient truth-maker for P.