9/28/07

Kim, Jaegwon - Physicalism, or Something Near Enough

Physicalism, or Something Near Enough, Ch. 6, Princeton University Press, 2005

This book chapter isn't too different from author's other recent papers. The major claim is that physicalism is mostly true, able to account for everything except the non-functional aspects of qualia. Author uses the term 'ontological physicalism' for the view that material things in space-time are all there is. The first part of the chapter argues for accepting this view.

Why accept ontological physicalism?
Causal Closure:
The first point was that causation [at least as we understand it] requires a 'space-like structure' with objects that are identified within that space. Causation takes place in the physical realm-- we can't figure out how it could be otherwise. So if we are to have mental causation at all (that is, things in the mind causing things in the world), then we need to believe that those things in the mind are physical. An objection to this comes from our desire to make ourselves special, or to claim that mental properties 'emerge' from physical ones. All of this has failed, author claims (pg 152).

Causal Exclusion:
The second argument author uses is in reply to a dualist claim that structuring causation as only physical is question-begging. So author instead gives an explanation of what causes a finger to twitch while in pain, using only physical processes. But now we also have mental properties, and both the physical process and the mental property are supposed to cause the finger to twitch. This sounds like overdetermination with two distinct causes! Author claims that property dualists haven't been able to resolve this problem.

Alternatives to ontological physicalism might be Davidson's anomalous monism, Putnam-Fodor functionalism, non-reductive materialism, and emergentism. The upshot of these views, though, is that either the mental can't be causal or the mental is irrelevant. Since we want to save the causal efficacy of the mental, we need to reduce it to the physical. This is an if-then:

If the mental has causal efficacy, then it needs to reduce to the physical.

Reductionism: author lays out what he considers the principles of reduction. Reduction takes place when previously named 'concepts' or functional place-holders that describe causal entities/properties are shown to have mechanisms that underlie their functions/causal powers (the stock example is the gene, functionalized as whatever mechanism transmits hereditary information and realized by DNA). Of course this allows for multiple realizations-- author denies this is a problem for physicalism (pg 164).
Reduction is a three step process: (pg 164)
1) Identify/name a concept that has a function or plays a causal role
2) Begin the scientific work to find the 'realizers' of this functional property
3) Develop an explanation of how the lower-level mechanisms perform the specified causal work

Author then claims that once step 1 is accomplished (the property is functionalized), we can assume that the property is reducible (pg 164). Now, what parts of the mind are reducible, and what parts aren't? Psychological states like beliefs, desires, thoughts, etc. are. Qualia aren't, at least with respect to their qualitative character. That we can tell the difference between pink and light red is a discriminatory capacity that can be reduced, but the "look of red" is just mental residue. Author's suggestion: live with the residue, and we have, mostly, ontological physicalism with some mental residue. Note: throughout the chapter author seems skeptical that total zombies could exist (pg 169).

9/21/07

Gillett, Carl - Understanding the New Reductionism: The Metaphysics of Science and Compositional Reduction

Journal of Philosophy, Vol. CIV, No. 4, April 2007

Author begins by discussing the recent move by Kim, a reductionist, from semantic reduction to metaphysical reduction. This eschews the long-standing 'Nagelian' approach of reducing entities in the 'special sciences' (basically, any science that isn't particle physics) to more elementary ones by way of 'bridge laws'. Instead, the focus is on the metaphysics and ontological relations involved in 'mechanistic explanation'. Mechanistic explanation is, basically, an explanation of how a device or entity works by describing its parts, how each part functions, and how the parts work together. One suspicion concerns the term 'metaphysics', but author claims that this is the 'metaphysics of science': a careful, abstract investigation of ontological issues as they arise within the sciences (i.e., not a priori). (pg 194-5)

Author discusses the main problem with previous reductionist attempts: 'bridge laws' or other descriptions of the lower-level entities were unavailable, so the higher-level 'emergent' properties couldn't be explained in terms of the lower-level entities. Thus the antireductionists argued that the entities, terms, and properties of the (higher-level) special sciences were ineliminable for proper scientific understanding and experimentation.

Author launches into an example from neuroscience in which many diverse lower-level entities compose higher-level ones (pg 198). What is interesting in the example is that the higher-level changes that take place aren't caused by lower-level changes-- the changes follow from the compositional structure of the entities involved. Author argues that this isn't causation but noncausal determination (pg 199-200), and that a familiar kind of scientific practice is going on here: explanation by describing composition.

Focusing on the compositional nature of higher-level entities reveals the following: (1) the powers ascribed to the higher-level entity have no lower-level analogue. This was the antireductionists' point. (2) The lower-level entities that make up the higher-level ones are qualitatively different in kind. This indicates a 'many-one' relation, not an 'identity-identity' relation.

Earlier, author explained how he individuates properties, using a Shoemaker-style 'causal theory': a property is individuated by the powers it confers on the objects that instantiate it. A 'power' is an entity whose possession allows an individual to enter into a certain process (pg 201). Basically, the powers of the lower-level entities compose the powers of the higher-level entities iff the activated powers of the lower-level entities, taken all together, activate the powers of the higher-level entities (pg 202), and not vice versa. There is some odd discussion/usage of 'manifestation grounds'. Given that there is asymmetry in the 'manifestation grounds', there is room to argue for reduction on an ontological level while leaving the semantic aspects of the special sciences intact and ineliminable.

Author formulates his 'Argument from Composition': (pg 204-5)
1- Properties are individuated by what powers they grant to their instantiators
2- The properties belonging to the lower-level entities are sufficiently efficacious for manifesting higher-level powers
3- Higher-level properties grant no special powers to their higher-level entities, and are therefore ontologically dispensable.

In the remainder of the paper, author shows that though ontologically dispensable, higher-level entities and properties are not semantically or epistemologically dispensable, being necessary for science and scientific practice. Thus the view grants some of the antireductionists' points (perhaps even their main contentions) while remaining reductionist about what matters most: ontology.

9/14/07

Clark, Andy - Curing Cognitive Hiccups: A Defense of the Extended Mind

Journal of Philosophy, Vol. CIV, No. 4, April 2007

This is written in response to an earlier paper by Rupert proposing an alternative to the Hypothesis of Extended Cognition (HEC), namely, the Hypothesis of Embedded Cognition (HEMC). The first part of the paper replies to Rupert's two major points:
1) The external aspects of so-called cognition under HEC look dramatically different from the internal aspects, creating a disanalogy and making HEC look like a stretch.
2) The proper study of cognitive science depends on taking the 'stable persisting human individual' as the subject matter. If we lose sight of this, we risk losing a lot.

Author first replies to 1 as follows: much of this objection comes from taking the parity principle too seriously. What we're trying to get at is equality of opportunity for all sorts of processes. If we encountered alien neurology, would we say it isn't cognitive because it doesn't look like ours? No. Similarly, just because the fine-grained differences are significant doesn't mean hybrid processes can't play a constitutive role. (pg 167-8)

Author replies to 2: In general, there is little to worry about. Most of the time the stable, persisting human individual's brain is the sole instantiator of cognition. But sometimes there are important 'soft-assembled' resources that also serve to instantiate parts of the cognitive process. We shouldn't be worried that science can't get this right. (pg 169-70)

Author discusses an experiment that deals with programming a VCR, with subjects sometimes seeing the screen, sometimes just trying to remember it, and other times able to remove a barrier. The goal was the fastest programming. What was found was that sometimes memory was used, sometimes visual input. The study concluded that what was important was the fastest, least time-wasted method of computation, whether that used solely internal information, external information, or a mix. This led to the Hypothesis of Cognitive Impartiality: problem solving doesn't privilege in-the-head resources over external ones. (pg 174) The risk here is to think that there is a centralized processor that seeks the most expedient route to solve the problem (giving priority to the brain). Here, author tries to distinguish between two 'explanatory targets':
A) The 'recruitment of the extended organization itself' (here the brain plays a crucial role)
B) The 'flow of information and processing in the new soft-assembled extended device.' Author wants to focus on B with HEC/cognitive impartiality and claims HEMC blurs this distinction. (pg 175) This leads to the conclusion that cognition is organism-centered, even if not organism-bound.

Author then goes into an elaborate, extended discussion of multiple studies and thinkers who have worked on gesturing. (pg 176-183) The conclusion is that gesturing is not just expression that helps make an already formed point, not just the partial offloading of spatial tasks that are easier to do in real space, and not just a marker or crude reminder about what to think about or remember next, but is instead constitutive of the act of thinking about certain things. Author wants to be sure to avoid treating gesture as merely a causal aid to our thinking; it is constitutive of that thinking. [Interesting problem in the experimentation; see top of pg 179]

Author concedes that there is an asymmetry in that neural processes are always considered cognitive, while gesturing by itself isn't. (pg 183) However, this is not to invalidate the systematic cognitive process. Take, as an alternative, a single neuron: it isn't cognitive per se either; only when placed into the larger context is it, and the same goes for gesture. Further, just because the gesturing causes a neural process doesn't mean that the gesture itself is dispensable or somehow not part of the cognition. Author argues that other ways of (in principle) bypassing neural processes to get the end-result brain state would also satisfy this condition, yet would be considered cognitive (183-4) [not according to B, above!].

There is a real danger of falling into a 'merely causal' reply about external processing-- but, author argues, this danger is the by-product of believing that everything external can only be causal! Author then elaborates with the example of raindrops on the window: they cannot be part of cognition, even if they prompt me (cause me) to think certain thoughts about poetry, nature, etc. They 'are not part of ... any system either selected or maintained for the support of better cognizing' (pg 184) [teleological?]. The 'mere backdrop' or 'merely causal' aspects of the rain contrast with constitutive, self-stimulating loops (like a turbocharger, where the engine's exhaust spins a turbine that forces extra air into the engine, making it more powerful) that are part of the process of larger cognitive accomplishments.

Author discusses a simple mechanical robot that can instantiate 'exclusive or' just by using 'inclusive or' and 'and' plus a few simple rules of behavior. (pg 185-8) The point of this description is that even a simple machine can instantiate more complex logical manipulation using less complex rules, and it can then use this more complex output in further processing if it has some sort of self-feedback device that reports back to itself what it is instantiating; this is supposed to be analogous to humans' overt behavior playing a part in our further cognitive tasks.
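Since the point here is computational, a minimal Python sketch may help fix the idea. This is not a reconstruction of the robot Clark actually describes; the function names and the 'suppression' rule standing in for negation are assumptions of the illustration. It only shows how exclusive-or behavior can fall out of an inclusive-or unit, an and unit, and a simple behavioral rule, with the device's own overt output then fed back in as input for further processing.

# Toy sketch (an assumed setup, not Clark's actual robot): a device with only
# an inclusive-or unit, an and unit, and one behavioral rule behaves as if it
# computed exclusive-or; a self-feedback step lets it reuse its own overt
# output in further processing.

def inclusive_or(a: bool, b: bool) -> bool:
    return a or b

def both(a: bool, b: bool) -> bool:
    return a and b

def overt_response(a: bool, b: bool) -> bool:
    # Behavioral rule: respond when the OR unit fires, but suppress the
    # response whenever the AND unit also fires. The overt behavior matches
    # exclusive-or even though no XOR component exists inside the device.
    return inclusive_or(a, b) and not both(a, b)

def respond_with_feedback(a: bool, b: bool, later_input: bool) -> bool:
    # Self-feedback: the device registers its own overt response and feeds it
    # back in as an input to a further step, loosely analogous to overt
    # behavior (e.g., gesture) feeding later cognitive processing.
    observed = overt_response(a, b)
    return inclusive_or(observed, later_input)

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            # Prints the exclusive-or truth table: True only when exactly one input is True.
            print(a, b, overt_response(a, b))

The suppression rule is doing the work of negation here; the philosophical point is just that richer logical behavior can emerge from simpler components plus behavior, and can then be reused via self-monitoring.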

Author concludes by saying that we should not fall into thinking HEC, HEMC, or some other hypothesis is true until we have done much more experimentation-- but that we certainly shouldn't be biased against HEC.

9/7/07

Campbell, Sue - Our Faithfulness to the Past: Reconstructing Memory Value

Philosophical Psychology, Vol. 19, No. 3, June 2006

Author begins by noting the common distinction between memory and imagination, the former being concerned with accuracy or truth. However, author wants to expand what it means to remember properly: not only must memory be faithful, but remembering should also get right the significance of the past in relation to the present. Because meaning and significance are often contextual and also brought out and interpreted in social settings, remembering can be affected and perhaps constituted by a 'varied set of human activities' (pg 362).

Author gives examples of the intricate functions of autobiographical memory: a single composite memory formed from repeated similar events, or the use of objects or talismans to hold onto a loss or grief-ridden memory, one that changes over time as the person comes to accept and move past what happened. (pg 363-4) The main thesis is that construing memory as solely archival, with accuracy to an original scene as its only important value, misses many of the other important personal aspects of memory (pg 365). In particular, author argues for using 'accuracy' as opposed to 'truth' or even 'detail'. Adam Morton, dealing with the possibility of having accurate but not true emotions, argues that accuracy is not reducible to truth (pg 366); author takes this line of argument up and applies it to memory, though author argues that detail isn't always preferable (for instance, you can have irrelevant detail, or fail to get the overall themes right). (pg 367-8)

With the accuracy of memory as an analog of the accuracy of emotion, author focuses on two issues: how faithful memory can be considered appropriate, rather than straightforwardly true (pg 369-70), and how accurate memories can have different significance and different pieces recalled based on present contexts (pg 370). Next, author deals with the 'integrity' of memory, which she reframes from being a solely personal characteristic to being a 'personal/social virtue' (pg 373). The idea here is that public memory can shape one's own memory, like the public memory of 9/11 versus the personal thoughts and feelings one might have been having at the time. Thus there is a decision about how and whether to integrate your memories into the public one, changing the public sphere but also perhaps changing your own. (pg 374-5) Author's case is that the two virtues of 'reconstructive' memory are not faithfulness and truth, but accuracy and integrity. (pg 377)