06/08/2007
Unpublished Manuscript
Author starts with a discussion of memory and why it is essential for any theory of knowledge: what is presently observed quickly becomes the past, and the past is usually accessible only through memory. Memory is more complicated than observational knowledge because it involves inference, e.g. the inference that what I am remembering now is what I observed then (a fact about the past is inferred from the present). To be justified in making this inference we need some backing for fallible inference in general, and this is the thrust of the chapter.
Author begins with Hume (pg 204), who considered this type of inference 'experimental', meaning reasoning about cause and effect: specifically, the assumption that unexamined causes and effects will resemble the examined ones. This is general inductive reasoning, the kind of reasoning we now need support for.
The difficulty in giving a rational justification for induction lies first in giving a proper account of what needs to be justified. Generally, we take induction to be the assumption that all As are Bs based on seeing that the examined As are Bs. This is obviously open to critique, and various refinements have been offered prior to any attempt at justification.
Lycan & Russell have each attempted a fix, but each attempt seems to require either some prior known 'representative sample' or some general way of specifying how to get one, and neither is available, author claims, without further empirical considerations. Bonjour proposes an a priori answer that appeals to 'metaphysically robust' regularities: because the regularities we observe are robust, we are entitled to project them inductively. Author considers this naive, since there are many cases where we use induction without assuming any 'metaphysically robust' regularity (e.g. pollsters). The talk of 'robustness' also presupposes a bias toward our ordinary predicates over 'grue-like' ones; author raises Goodman's infamous 'grue' counterexample and argues that there is no principled way to exclude such cases from Bonjour's account, i.e. no good way to presume that the future will be like the past.
The next proposed justification for induction comes from Inference to the Best Explanation, which holds that the best explanatory account of the evidence is the one most likely to be true. Author disagrees: we neither consider nor even know all the relevant alternative explanations before settling on the one we want to go with.
Author instead proposes probability theory and Bayes' theorem as a way to justify induction (pg 221-4). On this approach, we must have antecedently assigned probabilities to background beliefs before we can assign a probability to a new hypothesis; the system should, however, ultimately be self-correcting, so improperly assigned probabilities will eventually be revised to reflect newly acquired evidence and supported hypotheses. The worry is which initial beliefs to accept to get the whole system running. Here author considers an attempt by Chisholm (pg 230), which he amends and accepts: initial beliefs must be given 'weak acceptance on a trial basis'. Author offers Bayesian theory as the alternative to Inference to the Best Explanation (pg 234).
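The 'self-correcting' point can be illustrated with a toy computation (my own sketch, not an example from the manuscript): even a badly chosen initial probability for a hypothesis gets pushed toward the truth by repeated Bayesian updating, which is the sense in which improperly assigned priors are eventually corrected by evidence.

```python
def update(prior, likelihood_h, likelihood_alt):
    """Bayes' theorem: posterior P(H|E) from prior P(H),
    P(E|H), and P(E|not-H) for a single piece of evidence E."""
    numerator = likelihood_h * prior
    return numerator / (numerator + likelihood_alt * (1 - prior))

# Hypothetical example: H says a coin is heavily biased toward
# heads (P(heads) = 0.9); the alternative says it is fair (0.5).
prior = 0.01  # an 'improperly assigned' initial probability for H
for flip in "H" * 20:  # observe twenty heads in a row
    if flip == "H":
        prior = update(prior, 0.9, 0.5)
    else:
        prior = update(prior, 0.1, 0.5)

print(round(prior, 3))  # → 0.999: the poor prior has been corrected
```

The numbers (0.9, 0.5, twenty flips) are arbitrary; the point is only that the posterior is driven by the accumulated likelihood ratios, so the choice of starting probability washes out as evidence comes in.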
With a Bayesian backbone for induction in place, we can revisit the skeptical BIV (brain-in-a-vat) argument and assign it a low probability, since no evidence counts directly in favor of the 'surplus content' that the BIV hypothesis asserts (pg 237).