Saturday, February 9, 2013

Vague statements and the law of excluded middle

To: Chris M.
From: Geoffrey Klempner
Subject: Vague statements and the law of excluded middle
Date: 15 April 2009 10:49

Dear Christian,

Thank you for your email of 30 March, with your one hour timed essay, in response to the University of London BA Logic question, 'Do vague statements pose a threat to the law of excluded middle?'

You have covered a lot of ground -- multivalence, truth value gaps, supervaluation, vagueness as ignorance.

The first question we need to answer is, 'What is so important about the law of excluded middle?' This isn't a question one would raise about the law of non-contradiction (putting aside the small minority of logicians, like Priest, working in dialetheic logic). A contradiction is something you avoid at all costs. Whereas it seems fairly commonsensical and intuitive to refuse to assert, of a balding man, 'Either Fred is bald or Fred is not-bald.' Why do we have to commit ourselves? Why isn't this a compelling example of a case where LEM fails?

As in intuitionist logic, refusing to assert 'P or not-P' is not equivalent to asserting 'not-(P or not-P)', which is a flat-out contradiction. Moreover, applied outside mathematics, it is not clear that we *lose* anything in giving up the LEM. (By contrast, in classical mathematics, the LEM is needed to prove theorems which are unprovable in intuitionist mathematics.)
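The point that 'not-(P or not-P)' is self-refuting can be checked even by intuitionist lights. The following natural-deduction sketch (my own reconstruction, using standard notation rather than anything in the letter) shows why the intuitionist merely declines to assert LEM rather than denying it:

```latex
\[
\begin{array}{lll}
1. & \neg(P \lor \neg P) & \text{assumption, for reductio}\\
2. & \quad P & \text{assumption}\\
3. & \quad P \lor \neg P & \text{from 2, } \lor\text{-introduction}\\
4. & \quad \bot & \text{from 1, 3}\\
5. & \neg P & \text{from 2--4, } \neg\text{-introduction}\\
6. & P \lor \neg P & \text{from 5, } \lor\text{-introduction}\\
7. & \bot & \text{from 1, 6}\\
8. & \neg\neg(P \lor \neg P) & \text{from 1--7, } \neg\text{-introduction}
\end{array}
\]
```

Every step here is intuitionistically valid; what the intuitionist refuses is only the further, classical step of cancelling the double negation at line 8 to obtain LEM itself.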

So, once again, as applied to empirical statements, why is LEM so important?

Dummett in his seminal article 'Truth' has some important things to say about this. (The article is reprinted with an appendix in 'Truth and Other Enigmas', Duckworth.) Dummett argues that it is an essential, defining characteristic of the concept of truth that the truth is what we aim at. Any assertion is asserted as being true. The debate between Strawson and Russell over definite descriptions is one example Dummett cites where 'lacking a truth value' (when a referring expression lacks a reference) would simply be re-interpreted as a way of being false.

There is no such thing as making an 'indefinite assertion'. If I say, 'There's a chance that it might rain tomorrow', I am not indefinitely asserting that it will rain tomorrow but rather definitely asserting a statement regarding the probability of its raining tomorrow, to the effect that the probability is non-zero. In general, to assert that P is to imply that 'P' is definitely true.

If I see that Fred has a luxuriant head of hair, then my statement, 'It is not the case that Fred is bald', implies that it is definitely false that Fred is bald. If Fred has no hair on his head, then my statement, 'Fred is bald', implies that it is definitely true that Fred is bald. If you are tempted to say that there can arise circumstances where it is not definitely true that Fred is bald and also not definitely false that Fred is bald, then this does indeed (by two simple applications of modus tollens) imply a logical contradiction: Fred is bald and it is not the case that Fred is bald.
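The two applications of modus tollens can be written out explicitly. In this reconstruction (my notation, not the letter's), B abbreviates 'Fred is bald' and Def abbreviates 'it is definitely true that':

```latex
\[
\begin{array}{lll}
1. & B \rightarrow \mathrm{Def}(B) & \text{asserting `Fred is bald' implies it is definitely true}\\
2. & \neg B \rightarrow \mathrm{Def}(\neg B) & \text{asserting its negation implies it is definitely false}\\
3. & \neg\mathrm{Def}(B) & \text{supposition: not definitely true that Fred is bald}\\
4. & \neg\mathrm{Def}(\neg B) & \text{supposition: not definitely false that Fred is bald}\\
5. & \neg B & \text{from 1, 3, modus tollens}\\
6. & \neg\neg B & \text{from 2, 4, modus tollens}\\
7. & \bot & \text{from 5, 6}
\end{array}
\]
```

Strictly, line 6 yields 'it is not the case that Fred is not bald', which classically amounts to 'Fred is bald'; together with line 5 this gives the contradiction stated above.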

There is no easy way around the problem. Multivalence, which might work elsewhere (as in statements about the future, or paradoxes which you mention) has a major defect when applied to vague statements. Wherever you make the cut, and however many truth values you recognize, problems of vague boundaries break out all over again. The approach using supervaluations does not look very promising given the argument in the previous paragraph.

Dummett's own radical approach is to question whether the concept of truth should be the central concept in a theory of meaning. If you feel the pressure to give up the LEM because of vagueness, what you are giving up, in reality, is the standard and widely accepted view of what we are doing when we make assertions.

The importance of the later Wittgenstein's work in the philosophy of language is that it provides an alternative approach which rejects the possibility of an account of language use as mastery of a 'theory', represented in the form of a system of rules. However finely graded the rules may be, there will always be cases which are not covered by the rules.

However, this still leaves the question of the importance of the LEM unresolved. Timothy Williamson (who first proposed the 'vagueness is ignorance' theory which you mention at the end of your essay) starts from the position that there is, in principle, a theory of truth for a natural language, or in other words, that formal logic can be applied to statements of natural language. In cases where we don't know whether to say that Fred is bald or not, we proceed on the assumption that there 'is' an answer (which no-one, not even a recording angel, could ever know). There's no point in looking for some metaphysical significance in this view. It has none. We are simply accepting the logical consequence of the necessity (as Williamson sees it) of being able to apply formal logic to natural language. You bite the bullet. All this really amounts to is that we are prepared to talk *as if* there is always an answer to the question whether Fred is bald or not, even though we know that this implication is, in reality, false.

All the best,