I have recently been revisiting my search for a theory of truth, which I had previously abandoned as unproductive. It seemed like undefinability was simply a fact to be accepted; if I found myself unsatisfied with existing theories of truth, that was my problem, not theirs.
To some extent, this has become like an "old demon" for me: it's easy for me to spend too much time obsessing over it. So, even as I tried to revisit the problem, I found myself thinking: truth simply cannot be defined, and I need to find a way to come to terms with that. To settle the issue for myself, I needed an acceptable story.
I began reviewing the arguments which prove undefinability.
The fundamental problem seems to be related to the fact that a set cannot be mapped onto its powerset. More generally, a set S cannot index all functions f: S -> {0,1}; that is, there is no bijection between S and the set of those functions. I say "more generally" because this statement holds whether we take the "classical" stance, on which there are uncountably many such functions, or a more "constructivist" stance, on which only the computable functions count. Either way, the result holds. There is a loophole, though, which will become important soon.
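Before getting to the loophole: why does the result hold, on either stance? Here is the standard diagonal argument, in my own notation. Suppose i is any attempted indexing, assigning to each element x of S a function i(x) from S to {0,1}. Then consider the function

```latex
d(x) = 1 - i(x)(x)
```

For every x, d disagrees with i(x) at the input x itself, so d is a function on S that the indexing misses. Nothing in the argument cares whether we admit all functions or only the computable ones, which is why both readings go through.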
What does the existence of a bijection have to do with truth? Well, the Liar paradox and other similar problems arise from attempting to refer to all of your semantic values simultaneously while maintaining an expressively powerful logic which allows you to take arbitrary functions of things. (By "semantic values" I mean just "true" and "false" to start with, though this set is often extended in an attempt to repair paradoxes.) Expressive power looks reasonable: it seems obvious that we should be able to express that the opposite of some statement is true. Less obviously, if we can take general computable functions of things, we end up satisfying a fixpoint theorem, which (in the form of the diagonal lemma) makes it possible to construct self-referential sentences. OK. This might sound a little weird, but it's just a consequence of the expressive power we wanted. Should be fine, still? Unfortunately, with these two ingredients, we can construct a sentence which is equivalent to its own negation. What? Where did that come from?
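To pin down where it came from, here is the shape of the standard construction (nothing here is specific to my setup). The diagonal lemma says that for any formula phi(x) of the language, there is a sentence psi such that

```latex
\psi \leftrightarrow \varphi(\ulcorner \psi \urcorner)
```

is provable. Taking phi(x) to be "x is not true", with our hoped-for truth predicate, yields a Liar sentence L which is provably equivalent to the assertion that L itself is not true.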
So, the problem seems to arise from trying to be able to assert arbitrary functions of semantic states. It seems reasonable at first: computations on semantic states are genuinely useful, and the more of them we can express, the more flexibly we can state things. So why is there a problem?
The problem is that we've unwittingly included more statements than we intended to. Expanding the language with a naive truth predicate extends it too far, adding some stuff that's really nonsense. It isn't as if there was a Liar sentence in the original language which we're now able to compute the negation of. No: the negation computation, allowed to apply itself to "anything", has been applied to itself to create a new, pathological entity (somewhat analogous to a non-halting computation).
(Still trying to convince myself not to worry about these things, I said:) We should not have expected that to work in the first place! It's just like trying to construct a set which is its own powerset. If you show me an attempt to do it, I can mechanically find a subset you left out.
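"Mechanically" is meant literally. A small sketch, with names and an example attempt of my own invention:

```python
# Given an attempt f to name subsets of S using elements of S,
# the diagonal recipe produces a subset that no element names.
def left_out_subset(S, f):
    # Put x into D exactly when x is NOT a member of the subset it names.
    D = {x for x in S if x not in f(x)}
    # D cannot equal f(x) for any x: the two disagree about x itself.
    return D

# An example attempt at naming subsets of {0, 1, 2}:
S = {0, 1, 2}
naming = {0: set(), 1: {0, 1}, 2: {1, 2}}
print(left_out_subset(S, lambda x: naming[x]))   # {0} -- named by no element
```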
Or, it's like trying to solve the halting problem. There are these "loopy" entities which never terminate, but by the very nature of that loopiness, it is impossible to construct a function which sorts the loopy from the useful: if we had such a function, we could use it to construct a new, "meta-loopy" computation which went loopy if and only if it didn't.
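Written out as deliberately impossible code (the names are made up for illustration), the construction looks like this:

```python
# Hypothetical: suppose someone handed us a perfect halting test.
# No such function can actually be written; the point is to see why.
def halts(program, argument) -> bool:
    ...  # pretend this always answers correctly: does program(argument) halt?

# The "meta-loopy" program: consult the would-be test on yourself,
# then do the opposite of whatever it predicts.
def meta_loopy(program):
    if halts(program, program):
        while True:       # predicted to halt? then loop forever
            pass
    else:
        return            # predicted to loop? then halt at once

# Feeding meta_loopy to itself -- meta_loopy(meta_loopy) -- would halt
# if and only if halts says it doesn't. The contradiction shows that no
# total, always-correct halts function exists.
```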
One way of attempting to get around this problem is to add a new semantic value, so that instead of {true, false} we have {true, false, meaningless} or something to that effect. Unfortunately, this does not fundamentally solve the problem: we need to decide whether we can refer to this new semantic value. Informally, it seems like we can; but this re-creates the same problem, since if we can refer to it, we can apply the full power of the logic again (the "strengthened Liar": a sentence asserting that it itself is either false or meaningless). On the other hand, we could take a hint and remain silent about the "new value", so that we are only able to talk about truth and falsehood. This is, essentially, Kripke's theory of truth.
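To give a feel for how the Kripkean option plays out, here is a toy version of the fixed-point construction. It is my own drastic simplification (a handful of hand-coded sentences, strong Kleene negation, and None standing in for "no value"), not Kripke's full transfinite construction:

```python
# Sentences are little tuples; a sentence can mention another (or itself) by name.
#   ("atom", value)   -- a base-language sentence whose truth value is settled
#   ("not", sentence) -- negation
#   ("T", name)       -- the truth predicate applied to the sentence called `name`
sentences = {
    "snow":   ("atom", True),           # some ordinary true sentence
    "denial": ("not", ("T", "snow")),   # "it is not true that snow..." (false)
    "liar":   ("not", ("T", "liar")),   # the Liar: "this sentence is not true"
    "teller": ("T", "teller"),          # the Truth-teller: "this sentence is true"
}

def evaluate(sentence, extension):
    """Strong Kleene evaluation, given a partial extension of the truth predicate."""
    kind = sentence[0]
    if kind == "atom":
        return sentence[1]
    if kind == "not":
        value = evaluate(sentence[1], extension)
        return None if value is None else (not value)
    if kind == "T":
        return extension.get(sentence[1])   # None if not yet settled

def kripke_fixed_point(sentences):
    extension = {}                          # partial map: sentence name -> True/False
    while True:
        new = dict(extension)
        for name, sentence in sentences.items():
            value = evaluate(sentence, extension)
            if value is not None:
                new[name] = value
        if new == extension:                # nothing changed: minimal fixed point
            return extension
        extension = new

print(kripke_fixed_point(sentences))
# {'snow': True, 'denial': False} -- the Liar and the Truth-teller never get a value,
# and nothing in the little language lets us say so from the inside.
```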
It seems as if we are forced to accept an inherently partial theory of truth, with some sentences in a purposefully ambiguous state: they are neither true nor false, but we are not able to refer to that fact definitively.
This has to do with the loophole I mentioned earlier. It is not possible to map a set S onto the functions f: S -> {true, false}. Still less is it possible to map it onto the functions f: S -> {true, false, _}, with some third value. Yet we can describe any computer program we want with a finite sequence of 0s and 1s... and the computer programs can be arbitrary computable mappings from such strings to {0,1}! We're indexing the functions with the very entities being mapped. We seem to have arbitrary functions on our semantic states. Isn't this exactly what we said we can't do? Why does this work with computer programs, but not with truth? Because we accept the halting problem. We accept that our representation language for computer programs will include some nonsense programs which we don't actually want; meaningless programs are an unfortunate, but necessary, side-effect of the representational power of the language. Think of it as having 2.5 semantic values. A program can output 1, 0, or... it can fail to output. That isn't a third output value; it's nothing.
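To see the loophole in action, here is a toy sketch (my own names and examples): program text is just a string, and the very same kind of string serves as input, so programs can be applied to program text, including their own. Nothing stops some of those strings from describing programs that never return.

```python
# Programs here are strings of Python source defining f(s) -> 0 or 1,
# and the inputs they act on are strings too -- including program text itself.
programs = {
    "always_one": "def f(s): return 1",
    "parity":     "def f(s): return s.count('1') % 2",
    "first_char": "def f(s): return 1 if s.startswith('d') else 0",
    # A syntactically fine program that never returns -- the 2.5th "value":
    "loopy":      "def f(s):\n    while True:\n        pass",
}

def run(program_text, input_string):
    namespace = {}
    exec(program_text, namespace)          # turn the string into a function
    return namespace["f"](input_string)    # ...and apply it to the input

# Programs applied to program text, even their own:
print(run(programs["parity"], programs["parity"]))       # a program fed its own source
print(run(programs["first_char"], programs["loopy"]))    # 1: loopy's text starts with 'd'
# run(programs["loopy"], "anything") would simply never come back --
# not a third output value, just the absence of one.
```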
That is why Kripke's theory is so appealing: it takes this loophole and applies it to the domain of truth.
However, it is not the solution I am putting forth today.
Today, my assertion is that trying to fit the set inside the powerset never made sense in the first place. Sure, Kripke's loophole is nice, but it still leaves us with the uncomfortable feeling that we can refer to more things than the logic we've constructed can express, because we've got this third semantic state which we're supposed to shut up about. (Or, as Tim Maudlin suggests, we can refer to it so long as we don't make the mistake of thinking we're saying something true??)
We would like a "self-referential theory of truth", but what does that mean? It seems to me that all we want is a truth predicate which lives happily inside the language it discusses. This does not require the kind of self-reference provided by the diagonal lemma; that sort seems to get us nothing but nonsense. Perhaps, by discarding the kind of logical power which leads to the diagonal lemma, we can get around the problem. Incredible results have been achieved this way in the past: Dan Willard's self-verifying logic gets around the second incompleteness theorem by just such a restriction of expressive power. Can we do something similar for undefinability?
I've decided to split this up into multiple posts. The next post will go into the proposed solution.