First, it does not determine a specific set theory. In my previous post on the topic, I mentioned:
"I'm actually disappointed that this view seems to endorse a more standard well-founded view of set theory, as opposed to what I view as more natural non-well-founded theories (in which we have a set of all sets, which is itself a set). Oh well."

Really, though, this isn't quite the case. I was imagining a process in which we strengthen the language iteratively, adding increasingly powerful truth predicates, as in the last paragraph of this old essay. However, we're actually free to postulate any set theory we like.
Basically, in this view, set theory is a way of transposing 2nd-order entities into the 1st-order level to increase the power of the system. We can't just inject "all of them", because there will always be more 2nd-order entities than 1st-order entities, so if we force all 2nd-order entities to be 1st-order entities, we can prove a contradiction. There are many different ways to do a partial job.
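To make the contradiction concrete, here is the standard Russell argument, sketched in LaTeX; this is textbook material, not anything specific to the view I'm describing:

```latex
% Unrestricted comprehension: every property P yields a set.
\[ \exists s \,\forall x \,\bigl(x \in s \leftrightarrow P(x)\bigr) \]
% Instantiate P(x) := (x \notin x) to get the Russell set R:
\[ \exists R \,\forall x \,\bigl(x \in R \leftrightarrow x \notin x\bigr) \]
% Instantiating the universal quantifier at x := R gives
\[ R \in R \;\leftrightarrow\; R \notin R \]
% a contradiction; so not every 2nd-order entity can be
% transposed to the 1st-order level.
```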
My 2nd-order entities are properties, but they are related to the notion of a 'class'. A class is much like a property except that (as with sets) we regard two classes as equal if they have exactly the same members. Set theories developed in 2nd-order logic use 2nd-order entities as classes, and define sets from these. (The difference between a set and a class is that a class is a 2nd-order entity, and therefore can have members, but cannot be a member of anything. A set is a 1st-order entity, and can therefore be a member, as well.)
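To make the set/class distinction concrete, here is roughly how theories in the style of NBG (mentioned just below) draw the line; these are the standard definitions, sketched in LaTeX:

```latex
% All variables range over classes; a set is defined as a
% class that is a member of something:
\[ \mathrm{Set}(x) \;:\Leftrightarrow\; \exists y\,(x \in y) \]
% Predicative class comprehension: for any formula \phi whose
% quantifiers range only over sets, there is a class of all
% sets satisfying \phi:
\[ \exists A \,\forall x \,\bigl(\mathrm{Set}(x) \rightarrow (x \in A \leftrightarrow \phi(x))\bigr) \]
% Proper classes (e.g. the class of all sets) exist as classes
% but are never members, so Russell's argument only shows that
% the Russell class is not a set.
```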
It would be possible to develop a finitely axiomatized counterpart of ZFC, such as NBG set theory (a conservative extension of ZFC). We could also do many other things, though. For example, we could make a set theory like NFU, which has a universal set. This simply means that we reject the 'iterative' idea of converting 2nd-order entities into 1st-order ones. In the 'iterative' conception, we imagine transporting entities to the 1st-order level in stages. Since increasing the number of 1st-order entities can increase the membership of some of our 2nd-order entities, we think of it as changing the meaning of those 2nd-order properties; so, even though we transpose all the existing 2nd-order entities to the 1st-order level at any particular stage, there are yet more entities to consider at the next stage.
NFU rejects the intuition that each stage is conceptually distinct. So, for example, the universal set is the result of transposing the trivial property which is always true. This holds all entities at every stage; so rather than constructing a series of larger and larger distinct sets, each holding all the entities present at some particular stage (but no other entities), we allow it to exist as a single set holding everything.
The stages themselves are still important to NFU: we still need a restriction on which properties may form sets, since we can't make the claim that all properties correspond to sets. NFU implements this restriction by requiring that a property only forms a set if the variables in its definition can be consistently assigned to stages in a way that makes every membership test well-defined. So, for example, Russell's paradoxical set (the set of all sets which don't contain themselves) will never exist, because at no stage can we already test the membership of a set in itself. (The variable x in "x such that x ∉ x" can't be assigned a number in a way that ensures every membership test a ∈ b satisfies #b = #a + 1.)
Equivalently, the expression which gives us a predicate should be well-typed in simple type theory. (For a completely formal account, read the book.)
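To illustrate, here are the textbook stratification examples, sketched in LaTeX; the # notation for assigned stages follows the informal description above:

```latex
% A formula is stratified if each variable v can be assigned a
% number #v such that every atomic subformula a \in b satisfies
% #b = #a + 1, and every a = b satisfies #a = #b.

% Stratified (so NFU proves this set exists): the universal set.
\[ V = \{\, x \mid x = x \,\} \]  % take #x = 0; no \in tests occur

% Stratified: the power set of y.
\[ \{\, x \mid x \subseteq y \,\} \]  % x \subseteq y unfolds to
% \forall z (z \in x \rightarrow z \in y); take #z = 0, #x = #y = 1.

% Not stratified (no such set in NFU): Russell's set.
\[ \{\, x \mid x \notin x \,\} \]  % x \in x would require #x = #x + 1
```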
The point is, my foundational logic doesn't solve, or even greatly constrain, the foundations of set theory.
(One thing I intentionally left ambiguous was the semantics of the logic. How many properties are there? Are we quantifying over just property definitions, giving a more constructivist foundation, or are we quantifying over a classical space of properties, which is a superset of those we can define? I was hoping to address this in an eventual post.)
There are also other problems which I have not solved in this way. I've claimed that this logic gives us, in an important sense, a self-referential truth predicate. It gives us a language which contains a truth predicate that is correct for all sentences in that language, and also (at a different level) satisfies the Diagonal Lemma, despite Tarski's theorem. However, this doesn't exactly solve all paradoxes of the truth predicate. Kripke noted that a paradox can sometimes depend on accidental self-reference created by the world, not the logic:
Suppose that a spiteful Professor X writes on the chalkboard, "Whatever Professor Y is writing on his chalkboard is false." Meanwhile, the benevolent Professor Y is writing "Whatever Professor X is writing on his chalkboard is true." How should we analyse the situation?
In order to reason about statements written down in the external world, we must have a theory associating 1st-order and 2nd-order entities, just as in the set-theory problem. Complex external objects must "indicate true propositions"!
So, the analysis is not completely determined by the theory of truth. Let's be conservative, though, and suppose we've done just one round of introspection: starting with my impredicative theory of truth, which I'll call L, we generate L', the theory of truth-in-L. This contains a first-order entity for every statement in L, and a truth predicate for these. Let's call this a meta-theory of truth.
We quickly see that we cannot unpack the statements made on the chalkboards to correspond to any particular statements in L. Any theory, however reasonable, that relates marks on chalkboards to propositions will fail to relate these particular marks to anything: before we could determine what proposition Y is asserting, we would have to determine what proposition X is asserting, and vice versa.
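To display the loop explicitly, here is a sketch in LaTeX; prop is my ad-hoc notation for the assignment a candidate meta-theory would have to make, not part of the formal system:

```latex
% Let m_X, m_Y be the marks on the two chalkboards, and let
% prop(m) be the proposition a candidate meta-theory assigns
% to marks m. The chalkboards demand:
\[ \mathrm{prop}(m_X) = \neg\,\mathrm{True}\bigl(\mathrm{prop}(m_Y)\bigr)
   \qquad
   \mathrm{prop}(m_Y) = \mathrm{True}\bigl(\mathrm{prop}(m_X)\bigr) \]
% Substituting the second equation into the first:
\[ \mathrm{prop}(m_X) = \neg\,\mathrm{True}\bigl(\mathrm{True}(\mathrm{prop}(m_X))\bigr) \]
% prop(m_X) would have to be a statement of L about its own
% truth; but the truth predicate of L' applies only to
% statements of L, none of which can play this role.
```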
This is basically because my theory of truth does not solve any problems associated with self-referential truth in the second sense: the truth of self-referential statements. It only provides self-referential truth in the first sense: the ability to discuss truth-in-L in L (so long as we keep the discussion 2nd-order).
Different meta-theories of truth will give different results. Some meta-theories of truth may give us accounts of how statements which are self-referential can get associated with propositions, just as some set theories would allow for sets which contained themselves (such as the universal set).
A related deficiency of the theory is that we cannot form propositions via arbitrary computations. This would be a nice guarantee of the expressiveness of the language. At one point, I thought of this as a major goal: provide a language satisfying the diagonal lemma and containing its own truth predicate, so that the truth of arbitrary computations on atomic propositions could be asserted. Of course, it is not possible to include all and only the meaningful computations; however, the system should be able to ensure that if we can prove a computation always halts, then we can use it; this would allow more computational power to be conveniently added by adding axioms. (IXI seems to be like this.) Then we can say that the limitation is not about what the system can't express, but only about what the system doesn't know (not having enough axioms).
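For reference, here is the standard statement of the diagonal lemma and the resulting tension, sketched in LaTeX; c in the final comment is a hypothetical computation, not a defined symbol:

```latex
% Diagonal lemma: for any formula \phi(x) with one free variable,
% there is a sentence \psi such that the theory proves
\[ \psi \;\leftrightarrow\; \phi(\ulcorner \psi \urcorner) \]
% Tarski's theorem: combining this with a truth predicate
% satisfying True(\ulcorner \psi \urcorner) \leftrightarrow \psi
% for every sentence \psi of the same language yields the liar.
% The goal above is weaker: whenever the system proves that a
% computation c on codes of atomic propositions halts, the
% sentence asserting c's output should be formable.
```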
It isn't terribly clear which of these problems needs to be solved.