This is the promised follow-up to this post.
What sense can we make of the standard treatment of subjective higher-order probabilities? Beyond the formalism, how are we to visualise this stuff?
I wanted a possible-worlds type semantics, but I decided to construct my own rather than looking at existing ones.
In normal possible-world semantics, we've got a bunch of worlds which hold complete valuations-- assignments of every statement to true or false. Then, we often talk about an "accessibility" relation between worlds: world A is only accessible from world B under certain conditions. The accessibility relation controls our notion of "possible" and "necessary": from our perspective at a world W, something is considered "possible" if it is true in some world accessible from W, and "necessary" if it is true in all accessible worlds.
Examples of useful accessibility relations include ones based on knowledge (i.e., the accessible worlds are the ones consistent with what I know about the world I'm in) and ones based on time (i.e., the accessible worlds are potential futures).
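To make the setup concrete, here's a minimal sketch of the ordinary picture: worlds as complete valuations, an accessibility relation, and the resulting "possible" and "necessary". The worlds, statements, and function names are my own toy choices, not anything standard.

```python
# A minimal sketch of ordinary possible-world semantics. Worlds hold complete
# valuations; the accessibility relation says which worlds are "in view" from
# which. (The worlds and statements are made up for illustration.)

worlds = {
    "w1": {"rain": True,  "wind": True},
    "w2": {"rain": True,  "wind": False},
    "w3": {"rain": False, "wind": False},
}

# accessible[w] = the set of worlds accessible from w
accessible = {
    "w1": {"w1", "w2"},
    "w2": {"w2", "w3"},
    "w3": {"w3"},
}

def possible(statement, w):
    """The statement holds in SOME world accessible from w."""
    return any(worlds[v][statement] for v in accessible[w])

def necessary(statement, w):
    """The statement holds in ALL worlds accessible from w."""
    return all(worlds[v][statement] for v in accessible[w])

print(possible("rain", "w2"))   # True: rain holds in w2 itself
print(necessary("rain", "w2"))  # False: rain fails in w3, which is accessible from w2
```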
One way of getting probabilities out of this is to make the accessibility relation continuous-valued; rather than a world being accessible or not, a world has a degree of accessibility.
So, we start with just the non-probabilistic facts in each world; then 1st-order probabilistic facts get values computed from the immediately accessible worlds and their weights. Then 2nd-order facts, 3rd-order, et cetera. (This can go up to infinite orders if the language is expressive enough to mention them.)
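Here is a rough sketch of that first propagation step. The worlds and weights are invented for illustration, and I'm assuming (the natural choice, though not the only one) that the weights going out of each world sum to 1, so a 1st-order probability at a world is just a weighted average of truth values over the accessible worlds; higher-order statements would then be handled by the same rule at the next stage.

```python
# A rough sketch of the first propagation step. Everything here (world names,
# statements, weights) is made up for illustration. Assumption: the outgoing
# weights from each world sum to 1, so a probability is a weighted average.

worlds = {
    "w1": {"rain": True},
    "w2": {"rain": True},
    "w3": {"rain": False},
}

# weight[w][v] = degree to which world v is accessible from world w
weight = {
    "w1": {"w1": 0.5, "w2": 0.25, "w3": 0.25},
    "w2": {"w2": 0.5, "w3": 0.5},
    "w3": {"w3": 1.0},
}

def prob(holds_at, w):
    """P(statement) at world w: the weighted average, over worlds accessible
    from w, of whether the statement holds there."""
    return sum(wt * (1.0 if holds_at(v) else 0.0) for v, wt in weight[w].items())

# 1st-order step: P(rain) at each world, from the non-probabilistic facts.
p_rain = {w: prob(lambda v: worlds[v]["rain"], w) for w in worlds}
print(p_rain)  # {'w1': 0.75, 'w2': 0.5, 'w3': 0.0}

# 2nd-order (and higher) statements about these values would be handled by
# the same rule at the next stage, and so on up the orders.
```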
To get coherent probabilities, we need to have some restrictions on the weights of the accessibility relations; I won't go into detail here.
Unfortunately, this really only gets us a 1st-order distribution: since the non-probabilistic facts are completely determined in each world, all our 1st-order probabilities are completely determined in the first propagation step, so all higher-order probabilities come out 1 or 0.
To get an interesting higher-order distribution, we can give up the idea that possible worlds always hold complete valuations. Instead, a possible world contains just a partial picture: an assignment of a few facts to true or false. This allows the probabilities to be only partially determined, allowing nontrivial probabilities at each stage.
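A sketch of the same setup with partial valuations, again with toy worlds and weights of my own choosing and the same normalization assumption. Because a world can leave a statement unassigned, the probability at a world may be pinned down only to an interval rather than a single value:

```python
# The same setup, but with worlds as PARTIAL valuations: a world may leave a
# statement unassigned. A probability at a world is then only partially
# determined; here that shows up as a lower/upper bound rather than a value.

worlds = {
    "w1": {"rain": True},                 # says nothing about "wind"
    "w2": {"rain": False, "wind": True},
    "w3": {},                             # says nothing at all
}

weight = {
    "w1": {"w1": 0.5, "w2": 0.5},
    "w2": {"w2": 1.0},
    "w3": {"w1": 0.25, "w2": 0.25, "w3": 0.5},
}

def prob_bounds(statement, w):
    """Bounds on P(statement) at w: a world that leaves the statement
    unassigned contributes nothing to the lower bound but its full weight
    to the upper bound."""
    lower = sum(wt for v, wt in weight[w].items()
                if worlds[v].get(statement) is True)
    upper = sum(wt for v, wt in weight[w].items()
                if worlds[v].get(statement) is not False)
    return (lower, upper)

print(prob_bounds("wind", "w3"))  # (0.25, 1.0): only partially determined
print(prob_bounds("rain", "w1"))  # (0.5, 0.5): pinned down exactly
```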
This goes along with an idea called "situation theory" which I know only a little about. As I understand it, the idea is that when talking about possibility, we talk about "situations" rather than worlds: partial assignments to true/false rather than complete ones.
It's probably better to think of these as "possible states of knowledge" rather than possible worlds or even possible situations, given the probabilistic interpretation. We move between worlds when we gain knowledge. Some of the worlds *will* be fully specified, and one of these will correspond to the "actual world"; however, we will never get there in terms of our state of knowledge.
Now, since this construction gives us a self-referential probability predicate, it is interesting to think of it as a sort of generalised theory of self-referential truth. P(S)=1 plays the role of True(S), but does so imperfectly: it is possible for the probability of a statement to be 1 and for the statement to later turn out false. For example, if we have a uniform distribution over the possible probabilities of some statement A, then P(P(A)=r)=0 for every real number r, and hence P(P(A) not equal r)=1 for every r. Yet we may eventually transition into a world in which P(A) takes on some particular value r-- so we can't think of all the statements "P(A) not equal r" as being true. This makes our theory of (pseudo-)truth one in which the inference A => True(A) is justified, but not the inference True(A) => A.
The propagation method gives us a "least-fixed-point" type probability distribution. However, there might also be other useful fixed points. For example, we may want some self-referential sentences to come out with definite probability values.
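As a loose toy illustration of the least-fixed-point idea (in the spirit of Kripke's construction for truth), grounded statements acquire values as the iteration proceeds, while an ungrounded self-referential sentence never does, even though assigning it a value would also be a (different, non-minimal) fixed point. The sentences and the update rule below are invented purely for the example.

```python
# A "state" maps each statement to a probability or to None (undetermined);
# we start fully undetermined and repeatedly apply an update rule until
# nothing changes. Everything here is illustrative, not the real construction.

def least_fixed_point(update, statements):
    state = {s: None for s in statements}        # start fully undetermined
    while True:
        new = update(dict(state))
        if new == state:
            return state
        state = new

# "a" stands for a grounded, non-probabilistic fact; "p_a" for the statement
# P(a)=1; "self" for a self-referential sentence asserting P(self)=1/2.
def update(state):
    state["a"] = 1.0                             # grounded: gets its value outright
    if state["a"] is not None:
        state["p_a"] = 1.0 if state["a"] == 1.0 else 0.0
    if state["self"] is not None:                # ungrounded: only keeps a value
        state["self"] = 0.5                      # it already has
    return state

print(least_fixed_point(update, ["a", "p_a", "self"]))
# {'a': 1.0, 'p_a': 1.0, 'self': None}
# The self-referential sentence never gets a value in the least fixed point,
# although assigning it 0.5 from the start would also be a fixed point of the
# same update rule (a different, non-minimal one).
```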
In any fixed point, there will be probabilistic statements which do not get any truth value. First off, any statement with an intermediate probability (strictly between 0 and 1) attached to it can't also have a truth value. There will also be statements, though, which can't be consistently assigned any level of belief.
Overall, this probably doesn't offer an especially appealing theory of truth. It would be interesting to know more about precisely how much it can give us, though.