This post is inspired by the GOLEM architecture that Ben Goertzel recently wrote a post about. In discussing the architecture on the Singularity list, we came up with the idea of allowing an artificial intelligence to alter its own utility-function computation in a controlled way: it evaluates candidate utility computations by comparing their outputs to those of the original utility function, looking for computations that are essentially the same but faster.
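To make that acceptance test concrete, here is a minimal sketch under some invented assumptions: we can sample states, run both utility computations on them, and time the calls. The names (approved, tolerance, slow_u, fast_u) and the thresholds are purely illustrative, not anything from GOLEM or the list discussion.

```python
import time
import random

def approved(original_u, candidate_u, sample_states, tolerance=1e-6):
    """Accept a candidate utility computation only if it matches the
    original on every sampled state and runs faster overall.
    (Illustrative names and thresholds, not part of GOLEM itself.)"""
    original_time = 0.0
    candidate_time = 0.0
    for state in sample_states:
        t0 = time.perf_counter()
        u_orig = original_u(state)
        original_time += time.perf_counter() - t0

        t0 = time.perf_counter()
        u_cand = candidate_u(state)
        candidate_time += time.perf_counter() - t0

        # Any disagreement on a sampled state rejects the candidate outright.
        if abs(u_orig - u_cand) > tolerance:
            return False

    # "Essentially the same but faster": demand a strict speed improvement.
    return candidate_time < original_time

# Toy usage: a wasteful utility function and an algebraically equivalent fast one.
def slow_u(s):
    return sum(s * s for _ in range(1000)) / 1000.0

def fast_u(s):
    return s * s

states = [random.uniform(-10.0, 10.0) for _ in range(100)]
print(approved(slow_u, fast_u, states))  # expected: True
```

Of course, agreement on a sample of states is far weaker than "essentially the same"; the sketch is only meant to show where that gap sits.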
Now there are some fun (and potentially important) questions about how to deal with uncertainty in the original utility function, but I won't go into these here.
The discussion made me think about the following sort of system:
- Start with a set of axioms and rules of inference, and, if you like, also a set of statements about the world (perhaps sensory data).
- Look for new logics which can derive whatever the old logic can derive, and possibly more.
- Judge these by some criteria; in particular, shortness of derivations and simplicity of the new logics both seem sensible (a rough scoring sketch follows this list).
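Here is a minimal sketch of what such a judgment might look like, under some invented assumptions: each "logic" is an object exposing an axioms list and a prove() method that returns a derivation (a list of steps) or None. None of these names or interfaces come from the discussion above; they are purely illustrative.

```python
def score_logic(candidate, baseline, statements, simplicity_weight=1.0):
    """Lower scores are better.  Returns None (reject) if the candidate
    fails to derive something the baseline logic derives.
    (Hypothetical interface: .prove(s) -> list of steps or None, .axioms.)"""
    total_derivation_length = 0
    for s in statements:
        baseline_proof = baseline.prove(s)
        candidate_proof = candidate.prove(s)
        if baseline_proof is not None and candidate_proof is None:
            return None  # the candidate is strictly weaker: disqualify it
        if candidate_proof is not None:
            # Shortness of derivations: count steps in the candidate's proof.
            total_derivation_length += len(candidate_proof)
    # Simplicity of the new logic, crudely measured by total axiom size.
    simplicity_cost = sum(len(axiom) for axiom in candidate.axioms)
    return total_derivation_length + simplicity_weight * simplicity_cost
```

The single weight trading off derivation length against axiom size is a free parameter; nothing above fixes how that trade-off should be made.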
This potentially solves my problem of coming up with a system that iteratively learns higher and higher levels of the Tarski hierarchy. If the system keeps augmenting itself with a new truth predicate, then it will keep increasing the efficiency with which it can derive the truths of the initial system (by a result of Gödel for the type-theoretic hierarchy which, if I'm not mistaken, will hold similarly for the Tarski hierarchy; see his "On the Length of Proofs"). This does not show that the Tarski hierarchy is the best way of increasing the power of the system, but I am perfectly OK with that. What I would like, however, is some guarantee that (some canonical axiomatization of) each level of the Tarski hierarchy can at least eventually be interpreted (i.e., as we keep adding further extensions, we interpret more and more of the Tarski hierarchy, without bound). I do not know how to show this, if it is true.
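For reference, here is a paraphrase (my wording, not a quotation) of the speed-up result being leaned on, stated for the type-theoretic systems; whether the level-by-level analogue really transfers to the Tarski hierarchy is exactly the "if I'm not mistaken" above.

```latex
% Paraphrase of Goedel's speed-up theorem ("On the Length of Proofs", 1936)
% for the type-theoretic systems S_1, S_2, ...  Here \lVert\varphi\rVert_S
% stands for the number of steps in a shortest S-proof of \varphi.
\[
  \forall f \text{ computable} \;\;
  \exists \text{ infinitely many } \varphi :\quad
  S_n \vdash \varphi
  \;\;\text{and}\;\;
  \lVert \varphi \rVert_{S_n} \;>\; f\!\bigl(\lVert \varphi \rVert_{S_{n+1}}\bigr).
\]
```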