Monday, April 12, 2010

More on Self-Modifying Logic

This is a follow-up to the previous post, and simply attempts to fill in some details.

In order to properly refer to a system as "self-modifying logic," one would want the logic to modify itself, not merely to be a system of logic which gets modified by an overseer-type system... of course. I did not make it clear in the previous post how this would be accomplished.


It seems to me that a prerequisite would be what one might call an action logic: a logic which isn't just a way of representing and manipulating declarative knowledge, but which is also capable of making and executing decisions, including decisions about its own reasoning procedures.
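To make this concrete, here is a minimal sketch of what an action logic's interface might look like, in Python. Everything here (the ActionLogic class and its method names) is my own hypothetical illustration, not an existing system:

from abc import ABC, abstractmethod

class ActionLogic(ABC):
    """A logic that both manipulates declarative knowledge and acts on it."""

    @abstractmethod
    def derive(self, statement):
        """Attempt to prove a declarative statement; return a proof or None."""

    @abstractmethod
    def decide(self):
        """Choose the next action, which may be an action that alters
        this very object's reasoning procedures."""

    @abstractmethod
    def execute(self, action):
        """Carry out a decision, including self-modifying ones."""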

Lacking a full-fledged action logic, I can only offer a first approximation (loosely inspired by Marcus Hutter's algorithm for all well-defined problems): We have a set of axioms describing which outcomes are better and worse, and how our current action policy affects them. We have a reasoning system which uses some declarative logic and proves things in an exhaustive manner (Hutter uses Levin search). We keep track of the proofs of action-policy values, and at all times use the best policy currently known.
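As a toy rendering of this loop (with hypothetical names throughout; enumerate_proven_claims stands in for the exhaustive, Levin-search-style prover):

def run(enumerate_proven_claims, initial_policy):
    # Act continuously, upgrading whenever a better value bound is proven.
    best_policy, best_value = initial_policy, float("-inf")
    for kind, policy, value in enumerate_proven_claims():
        # Proven claims take the form ("value_at_least", policy, v).
        if kind == "value_at_least" and value > best_value:
            best_policy, best_value = policy, value
        best_policy.act()  # always use the best policy currently known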

And, of course, the theorem proving method (initially an exhaustive search) should ideally be part of the policy itself, so that it too can be modified. (This becomes more like Gödel machines, which I'm sure I link to often enough.)
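In code, that might mean the proof-search procedure is just another mutable component of the policy. Again, this is only an illustrative sketch with invented names:

class Policy:
    def __init__(self, prover, act_rule):
        self.prover = prover      # the proof-search procedure, now replaceable
        self.act_rule = act_rule  # maps (prover, state) to an action

    def act(self, state):
        return self.act_rule(self.prover, state)

    def self_modify(self, new_prover=None, new_act_rule=None):
        # A self-modification replaces components of the policy itself,
        # including the prover that found the modification worthwhile.
        if new_prover is not None:
            self.prover = new_prover
        if new_act_rule is not None:
            self.act_rule = new_act_rule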

Now, to make this usable, we've got to select policies based on estimated value rather than proven value; thus the logic should include probabilistic reasoning about the policies, based on limited reasoning and limited testing of the methods. (I.e., the probabilistic reasoning should be broad enough to admit such spot-tests as evidence, something that requires a solution to the problem of logical uncertainty.)
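A crude sketch of selection by estimated value, where a handful of spot-tests of each policy stand in for genuine probabilistic reasoning under logical uncertainty (the names policy.score and test_problems are invented for illustration):

import random

def estimated_value(policy, test_problems, trials=10):
    # Average performance on a random sample of problems: a placeholder
    # for a real probability assignment that admits spot-tests as evidence.
    sampled = random.sample(test_problems, min(trials, len(test_problems)))
    return sum(policy.score(p) for p in sampled) / len(sampled)

def pick_policy(policies, test_problems):
    return max(policies, key=lambda p: estimated_value(p, test_problems))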

So, obviously, many details need to be filled in here.

In any case, one of the available actions will be to change the logic itself, substituting a more useful one.

The utility of such changes should in principle be derived from the basic utility function (that is, our preference ordering on possible outcomes). However, for the discussion in the previous post, for analyzing the possible behaviors of such systems, and possibly for practical implementations, a notion of goodness that applies directly to the logic itself is helpful.
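One way to picture the derivation, as a sketch: price a candidate logic by the expected base utility of the outcomes the system reaches while reasoning with it. Both outcome_model and utility here are hypothetical stand-ins, not part of any proposal above:

def value_of_logic(logic, outcome_model, utility):
    # outcome_model(logic) yields (outcome, probability) pairs describing
    # what happens if the system runs with this logic; utility expresses
    # the basic preference ordering numerically.
    return sum(utility(o) * p for o, p in outcome_model(logic))

def maybe_swap(current, candidate, outcome_model, utility):
    # Adopt the new logic only if its derived value is strictly higher.
    if value_of_logic(candidate, outcome_model, utility) > \
       value_of_logic(current, outcome_model, utility):
        return candidate
    return current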

So, questions:

--What might a full-featured "action logic" look like?
--Under what conditions will the system tend to add new mathematical truths to the set of axioms? Under what conditions will the system tend to add truth predicates, or in any case, metalanguages?
--Under what conditions will the system satisfy the backwards-compatibility requirement that I mentioned last time?
