Monday, July 4, 2011

Procedural Logic Once Again

Follow up to Procedural Logic.

This is one of those things I've been meaning to write about for a while now.

Sometime early this year, I had the thought that some form of "agenda" or priority queue is lurking inside a very large number of AI algorithms. What happens if we push that idea and make the priority queue the heart of the AI system? Can I make a generally useful AI toolkit by creating an optimized priority queue with some generally useful built-in features? (This would be similar to the way SOAR is "just" a highly optimized rule engine with useful extra features.) This was to be my answer to the "Procedural Logic" question: a principled way to handle thinking-through-time (and the acting/thinking trade-off) by building scheduling in at the most basic level.

If I had written this post back then, it would have included a big argument justifying the idea. The main thrust of the argument would have been that priorities can be thought of as a procedural form of expected utilities; i.e., they should be an immediate estimate of the value of performing that action. The special features of the system would center around improving that estimate, through methods like reinforcement learning. Via an analogy to the Copycat system, I was referring to each task as a "codelet".
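
To make that concrete, here is a minimal Python sketch of the sort of thing I had in mind: an agenda whose priorities are treated as expected-utility estimates and nudged by a crude reinforcement-style update. The class names and the learning rule are illustrative inventions, not taken from SOAR, Copycat, or any other existing system.

```python
import heapq
import itertools

class Agenda:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()   # tie-breaker for equal priorities
        self.utility_estimates = {}         # codelet name -> learned priority

    def push(self, name, codelet, priority=None):
        # If no explicit priority is given, fall back on the learned
        # expected-utility estimate (or a neutral default).
        if priority is None:
            priority = self.utility_estimates.get(name, 1.0)
        # heapq is a min-heap, so negate to pop the highest priority first.
        heapq.heappush(self._heap, (-priority, next(self._counter), name, codelet))

    def run(self, steps):
        for _ in range(steps):
            if not self._heap:
                break
            neg_priority, _, name, codelet = heapq.heappop(self._heap)
            reward = codelet(self)           # a codelet may push follow-up codelets
            self._learn(name, reward, -neg_priority)

    def _learn(self, name, reward, old_estimate, rate=0.1):
        # Crude reinforcement-style update: move the priority estimate
        # toward the reward actually observed for running this codelet.
        self.utility_estimates[name] = old_estimate + rate * (reward - old_estimate)

agenda = Agenda()
agenda.push("noop", lambda a: 0.0, priority=2.0)
agenda.run(steps=10)
```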

When I told this idea to Russel Wallace, he immediately spotted the problem (though it took me a while to appreciate the insight). Such a system will run into the "problem of reflection" (representing and reasoning about one's own representations and reasoning) head-on, with no direct solution. We can't easily use the task system to think about what priorities should be assigned to tasks without risking adding infinitely many tasks to the queue (if we try to have an "estimate the priority of this" task for each task).

We can get around that if we are careful, but we have to make compromises. Even if we settle on a good trade-off, the credit assignment problem is far from easy; we can't just apply standard RL. To get a good idea of what is going on, it seems as if we have to keep a record of dependencies (what tasks pushed what other tasks into the queue) within the system. This is something SOAR does in order to implement its chunking mechanism, so it's possible. Chunking-like behavior was another feature I was considering: if several codelets often run in sequence, compile their actions into one codelet so that it runs faster. This would actually be very similar to JIT compilation of bytecode, so my hope was to use algorithms which had been developed for that.
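
Here is a sketch (again hypothetical, not SOAR's actual mechanism) of the dependency record just described: note which task pushed which, share reward backwards along that chain for credit assignment, and flag frequently repeated chains as chunking candidates.

```python
class DependencyTrace:
    def __init__(self, discount=0.5):
        self.parent = {}        # child task id -> parent task id
        self.chain_counts = {}  # (parent name, child name) -> times observed
        self.discount = discount

    def record_push(self, parent_id, child_id, parent_name, child_name):
        self.parent[child_id] = parent_id
        key = (parent_name, child_name)
        self.chain_counts[key] = self.chain_counts.get(key, 0) + 1

    def assign_credit(self, task_id, reward):
        # Walk up the chain of tasks that led here, giving each ancestor
        # a geometrically discounted share of the reward.
        credits = {}
        share = reward
        current = task_id
        while current is not None:
            credits[current] = credits.get(current, 0.0) + share
            share *= self.discount
            current = self.parent.get(current)
        return credits

    def chunk_candidates(self, threshold=10):
        # Parent/child pairs seen often enough to be worth compiling into
        # a single codelet (the chunking / JIT-like idea).
        return [pair for pair, n in self.chain_counts.items() if n >= threshold]

trace = DependencyTrace()
trace.record_push(parent_id=1, child_id=2, parent_name="scan", child_name="match")
print(trace.assign_credit(task_id=2, reward=1.0))   # {2: 1.0, 1: 0.5}
```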

A system more heavily built upon the concept of activity tracing is Funk2. The goal of Funk2 is exactly the one I mentioned: solving the credit assignment problem by tracing the cause-and-effect relationships within the code.

So, perhaps there is some hope for the idea. At every corner, though, just when Solution X looks like it will make the idea work, I start wanting to make "Smart X"; i.e., an X which learns over time by making more general use of the system, and in particular, a version of X which makes use of the task queue. This is, of course, Bad Thinking. There really does need to be a bottom somewhere, even if as a programmer I'm used to the power of recursive decomposition of problems. This takes some of the magic out of the idea, forcing concrete design decisions which may not be as general-purpose as my original vision.

So, all told, I reached a point where the idea was set aside.

Then I found Dyna. The basic idea of Dyna at first sounded outlandish: an AI programming language which manages to make no assumptions about the style of AI being implemented (i.e., it "takes no sides" in the various debates going on within AI), yet makes it possible to implement many standard AI algorithms in just a few lines. Furthermore, the concept of the language is very simple! It's not at all obvious that this should be possible; usually, you've got to make assumptions to gain anything. Yet, there it is: simple, (mostly) intuitive descriptions of textbook algorithms, often in fewer than 10 short lines of code. And, as if it were designed just to make me happy, it's got a priority queue at the core of its implementation.

The idea behind Dyna, in case you don't want to dive into the links, is to take rule-based programming / logic programming and completely scrap the idea that the rules have to pass around anything like truth values. A "rule" is just an expression which determines how some sort of value should be propagated. The value can be anything you like: it can be a truth value or a probability, but it can also be a neural-network activation, an instance count, or even a complicated data structure. In this way, it is very similar to probabilistic programming languages like BLP, but drops the "probabilistic" part. (Dyna also relies fundamentally on a concept of aggregation which is very similar to the concept of "combination" in BLP, similar to "summary" in the summary-product algorithm, and which also captures the idea of an "aggregate CPD" in statistical-relational modeling... but I'll let you read the papers.)
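
To give the flavour, here is a toy Python mimic of the value-propagation idea (this is not Dyna syntax, and the naive fixed-point loop is just for illustration). Two "rules" are hard-coded: the start node gets an initial value, and every node T aggregates, over edges S->T, the combination of S's value with the edge's value. Swapping the aggregate/combine operators changes what the very same rules compute.

```python
def solve(edges, start, aggregate, combine, zero, one):
    # Iterate the two rules to a fixed point:
    #   value(start)  aggregate= one
    #   value(T)      aggregate= combine(value(S), edge(S, T))
    value = {start: one}
    changed = True
    while changed:
        changed = False
        for (s, t), w in edges.items():
            if s in value:
                candidate = combine(value[s], w)
                new = aggregate(value.get(t, zero), candidate)
                if new != value.get(t, zero):
                    value[t] = new
                    changed = True
    return value

edges = {("a", "b"): 1.0, ("b", "c"): 2.0, ("a", "c"): 5.0}

# Shortest path: aggregate with min, combine with +.
print(solve(edges, "a", min, lambda x, y: x + y, float("inf"), 0.0))

# Reachability: the same rules over booleans (aggregate with or, combine with and).
bool_edges = {e: True for e in edges}
print(solve(bool_edges, "a", lambda x, y: x or y, lambda x, y: x and y, False, True))
```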

Dyna, however, is far from a "procedural logic": the philosophy of Dyna is to strictly separate the abstract program specification from the implementation level. The Dyna programmer specifies the logical (or "equational") relationships between different values, but it's up to the Dyna compiler to decide how to implement these effectively.

As with any "Solution X", the obvious thing to want next is "Smart X"; and, indeed, the papers refer repeatedly to the idea that Dyna will eventually use machine learning to figure out which implementation strategies work best for which kinds of programs. My reaction is that it seems a tad hopeful to think that ML can figure out what the human programmer has difficulty with. It seems safer to just say that we should buckle down and look for good rules the compiler can use to decide how to implement particular pieces of code, perhaps tuning the occasional threshold via experimentation...

...but what I really want to say is that Dyna should not have that strict divide between specification and implementation at all; instead, it should be built from the beginning with the reflection capabilities necessary for it to control its own execution strategies via run-time priority manipulation, and so on. This is only bolstered by the fact that Dyna was originally designed with dynamic programming in mind, and RL is a dynamic programming problem!

But we know where all of this leads. Right into the reflection problem. [Update: Ah, actually, they do plan on allowing this sort of thing! They call it "voodoo computation".]

So, all told, I've made very little progress on the whole "procedural logic" idea. However, I do think Dyna is very cool; it brings together a lot of ideas from textbook AI in a small and elegant space, perhaps revealing that there was more to them than we thought back in undergrad AI class. :)

PS:

One more semi-relevant idea. Clean is a programming language which is very similar to Haskell, slightly older, slightly less elegant, but with a much better way of thinking about IO (it seems). It allows side effects without compromising referential transparency by introducing a uniqueness type which allows only one reference to values which may change due to side-effects (so that we can't be surprised when the value changes). This seems, well, cleaner than the IO monad (though I'm no expert on IO in Haskell); a good way of interfacing the logical with the procedural, but no solution to generating nanosecond-level goal-oriented behavior.

(Dyna is actually a "pure" language, no side-effects allowed... though values can change over time in several ways!)
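
To show what I mean by "only one reference", here is a toy Python mimic of the uniqueness idea, checked at run time rather than enforced statically by the type system as Clean does it. Every operation consumes the current world handle and returns a fresh one; using a stale handle is an error, so no live reference can see a value change underneath it.

```python
class World:
    def __init__(self, state=None):
        self._state = {} if state is None else state
        self._alive = True

    def _consume(self):
        # Each handle may be used at most once; afterwards it is "stale".
        if not self._alive:
            raise RuntimeError("stale world reference used twice")
        self._alive = False
        return World(dict(self._state))

    def write(self, key, value):
        new = self._consume()
        new._state[key] = value
        return new                    # caller must thread the new handle along

    def read(self, key):
        new = self._consume()
        return new._state.get(key), new

w0 = World()
w1 = w0.write("greeting", "hello")
value, w2 = w1.read("greeting")
# w0.write("again", 1)  # would raise: w0 has already been consumed
```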

1 comment:

  1. excellent insight into an explorative expedition into ai, thanks... dyna does indeed sound interesting, and though your intuition seems to meet the same problem, i like the sound of it... i will check out your subsequent posts to see how things panned out with interest :)
