Continuing a thought in this post.
A normal axiomatic logic pins down which inferences are permissible, but makes no comment on how those inferences are to be made. For real implementations, we've come up with a plethora of inference strategies-- however, the axiomatic system is always treated as separate from the inference strategy. My idea of action logic is to merge the two, hopefully so that the system can do nifty things like reason smoothly about its own inference strategies.
In other words, what we want is to nicely integrate logic and programming.
Several attempts to do this already exist; Prolog is one example. Prolog does not, however, satisfy the desire to let deductive reasoning determine the flow of inference: Prolog just performs a backtracking resolution algorithm on Horn clauses, paying some heed to meta-logical control-flow statements that the programmer can insert.
However, I think we can take some inspiration from a predecessor of Prolog known as PLANNER. PLANNER worked through a backtracking mechanism similar to Prolog's, but more expressive. It used separate notions of "goal" and "fact" to drive reasoning: if a goal is present, the system starts reasoning about how to achieve it. This can involve the creation of subgoals which drive more reasoning, the execution of programs, and the performance of inference steps.
One could argue that Prolog makes this distinction, but it is much less explicit: since programs and statements are the same thing, the creation of subgoals always just corresponds to the conditions in an if-then. (In PLANNER one can specify arbitrary advice for what subgoal to create at a given step; it is not so strongly tied to the form of the logical assertions in the knowledge base.)
PLANNER also has the feature that facts can be removed from the knowledge base when they become false, i.e., the system does not need to index facts by time in order to model facts changing; this is important for efficiency, though it is less "logically pure" than the alternative.
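The erase-based approach can be sketched as a minimal knowledge base (illustrative Python; the class and method names are my own, not PLANNER's):

```python
class KnowledgeBase:
    """Facts are asserted and erased in place, PLANNER-style,
    rather than being indexed by the times at which they hold."""

    def __init__(self):
        self.facts = set()

    def assert_fact(self, fact):
        self.facts.add(fact)

    def erase(self, fact):
        # When a fact becomes false, remove it outright:
        # queries never have to reason about past states.
        self.facts.discard(fact)

    def holds(self, fact):
        return fact in self.facts

kb = KnowledgeBase()
kb.assert_fact(("on", "a", "b"))
kb.erase(("on", "a", "b"))        # block a was moved...
kb.assert_fact(("on", "a", "c"))  # ...so only the current state remains
```

The cost of this efficiency is exactly the loss of logical purity noted above: once a fact is erased, nothing in the database records that it ever held.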
This is a good overview of PLANNER for the interested. More recent systems such as SOAR, SCREAMER, ACL2, et cetera have many of the same features; I am really picking out PLANNER mainly for its historical significance.
In any case: I said that I was trying to merge the declarative and the procedural, yet I am praising PLANNER's explicit goal system over Prolog's implicit way of handling goals. Why? Because it is more powerful: we can provide more inference guidance in PLANNER than in Prolog.
I am not saying that PLANNER is a sufficient action logic, either! In fact, I'd say we should question the basic mechanism it shares with Prolog: if the system has goal G, and knows a statement of the form "S implies G," then it will automatically make S a subgoal. (For the curious: both PLANNER and Prolog will select one of several statements "X implies G," try that X, give up if that subgoal turns out to be infeasible, and try the next. This technique is applied recursively, creating sub-subgoals, et cetera. In Prolog this is the only mechanism at play; in PLANNER there are more conditions to be met before this happens, there are other mechanisms at work, and the programmer can add arbitrary mechanisms.)
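The shared mechanism can be rendered as a toy recursive backward chainer over rules of the form "S implies G" (a Python sketch of the propositional case; real Prolog additionally performs unification over variables, which I omit):

```python
# rules maps a goal to a list of alternative bodies (conjunctions
# of subgoals); "S implies G" becomes a body S in rules["G"].
# An empty body means the goal is simply a known fact.
rules = {
    "G":   [["S1"], ["S2a", "S2b"]],  # S1 -> G, and (S2a and S2b) -> G
    "S2a": [[]],
    "S2b": [[]],
}

def achieve(goal):
    """Try each statement "X implies goal" in turn; if the subgoal X
    proves infeasible, backtrack and try the next alternative."""
    for body in rules.get(goal, []):
        # Recursively create sub-subgoals for each conjunct of X.
        if all(achieve(subgoal) for subgoal in body):
            return True
    return False
```

Here `achieve("G")` fails on the first alternative (S1 is unsupported), backtracks, and succeeds via S2a and S2b.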
What happens when we vary this mechanism? Well, there are a lot of options. We could create a reinforcement-type agent which attempts to learn which inferences to draw and which subgoals to create, and when, getting rewarded whenever it meets a goal specified by the programmer. We could create a brute-force system which keeps proving theorems until it happens upon one of the form "P implies G," where P is a fully specified sequence of actions rather than just a subgoal-- the system then does P. We could produce a system like Hutter's asymptotically optimal algorithm by creating a mechanism which searches for plans of action, replacing the current plan whenever the system finds a proof that switching to a different plan will result in a better outcome.
The weakest choice is to just require that if the goal "do X" is asserted, and X is a sequence of actions, the system performs them immediately. This just means that the system is programmable. (X can be an arbitrary program, because it can include branching goto-type statements.) Any other choice of system can be modeled in this one by telling it to behave in that way.
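A minimal sketch of this weakest choice (the action vocabulary here is invented for illustration): asserting the goal "do X" simply runs the action sequence X, and a conditional-jump action is enough to make X an arbitrary program.

```python
def do(actions, state):
    """Execute a fully specified action sequence immediately.
    An action is ("set", key, value) or ("goto_if", key, index);
    the conditional jump provides the branching goto that makes
    action sequences into arbitrary programs."""
    pc = 0  # program counter over the action sequence
    while pc < len(actions):
        op = actions[pc]
        if op[0] == "set":
            _, key, value = op
            state[key] = value
            pc += 1
        elif op[0] == "goto_if":
            _, key, target = op
            pc = target if state.get(key) else pc + 1
    return state

# Asserting the goal 'do program' just means calling do():
program = [
    ("set", "x", 3),
    ("set", "done", True),
    ("goto_if", "stop", 1),  # branch not taken here, so execution ends
]
result = do(program, {})
```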
Stronger choices will provide more built-in useful behavior, but will also have more built-in assumptions.
My desire is to create an abstract framework, not something that pins things down to a specific implementation. So, it seems like what I'm looking for is an abstract "should-ness" concerning what inferences to draw when, and what subgoals to make when.
Much of the material discussed under the title of action logic in the Stanford Encyclopedia of Philosophy is somewhat relevant here, though the logics there are definitely not action logics in the sense I intend: they are just normal axiomatic systems, which must rely on outside inference procedures.
In general, my approach to AI is very much in line with probabilistic relational methods: I advocate a move to probabilistic reasoning wherever merely boolean reasoning is present, and to relational methods wherever merely propositional methods are present. In this case, that means the logical next step is to research probabilistic generalizations of PLANNER... it seems this would force many of the simplistic methods employed in PLANNER to be "broken open," so to speak, throwing out the chaff and determining what can be salvaged and improved for the probabilistic setting.