I've been thinking for a while about why postmodernism might be a tempting intellectual position. There are certain logically sound arguments that support particular assertions associated with postmodernism, as well as certain practical considerations that push in the same direction. I will try to illustrate my position against postmodernism by giving the best arguments I can muster for it -- the intention is to show why we can get this far, but no further.
I won't be too surprised if someone reads this and says "but postmodernism doesn't try to go any further than that with the argument," or, further, "that's not what postmodernism is at all"; these arguments are based on my impression of postmodernist thought, not on a formal education in the subject.
Since this thing would be tl/dr (or worse, tl/dw) if I did it as one post, I'll just post an outline for now; the points below will become links as I write individual posts. (I may also come back and add/delete/edit points, of course.)
Practical Reasons
-Making room for others to doubt your beliefs
-The intellectual proliferation of hypotheticals
-Flexibility of language and definitions; anti-prescriptivism
-Disagreements are often based on matters of language (differing definitions) rather than matters of fact
-Such things as tables, chairs, etc. don't exist (strictly speaking)
-Scientific theories are approximations
Logical Reasons
-Löb's theorem
-Algorithmic complexity is relative
-We can always "interpret" talk in any logical system or system of axioms as just hypothetical first-order talk (by throwing out the naturalness constraint on interpretations)
Wednesday, April 14, 2010
Monday, April 12, 2010
More on Self-Modifying Logic
This is a follow-up to this post, and will simply attempt to fill in some details.
In order to properly refer to a system as "self-modifying logic," one would want the logic to modify itself, not merely to be a system of logic which gets modified by an overseer-type system, of course. I did not make it clear in the previous post how this would be accomplished.
It seems to me that a prerequisite would be what one might call an action logic: a logic which isn't just a way of representing and manipulating declarative knowledge, but which is also capable of making and executing decisions, including decisions about its own reasoning procedures.
Lacking a full-fledged action logic, I can only offer a first approximation (loosely inspired by Markus Hutter's algorithm for all well-defined problems): We have a set of axioms describing what kinds of outcomes are better and worse, and how our current action policy affects them. We have a reasoning system which uses some declarative logic and proves things in an exhaustive manner (Hutter uses Levin search). We keep track of the proofs of action-policy values, and at all times use the best policy currently known.
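As a toy sketch of that loop (my own rendering, not Hutter's algorithm): a prover object enumerates proofs of statements of the form "policy p achieves value at least v", and the agent always acts on the best policy proven so far. The names `prover.advance()`, `policy.act()`, and the `best` record are hypothetical.

```python
# Toy sketch of "act on the best provably-good policy found so far".
# prover.advance() is a hypothetical call that does a bounded chunk of
# exhaustive proof search and yields newly proven (policy, value bound) pairs.
def step(state, prover, best):
    for policy, proven_value in prover.advance():
        if proven_value > best["value"]:
            # a policy with a better proven value lower bound was found
            best["policy"], best["value"] = policy, proven_value
    # always act according to the best policy currently known
    return best["policy"].act(state)
```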
And, of course, the theorem proving method (initially an exhaustive search) should ideally be a part of the policy so that it can be modified. (This becomes more like Gödel machines, which I'm sure I link to often enough.)
Now, to make this usable, we've got to select policies based on estimated values rather than provable values; thus the logic should include probabilistic reasoning about the policies based on limited reasoning and limited testing of the methods. (I.e., the probabilistic reasoning should be broad enough to admit such spot-tests as evidence, something that requires a solution to the problem of logical uncertainty.)
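A very rough sketch of what such estimate-based selection might look like, with the weighting between limited reasoning and spot-test evidence left as an arbitrary placeholder (since the problem of logical uncertainty is left open here):

```python
# Rough sketch: combine a value estimate from limited reasoning with the results
# of a few spot-tests of the policy, then pick the policy with the best estimate.
# The fixed prior_weight is a placeholder, not a solution to logical uncertainty.
def estimated_value(reasoned_estimate, spot_test_results, prior_weight=5.0):
    n = len(spot_test_results)
    if n == 0:
        return reasoned_estimate
    empirical_mean = sum(spot_test_results) / n
    # shrink the empirical average toward the estimate from limited reasoning
    return (prior_weight * reasoned_estimate + n * empirical_mean) / (prior_weight + n)

def choose_policy(candidates):
    # candidates: list of (policy, reasoned_estimate, spot_test_results) triples
    return max(candidates, key=lambda c: estimated_value(c[1], c[2]))[0]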
So, obviously, many details need to be filled in here.
In any case, one of the available actions will be to change the logic itself, substituting a more useful one.
The utility of such changes should in principle be derived from the basic utility function (that is, our preference ordering on possible outcomes); however, for the discussion in the previous post, for analysis of possible behaviors of such systems, and possibly for practical implementations, a notion of goodness that applies directly to the logic itself is helpful.
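For concreteness, here is one hypothetical way the "change the logic" action could be tied back to the basic utility function, roughly in the spirit of Gödel machines; `expected_utility()` is an assumed estimator, not something defined above.

```python
# Hypothetical acceptance rule for the "change the logic" action: adopt the
# candidate logic only when its estimated expected utility (under the basic
# utility function / preference ordering) beats the current logic by a margin.
def maybe_swap_logic(current_logic, candidate_logic, expected_utility, margin=0.0):
    if expected_utility(candidate_logic) > expected_utility(current_logic) + margin:
        return candidate_logic  # the more useful logic replaces the old one
    return current_logic        # otherwise keep reasoning in the current logic
```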
So, questions:
--What might a full-featured "action logic" look like?
--Under what conditions will the system tend to add new mathematical truths to the set of axioms? Under what conditions will the system tend to add truth predicates, or in any case, metalanguages?
--Under what conditions will the system satisfy the backwards-compatibility requirement that I mentioned last time?
Thursday, April 1, 2010
More on Curiosity
Follow-up to this.
I took a look at this paper on curiosity, which includes a good review of more recent work than what I had read about when I wrote the previous post. One nice insight is that it is useful to split the value function into a part based on actual reward and a part based on an "exploration bonus". These can then be added together to make the final value. One can still think of the exploration bonus in terms of optimism, but another way to think of it is that the system is really just trying to calculate the benefit of exploring a particular option (that is, the learning benefit), and adding that to the direct benefit of choosing the route.
In this account, the confidence-interval method mentioned in the last post is seen as a method of estimating the learning benefit of a state as the distance between the most probable average utility and the top of the X%-confidence range for the average utility.
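As a simplified illustration of this reading (my own toy model, not the paper's method): if each option's payoff is modeled as a Bernoulli reward with a Beta(a, b) belief, the bonus is the gap between the estimated average utility and the top of a confidence range, here using the posterior mean as a stand-in for "the most probable average utility".

```python
# Simplified illustration: exploration bonus as the gap between the estimated
# average utility and the top of a confidence range, for a Bernoulli-reward
# option with a Beta(a, b) belief over its success rate.
from scipy.stats import beta

def option_value(a, b, confidence=0.95):
    mean = a / (a + b)                    # posterior mean of the average utility
    upper = beta.ppf(confidence, a, b)    # top of the X%-confidence range
    exploration_bonus = upper - mean      # estimated learning benefit
    return mean + exploration_bonus       # direct value plus exploration bonus
```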
A related estimate might be the expected information gain...
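For the same Beta-Bernoulli toy model, the expected information gain of one more observation can be written down exactly (again my own illustration, using the standard KL divergence between Beta distributions):

```python
# Expected information gain of one more observation of a Bernoulli option with a
# Beta(a, b) belief: the KL divergence from updated belief to current belief,
# averaged over the predicted outcome. (Illustration only; not from the paper.)
from scipy.special import betaln, digamma

def beta_kl(a1, b1, a2, b2):
    """KL(Beta(a1, b1) || Beta(a2, b2))."""
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * digamma(a1)
            + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

def expected_information_gain(a, b):
    p_success = a / (a + b)                    # predictive probability of reward
    gain_if_success = beta_kl(a + 1, b, a, b)  # update after observing a reward
    gain_if_failure = beta_kl(a, b + 1, a, b)  # update after observing no reward
    return p_success * gain_if_success + (1 - p_success) * gain_if_failure
```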
It's not yet clear to me how to make an estimate that approximates the true benefit in the limit.