Sunday, November 15, 2015

Intuitionistic Intuitions

I've written quite a few blog posts about the nature of truth over the years. There's been a decent gap, though. This is partly because my standards have increased and I don't wish to continue the ramblings of past-me, and partly because I've moved on to other things. However, over this time I've noticed a rather large shift taking place in my beliefs about these things; specifically my reaction to intuitionist/constructivist arguments.

My feeling in former days was that classical logic is clear and obvious, while intuitionistic logic is obscure. I had little sympathy for the arguments in favor of intuitionism which I encountered. I recall my feeling: "Everything, all of these arguments given, can be understood in terms of classical logic -- or else not at all." My understanding of the meaning of intuitionistic logic was limited to the provability interpretation, which translates intuitionistic statements into classical statements about provability. I could see the theoretical elegance and appeal of the principle of harmony and of constructivism so long as the domain was pure mathematics, but as soon as logic was used to talk about the real world, the arguments seemed to fall apart; and surely the point (even when dealing with pure math) is eventually to make useful talk about the world? I wanted to say: all these principles are wonderful, but on top of all of this, wouldn't you like to add the Law of Excluded Middle? Surely it can be said that any meaningful statement is either true or false?

My thinking, as I say, has shifted. However, I find myself in the puzzling position of not being able to point to a specific belief which has changed. Rather, the same old arguments for intuitionistic logic merely seem much more clear and understandable from my new perspective. The purpose of this post, then, is to attempt to articulate my new view on the meaning of intuitionistic logic.

The slow shift in my underlying beliefs was punctuated by at least two distinct realizations, so I'll attempt to articulate those.

Language Is Incomplete

In certain cases it's quite difficult to distinguish what "level" you're speaking about with natural language. Perhaps the largest example of this is that natural language lacks the firm use/mention distinction made in formal logic. It's hard to know exactly when we're just arguing semantics (arguing about the meaning of words) vs. arguing about real issues. If I say "liberals don't necessarily advocate individual freedom", am I making a claim about the definition of the word "liberal", or an empirical claim about the habits of actual liberals? It's unclear out of context, and can even be unclear in context.

My first realization was that the ambiguity of language allows for two possible views about what kind of statements are usually being made:

  1. Words have meanings which can be fuzzy at times, but this doesn't matter too much. In the context of a conversation, we attempt to agree on a useful definition of the word for the discussion we're having; if the definition is unclear, we probably need to sort that out before proceeding. Hence, the normal, expected case is that words have concrete meanings referring to actual things.
  2. Words are social constructions whose meanings are partial at the best of times. Even in pure mathematics, we see this: systems of axioms are typically incomplete, leaving wiggle room for further axioms to be added, potentially ad infinitum. If we can't pin down the topic of discourse precisely even in math, how can we expect to do so in typical real-world cases? Therefore, the normal, expected case is that we're dealing with only incompletely-specified notions. Because our statements must be understood in this context, they have to be interpreted as mostly talking about these constructions rather than talking about the real world as such.
This is undoubtedly a false dichotomy, but it helped me see why one might begin to advocate intuitionistic logic. I might think that there is always a fact of the matter about purely physical items such as atoms and gluons, but when we discuss tables and chairs, such entities are sufficiently ill-defined that we're not justified in acting as if there is always a physical yes-or-no sitting behind our statements. Instead, when I say "the chair is next to the table", the claim is better understood as indicating that understood conditions for warranted assertibility have been met. Likewise, if I say "the chair is not next to the table", it indicates that conditions warranting denial have been met. There need not be a sufficiently precise notion available so that we would say the chair "is either next to the table or not" -- there may well be cases where we would not assent to either judgement.
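
To make this concrete, here is a toy sketch of the warranted-assertibility reading (my own illustration; the Judgement type, the thresholds, and the idea of measuring a distance are all made up for the example). The point is only that, at the level of judgements, a vague predicate like "next to" can leave a gap where neither assertion nor denial is licensed:

```python
from enum import Enum

class Judgement(Enum):
    ASSERT = "conditions warrant asserting the claim"
    DENY = "conditions warrant denying the claim"
    NO_VERDICT = "neither assertion nor denial is warranted"

def next_to_judgement(distance_cm, assert_below=30.0, deny_above=100.0):
    """Judge 'the chair is next to the table' from a measured distance.

    The two thresholds are deliberately separated: in the region between
    them, the ill-defined predicate 'next to' licenses neither assertion
    nor denial, so we never commit to 'next to or not next to'.
    """
    if distance_cm <= assert_below:
        return Judgement.ASSERT
    if distance_cm >= deny_above:
        return Judgement.DENY
    return Judgement.NO_VERDICT

print(next_to_judgement(20))   # Judgement.ASSERT
print(next_to_judgement(60))   # Judgement.NO_VERDICT
print(next_to_judgement(150))  # Judgement.DENY
```

Nothing forces the middle case to collapse into one of the other two; that's the sense in which no yes-or-no fact needs to sit behind the sentence.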

After thinking of it this way, I came to see it as a matter of convention -- a tricky semantic issue somewhat related to use/mention confusion.

Anti-Realism Is Just Rejection of Map/Territory Distinctions

Anti-realism is a position which some (most?) intuitionists take. Now, on the one hand, this sort of made sense to me: my confusion about intuitionism was largely along the lines of "but things are really true or false!", so it made a certain kind of sense for the intuitionist reply to be "No, there is no real!". The intuitionists seemed to retreat entirely into language. Truth is merely proof; and proof, in turn, is assertibility under agreed-upon conventions. (These views are not necessarily what intuitionists would say exactly, but it's the impression I somehow got of them; I don't have sources for these claims.)

If you're retreating this far, how do you know anything? Isn't the point to operate in the real world, somewhere down the line?

At some point, I read this Facebook post by Brienne, which got me thinking:

One of the benefits of studying constructivism is that no matter how hopelessly confused you feel, when you take a break to wonder about a classical thing, the answer is SO OBVIOUS. It's like you want to transfer just the pink glass marbles from this cup of water to that cup of water using chopsticks, and then someone asks whether pink marbles are even possible to distinguish from blue marbles in the first place, and it occurs to you to just dump out all the water and sort through them with your fingers, so you immediately hand them a pink marble and a blue marble. Or maybe it's more like catching Vaseline-coated eels with your bare hands, vs. catching regular eels with your bare hands. Because catching eels with your bare hands is difficult simpliciter. Yes, make them electric, and that's exactly what it's like to study intuitionism. Intuitionism is like catching vaseline-coated electric eels with your bare hands.
Posted by Brienne Yudkowsky on Friday, September 25, 2015


I believe she simply meant that constructivism is hard and classical logic is easy by comparison. (For the level of detail in this blog post, constructivism and intuitionism are the same.) However, the image with the marbles stuck with me. Some time later, I had the sudden thought that a marble is a constructive proof of a marble. The intuitionists are not "retreating entirely into language" as I previously thought. Rather, almost the opposite: they are rejecting a strict brain/body separation, with logic happening only in the brain. Logic becomes more physical.

Rationalism generally makes a big deal of the map/territory distinction. The idea is that just as a map describes a territory, our beliefs describe the world. Just as a map must be constructed by looking at the territory if it's to be accurate, our beliefs must be constructed by looking at the world. The correspondence theory of truth holds that a statement or belief is true or false based on a correspondence to the world, much as a map is a projected model of the territory it depicts, and is judged correct or incorrect with respect to this projection. This is the meaning of the map.

In classical logic, this translates to model theory. Logical sentences correspond to a model via an interpretation; this determines their truth values as either true or false. How can we understand intuitionistic logic in these terms? The standard answer is Kripke semantics, but while that's a fine formal tool, I never found it helped me understand the meaning of intuitionistic statements. Kripke semantics is a many-world interpretation; the anti-realist position seemed closer to no-world-at-all. I now see I was mistaken.
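
For the record, here is roughly what that formal tool looks like -- a minimal toy sketch of Kripke semantics for intuitionistic propositional logic (the encoding of formulas as nested tuples and the two-world example are my own illustrative choices, not anything standard). At the root world, neither p nor ¬p is forced, so p ∨ ¬p fails there:

```python
# Toy Kripke semantics for intuitionistic propositional logic.
# Formulas are atoms ("p") or nested tuples, e.g. ("or", "p", ("not", "p")).

def successors(world, order):
    """All worlds reachable from `world`; the order is assumed reflexive
    and transitive, so this includes `world` itself."""
    return [v for (u, v) in order if u == world]

def forces(world, formula, order, valuation):
    """Does `world` force `formula`?  `valuation` maps each world to the
    set of atoms it verifies, and must be persistent along the order."""
    if isinstance(formula, str):                    # atomic proposition
        return formula in valuation[world]
    op = formula[0]
    if op == "and":
        return (forces(world, formula[1], order, valuation) and
                forces(world, formula[2], order, valuation))
    if op == "or":
        return (forces(world, formula[1], order, valuation) or
                forces(world, formula[2], order, valuation))
    if op == "imp":   # A -> B: every later world forcing A also forces B
        return all(not forces(v, formula[1], order, valuation)
                   or forces(v, formula[2], order, valuation)
                   for v in successors(world, order))
    if op == "not":   # not A: no later world forces A
        return all(not forces(v, formula[1], order, valuation)
                   for v in successors(world, order))
    raise ValueError(f"unknown connective: {op}")

# Two worlds, w0 <= w1; p is undecided at w0 but verified at w1.
order = [("w0", "w0"), ("w0", "w1"), ("w1", "w1")]
valuation = {"w0": set(), "w1": {"p"}}

print(forces("w0", ("or", "p", ("not", "p")), order, valuation))  # False
print(forces("w1", ("or", "p", ("not", "p")), order, valuation))  # True
```

Excluded middle fails at w0 because p might yet be verified later -- which is exactly the "many-world" flavor I found unhelpful for the anti-realist reading.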

Anti-realism is not rejection of the territory. Anti-realism is rejection of the map-territory correspondence. In the case of a literal map, such a correspondence makes sense because there is a map-reader who interprets the map. In the case of our beliefs, however, we are the only interpreter. A map cannot also contain its own map-territory correspondence; that is fundamentally outside of the map. A map will often include a legend, which helps us interpret the symbols on the map; but the legend itself cannot be explained with a legend, and so on ad infinitum. The chain must bottom out somehow, with some kind of semantics other than the map-territory kind.

The anti-realist provides this with constructive semantics. This is not based on correspondence. The meaning of a sentence rests instead in what we can do with it. In computer programming terms, meaning is more like a pointer: we uncover the referent by a physical operation, finding the location in memory which we've been pointing to and accessing its contents. If we claim that 3*7=21, we can check the truth of this statement with a concrete operation. "3" is not understood as a reference to a mysterious abstract entity known as a "number"; 3 is the number. (Or if the classicist insists that 3 is only a numeral, then we accept this, while insisting that it is clear that numerals exist but unclear that numbers exist.)
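
A trivial sketch of what "check the truth of this statement with a concrete operation" means (the function name is mine, purely for illustration): the evidence for 3*7=21 is the carried-out multiplication itself, not a correspondence to abstract objects.

```python
def verify_product(a, b, claimed):
    """Constructive check: actually perform the operation and compare.
    The computation itself is the evidence for or against the claim."""
    return a * b == claimed

assert verify_product(3, 7, 21)       # the operation succeeds: assert
assert not verify_product(3, 7, 22)   # the operation fails: deny
```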

A proof worked out on a sheet of paper is a proof; it does not merely mean the proof. It is not a set of squiggles with a correspondence to a proof.

How does epistemology work in this kind of context? How do we come to know things? Well...

Bayesianism Needs No Map

The epistemology of machine learning has been pragmatist all along: all models are wrong; some are useful. A map-territory model of knowledge plays a major role in the way we think of modeling, but in practice? There is no measurement of map-territory correspondence. What matters is goodness-of-fit and generalization error. In other words, a model is judged by the predictions it makes, not by the mechanisms which lead to those predictions. We tend to expect models which make better predictions to have internal structure close to what's going on in the external world, but there is no precise notion of what this means, and none is required. The theorems of statistical learning theory and Bayesian epistemology (of which I am aware) do not make use of a map-territory concept, and the concept is not missed.
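
As a sketch of what this looks like in practice (the synthetic data and the least-squares line are illustrative choices of mine, not from any particular library or from this post): the model is scored purely on held-out predictions, and no quantity called "map-territory correspondence" appears anywhere.

```python
import random

# Judge the model by its predictions: fit on one part of the data,
# measure generalization error on a held-out part.
random.seed(0)
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.5)) for x in range(40)]
random.shuffle(data)
train, test = data[:30], data[30:]

def fit_line(points):
    """Ordinary least squares for y ~ a*x + b."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    a = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x, _ in points))
    return a, my - a * mx

def mean_squared_error(points, a, b):
    """Average squared prediction error -- the only score we keep."""
    return sum((y - (a * x + b)) ** 2 for x, y in points) / len(points)

a, b = fit_line(train)
print("held-out generalization error:", mean_squared_error(test, a, b))
```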

It's interesting that formal Bayesian epistemology relies so little on a map/territory distinction. The modern rationalist movement tends to advocate both rather strongly. While Bayesianism is compatible with map-territory thinking, it doesn't need it or really encourage it. This realization was rather surprising to me.
