Saturday, April 2, 2016

When Does One Program Embed Another?

When does one program simulate, or embed another? This is a question that has vaguely bothered me for some time. Intuitively, it seems fairly clear when one program is running inside another. However, it gets quite tricky to formalize. I'm thinking about this now because it's closely related to the type of "correspondence" needed for the correspondence theory of truth.

(This post also came out of discussions with @moralofstory and @alleleofgene.)

Easy Version: Subroutines

The simple case is when one program calls another. For this to be meaningful, we need a syntactic notion of procedure call, which many computing formalisms provide. In Lambda Calculus it's easy; however, Lambda Calculus is Turing-complete but not universal. (A universal Turing machine is needed for the invariance theorem of algorithmic information theory; merely Turing-complete formalisms like Lambda Calculus are insufficient for this, because they can introduce a multiplicative cost in description length.) For a universal machine, it's convenient to suppose that there's a procedure call much like that in computer chip architectures.

In any case, this is more or less useless to us. The interesting case is when a program is embedded in a larger program with no "markings" telling us where to look for it.

An Orthodox Approach: Reductions

For the study of computational complexity, a notion of reduction is often used. One problem class P is polynomial-time reducible to another Q if we can solve P in polynomial time when we have access to an oracle for Q problems. An "oracle" is essentially the same concept as a subroutine, but where we don't count computation time spent inside the subroutine. This allows us to examine how much computation we save when we're provided this "free" subroutine use.

This has been an extremely fruitful concept, especially for the study of NP-completeness / NP-hardness. However, it seems of little use to us here.
  1. Only input/output mappings are considered. The use of oracles allows us to quantify how useful a particular subroutine would be for implementing a specific input/output mapping. What I intuitively want to discuss is whether a particular program embeds another, not whether an input-output mapping (which can be implemented in many different ways) can be reduced to another. For example, it's possible that a program takes no input and produces no output, but simulates another program inside it. I want to define what this means rigorously.
  2. Polynomial-time reducibility is far too permissive, since it means all poly-time algorithms are considered equivalent (they can be reduced to each other). However, refining things further (trying things like quadratic-time reducibility) becomes highly formalism-dependent. (Different Turing machine formalisms can easily have poly-time differences.)

Bit-To-Bit Embedding

Here's a simplistic proposal, to get us off the ground. Consider the execution history of two Turing machines, A and B. Imagine these as 2D spaces, with time-step t and tape location l. The intuition is that B embeds A if there is a computable function embed(t,l) which takes a t,l for A and produces one for B, and the bits are always exactly the same in these two time+locations.
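Here's a minimal sketch of the check this definition suggests, treating each (finite prefix of an) execution history as a table of bits indexed by (t,l). All the names here (history_A, history_B, bits_match) are mine, purely for illustration:

    # Execution histories as dicts mapping (t, l) -> bit; embed maps an
    # index of A's history to an index of B's history.
    def bits_match(history_A, history_B, embed):
        """Return True if every recorded bit of A appears, unchanged, at its
        image under embed in B's history."""
        for (t, l), bit in history_A.items():
            target = embed(t, l)
            if target not in history_B or history_B[target] != bit:
                return False
        return True

    # Tiny example: B runs "one step behind" A on the same tape cell.
    history_A = {(0, 0): 1, (1, 0): 0}
    history_B = {(1, 0): 1, (2, 0): 0}
    print(bits_match(history_A, history_B, lambda t, l: (t + 1, l)))   # True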

The problem is, this can be a coincidence. embed(t,l) might be computing A completely, and finding a 0 bit or a 1 bit accordingly. This means there will always be an embedding, making the definition trivial. This is similar to the problem which is solved in "reductions" by restricting the reduction to be polynomial time. We could restrict the computational complexity of embed to try to make sure it's not cheating us by computing A. However, I don't think that works in our favor. I think we need to solve it by requiring causal structure.


My intuition is that causality is necessary to solve the problem, not just for this "internal simple embedding" thing, but more generally.

The causal structure is defined by the interventions (see Pearl's book, Causality). If we change a bit during the execution of a program, this has well-defined consequences for the remainder of the program execution. (It may even have no consequences whatsoever, if the bit is never used again.)

We can use all computable embed(t,l) as before, but now we don't just require that the bits at t,l in A are the same as the bits at embed(t,l) in B; we also require the interventions to be the same. That is, when we change bit t,l in A and embed(t,l) in B, then the other bits still correspond. (We need to do multi-bit interventions, not just single-bit; but I think infinite-bit interventions are never needed, due to the nature of computation.)
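Continuing the illustrative sketch above, the causal version checks the correspondence under some supplied family of finite multi-bit interventions (we can never check all of them at once); run_A and run_B are assumed functions that re-execute each machine with a given set of bits forced to chosen values:

    def causal_embedding_holds(run_A, run_B, embed, interventions):
        """Check bit agreement under each supplied finite intervention.
        run_A / run_B: {(t, l): forced_bit} -> execution history (dict).
        interventions: an iterable of such forcing dicts for A."""
        for forced_A in interventions:
            # Translate the intervention on A into one on B via embed.
            forced_B = {embed(t, l): bit for (t, l), bit in forced_A.items()}
            history_A = run_A(forced_A)
            history_B = run_B(forced_B)
            for (t, l), bit in history_A.items():
                if history_B.get(embed(t, l)) != bit:
                    return False
        return True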

Bit-To-Pattern Embedding

The embeddings recognized by the above proposal are far too limited. Most embeddings will not literally translate the program history bit for bit. For example, suppose we have a program which simulates Earth-like physics with enough complexity that we can implement a transistor-based computer as a structure within the simulation. B could be a physical implementation of A based on this simulation. There will not necessarily be specific t,l correspondences which give a bit-to-bit embedding. Instead, bits in A will map onto electrical charges in predictable locations within B's physics. Recognizing one such electrical charge in the execution history of B might require accessing a large number of bits from B's low-level physics simulation.

This suggests that we need embed(t,l) to output a potentially complicated pattern-matcher for B's history, rather than a simple t,l location.

A difficulty here is how to do the causal interventions on the pattern-matched structure. We need to "flip bits" in B when the bit is represented by a complicated pattern.

We can make useful extensions to simple embedding by restricting the "pattern matcher" in easily reversible ways. embed(t,l) can give a list of t,l locations in B along with a dictionary/function which classifies these as coding 1 or 0, or invalid. (This can depend on t,l! It doesn't need to be fixed.) An intervention changing t,l in A would be translated as any of the changes to the set embed(t,l) which change its classification. I'd say all the (valid) variations should work in order for the embedding to be valid. (There might be somewhat less strict ways to do it, though.)
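As a rough illustration of this restricted pattern-matcher (again with invented names), embed(t,l) now returns a list of B-locations together with a classifier that reads the bits found there as 0, 1, or invalid:

    def pattern_bits_match(history_A, history_B, embed):
        """embed(t, l) -> (locations_in_B, classify), where classify maps a
        tuple of bits read from those locations to 0, 1, or None (invalid)."""
        for (t, l), bit in history_A.items():
            locations, classify = embed(t, l)
            pattern = tuple(history_B.get(loc) for loc in locations)
            if classify(pattern) != bit:
                return False
        return True

    # An intervention on bit (t, l) of A then translates into any edit of the
    # bits at `locations` in B that flips the classifier's verdict; per the
    # proposal above, every such valid edit should preserve the correspondence.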

This approach is the most satisfying for me at the moment. It seems to capture almost all of the cases I want. I'm not totally confident that it rules out all the non-examples I'd want to rule out, though. We can make a "causality soup" program, which computes every Boolean expression in order, caching values of sub-expressions so that there's a causal chain from the simplest expressions to the most complicated. This program embeds every other program, by the definition here. I'm not sure this should be allowed: it feels like almost the same error as the claim that the digits of pi are Turing-complete because (if pi is normal, as it appears to be) you can find any finite bit string in them. While the set of Boolean expressions gives a lot of structure, it doesn't seem like as much structure as the set of all programs.

Another case which seems problematic: suppose B embeds a version of A, but wired to self-destruct if causal interventions are detected. This can be implemented by looking for a property which the real execution history always has (such as a balance of 1s and 0s that never goes beyond some point), and stopping work whenever the property is violated. Although this is intuitively still an embedding, it lacks some of the causal structure of A, and therefore would not be counted.

Pattern-To-Pattern Embedding

Bit-to-pattern embeddings may still be too inflexible to capture everything. What if we want some complex structures in A to map to simple structures in B, so long as the causal structure is preserved? An important example of this is a bit which is modified by the computation at time t, left untouched for a while, and then used again at time t+n. In terms of bit-to-pattern embeddings, each individual time t, t+1, t+2, ... t+n has to have a distinct element in B to map to. This seems wrong: it's requiring too much causal structure in B. We want to treat the bit as "one item" while it is untouched by the computation.

Rather than looking for an embed function, I believe we now need an embedding relation. I'm not sure exactly how this goes. One idea (sketched as data structures just after the list):
  • A "pattern frame" is an ordered set of t,l locations.
  • A "pattern" is an ordered set of bits (which can be fit in a frame of equal size).
  • An "association" is a frame for A, a frame for B, a single pattern for A (size matching the frame), and a set of patterns for B (size matching the frame).
  • An "embedding" is a program which enumerates associations. A proper embedding between A and B is one which:
    • Covers A completely, in the actual run and in all interventions. ("Covers" means that the associations contain a pattern frame with matching pattern for every t,l location in A.)
    • For all interventions on A, for all derived interventions on B, the execution of B continues to match with A according to the embedding.

This concept is asymmetric, capturing the idea that B embeds all the causal structure of A, but possibly has more causal structure besides. We could make symmetric variants, which might also be useful.

In any case, this doesn't seem to work as desired. Suppose B is the physics simulation mentioned before, but without any computer in it. B embeds A anyway, by the following argument. Let the pattern frames be the whole execution histories. Map the case where A has no interventions to the case where B has no interventions. Map the cases with interventions to entirely altered versions of B, containing appropriate A-computers with the desired interventions. This meets all the requirements, but intuitively isn't a real embedding of A in B.

Pattern-to-pattern embeddings seem necessary for this to apply to theories of truth, as I'm hoping for, because a belief will necessarily be represented by a complex physical sign. For example, a neural structure implementing a concept might have a causal structure which at a high level resembles something in the physical world; but, certainly, the internal causal structure of the individual neurons is not meant to be included in this mapping.

In any case, more work is needed.

Sunday, February 28, 2016

The Correspondence Theory

In my post on intuitionistic intuitions, I discussed my apparent gradual slide into constructivism, including some reasons to be skeptical about classical notions of truth. After a conversation with Sam Eisenstat, I've become less skeptical once again - mainly out of renewed hope that the classical account of truth, the correspondence theory, can be made mathematically rigorous.

Long-time readers might notice that this is a much different notion of truth than I normally talk about on this blog. I usually talk about mathematical truth, which deals with how to add truth predicates to a logical language, and has a lot to do with self-reference. Here, I'm talking about empirical truth, which has to do with descriptions of the world derived from observation. The two are not totally unrelated, but how to relate them is a question I won't deal with today.

Pragmatism

At bare minimum, I think, I accept a pragmatist notion of truth: beliefs are useful, some more than others. A belief is judged in terms of what it allows us to predict and control. The pragmatist denies that beliefs are about the world. Thinking in terms of the world is merely convenient.

How do we judge usefulness, then? Usefulness can't just be about what we think is useful, or else we'll include any old mistaken belief. It seems as if we need to refer to usefulness in the actual world. Pragmatism employs a clever trick to get around this. Truth refers to the model that we would converge to upon further investigation:

The real, then, is that which, sooner or later, information and reasoning would finally result in, and which is therefore independent of the vagaries of me and you. Thus, the very origin of the conception of reality shows that this conception essentially involves the notion of a COMMUNITY, without definite limits, and capable of an indefinite increase of knowledge. (Peirce 1868, CP 5.311).

This is saying that truth is more than just social consensus; it's a kind of idealized social consensus which would eventually be reached. The truth isn't what we currently believe, but we are still the ultimate judge.

If we make a mathematical model out of this, we can get some quite interesting results. Machine learning is full of results like this: we connect possible beliefs with a loss function which tells us when we make an error and how much it costs us to make different kinds of errors, and then prove that a particular algorithm has bounded regret. Regret is the loss relative to some better model; bounded regret means the total loss will not be too much worse than that of the best model in some class of models.
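As a toy illustration of regret (with made-up numbers and a tiny "model class" of constant predictors, nothing resembling a real learning algorithm):

    import math

    def log_loss(prob_of_one, outcome):
        p = prob_of_one if outcome == 1 else 1 - prob_of_one
        return -math.log(max(p, 1e-12))

    outcomes      = [1, 1, 0, 1, 1, 0, 1, 1]
    learner_probs = [0.5, 0.6, 0.6, 0.65, 0.7, 0.7, 0.72, 0.75]
    model_class   = [0.25, 0.5, 0.75]   # constant predictors

    learner_loss = sum(log_loss(p, y) for p, y in zip(learner_probs, outcomes))
    best_loss = min(sum(log_loss(q, y) for y in outcomes) for q in model_class)
    print(round(learner_loss - best_loss, 3))   # the regret a bound would control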

The ideal model is one with minimum loss; this is the model which we would assent to after the fact, which we'd want to tell to our previous self if we could. Since we can't have this perfect belief, the principle of bounded regret is a way to keep the damage to an acceptably low level. This might not be exactly realistic in life (the harms of bad beliefs might not be bounded), but at least it's a useful principle to apply when thinking about thinking.

Bayesianism

The way I see it, Bayesian philosophy is essentially pragmatist. The main shift from bounded-regret type thinking to Bayesian thinking is that Bayesians are more picky about which loss function is employed: it should be a proper scoring rule, ideally the logarithmic scoring rule (which has strong ties to information theory).
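A quick numerical check of what "proper" means for the logarithmic rule: if the true chance of an event is p, then reporting q = p minimizes expected log-loss (the numbers below are arbitrary):

    import math

    def expected_log_loss(true_p, reported_q):
        return -(true_p * math.log(reported_q)
                 + (1 - true_p) * math.log(1 - reported_q))

    true_p = 0.7
    reports = [0.5, 0.6, 0.7, 0.8, 0.9]
    print(min(reports, key=lambda q: expected_log_loss(true_p, q)))   # 0.7: honesty wins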

Bayesianism has a stronger separation between knowledge and goals than pragmatism. Pragmatism says that the aim of knowledge is to predict and manipulate the world. Bayesianism says "Wait a minute... predict... and manipulate. Those two sound distinct. Let's solve those problems separately." Knowledge is about prediction only, and is scored exclusively on predictive accuracy. Bayesian decision theory distinguishes between the probability function, P(x), and the utility function, U(x), even though in the end everything gets mixed together in the expected value.

Perhaps the reason this separation is so useful is that the same knowledge can be useful toward many different goals. Even though it's easy to find knowledge which doesn't fit this pattern (which is more useful for some goals than others), the abstraction is useful enough to persist because before you've solved a problem, you can't predict which pieces of knowledge will or won't be useful -- so you need a usefulness-agnostic notion of knowledge to some extent.

I think many Bayesians would also go further, saying truth has to do with a map-territory distinction and so on. However, this concept doesn't connect very strongly with the core of Bayesian techniques. A pragmatic notion of truth ("all models are wrong, but some are useful") seems to be closer to both theory and practice.

Still, this is an extremely weak notion of "truth". There doesn't need to be any notion of an external world. As in the pragmatist view quoted earlier, "the real" is a sort of convergence of belief. Knowledge is about making predictions, so all belief is fundamentally about sense-data; the view can be very solipsistic.

External Things

If all we have access to is our direct sense-data, what does it mean to believe in external things? One simplistic definition is to say that our model contains additional variables beyond the sense-data. In statistics, these are called "hidden variables" or "latent variables": stuff we can't directly observe, but which we put in our model anyway. Why would we ever do this? Well, it turns out to be really useful for modeling purposes. Even if only sense-data is regarded as "real", almost any approach will define probabilities over a larger set of variables.

This kind of "belief in external objects" is practically inevitable. Take any kind of black-box probability distribution over sense-data. If you open it up, there must be some mechanics inside; unless it's just a big look-up table, we can interpret it as a model with additional variables.

The pragmatist says that these extra beliefs inside the black box are true to the extent that they are useful (as judged in hindsight). The realist (meaning, the person who believes in external things) responds that this notion of truth is insufficient.

Imagine that one of the galaxies in the sky is fake: a perfect illusion of a galaxy, hiding a large alien construct. Further, let's suppose that we can never get to that galaxy with the resources available to us. Whatever is really there behind the galaxy-illusion has the mass of a galaxy, and everything fits correctly within the pattern of surrounding galaxies. We have no reason to believe that the galaxy is fake, so we will say that there's a galaxy there. This belief will never change, no matter how much we investigate.

For the pragmatist, this is fine. The belief has no consequence for prediction or action. It's "true" in every respect that might ever be important. It still seems to be a false belief, though. What notion of truth might we be employing, which marks this belief false?

I think part of the problem is that from our perspective, now, we don't know which things we will have an opportunity to observe or not. We want to have correct beliefs for anything we might observe. Because we can't really quantify that, we want to have correct beliefs for everything.

This leads to a model in which "external things" are anything which could hypothetically be sense-data. Imagine we have a camera which we can aim at a variety of things. We're trying to predict what we see in the camera based on where we swing it. We could try to model the flow of images recorded by the camera directly. However, this is not likely to work well. Instead, we should build up a 3D map of the environment. This map can predict observations at a much larger variety of angles than we have ever observed. Some of these will be angles that we could never observe -- perhaps places too high for us to lift the camera to, or views inside small places we cannot fit the camera. We won't have enough data to construct a 3D model for all of those unobservable angles, either; but, it makes sense to talk (speculatively) about what would be there if we could see it.

This is much more than the "black box" model mentioned earlier, where we look inside and see that there are some extra variables doing something. Here, the model itself explicitly presents us with spurious "predictions" about things which will never be sense-data, as a natural result of attempting to model the situation.

I think "external things" are like that. We use models which provide explicit predictions for things, outside of any particular idea of how we might ever measure those things. Like the earlier argument about Bayesians separating P(x) from U(x), this is an argument from convenience: we don't know ahead of time which things will be observable or how, so it turns out to be very useful to construct models which are agnostic to this.

Correspondence

The correspondence theory of truth is the oldest, and still the most widely accepted. We're now in a position to outline and defend a correspondence theory, but first, I'd like to expand a bit more on the concerns I'm trying to address (which I described somewhat in intuitionistic intuitions).

The Problem to Be Solved

According to the correspondence theory, truth is like the relationship between a good map and the territory it describes: if you understand the scale of the map, the geographic area it is representing, and the meaning of the various symbols employed by the map, you can understand where things are and navigate the landscape. If you are checking the map for truth, you can travel to areas depicted on the map and check whether the map accurately records what's there.

The correspondence theory of truth says that beliefs are like maps, and reality is the territory being described. We believe in statements which describe things in the external world. These beliefs are true when the descriptions are accurate, and false when inaccurate. Seems simple, right?

I've been struggling with this for several reasons:
  1. The Bayesian understanding of knowledge has no obvious need for map-territory correspondence. If "truth" lacks information-theoretic relevance, why would we talk about such a thing?
  2. Given perfect knowledge of our human beliefs, and perfect knowledge of the external world, it's not clear that a single correct correspondence can be found to decide whether beliefs are true or false. A map which must be generously interpreted is no map at all; take any random set of scribbles and you can find some interpretation in the landscape.
  3. Even if we can account for that somehow, it's not clear what "territory" our map should be corresponding to. Although the universe contains many "internal" views (people observing the universe from the inside), it's not clear that there is any objective "external" view to compare things to. To understand such a view, we would have to imagine an entity sitting outside of the universe and observing it.

Point #1 was largely what I've been addressing in the essay up to this point. I propose that "truth" is a notion of hypothetical predictive accuracy, over a space of things which are in-principle observable, even if we know we cannot directly observe those things. We use truth as a convenient "higher standard" which implies good predictive accuracy. This ends up being useful because in practice we don't know beforehand what variables will be observable or closely connected with observation. The hypothesis of an external world has been validated again and again as a predictor of our actual observations.

In order to address point #2, we need a mathematically objective way of determining the correspondence by which to judge truth. In our story about maps and territories, a human played an important role. The human interprets the map: we understand the correspondence, because we can check whether the map is true. It's not possible for the correspondence to be contained in the map; no matter how much is written on the map to indicate scale, meaning of symbols, and so on, a human needs to understand the symbolic language in which such an explanation is written. This metaphor breaks down when we attempt to apply it to a human's beliefs. It seems that the human, or at least the human society at large, needs to contain everything necessary to interpret beliefs.

Point #3 will similarly be addressed by a formal theory.

Now for some formalism. I'll keep things fairly light.

Solution Sketch

Suppose we observe a sequence of bits. We call these the observable variables. We want to predict that sequence of bits as accurately as possible, by choosing from a set of possible models which make different predictions. However, these models may also predict hidden variables which we never observe, but which we hypothesize.

Definition. A model declares a (possibly infinite) collection of Boolean-valued variables, which must include the observation bits. It also provides a set of functions which determine some variables from other variables, possibly probabilistically. These functions must compose into a complete model, i.e., giving probabilities to all of the variables in the model. A model with only the observed variables is called a fully observable model; otherwise, a model is a hidden-variable model.

Note that because the global probability distribution is made of local functions which get put together, we've got more than just a probability distribution; we also have a causal structure. I'll explain why we need this later.

(I said I'd keep this light! More technically, this means that the sigma-algebra of the probability distribution can contain events which are distinct despite allowing the same set of possible observation-bit values; additionally, a directed acyclic graph is imposed on the variables, which determines their causal structure.)

Any hidden-variable model can be transformed into a fully observable model which makes the exact same predictions, by summing out the hidden variables and computing the probability of the next observation purely from the observations so far. Why, then, might an agent prefer the hidden-variable version? My answer is that adding hidden variables can be computationally more convenient for a number of reasons. Although we can always specify a probability distribution only on the history, there will usually be intermediate variables which could be useful to many predictions. These can be stored in hidden variables.
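A minimal sketch of this point: a toy hidden-variable model (one hidden coin H, noisy observations of it) whose predictions can be computed purely from the observation history by summing H out. The model and numbers are invented for illustration:

    # Hidden coin H is flipped once; each observed bit equals H with probability 0.9.
    P_H = {0: 0.5, 1: 0.5}                       # prior over the hidden variable

    def p_obs_given_h(bit, h):
        return 0.9 if bit == h else 0.1

    def predict_next(history):
        """P(next bit = 1 | history), with H summed out."""
        weights = {h: P_H[h] for h in P_H}
        for bit in history:                      # absorb the evidence
            for h in weights:
                weights[h] *= p_obs_given_h(bit, h)
        total = sum(weights.values())
        posterior = {h: w / total for h, w in weights.items()}
        return sum(posterior[h] * p_obs_given_h(1, h) for h in posterior)

    print(round(predict_next([1, 1, 0, 1]), 3))

The two presentations make identical predictions; the hidden-variable version just keeps the useful intermediate quantity (here, the posterior over H) around instead of reconstructing everything from the raw history each time.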

Consider again the example of a camera moving around a 3D environment. It's possible to try and predict the observations for a new angle as a pure function of the history. We would try to grab memories from related angles, and then construct a prediction on the fly from those; sometimes simple interpolation would give a good enough prediction, but in general, we need to build up a 3D model. It's better to keep the 3D models in memory and keep building them up as we receive more observations, rather than trying to create them from sensory memories on the fly.

Is this an adequate reason for believing in an external world? I'm implying that our ontology is a rather fragile result of our limited computational resources. If we did not have limited computing power, then we would never need to postulate hidden variables. Maybe this is philosophically inadequate. I think this might be close to our real reason for believing in an external world, though.

Now, how do we judge which model is true?

In order to say that a model is true, we need to compare it to something. Suppose some model R represents the real functional structure of the universe. Unlike other models, we require that R is fully deterministic, giving us a fixed value for every variable; I'll call this the true state, S. (It seems to me like a good idea to assume that reality is deterministic, but this isn't a necessary part of the model; the interested reader might modify this part of the definition.)

Loose Definition. A model M is meaningful to the extent that it "cuts reality at the joints", introducing hidden variables whose values correspond to clusters in the state-space of R. The beliefs about the hidden variables in M are true to the extent that they pick out the actual state S.

Observations:
  • We're addressing issue #3 by comparing a model to another model. This might at first seem like cheating; it looks like we're merely judging one opinion about the universe via some other opinion. However, the idea isn't that R is supplied by some external judge. Rather, R is representing the actual state of affairs. We don't ever have full access to R; we will never know what it is. All we can ever do is judge someone else's M by our probability distribution over possible R. That's what it looks like to reason about truth under uncertainty. We're describing external reality in terms of a hypothetical true model R; this doesn't mean reality "is" R, but it does assume reality is describable.
  • This is very mathematically imprecise. I don't know yet how to turn "cut reality at the joints" into math here. However, it seems like a tractable problem which might be addressed by tools from information theory like KL-divergence, mutual information, and so on.
  • Because R determines a single state S, the concept of "state-space" needs to rely on the causal structure of R. Correspondence between M and R needs to look something like "if I change variable x in R, and change variable y in M, we see the same cascade of consequences on other corresponding states in both models."
  • The notion of correspondence probably needs to be a generalization of program equivalence. This suggests truth is uncomputable even if we have access to R. (This is only a concern for infinite models, however.)
  • The correspondence between M and R is an interpretation of our mental states as descriptions of states of reality.
  • In order to be valid, the correspondence needs to match the observable variables in M with the observable variables in R. Other variables are matched based on the similarity of the causal structure, and the constraint imposed by exactly matching the observable variables.
To show that my loose definition has some hope of being formalized, here is a too-tight version:

Strict Definition. A model M is meaningful if there is a unique injection of the variables of M into those of R which preserves the causal structure: if x=f(y,z) in M, and the injection takes x,y,z to a,b,c, then we need a=g(b,c,...) in R. (The "..." means there can be more variables which a depends on.) Additionally, observable variables in M must be mapped onto the same observable variables in R. Furthermore, the truth of a belief is defined by the log-loss with respect to the actual state S.
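To make the structural half of this concrete, here is a rough sketch of the check, with each model given only as a parent structure (which variables each variable is computed from); it is illustrative only and ignores the functions themselves:

    def is_structure_preserving(parents_M, parents_R, injection, observables):
        """parents_M / parents_R: dicts mapping each variable to the tuple of
        its direct causes. injection maps M's variables into R's."""
        # The map must be injective.
        if len(set(injection.values())) != len(injection):
            return False
        # Observable variables must map onto the same observable variables.
        if any(injection[v] != v for v in observables):
            return False
        # If y is a parent of x in M, then injection(y) must be among the
        # parents of injection(x) in R (R may have extra parents: the "...").
        for x, parents in parents_M.items():
            image_parents = set(parents_R.get(injection[x], ()))
            if not {injection[p] for p in parents} <= image_parents:
                return False
        return True

    parents_M = {"obs1": ("x",), "x": ()}
    parents_R = {"obs1": ("a", "b"), "a": (), "b": ()}
    print(is_structure_preserving(parents_M, parents_R,
                                  {"obs1": "obs1", "x": "a"}, {"obs1"}))   # True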

This definition is too strict mainly because it doesn't let variables in our model M stand for clusters in R. This means references to aggregate objects such as tables and chairs aren't meaningful in this definition. However, it does capture the important observation that we can have a meaningful and true model which is incomplete: because the correspondence is an injection, there may be some variables in R which we have no concept of in M. This partly explains why our beliefs need to be probabilistic; if the variable "a" is really set by the deterministic function a=g(b,c,d,e) but we only model "b" and "c", our function x=f(y,z) can have apparently stochastic behavior.

Let's take stock of how well we've addressed the three concerns I listed before.

#1: Bayesianism doesn't need map-territory correspondence.

I think I've given a view that a pragmatist can accept, but which goes beyond pragmatism. Bayesian models will tend to talk about hidden variables anyway. This can be pragmatically justified. A pragmatist might say that the correspondence theory of truth is a feature of the world we find ourselves in; it could have been that we can model experience quite adequately without hidden variables, in which case the pragmatist would have rejected such a theory as confused. Since our sensory experience in fact requires us to postulate hidden variables to explain it very well, the correspondence theory of truth is natural to us.

#2: It's not clear that there will be a unique correspondence. A map which must be interpreted generously is not a good map.

I think I've only moderately succeeded in addressing this point. My strict definition requires that there is a unique injection. Realistically, this seems far too large a requirement to impose upon the model. Perhaps a more natural version of this would allow parts of a model to be meaningless, due to having non-unique interpretations. I think we need to go even further, though, allowing degrees of meaning based on how vague a concept is. Human concepts will be quite vague. So, I'm not confident that a more worked-out version of this theory would adequately address point #2.

#3: It's not clear that there IS an objective external view of reality to judge truth by.

Again, I think I've only addressed this point moderately well. I use the assumption that reality is describable to justify hypothesizing an R by which we judge the truth of M. Is there a unique R we can use? Most likely not. Can the choice of R change the judgement of truth in M? I don't have the formal tools to address such a question. In order for this to be a very good theory of truth, it seems like we need to be able to claim that some choices of R are right and some are wrong, and all the right ones are equivalent for the purpose of evaluating M.

Saturday, December 19, 2015

Levels and Levels

A system of levels related to my idea of epistemic/intellectual trust:
  1. Becoming defensive if your idea is attacked. A few questions might be fine, but very many feels like an interrogation. Objections to ideas are taken personally, especially if they occur repeatedly. This is sort of the level where most people are at, especially about identity issues like religion. Intellectuals can hurt people without realizing it when they try to engage people on such issues. The defensiveness is often a rational response to an environment in which criticism very often is an attack, and arguments like this are used as dominance moves.
  2. Competitive intellectualism. Like at level 1, debates are battles and arguments are soldiers. However, at level 2 this becomes a friendly competition rather than a problem. Intelligent objections to your ideas are expected and welcome; you may even take trollish positions in order to fish for them. Still, you're trying to win. Pseudo-legalistic concepts like burden of proof may be employed. Contrarianism is encouraged; the more outrageous the belief you can successfully defend, the better. At this level  of discourse, scientific thought may be conflated with skepticism. The endpoint of this style of intellectualism can be universal skepticism as a result.
  3. Intellectual honesty. Sorting out possibilities. Exploring all sides of an issue. This can temporarily look a lot like level 2, because taking a devil's-advocate position can be very useful. However, you never want to convey stronger evidence than exists. The very idea of arguing one side and only one side, as in level 2, is crazy -- it would defeat the point. The goal is to understand the other person's thinking, get your own thoughts across, and then try to take both forward by thinking about the issue together. You don't "win" and "lose"; all participants in the discussion are trying to come up with arguments that constrain the set of possibilities, while listing more options within that and evaluating the quality of different options. If a participant in the discussion appears to be giving a one-sided argument for an extended period, it's because they think they have a point which hasn't been understood and they're still trying to convey it properly.

This is more nuanced than the two-level view I articulated previously, but it's still bound to be very simplistic compared to the reality. Discussions will mix these levels, and there are things happening in discussions which aren't best understood in terms of these levels (such as storytelling, jokes...). People will tend to be at different levels for different sets of beliefs, and with different people, and so on. Politics will almost always be at level 1 or 2, while it hardly even makes sense to talk about mathematics at anything but level 3. Higher levels are in some sense better than lower levels, but this should not be taken too far. Each level is an appropriate response to a different situation, and problems occur if you're not appropriately adapting your level of response to the situation. Admitting the weakness of your argument is a kind of countersignaling which can help shift from level 2 to level 3, but which can be ineffective or backfire if the conversation is stuck at level 2 or 1.

Here's an almost unrelated system of levels:
  1. Relying on personal experience and, to a lesser extent, anecdotal evidence, as opposed to statistics and controlled studies. (This is usually looked down upon by those with a scientific mindset, but again I'll be arguing that these levels shouldn't be taken as a scale from worse to better.) This is a human bias, since a personal example or story from a friend (or friend of friend) will tend to stick out in memory more vividly than numbers will. Practitioners of this approach to evidence can often be heard saying things like "you can prove anything with statistics" (which is, of course, largely true!).
  2. Relying on science, but only at the level it's conveyed in popular media. This is often really, really misleading. What the science says is often misunderstood, misconstrued, or ignored.
  3. Single study syndrome. Beware the man of one study. The habit/tactic of taking the conclusion of one scientific study as the truth. While looking at the actual studies is better than listening to the popular media, this replicates the same mistake that those who write the popular media articles are usually making. It ignores the fact that studies are often not replicated, and can show conflicting results. Another, perhaps even more important reason why single study syndrome is dangerous is because you can fish for a study to back up almost any view you like. You can do this without even realizing it; if you google terms related to what you are thinking, it will often result in information confirming those things. To overcome this, you've typically got to search for both sides of the argument. But what do you do when you find confirming evidence on both sides?
  4. Surveying science. Looking for many studies and meta-studies. This is, in some sense, the end of the line; unless you're going to break out the old lab coat and start doing science yourself, the best you can do is become acquainted with the literature and make an overall judgement from the disparate opinions there. Unfortunately, this can still be very misleading. A meta-analysis is not just a matter of finding all the relevant studies and adding up what's on one side vs the other, although this much effort is already quite a lot. Often the experiments in different studies are testing for different things. Determining which statistics are comparable will tend to be difficult, and usually you'll end up making somewhat crude comparisons. Even when studies are easily comparable,  due to publication bias, a simple tally can look like overwhelming evidence where in fact there is only chance (HT @grognor for reference). And when an effect is real, it can be due to operational definitions whose relation to real life is difficult to pin down; for example, do cognitive biases which are known to exist in a laboratory setting carry over to real-world decision-making?

Due to the troublesome nature of scientific evidence, the dialog at level 4 can sound an awful lot like level 1 at times. However, keep in mind that level 4 takes a whole lot more effort than level 1. We can put arbitrary amounts of effort into fact-checking any individual belief. While it's easy to criticize the lower levels and say that everyone should be at level 4 all the time (and they should learn to do meta-studies right, darnit!), it's practically impossible to put that amount of effort in all the time. When one does, one is often confronted with a disconcerting labyrinth of arguments and refutations on both sides, so that any conclusion you may come to is tempered by the knowledge that many people have been very wrong about this same thing (for surprising reasons).

While you'll almost certainly find more errors in your thinking if you go down that rabbit hole, at some point you've got to stop. For some kinds of beliefs, the calculated point of stopping is quite early; hence, we're justified in staying at level 1 for many (most?) things. It may be easy to underestimate the amount of investigation we need to do, since long-term consequences of wrong beliefs are unpredictable (it's easier to think about only short-term needs) and it's much easier to see the evidence currently in favor of our position than the possible refutations which we've yet to encounter. Nonetheless, only so much effort is justified.

Sunday, November 15, 2015

Intuitionistic Intuitions

I've written quite a few blog posts about the nature of truth over the years. There's been a decent gap, though. This is partly because my standards have increased and I don't wish to continue the ramblings of past-me, and partly because I've moved on to other things. However, over this time I've noticed a rather large shift taking place in my beliefs about these things; specifically my reaction to intuitionist/constructivist arguments.

My feeling in former days was that classical logic is clear and obvious, while intuitionistic logic is obscure. I had little sympathy for the arguments in favor of intuitionism which I encountered. I recall my feeling: "Everything, all of these arguments given, can be understood in terms of classical logic -- or else not at all." My understanding of the meaning of intuitionistic logic was limited to the provability interpretation, which translates intuitionistic statements into classical statements. I could see the theoretical elegance and appeal of the principle of harmony and constructivism as long as the domain was pure mathematics, but as soon as we use logic to talk about the real world, the arguments seemed to fall apart; and surely the point (even when dealing with pure math) is to eventually make useful talk about the world? I wanted to say: all these principles are wonderful, but on top of all of this, wouldn't you like to add the Law of Excluded Middle? Surely it can be said that any meaningful statement is either true, or false?

My thinking, as I say, has shifted. However, I find myself in the puzzling position of not being able to point to a specific belief which has changed. Rather, the same old arguments for intuitionistic logic merely seem much more clear and understandable from my new perspective. The purpose of this post, then, is to attempt to articulate my new view on the meaning of intuitionistic logic.

The slow shift in my underlying beliefs was punctuated by at least two distinct realizations, so I'll attempt to articulate those.

Language Is Incomplete

In certain cases it's quite difficult to distinguish what "level" you're speaking about with natural language. Perhaps the largest example of this is that there isn't the same kind of use/mention distinction which is firmly made in formal logic. It's hard to know exactly when we're just arguing semantics (arguing about the meaning of words) vs arguing real issues. If I say "liberals don't necessarily advocate individual freedom" am I making a claim about the definition of the word liberal, or an empirical claim about the habits of actual liberals? It's unclear out of context, and can even be unclear in context.

My first realization was that the ambiguity of language allows for two possible views about what kind of statements are usually being made:

  1. Words have meanings which can be fuzzy at times, but this doesn't matter too much. In the context of a conversation, we attempt to agree on a useful definition of the word for the discussion we're having; if the definition is unclear, we probably need to sort that out before proceeding. Hence, the normal, expected case is that words have concrete meanings referring to actual things.
  2. Words are social constructions whose meanings are partial at the best of times. Even in pure mathematics, we see this: systems of axioms are typically incomplete, leaving wiggle room for further axioms to be added, potentially ad infinitum. If we don't pin down the topic of discourse precisely in math, how can we think that's the case in typical real-world cases? Therefore, the normal, expected case is that we're dealing with only incompletely-specified notions. Because our statements must be understood in this context, they have to be interpreted as mostly talking about these constructions rather than talking about the real world as such.
This is undoubtedly a false dichotomy, but it helped me see why one might begin to advocate intuitionistic logic. I might think that there is always a fact of the matter about purely physical items such as atoms and gluons, but when we discuss tables and chairs, such entities are sufficiently ill-defined that we're not justified in acting as if there is always a physical yes-or-no sitting behind our statements. Instead, when I say "the chair is next to the table" the claim is better understood as indicating that understood conditions for warranted assertibility have been met. Likewise, if I say "the chair is not next to the table" it indicates that conditions warranting denial have been met. There need not be a sufficiently precise notion available so that we would say the chair "is either next to the table or not" -- there may well be cases where we would not assent to either judgement.

After thinking of it this way, I was seeing it as a matter of convention -- a tricky semantic issue somewhat related to use/mention confusion.

Anti-Realism Is Just Rejection of Map/Territory Distinctions

Anti-realism is a position which some (most?) intuitionists take. Now, on the one hand, this sort of made sense to me: my confusion about intuitionism was largely along the lines "but things are really true or false!", so it made a certain kind of sense for the intuitionist reply to be "No, there is no real!". The intuitionists seemed to retreat entirely into language. Truth is merely proof; and proof in turn is assertability under agreed-upon conventions. (These views are not necessarily what intuitionists would say exactly, but it's the impression I somehow got of them. I don't have sources for those things.)

If you're retreating this far, how do you know anything? Isn't the point to operate in the real world, somewhere down the line?

At some point, I read this facebook post by Brienne, which got me thinking:

One of the benefits of studying constructivism is that no matter how hopelessly confused you feel, when you take a break to wonder about a classical thing, the answer is SO OBVIOUS. It's like you want to transfer just the pink glass marbles from this cup of water to that cup of water using chopsticks, and then someone asks whether pink marbles are even possible to distinguish from blue marbles in the first place, and it occurs to you to just dump out all the water and sort through them with your fingers, so you immediately hand them a pink marble and a blue marble. Or maybe it's more like catching Vaseline-coated eels with your bare hands, vs. catching regular eels with your bare hands. Because catching eels with your bare hands is difficult simpliciter. Yes, make them electric, and that's exactly what it's like to study intuitionism. Intuitionism is like catching vaseline-coated electric eels with your bare hands.
Posted by Brienne Yudkowsky on Friday, September 25, 2015


I believe she simply meant that constructivism is hard and classical logic is easy by comparison. (For the level of detail in this blog post, constructivism and intuitionism are the same.) However, the image with the marbles stuck with me. Some time later, I had the sudden thought that a marble is a constructive proof of a marble. The intuitionists are not "retreating entirely into language" as I previously thought. Rather, almost the opposite: they are rejecting a strict brain/body separation, with logic happening only in the brain. Logic becomes more physical.

Rationalism generally makes a big deal of the map/territory distinction. The idea is that just as a map describes a territory, our beliefs describe the world. Just as a map must be constructed by looking at the territory if it's to be accurate, our beliefs must be constructed by looking at the world. The correspondence theory of truth holds that a statement or belief is true or false based on a correspondence to the world, much as a map is a projected model of the territory it depicts, and is judged correct or incorrect with respect to this projection. This is the meaning of the map.

In classical logic, this translates to model theory. Logical sentences correspond to a model via an interpretation; this determines their truth values as either true or false. How can we understand intuitionistic logic in these terms? The standard answer is Kripke semantics, but while that's a fine formal tool, I never found it helped me understand the meaning of intuitionistic statements. Kripke semantics is a many-world interpretation; the anti-realist position seemed closer to no-world-at-all. I now see I was mistaken.

Anti-realism is not rejection of the territory. Anti-realism is rejection of the map-territory correspondence. In the case of a literal map, such a correspondence makes sense because there is a map-reader who interprets the map. In the case of our beliefs, however, we are the only interpreter. A map cannot also contain its own map-territory correspondence; that is fundamentally outside of the map. A map will often include a legend, which helps us interpret the symbols on the map; but the legend itself cannot be explained with a legend, and so on ad infinitum. The chain must bottom out somehow, with some kind of semantics other than the map-territory kind.

The anti-realist provides this with constructive semantics. This is not based on correspondence. The meaning of a sentence rests instead in what we can do with it. In computer programming terms, meaning is more like a pointer: we uncover the reference by a physical operation of finding the location in memory which we've been pointing to, and accessing its contents. If we claim that 3*7=21, we can check the truth of this statement with a concrete operation. "3" is not understood as a reference to a mysterious abstract entity known as a "number"; 3 is the number. (Or if the classicist insists that 3 is only a numeral, then we accept this, but insist that it is clear numerals exist but unclear that numbers exist.)

A proof worked out on a sheet of paper is a proof; it does not merely mean the proof. It is not a set of squiggles with a correspondence to a proof.

How does epistemology work in this kind of context? How do we come to know things? Well...

Bayesianism Needs No Map

The epistemology of machine learning has been pragmatist all along: all models are wrong; some are useful. A map-territory model of knowledge plays a major role in the way we think of modeling, but in practice? There is no measurement of map-territory correspondence. What matters is goodness-of-fit and generalization error. In other words, a model is judged by the predictions it makes, not by the mechanisms which lead to those predictions. We tend to expect models which make better predictions to have internal models close to what's going on in the external world, but there is no precise notion of what this means, and none is required. The theorems of statistical learning theory and Bayesian epistemology (of which I am aware) do not make use of a map-territory concept, and the concept is not missed.

It's interesting that formal Bayesian epistemology relies so little on a map/territory distinction. The modern rationalist movement tends to advocate both rather strongly. While Bayesianism is compatible with map-territory thinking, it doesn't need it or really encourage it. This realization was rather surprising to me.

Sunday, June 14, 2015

Associated vs Relevant

Also cross-posted to LessWrong.

The List of Nuances (which is actually more of a list of fine distinctions - a fine distinction which only occurred to its authors after the writing of it) has one glaring omission, which is the distinction between associated and relevant. A List of Nuances is largely a set of reminders that we aren't omniscient, but it also serves the purpose of listing actual subtleties and calling for readers to note the subtleties rather than allowing themselves to fall into associationism, applying broad cognitive clusters where fine distinctions are available. The distinction between associated and relevant is critical to this activity.

An association can be anything related to a subject. To be relevant is a higher standard: it means that there is an articulated argument connecting to a question on the table, such that the new statement may well push the question one way or the other (perhaps after checking other relevant facts). This is close to the concept of value of information.

Whether something is relevant or merely associated can become confused when epistemic defensiveness comes into play. From A List of Nuances:
10. What You Mean vs. What You Think You Mean
  1. Very often, people will say something and then that thing will be refuted. The common response to this is to claim you meant something slightly different, which is more easily defended.
    1. We often do this without noticing, making it dangerous for thinking. It is an automatic response generated by our brains, not a conscious decision to defend ourselves from being discredited. You do this far more often than you notice. The brain fills in a false memory of what you meant without asking for permission.

As mentioned in Epistemic Trust, a common reason for this is when someone says something associated to the topic at hand, which turns out not to be relevant.

There is no shame in saying associated things. In a free-ranging discussion, the conversation often moves forward from topic to topic by free-association. All of the harm here comes from claiming that something is relevant when it is merely associated. Because this is often a result of knee-jerk self-defense, it is critical to repeat: there is no shame in saying something merely associated with the topic at hand!

It is quite important, however, to spot the difference. Association-based thinking is one of the signs of a death spiral, as a large associated memeplex reinforces itself to the point where it seems like a single, simple idea. A way to detect this trap is to try to write down the idea in list form and evaluate the different parts. If you can't explicitly articulate the unseen connection you feel between all the ideas in the memeplex, it may not exist.

Association is a powerful tool for creating a good story (although, see item #3 here for a counterpoint). Repeating themes can create a strong feeling of relevance, which may be good for convincing people of a memeplex. Furthermore, association is a wonderful exploratory tool. However, it can turn into an enemy of articulated argument; for this reason, it is important to tread carefully (especially in one's own mind).

Wednesday, June 10, 2015

Epistemic Trust: Clarification

Cross-posted to LessWrong Discussion.

A while ago, I wrote about epistemic trust. The thrust of my argument was that rational argument is often more a function of the group dynamic than of how rational the individuals in the group are. I assigned meaning to several terms, in order to explain this:

Intellectual honesty: being up-front not just about what you believe, but also why you believe it, what your motivations are in saying it, and the degree to which you have evidence for it.


Intellectual-Honesty Culture: The norm of intellectual honesty. Calling out mistakes and immediately admitting them; feeling comfortable with giving and receiving criticism.

Face Culture: Norms associated with status which work contrary to intellectual honesty. Agreement as social currency; disagreement as attack. A need to save face when one's statements turn out to be incorrect or irrelevant; the need to make everyone feel included by praising contributions and excusing mistakes.

Intellectual trust: the expectation that others in the discussion have common intellectual goals; that criticism is an attempt to help, rather than an attack. The kind of trust required to take other people's comments at face value rather than being overly concerned with ulterior motives, especially ideological motives. I hypothesized that this is caused largely by ideological common ground, and that this is the main way of achieving intellectual-honesty culture.

There are several important points which I did not successfully make last time.
  • Sometimes it's necessary to play at face culture. The skills which go along with face-culture are important. It is generally a good idea to try to make everyone feel included and to praise contributions even if they turn out to be incorrect. It's important to make sure that you do not offend people with criticism. Many people feel that they are under attack when engaged in critical discussion. Wanting to change this is not an excuse for ignoring it.
  • Face culture is not the error. Being unable to play the right culture at the right time is the error. In my personal experience, I've seen that some people are unable to give up face-culture habits in more academic settings where intellectual honesty is the norm. This causes great strife and heated arguments! There is no gain in playing for face when you're in the midst of an honesty culture, unless you can do it very well and subtly. You gain a lot more face by admitting your mistakes. On the other hand, there's no honor in playing for honesty when face-culture is dominant. This also tends to cause more trouble than it's worth.
  • It's a cultural thing, but it's not just a cultural thing. Some people have personalities much better suited to one culture or the other, while other people are able to switch freely between them. I expect that groups move further toward intellectual honesty as a result of establishing intellectual trust, but that is not the only factor. Try to estimate the preferences of the individuals you're dealing with (while keeping in mind that people may surprise you later on).

Wednesday, June 3, 2015

Simultaneous Overconfidence and Underconfidence

Follow-up to this and this. Prep for this meetup. Cross-posted to LessWrong.
Eliezer talked about cognitive bias, statistical bias, and inductive bias in a series of posts, only the first of which made it directly into the LessWrong sequences as currently organized (unless I've missed them!). Inductive bias helps us leap to the right conclusion from the evidence, if it captures good prior assumptions. Statistical bias can be good or bad, depending in part on the bias-variance trade-off. Cognitive bias refers only to obstacles which prevent us from thinking well.

Unfortunately, as we shall see, psychologists can be quite inconsistent about how cognitive bias is defined. This created a paradox in the history of cognitive bias research. One well-researched and highly experimentally validated effect was conservatism, the tendency to give estimates that are too middling, or probabilities that are too near 50%. This relates especially to integration of information: when given evidence relating to a situation, people tend not to take it fully into account, as if they were stuck with their prior. Another highly-validated effect was overconfidence, relating especially to calibration: when people give high subjective probabilities like 99%, they are typically wrong with much higher frequency than 1%.

In real-life situations, these two findings seem to contradict each other: there is no clean distinction between information-integration tasks and calibration tasks. A person's subjective probability is always, in some sense, an integration of the information they've been exposed to. In practice, then, when should we expect other people to be under- or over-confident?



Simultaneous Overconfidence and Underconfidence


The conflict was resolved in an excellent paper by Ido Erev et al., which showed that it's a result of how psychologists did their statistics. Essentially, one group of psychologists defined bias one way, and the other defined it another way. The results are not really contradictory; they are measuring different things. In fact, you can find underconfidence or overconfidence in the same data sets by applying the different statistical techniques; it has little or nothing to do with the differences between information-integration tasks and probability-calibration tasks. Here's my rough drawing of the phenomenon (apologies for my hand-drawn illustrations):

Overconfidence here refers to probabilities which are more extreme than they should be, here illustrated as being further from 50%. (This baseline makes sense when choosing from two options, but won't always be the right baseline to think about.) Underconfident subjective probabilities are associated with more extreme objective probabilities, which is why the underconfidence line tilts up in the figure. The overconfidence line similarly tilts down, indicating that the subjective probabilities are associated with less-extreme objective probabilities. Unfortunately, if you don't know how the lines are computed, this means less than you might think. Erev et al. show that these two regression lines can be derived from just one data set. I found the paper easy and fun to read, but I'll explain the phenomenon in a different way here by relating it to the concept of statistical bias and the tails coming apart.


The Tails Come Apart


Everyone who has read Why the Tails Come Apart will likely recognize this image:
The idea is that even if X and Y are highly correlated, the most extreme X values and the most extreme Y values will generally not come from the same points. I've labelled the difference the "curse" after the optimizer's curse: if you optimize a criterion which is merely correlated with the thing you actually want, you can expect to be disappointed.

Applying the idea to calibration, we can say that the most extreme subjective beliefs are almost certainly not the most extreme on the objective scale. That is: a person's most confident beliefs are almost certainly overconfident. A belief is not likely to have worked its way up to the highest peak of confidence by merit alone. It's far more likely that some merit but also some error in reasoning combined to yield high confidence.
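To make the "curse" concrete, here's a minimal simulation sketch (Python; the variable names and the particular noise model are my own illustration, not taken from any of the papers discussed): each belief has some true level of evidential support, felt confidence is that support plus an independent reasoning error, and we look only at the single belief we're most confident in.

```python
import numpy as np

rng = np.random.default_rng(0)

n_beliefs, n_trials = 1000, 200
gaps = []
for _ in range(n_trials):
    support = rng.normal(0, 1, n_beliefs)                # how well-supported each belief actually is
    confidence = support + rng.normal(0, 1, n_beliefs)   # felt confidence = support + reasoning error
    top = np.argmax(confidence)                          # the single belief we're most confident in
    gaps.append(confidence[top] - support[top])

# On average, the top belief's confidence overshoots its actual support,
# even though confidence and support are strongly correlated overall.
print("average overshoot for the most confident belief:", np.mean(gaps))
```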

In what follows, I'll describe a "soft version" which shows the tails coming apart gradually, rather than only talking about the most extreme points.

Statistical Bias

Statistical bias is defined through the notion of an estimator. We have some quantity we want to know, X, and we use an estimator to guess what it might be. The estimator will be some calculation which gives us our estimate, which I will write as X^. An estimator is derived from noisy information, such as a sample drawn at random from a larger population. The difference between the estimator and the true value, X^-X, would ideally be zero; however, this is unrealistic. We expect estimators to have error, but systematic error is referred to as bias.

Given a particular value for X, the bias is defined as the expected value of X^-X, written E_X(X^-X). An unbiased estimator is an estimator such that E_X(X^-X)=0 for any value of X we choose.

Due to the bias-variance trade-off, unbiased estimators are not the best way to minimize error in general. However, statisticians still love unbiased estimators. It's a nice property to have, and in situations where it works, it has a more objective feel than estimators which use bias to further reduce error.
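As a quick illustration of both points, consider estimating a variance: dividing by n gives a biased estimator, dividing by n-1 gives an unbiased one, and yet the biased version can have lower overall error. A small Monte Carlo sketch (Python; the specific numbers are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

true_var = 4.0                 # the fixed unknown quantity (here, a variance)
n, trials = 10, 100_000

samples = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))
biased = samples.var(axis=1)              # divides by n
unbiased = samples.var(axis=1, ddof=1)    # divides by n - 1

print("average error (bias), /n estimator:    ", biased.mean() - true_var)    # systematically negative
print("average error (bias), /(n-1) estimator:", unbiased.mean() - true_var)  # approximately zero
print("mean squared error, /n estimator:      ", ((biased - true_var) ** 2).mean())
print("mean squared error, /(n-1) estimator:  ", ((unbiased - true_var) ** 2).mean())
# The unbiased estimator has zero bias, but the biased one has lower squared error here.
```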

Notice that the definition of bias takes X as fixed; that is, it fixes the quantity which we don't know. Given a fixed X, the unbiased estimator's average value will equal X. This is a picture of bias which can only be evaluated "from the outside"; that is, from a perspective in which we can fix the unknown X.

A more inside-view approach to statistical estimation is to consider a fixed body of evidence, and make the estimator equal the average value of the unknown given that evidence. This is exactly inverse to unbiased estimation:

In the image, we want to estimate unknown Y from observed X. The two variables are correlated, just like in the earlier "tails come apart" scenario. The average-Y estimator tilts down because good estimates tend to be conservative: because I only have partial information about Y, I want to take into account what I see from X but also pull toward the average value of Y to be safe. On the other hand, unbiased estimators tend to be overconfident: the effect of X is exaggerated. For a fixed Y, the average Y^ is supposed to equal Y. However, for fixed Y, the X we will get will lean toward the mean X (just as for a fixed X, we observed that the average Y leans toward the mean Y). Therefore, in order for Y^ to be high enough, it needs to pull up sharply: middling values of X need to give more extreme Y^ estimates.
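Here's a sketch of the two estimators for a toy correlated pair (Python; the joint-normal setup, the correlation value, and the way I construct the "unbiased-style" line are my own illustrative assumptions): the conditional-mean line is regressive, while a line constrained so that E[Y^ | Y] = Y has to be steeper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, rho = 100_000, 0.7
y = rng.normal(0, 1, n)                                  # the unknown quantity Y
x = rho * y + np.sqrt(1 - rho**2) * rng.normal(0, 1, n)  # the observed, correlated X

# Average-Y estimator: ordinary regression of Y on X. Conservative: slope < 1.
slope_conditional_mean = np.polyfit(x, y, 1)[0]

# Unbiased-style estimator: in this zero-mean setup, requiring E[Y^ | Y] = Y
# amounts to inverting the regression of X on Y, which gives a steeper line.
slope_unbiased = 1.0 / np.polyfit(y, x, 1)[0]

print("conditional-mean slope (regressive):", slope_conditional_mean)  # ~ rho = 0.7
print("unbiased-style slope (exaggerated): ", slope_unbiased)          # ~ 1 / rho = 1.43
```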

If we superimpose this on top of the tails-come-apart image, we see that this is something like a generalization:


Wrapping It All Up


The punchline is that these two different regression lines are exactly what yield simultaneous underconfidence and overconfidence. The studies on conservatism were taking the objective probability as the independent variable, and graphing people's subjective probabilities as a function of that. The natural next step is to take the average subjective probability for each fixed objective probability. This will tend to show underconfidence due to the statistics of the situation.

The studies on calibration, on the other hand, took the subjective probabilities as the independent variable, graphing the average frequency of being correct as a function of that. This will tend to show overconfidence, even with the same data that shows underconfidence under the other analysis.
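Here's a small simulation sketch of that punchline (Python; the noise-in-log-odds reporting model and the particular objective probabilities are assumptions of mine, loosely in the spirit of the error models in these papers rather than a reproduction of either). One data set, two analyses, opposite-looking conclusions:

```python
import numpy as np

rng = np.random.default_rng(0)

def logit(p): return np.log(p / (1 - p))
def sigmoid(z): return 1 / (1 + np.exp(-z))

# Objective probabilities used in the (hypothetical) experiment, many items each.
op_levels = np.array([0.55, 0.65, 0.75, 0.85, 0.95])
op = np.repeat(op_levels, 20_000)
# Reported (subjective) probabilities: the objective value plus noise in log-odds space.
sp = sigmoid(logit(op) + rng.normal(0, 1.0, size=op.shape))
# Outcomes occur at their objective rates.
outcome = rng.random(op.shape) < op

# Analysis 1 (revision-of-opinion style): average SP at each fixed OP.
# The averages are pulled toward 0.5 -> looks like underconfidence.
for p in op_levels:
    print(f"OP = {p:.2f}   average SP = {sp[op == p].mean():.2f}")

# Analysis 2 (calibration style): objective frequency at each fixed SP bin.
# At the high end, the frequency falls short of the reports -> looks like overconfidence.
edges = np.array([0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
which = np.digitize(sp, edges) - 1
for b in range(len(edges) - 1):
    mask = which == b
    print(f"SP in [{edges[b]:.1f}, {edges[b+1]:.1f})   average SP = {sp[mask].mean():.2f}"
          f"   objective frequency = {outcome[mask].mean():.2f}")
```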

From an individual's standpoint, the overconfidence is the real phenomenon. Errors in judgement tend to make us overconfident rather than underconfident: errors make the tails come apart, so if you select our most confident beliefs, it's a good bet that they have only mediocre support from the evidence, even if our level of belief is, generally speaking, highly correlated with how well-supported a claim is. Because the tails come apart gradually, we can expect that the higher our confidence, the larger the gap between that confidence and the level of factual support for the belief.

This is not a fixed fact of human cognition pre-ordained by statistics, however. It's merely what happens due to random error. Not all studies show systematic overconfidence, and in a given study, not all subjects will display overconfidence. Random errors in judgement will tend to create overconfidence as a result of the statistical phenomena described above, but systematic correction is still an option.

Sunday, March 29, 2015

Good Bias, Bad Bias

I had a conceptual disagreement with a couple of friends, and I'm trying to spell out what I meant here in order to continue the discussion.

The statistical definition of bias is defined in terms of estimators. Suppose there's a hidden value, Theta, and you observe data X whose probability distribution is dependent on Theta, with known P(X|Theta). An estimator is a function of the data which gives you a hopefully-plausible value of Theta.

An unbiased estimator is an estimator which has the property that, given a particular value of Theta, the expected value of the estimator (expectation in P(X|Theta)) is exactly Theta. In other words: our estimate may be higher or lower than Theta due to the stochastic relationship between X and Theta, but it hits Theta on average. (In order for averaging to make sense, we're assuming Theta is a real number, here.)
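In symbols (the hat notation for the estimator is just my shorthand here):

```latex
\hat{\Theta}(X) \text{ is unbiased}
\quad\Longleftrightarrow\quad
\mathbb{E}_{X \sim P(X \mid \Theta)}\!\bigl[\hat{\Theta}(X)\bigr] = \Theta
\quad \text{for every value of } \Theta .
```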

The Bayesian view is that we have a prior on Theta, which injects useful bias into our judgments. A Bayesian making statistical estimates wants to minimize loss. Loss can mean different things in different situations; for example, if we're estimating whether a car is going to hit us, the damage done by wrongly thinking we are safe is much larger than the damage done by wrongly thinking we're not. However, if we don't have any specific idea about real-world consequences, it may be reasonable to assume a squared-error loss, so that we are trying to get our estimated Theta to match the average value of Theta.

Even so, the Bayesian choice of estimator will not be unbiased, because Bayesians will want to minimize the expected loss accounting for the prior, which means looking at the expectation in P(X|Theta)*P(Theta). In fact, we can just look at P(Theta|X). If we're minimizing squared error, then our estimator would be the average Theta in P(Theta|X), which is proportional to P(X|Theta)P(Theta).

Essentially, we want to weight our average by the prior over Theta because we decrease our overall expected loss by accepting a lot of statistical bias for values of Theta which are less probable according to our prior.
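A minimal sketch of this trade-off in the standard normal-normal setup (Python; the prior, the noise level, and the choice to inspect Theta near 2 are all illustrative assumptions of mine): the posterior-mean estimator is clearly biased for any fixed Theta away from the prior mean, yet it has lower expected loss once the prior is taken into account.

```python
import numpy as np

rng = np.random.default_rng(0)

tau, sigma, trials = 1.0, 1.0, 200_000      # prior std of Theta, observation noise std
theta = rng.normal(0, tau, trials)          # Theta drawn from the prior
x = theta + rng.normal(0, sigma, trials)    # one noisy observation per Theta

est_unbiased = x                                 # satisfies E[estimate | Theta] = Theta
shrink = tau**2 / (tau**2 + sigma**2)
est_bayes = shrink * x                           # posterior mean: pulled toward the prior mean

# Statistical bias at a fixed, prior-improbable Theta (around 2): clearly nonzero.
near_two = np.abs(theta - 2.0) < 0.05
print("bias of Bayes estimator near Theta = 2:", (est_bayes[near_two] - theta[near_two]).mean())

# But averaged over the prior, the biased estimator has lower expected squared loss.
print("expected loss, unbiased estimator:", ((est_unbiased - theta) ** 2).mean())
print("expected loss, Bayes estimator:   ", ((est_bayes - theta) ** 2).mean())
```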

So, a certain amount of statistical bias is perfectly rational.

Bad bias, to a Bayesian, refers to situations where we can predictably and systematically improve our estimates.

One of the limitations of the paper reviewed last time was that it didn't address good vs. bad bias. Bias, in that paper, was more or less indistinguishable from bias in the statistical sense. Disentangling the biases we could improve on from the ones we actually want would require a deeper analysis of the mathematical model, and of the data.

Saturday, March 28, 2015

A Paper on Bias

I've been reading some of the cognitive bias literature recently.

First, I dove into Toward a Synthesis of Cognitive Biases, by Martin Hilbert: a work which claims to explain how eight different biases observed in the literature are an inevitable result of noise in the information-processing channels in the brain.

The paper starts out with what it calls the conservatism bias. (The author complains that the literature is inconsistent about naming biases, both giving one bias multiple names and using one name for multiple biases. Conservatism is what is used for this paper, but this may not be standard terminology. What's important is the mathematical idea.)

The idea behind conservatism is that when shown evidence, people tend to update their probabilities more conservatively than would be predicted by probability theory. It's as if they didn't observe all the evidence, or aren't taking the evidence fully into account. A well-known study showed that subjects were overly conservative in assigning probabilities to gender based on height; an earlier study had found that the problem is more extreme when subjects are asked to aggregate information, guessing the gender of a random selection of same-sex individuals from height. Many studies were done to confirm this bias. A large body of evidence accumulated which indicated that subjects irrationally avoided extreme probabilities, preferring to report middling values.

The author construed conservatism very broadly. Another example given was: if you quickly flash a set of points on a screen and ask subjects to estimate their number, then subjects will tend to over-estimate the number of a small set of points, and under-estimate the number of a large set of points.

The hypothesis put forward in Toward a Synthesis is that conservatism is a result of random error in the information-processing channels which take in evidence. If all red blocks are heavy and all blue blocks are light, but you occasionally mix up red and blue, you will conclude that most red blocks are heavy and most blue blocks are light. If you are trying to integrate some quantity of information, but some of it is mis-remembered, small probabilities will become larger and large ones will become smaller.
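A tiny sketch of the red/blue example (Python; the 50/50 color split and 10% confusion rate are arbitrary numbers of mine): a probability of 1 in the world shows up as roughly 0.9 after passing through a noisy color channel, pulled toward the middle.

```python
import numpy as np

rng = np.random.default_rng(0)

n, flip = 100_000, 0.10
red = rng.random(n) < 0.5                # half the blocks are red, half blue
heavy = red.copy()                       # in reality: every red block is heavy, no blue block is

# Perception is a noisy channel: colors get mixed up 10% of the time.
perceived_red = np.where(rng.random(n) < flip, ~red, red)

print("true     P(heavy | red):          ", heavy[red].mean())            # 1.0
print("observed P(heavy | perceived red):", heavy[perceived_red].mean())  # about 0.9
```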

One thing that bothered me about this paper was that it did not directly contrast processing-error conservatism with the rational conservatism which can result from quantifying uncertainty. My estimate of the number of points on a screen should tend toward the mean if I only saw them briefly; this bias will increase my overall accuracy rate. It seems that previous studies established that people were over-conservative compared to the rational amount, but I didn't take the time to dig up those analyses.

All eight biases explained in Toward a Synthesis were effectively consequences of conservatism in different ways.


  • Illusory correlation: Two rare events X and Y which are independent appear correlated as a result of their probabilities being inflated by conservatism bias. I found this to be the most interesting application. The standard example of illusory correlation is stereotyping of minority groups. The race is X, and some rare trait is Y. What was found was that stereotyping could be induced in subjects by showing them artificial data in which the traits were entirely independent of the races. Y could be either a positive or a negative trait; illusory correlation occurs either way. The effect that conservatism has on the judgements will depend on how you ask the subject about the data, which is interesting, but illusory correlation emerges regardless. Essentially, because all the frequencies are smaller within the minority group, the conservatism bias operates more strongly; the trait Y is inflated so much that it's seen as being about 50-50 in that group, whereas the judgement about its frequency in the majority group is much more realistic.
  • Self-Other Placement: People with low skill tend to overestimate their abilities, and people with high skill tend to underestimate theirs; this is known as the Dunning-Kruger effect. This is a straightforward case of conservatism. Self-other placement refers to the further effect that people tend to be even more conservative about estimating other people's abilities, which paradoxically means that people of high ability tend to over-estimate the probability that they are better than a specific other person, despite the Dunning-Kruger effect; and similarly, people of low ability tend to over-estimate the probability that they are worse as compared with specific individuals, despite over-estimating their ability overall. The article explains this as a result of having less information about others, and hence, being more conservative. (I'm not sure how this fits with the previously-mentioned result that people get more conservative as they have more evidence.)
  • Sub-Additivity: This bias is a class of inconsistent probability judgements. The estimated probability of an event will be higher if we ask for the probability of a set of sub-events, rather than merely asking for the overall probability. From Wikipedia: For instance, subjects in one experiment judged the probability of death from cancer in the United States was 18%, the probability from heart attack was 22%, and the probability of death from "other natural causes" was 33%. Other participants judged the probability of death from a natural cause was 58%. Natural causes are made up of precisely cancer, heart attack, and "other natural causes," however, the sum of the latter three probabilities was 73%, and not 58%. According to Tversky and Koehler (1994) this kind of result is observed consistently. The bias is explained with conservatism again. The smaller probabilities are inflated more by the conservatism bias than the larger probability is, which makes their sum much more inflated than the original event.
  • Hard-Easy Bias: People tend to overestimate the difficulty of easy tasks, and underestimate the difficulty of hard ones. This is straightforward conservatism, although the paper framed it in a somewhat more complex model (it was the 8th bias covered in the paper, but I'm putting it out of order in this blog post).

That's 5 biases down, and 3 to go. The article has explained conservatism as a mistake made by a noisy information-processor, and explains 4 other biases as consequences of conservatism. So far so good.

Here's where things start to get... weird.

Simultaneous Overestimation and Underestimation

Bias 5 is termed exaggerated expectation in the paper. This is a relatively short section which reviews a bias dual to conservatism. Conservatism looks at the statistical relationship from the evidence to the estimate formed in the brain. If there is noise in the information channel connecting the two, then conservatism is a statistical near-certainty.

Similarly, we can turn the relationship around. The conservatism bias was based on looking at P(estimate|evidence). We can turn it around with Bayes' Law, to examine P(evidence|estimate). If there is noise in one direction, there is noise in the other direction. This has a surprising implication: the evidence will be conservative with respect to the estimate, by essentially the same argument which says that the estimate will tend to be conservative with respect to the evidence. This implies that (under statistical assumptions spelled out in the paper), our estimates will tend to be more extreme than the data. This is the exaggerated expectation effect.

If you're like me, at this point you're saying what???

The whole idea of conservatism was that the estimates tend to be less extreme than the data! Now "by the same argument" we are concluding the opposite?

The section refers to a paper about this, so before moving further I took a look at that reference. The paper is Simultaneous Over- and Under-Confidence: the Role of Error in Judgement Process by Erev et al. It's a very good paper, and I recommend taking a look at it.

Simultaneous Over- and Under-Confidence reviews two separate strains of literature in psychology. A large body of studies in the 1960s found systematic and reliable underestimation of probabilities. This revision-of-opinion literature concluded that it is difficult to take the full evidence into account when revising one's beliefs. Later, many studies on calibration found systematic overestimation of probabilities: when subjects are asked to give probabilities for their beliefs, the probabilities are typically higher than their frequency of being correct.

What is going on? How can both of these be true?

One possible answer is that the experimental conditions are different. Revision-of-opinion tests give a subject evidence, and then test how well the subject has integrated the evidence to form a belief. Calibration tests are more like trivia sessions; the subject is asked an array of questions, and assigns a probability to each answer they give. Perhaps humans are stubborn but boastful: slow to revise their beliefs, but quick to over-estimate the accuracy of those beliefs. Perhaps this is true. It's difficult to test this against the data, though, because we can't always distinguish between calibration tests and revision-of-opinion tests. All question-answering involves drawing on world knowledge combined with specific knowledge given in the question to arrive at an answer. In any case, a much more fundamental answer is available.

The Erev paper points out that revision-of-opinion experiments used different data analysis. Erev re-analysed the data for studies on both sides, and found that the statistical techniques used by revision-of-opinion researchers found underconfidence, while the techniques of calibration researchers found overconfidence, in the same data-set!

Both techniques compared the objective probability, OP, with the subject's reported probability, SP. OP is the empirical frequency, while SP is whatever the subject writes down to represent their degree of belief. However, revision-of-opinion studies started with a desired OP for each situation and calculated the average SP for a given OP. Calibration literature instead starts with the numbers written down by the subjects, and then asks how often they were correct; so, they're computing the average OP for a given SP.

When we look at data and try to find functions from X to Y like that, we're creating statistical estimators. A very general principle is that estimators tend to be regressive: my Y estimate will tend to be closer to the Y average than the actual Y. Now, in the first case, scientists were using X=OP and Y=SP; lo and behold, they found it to be regressive. In later decades, they took X=SP and Y=OP, and found that to be regressive! From a statistical perspective, this is plain and ordinary business as usual. The problem is that one case was termed under-confidence and the other over-confidence, and they appeared from those names to be contrary to one another.
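For a toy case where OP and SP are treated as jointly normal and standardized with correlation ρ (an idealization I'm introducing just to show the symmetry, since real probabilities are bounded), both conditional means are regressive:

```latex
\mathbb{E}[\mathrm{SP} \mid \mathrm{OP} = x] = \rho\, x
\qquad\text{and}\qquad
\mathbb{E}[\mathrm{OP} \mid \mathrm{SP} = y] = \rho\, y ,
\qquad 0 < \rho < 1 .
```

Read with X=OP, the first equation is the "underconfidence" finding; read with X=SP, the second is the "overconfidence" finding, and both come from the same joint distribution.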

This is exactly what the Toward a Synthesis paper was trying to get across with the reversed channel, P(estimate|evidence) vs P(evidence|estimate).

Does this mean that the two biases are mere statistical artifacts, and humans are actually fairly good information systems whose beliefs are neither under- nor over- confident? No, not really. The statistical phenomena are real: humans are both under- and over-confident in these situations. What Toward a Synthesis and Simultaneous Over- and Under- Confidence are trying to say is that these are not mutually inconsistent, and can be accounted for by noise in the information-processing system of the brain.

Both papers propose a model which accounts for overconfidence as the result of noise during the creation of an estimate, although the two are put in different terms. The next section of Toward a Synthesis is about overconfidence bias specifically (which it sees as a special case of exaggerated expectations, as I understand it; this is the 7th bias to be examined in the paper, for those keeping count). The model shows that even with accurate memories (and therefore the theoretical ability to reconstruct accurate frequencies), an overconfidence bias should be observed (under statistical conditions outlined in the paper). Similarly, Simultaneous Over- and Under-Confidence constructs a model in which people have perfectly accurate probabilities in their heads, and the noise occurs when they put pen to paper: their explicit reflection on their belief adds noise which results in an observed overconfidence.

Both models also imply underconfidence. This means that in situations where you expect perfectly rational agents to reach 80% confidence in a belief, you'd expect rational agents with noisy reporting of the sort postulated to give estimates averaging lower (say, 75%). This is the apparent underconfidence. On the other hand, if you are ignorant of the empirical frequency and one of these agents tells you that it is 80%, then it is you who is best advised to revise the number down to 75%.

This is made worse by the fact that human memories and judgement are actually fallible, not perfect, and subject to the same effects. Information is subject to bias-inducing-noise at each step of the way, from first observation, through interpretation and storage in the brain, modification by various reasoning processes, and final transmission to other humans. In fact, most information we consume is subject to distortion before we even touch it (as I discussed in my previous post). I was a bit disappointed when the Toward a Synthesis paper dismissed the relevance of this, stating flatly "false input does not make us irrational".

Overall, I find Toward a Synthesis of Cognitive Biases a frustrating read and recommend the shorter, clearer Simultaneous Over- and Under- Confidence as a way to get most of the good ideas with less of the questionable ones. However, that's for people who already read this blog post and so have the general idea that these effects can actually explain a lot of biases. By itself, Simultaneous Over- and Under- Confidence is one step away from dismissing these effects as mere statistical artifacts. I was left with the impression that Erev doesn't even fully dismiss the model where our internal probabilities are perfectly calibrated and it's only the error in conscious reporting that's causing over- and under- estimation to be observed.

Both papers come off as quite critical of the state of the research, and I walk away from them with a bitter taste in my mouth: is this the best we've got? The extent of the statistical confusion observed by Erev is saddening, and although it was cited in Toward a Synthesis, I didn't get the feeling that it was sharply understood (another reason I recommend the Erev paper instead). Toward a Synthesis also discusses a lot of confusion about the names and definitions of biases as used by different researchers, which is not quite as problematic, but also causes trouble.

A lot of analysis is still needed to clear up the issues raised by these two papers. One problem which strikes me is the use of averaging to aggregate data, which has to do with the statistical phenomenon of simultaneous over- and under- confidence. Averaging isn't really the right thing to do to a set of probabilities to see whether it has a tendency to be over or under a mark. What we really want to know, I take it, is whether there is some adjustment which we can do after-the-fact to systematically improve estimates. Averaging tells us whether we can improve a square-loss comparison, but that's not the notion of error we are interested in; it seems better to use a proper scoring rule.
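As a sketch of what that could look like (Python; the overconfident reporting model and the shrinkage factor are invented for illustration): score the reports with a proper scoring rule such as the Brier score, then check whether a systematic after-the-fact adjustment improves the score.

```python
import numpy as np

rng = np.random.default_rng(0)

def logit(p): return np.log(p / (1 - p))
def sigmoid(z): return 1 / (1 + np.exp(-z))

# Toy data: reports that are systematically too extreme relative to the true chances.
p_true = rng.uniform(0.5, 0.95, 5_000)
outcome = rng.random(5_000) < p_true
reported = sigmoid(1.5 * logit(p_true))          # overconfident by construction

def brier(p, o):
    """Mean squared difference between stated probability and outcome (a proper scoring rule)."""
    return np.mean((p - o) ** 2)

print("Brier score, raw reports:     ", brier(reported, outcome))

# Candidate after-the-fact adjustment: shrink the reports toward 0.5 in log-odds space.
adjusted = sigmoid(logit(reported) / 1.5)
print("Brier score, shrunken reports:", brier(adjusted, outcome))
# If a systematic adjustment like this reliably improves a proper score,
# the original reports were overconfident in the sense that actually matters.
```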

Finally, to keep the reader from thinking that this is the only theory trying to account for a broad range of biases: go read this paper too! It's good, I promise.