Saturday, December 19, 2015

Levels and Levels

A system of levels related to my idea of epistemic/intellectual trust:
  1. Becoming defensive if your idea is attacked. A few questions might be fine, but too many feels like an interrogation. Objections to ideas are taken personally, especially if they occur repeatedly. This is roughly the level most people are at, especially for identity issues like religion. Intellectuals can hurt people without realizing it when they try to engage them on such issues. The defensiveness is often a rational response to an environment in which criticism very often is an attack, and arguments really are used as dominance moves.
  2. Competitive intellectualism. As at level 1, debates are battles and arguments are soldiers. At level 2, however, this becomes a friendly competition rather than a problem. Intelligent objections to your ideas are expected and welcome; you may even take trollish positions in order to fish for them. Still, you're trying to win. Pseudo-legalistic concepts like burden of proof may be employed. Contrarianism is encouraged; the more outrageous the belief you can successfully defend, the better. At this level of discourse, scientific thought may be conflated with skepticism; taken to its endpoint, this style of intellectualism can collapse into universal skepticism.
  3. Intellectual honesty. Sorting out possibilities. Exploring all sides of an issue. This can temporarily look a lot like level 2, because taking a devil's-advocate position can be very useful. However, you never want to convey stronger evidence than exists. The very idea of arguing one side and only one side, as in level 2, is crazy -- it would defeat the point. The goal is to understand the other person's thinking, get your own thoughts across, and then take both forward by thinking about the issue together. You don't "win" or "lose"; all participants in the discussion are trying to come up with arguments that constrain the set of possibilities, while enumerating options within that set and evaluating their quality. If a participant in the discussion appears to be giving a one-sided argument for an extended period, it's because they think they have a point which hasn't been understood and they're still trying to convey it properly.

This is more nuanced than the two-level view I articulated previously, but it's still bound to be very simplistic compared to reality. Discussions will mix these levels, and there are things happening in discussions which aren't best understood in terms of these levels (such as storytelling, jokes...). People will tend to be at different levels for different sets of beliefs, with different people, and so on. Politics will almost always be at level 1 or 2, while it hardly even makes sense to talk about mathematics at anything but level 3. Higher levels are in some sense better than lower levels, but this shouldn't be taken too far. Each level is an appropriate response to a different situation, and problems occur if you don't adapt your level of response to the situation. Admitting the weakness of your argument is a kind of countersignaling which can help shift from level 2 to level 3, but which can be ineffective or backfire if the conversation is stuck at level 2 or 1.

Here's an almost unrelated system of levels:
  1. Relying on personal experience and, to a lesser extent, anecdotal evidence, as opposed to statistics and controlled studies. (This is usually looked down upon by those with a scientific mindset, but again I'll be arguing that these levels shouldn't be taken as a scale from worse to better.) This plays to a human bias, since a personal example or a story from a friend (or friend of a friend) will tend to stick in memory more vividly than numbers will. Practitioners of this approach to evidence can often be heard saying things like "you can prove anything with statistics" (which is, of course, largely true!).
  2. Relying on science, but only at the level it's conveyed in popular media. This is often really, really misleading. What the science says is often misunderstood, misconstrued, or ignored.
  3. Single study syndrome. Beware the man of one study. The habit of taking the conclusion of one scientific study as the truth. While looking at the actual studies is better than listening to the popular media, this replicates the mistake the writers of popular-media articles are usually making: it ignores the fact that studies often fail to replicate and can show conflicting results. Another, perhaps even more important reason single study syndrome is dangerous is that you can fish for a study to back up almost any view you like. You can do this without even realizing it; googling terms related to what you're thinking will often turn up information confirming it. To overcome this, you've typically got to search for both sides of the argument. But what do you do when you find confirming evidence on both sides?
  4. Surveying science. Looking for many studies and meta-studies. This is, in some sense, the end of the line; unless you're going to break out the old lab coat and start doing science yourself, the best you can do is become acquainted with the literature and make an overall judgement from the disparate opinions there. Unfortunately, this can still be very misleading. A meta-analysis is not just a matter of finding all the relevant studies and adding up what's on one side vs the other, although even that much effort is already quite a lot. Often the experiments in different studies are testing for different things. Determining which statistics are comparable will tend to be difficult, and usually you'll end up making somewhat crude comparisons. Even when studies are easily comparable, publication bias means a simple tally can look like overwhelming evidence where in fact there is only chance (HT @grognor for the reference). And when an effect is real, it can be due to operational definitions whose relation to real life is difficult to pin down; for example, do cognitive biases which are known to exist in a laboratory setting carry over to real-world decision-making?
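The publication-bias point can be made concrete with a small simulation. This is only an illustrative sketch, not a model of any real literature: all the numbers below (1000 studies run, a 30-subject sample size, a 90%/5% publication split) are invented for the example. Every simulated "study" here measures an effect that truly does not exist, yet because significant results are far more likely to be published, a naive tally of the published record still shows dozens of positive findings.

```python
import random

random.seed(42)

N_STUDIES = 1000       # studies actually run (made-up number)
N_SUBJECTS = 30        # sample size per study
Z_CRIT = 1.96          # two-sided 5% significance threshold
P_PUBLISH_SIG = 0.90   # chance a significant result gets published
P_PUBLISH_NULL = 0.05  # chance a null result escapes the file drawer

published_sig = 0   # published studies reporting an effect
published_null = 0  # published studies reporting no effect

for _ in range(N_STUDIES):
    # The true effect is zero: every observation is pure noise ~ N(0, 1).
    sample_mean = sum(random.gauss(0.0, 1.0)
                      for _ in range(N_SUBJECTS)) / N_SUBJECTS
    # z-statistic for the sample mean (known sd = 1).
    z = sample_mean * N_SUBJECTS ** 0.5
    significant = abs(z) > Z_CRIT
    # Significant results are much more likely to be published.
    if random.random() < (P_PUBLISH_SIG if significant else P_PUBLISH_NULL):
        if significant:
            published_sig += 1
        else:
            published_null += 1

total = published_sig + published_null
print(f"{published_sig} of {total} published studies report an effect,")
print("even though the true effect is exactly zero.")
```

Under the null, only about 5% of the 1000 studies come out significant by chance, but the asymmetric publication filter makes those false positives a large fraction of what anyone surveying the literature actually sees -- which is why a simple count of "studies for" vs. "studies against" can be so misleading.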

Due to the troublesome nature of scientific evidence, the dialogue at level 4 can sound an awful lot like level 1 at times. However, keep in mind that level 4 takes a whole lot more effort than level 1. We can put arbitrary amounts of effort into fact-checking any individual belief. While it's easy to criticize the lower levels and say that everyone should be at level 4 all the time (and they should learn to do meta-studies right, darnit!), it's practically impossible to put in that amount of effort all the time. When one does, one is often confronted with a disconcerting labyrinth of arguments and refutations on both sides, so that any conclusion you come to is tempered by the knowledge that many people have been very wrong about this same thing (for surprising reasons).

While you'll almost certainly find more errors in your thinking if you go down that rabbit hole, at some point you've got to stop. For some kinds of beliefs, the calculated point of stopping is quite early; hence, we're justified in staying at level 1 for many (most?) things. It may be easy to underestimate the amount of investigation we need to do, since long-term consequences of wrong beliefs are unpredictable (it's easier to think about only short-term needs) and it's much easier to see the evidence currently in favor of our position than the possible refutations which we've yet to encounter. Nonetheless, only so much effort is justified.
