Decision theory and zero-sum game theory, NP and PSPACE

(Also posted on LessWrong)

At a rough level:

  • Decision theory is about making decisions to maximize some objective function.
  • Zero-sum game theory is about making decisions to optimize some objective function while someone else is making decisions to minimize this objective function.

These are quite different.

Decision theory and NP

Decision theory roughly corresponds to the NP complexity class.  Consider the following problem:

Given a set of items, each of which has an integer-valued value and weight, does there exist a subset with total weight less than w and total value at least v?

(It turns out that finding a solution is not much harder than determining whether there is a solution; if you know how to tell whether there is a solution to arbitrary problems of this form, you can in particular tell if there is a solution that uses any particular item.)

This is the knapsack problem.  Given a candidate solution, it is easy to check whether it actually is a solution: you just add up the values and the weights and compare them to v and w.  Since such a solution would constitute a proof that the answer to the question is “yes”, and a solution exists whenever the answer is “yes”, this problem is in NP.
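
To make the verification step concrete, here is a minimal Python sketch of a certificate checker; the function and argument names are my own, chosen for illustration:

```python
from typing import List, Tuple

def check_knapsack_certificate(items: List[Tuple[int, int]],
                               chosen: List[int],
                               w: int, v: int) -> bool:
    """Verify a candidate solution to the knapsack decision problem.

    items:  list of (value, weight) pairs.
    chosen: indices of the items in the proposed subset.
    Returns True iff the subset has total weight less than w and total
    value at least v.  The check runs in time linear in the input,
    which is what membership in NP requires.
    """
    total_value = sum(items[i][0] for i in chosen)
    total_weight = sum(items[i][1] for i in chosen)
    return total_weight < w and total_value >= v

# Example: is there a subset with total weight < 10 and total value >= 12?
items = [(6, 4), (7, 5), (3, 3)]  # (value, weight) pairs
print(check_knapsack_certificate(items, [0, 1], w=10, v=12))  # True: value 13, weight 9
```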

The following is a general form for NP problems:

\exists x_1 \in \{0, 1\} \exists x_2 \in \{0, 1\} \ldots \exists x_k \in \{0, 1\} f(x_1, ..., x_k)

where f is a specification of a circuit (say, made of AND, OR, and NOT gates) that outputs a single Boolean value.  That is, the problem is to decide whether there is some assignment of values to x_1, \ldots, x_k that f outputs true on.  This is a variant of the Boolean satisfiability problem.
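
To make the quantifier structure concrete, here is a toy Python sketch that searches over all assignments (exponential time in the worst case, though each candidate is cheap to check); an ordinary Python function stands in for the circuit f:

```python
from itertools import product
from typing import Callable, Optional, Tuple

def find_satisfying_assignment(f: Callable[..., bool], k: int) -> Optional[Tuple[int, ...]]:
    """Search over all 2^k assignments to x_1, ..., x_k and return one that
    f accepts, or None if there is none.  Checking a single assignment is
    fast; finding one may take exponential time -- the NP situation."""
    for xs in product((0, 1), repeat=k):
        if f(*xs):
            return xs
    return None

# Toy circuit: f(x1, x2, x3) = (x1 OR x2) AND NOT x3
print(find_satisfying_assignment(lambda x1, x2, x3: bool((x1 or x2) and not x3), 3))
# -> (0, 1, 0)
```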

In decision theory (and in NP), all optimization is in the same direction.  The only quantifier is \exists.

Zero-sum game theory and PSPACE

Zero-sum game theory roughly corresponds to the PSPACE complexity class.  Consider the following problem:

Given a specification of a Reversi game state (on an arbitrarily-large square board), does there exist a policy for the light player that guarantees a win?

(It turns out that winning the game is not much harder than determining whether there is a winning policy; if you know how to tell whether there is a solution to arbitrary problems of this form, then in particular you can tell if dark can win given a starting move by light.)

This problem is in PSPACE: it can be solved by a Turing machine using a polynomial amount of space.  This Turing machine works through the minimax algorithm: it simulates all possible games in a backtracking fashion.
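
Here is a minimal sketch of that backtracking search, written against a hypothetical game interface (the moves and result functions below stand in for the rules of Reversi or any other finite two-player game); the key point is that only the current line of play is kept in memory, so space use is proportional to the depth of the game rather than the number of positions:

```python
def light_can_force_win(state, light_to_move, moves, result):
    """Backtracking minimax on a finite two-player game.

    moves(state, light_to_move) -> list of successor states (empty when the game is over)
    result(state)               -> True iff the finished position is a win for light
    """
    successors = moves(state, light_to_move)
    if not successors:
        return result(state)
    if light_to_move:
        # Light picks the move: light needs some move that still forces a win.
        return any(light_can_force_win(s, False, moves, result) for s in successors)
    else:
        # Dark picks the move: light must win no matter which move dark picks.
        return all(light_can_force_win(s, True, moves, result) for s in successors)

# Toy game standing in for Reversi: a pile of stones, each player removes 1 or 2,
# and whoever takes the last stone wins.  State = (stones left, who took last).
def moves(state, light_to_move):
    stones, _ = state
    taker = "light" if light_to_move else "dark"
    return [(stones - t, taker) for t in (1, 2) if t <= stones]

def result(state):
    return state[1] == "light"

print(light_can_force_win((4, None), True, moves, result))  # True: take 1, leaving a pile of 3
print(light_can_force_win((3, None), True, moves, result))  # False: a pile of 3 is a lost position
```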

The following is a general form for PSPACE problems:

\exists x_1 \in \{0, 1\} \forall y_1 \in \{0, 1\} \ldots \exists x_k \in \{0, 1\} \forall y_k \in \{0, 1\} f(x_1, y_1, \ldots, x_k, y_k)

where f is a specification of a circuit (say, made of AND, OR, and NOT gates) that outputs a single Boolean value.  That is, the problem is to determine whether it is possible to set the x values interleaved with an opponent setting the y values such that, no matter how the opponent acts, f(x_1, y_1, \ldots, x_k, y_k) is true.  This is a variant of the quantified Boolean formula problem.  (Interpreting a logical formula containing \exists and \forall as a game is standard; see game semantics).
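
The alternation can be written directly as a recursive evaluator for formulas of this form (again a toy sketch, with an ordinary Python function standing in for the circuit f); the recursion depth is 2k, so the evaluator uses polynomial space even though it may take exponential time:

```python
def qbf_true(f, k: int, chosen=()) -> bool:
    """Decide a formula of the form
    exists x_1 forall y_1 ... exists x_k forall y_k . f(x_1, y_1, ..., x_k, y_k).
    `chosen` holds the values fixed so far."""
    if len(chosen) == 2 * k:
        return f(*chosen)
    if len(chosen) % 2 == 0:
        # Our turn (an x variable): an existential choice, some value must work.
        return any(qbf_true(f, k, chosen + (b,)) for b in (0, 1))
    else:
        # Opponent's turn (a y variable): a universal choice, every value must work.
        return all(qbf_true(f, k, chosen + (b,)) for b in (0, 1))

# Toy example with k = 1: exists x forall y . (x OR NOT y) -- true, by picking x = 1.
print(qbf_true(lambda x, y: bool(x or not y), 1))  # True
```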

In zero-sum game theory, all optimization is in one of two completely opposite directions.  There is literally no difference between something that is good for one player and something that is bad for the other.  The opposing quantifiers \exists and \forall, representing decisions by the two opponents, are interleaved.

Different cognitive modes

The comparison to complexity classes suggests that there are two different cognitive modes for decision theory and zero-sum game theory, as there are two different types of algorithms for NP-like and PSPACE-like problems.

In decision theory, you plan with no regard to any opponents interfering with your plans, allowing you to plan on arbitrarily long time scales.  In zero-sum game theory, you plan on the assumption that your opponent will interfere with your plans (your \exists quantifiers are interleaved with your opponent’s \forall quantifiers), so you can only plan as far ahead as your opponent lacks the ability to interfere with these plans.  You must have a short OODA loop, or your opponent’s interference will make your plans useless.

In decision theory, you can mostly run on naïve expected utility analysis: just do things that seem like they will work.  In zero-sum game theory, you must screen your plans for defensibility: they must be resistant to possible attacks.  Compare farming with border defense, mechanical engineering with computer security.

High-reliability engineering is an intermediate case: designs must be selected to work with high probability across a variety of conditions, but there is normally no intelligent optimization power working against the design.  One could think of nature as an “adversary” selecting some condition to test the design against, and represent this selection by a universal quantifier; however, this is qualitatively different from a true adversary, who applies intentional optimization to break a design rather than haphazard selection of conditions.

Conclusion

These two types of problems do not cover all realistic situations an agent might face.  Decision problems involving agents with different but not completely opposed objective functions are different, as are zero-sum games with more than two players.  But realistic situations share some properties with each of these, and I suspect that there might actually be a discrete distinction between cognitive modes for NP-like decision theory problems and PSPACE-like zero-sum games.

What’s the upshot?  If you want to know what is going on, one of the most important questions (perhaps the most important question) is: what kind of game are you playing?  Is your situation more like a decision theory problem or a zero-sum game?  To what extent is optimization by different agents going in the same direction, opposing directions, or orthogonal directions?  What would have to change for the nature of the game to change?


Thanks to Michael Vassar for drawing my attention to the distinction between decision theory and zero-sum game theory as a distinction between two cognitive modes.

Related: The Face of the Ice

In the presence of disinformation, collective epistemology requires local modeling

In Inadequacy and Modesty, Eliezer describes modest epistemology:

How likely is it that an entire country—one of the world’s most advanced countries—would forego trillions of dollars of real economic growth because their monetary controllers—not politicians, but appointees from the professional elite—were doing something so wrong that even a non-professional could tell? How likely is it that a non-professional could not just suspect that the Bank of Japan was doing something badly wrong, but be confident in that assessment?

Surely it would be more realistic to search for possible reasons why the Bank of Japan might not be as stupid as it seemed, as stupid as some econbloggers were claiming. Possibly Japan’s aging population made growth impossible. Possibly Japan’s massive outstanding government debt made even the slightest inflation too dangerous. Possibly we just aren’t thinking of the complicated reasoning going into the Bank of Japan’s decision.

Surely some humility is appropriate when criticizing the elite decision-makers governing the Bank of Japan. What if it’s you, and not the professional economists making these decisions, who have failed to grasp the relevant economic considerations?

I’ll refer to this genre of arguments as “modest epistemology.”

I see modest epistemology as attempting to defer to a canonical perspective: a way of making judgments that is a Schelling point for coordination. In this case, the Bank of Japan has more claim to canonicity than Eliezer does regarding claims about Japan’s economy. I think deferring to a canonical perspective is key to how modest epistemology functions and why people find it appealing.

In social groups such as effective altruism, canonicity is useful when it allows for better coordination. If everyone can agree that charity X is the best charity, then it is possible to punish those who do not donate to charity X. This is similar to law: if a legal court makes a judgment that is not overturned, that judgment must be obeyed by anyone who does not want to be punished. Similarly, in discourse, it is often useful to punish crackpots by requiring deference to a canonical scientific judgment.

It is natural that deferring to a canonical perspective would be psychologically appealing, since it offers a low likelihood of being punished for deviating while allowing deviants to be punished, creating a sense of unity and certainty.

An obstacle to canonical perspectives is that epistemology requires using local information.  Suppose I saw Bob steal my wallet.  I have information about whether he actually stole my wallet (namely, my observation of the theft) that no one else has.  If I tell others that Bob stole my wallet, they might or might not believe me depending on how much they trust me, as there is some chance I am lying to them.  Constructing a more canonical perspective (e.g. in a court of law) requires integrating this local information: for example, I might tell the judge that Bob stole my wallet, and my friends might vouch for my character.
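
To make “depending on how much they trust me” slightly more concrete, here is a toy Bayesian calculation (the numbers are invented for illustration):

```python
def posterior_bob_stole(prior: float, p_report_if_stole: float, p_report_if_not: float) -> float:
    """Bayes' rule: probability that Bob stole the wallet given my report,
    as a function of how likely I am to make that report in each world."""
    joint_stole = prior * p_report_if_stole
    joint_not = (1 - prior) * p_report_if_not
    return joint_stole / (joint_stole + joint_not)

# A listener who thinks I would almost never fabricate such a report:
print(posterior_bob_stole(prior=0.01, p_report_if_stole=0.95, p_report_if_not=0.001))  # ~0.91
# A listener who thinks I might well be lying:
print(posterior_bob_stole(prior=0.01, p_report_if_stole=0.95, p_report_if_not=0.05))   # ~0.16
```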

If humanity formed a collective superintelligence that integrated local information into a canonical perspective at the speed of light using sensible rules (e.g. something similar to Bayesianism), then there would be little need to exploit local information except to transmit it to this collective superintelligence. Obviously, this hasn’t happened yet. Collective superintelligences made of humans must transmit information at the speed of human communication rather than the speed of light.

In addition to limits on communication speed, collective superintelligences made of humans have another difficulty: they must prevent and detect disinformation. People on the internet sometimes lie, as do people off the internet. Self-deception is effectively another form of deception, and is extremely common as explained in The Elephant in the Brain.

Mostly because of this, current collective superintelligences leave much to be desired. As Jordan Greenhall writes in this post:

Take a look at Syria. What exactly is happening? With just a little bit of looking, I’ve found at least six radically different and plausible narratives:

• Assad used poison gas on his people and the United States bombed his airbase in a measured response.

• Assad attacked a rebel base that was unexpectedly storing poison gas and Trump bombed his airbase for political reasons.

• The Deep State in the United States is responsible for a “false flag” use of poison gas in order to undermine the Trump Insurgency.

• The Russians are responsible for a “false flag” use of poison gas in order to undermine the Deep State.

• Putin and Trump collaborated on a “false flag” in order to distract from “Russiagate.”

• Someone else (China? Israel? Iran?) is responsible for a “false flag” for purposes unknown.

And, just to make sure we really grasp the level of non-sense:

• There was no poison gas attack, the “white helmets” are fake news for purposes unknown and everyone who is in a position to know is spinning their own version of events for their own purposes.

Think this last one is implausible? Are you sure? Are you sure you know the current limits of the war on sensemaking? Of sock puppets and cognitive hacking and weaponized memetics?

All I am certain of about Syria is that I really have no fucking idea what is going on. And that this state of affairs — this increasingly generalized condition of complete disorientation — is untenable.

We are in a collective condition of fog of war. Acting effectively under fog of war requires exploiting local information before it has been integrated into a canonical perspective. In military contexts, units must make decisions before contacting a central base using information and models only available to them. Syrians must decide whether to flee based on their own observations, observations of those they trust, and trustworthy local media. Americans making voting decisions based on Syria must decide which media sources they trust most, or actually visit Syria to gain additional info.

While I have mostly discussed differences in information between people, there are also differences in reasoning ability and willingness to use reason.  Most people most of the time aren’t even modeling things for themselves, but are instead parroting socially acceptable opinions.  The products of reasoning could perhaps be considered as a form of logical information and treated similarly to other information.

In the past, I have found modest epistemology aesthetically appealing on the basis that sufficient coordination would lead to a single canonical perspective that you can increase your average accuracy by deferring to (as explained in this post).  Since then, aesthetic intuitions have led me to instead think of the problem of collective epistemology as one of decentralized coordination: how can good-faith actors reason and act well as a collective superintelligence in conditions of fog of war, where deception is prevalent and creation of common knowledge is difficult?  I find this framing of collective epistemology more beautiful than the idea of immediately deferring to a canonical perspective, and it is a better fit for the real world.

I haven’t completely thought through the implications of this framing (that would be impossible), but so far my thinking has suggested a number of heuristics for group epistemology:

  • Think for yourself. When your information sources are not already doing a good job of informing you, gathering your own information and forming your own models can improve your accuracy and tell you which information sources are most trustworthy. Outperforming experts often doesn’t require complex models or extraordinary insight; see this review of Superforecasting for a description of some of what good amateur forecasters do.
  • Share the products of your thinking. Where possible, share not only opinions but also the information or model that caused you to form the opinion. This allows others to verify and build on your information and models rather than just memorizing “X person believes Y”, resulting in more information transfer. For example, fact posts will generally be better for collective epistemology than a similar post with fewer facts; they will let readers form their own models based on the info and have higher confidence in these models.
  • Fact-check information people share by cross-checking it against other sources of information and models. The more this shared information is fact-checked, the more reliably true it will be. (When someone is wrong on the internet, this is actually a problem worth fixing).
  • Try to make information and models common knowledge among a group when possible, so they can be integrated into a canonical perspective. This allows the group to build on this, rather than having to re-derive or re-state it repeatedly. Contributing to a written canon that some group of people is expected to have read is a great way to do this.
  • When contributing to a canon, seek strong and clear evidence where possible. This can result in a question being definitively settled, which is great for the group’s ability to reliably get the right answer to the question, rather than having a range of “acceptable” answers that will be chosen from based on factors other than accuracy.
  • When taking actions (e.g. making bets), use local information available only to you or a small number of others, not only canonical information. For example, when picking organizations to support, use information you have about these organizations (e.g. information about the competence of people working at this charity) even if not everyone else has this info. (For a more obvious example to illustrate the principle: if I saw Bob steal my wallet, then it’s in my interest to guard my possessions more closely around Bob than I otherwise would, even if I can’t convince everyone that Bob stole my wallet).

Against unreasonably high standards

Consider the following procedure:

  1. Create unreasonably high standards that people are supposed to follow.
  2. Watch as people fail to meet them and thereby accumulate “debt”.
  3. Provide a way for people to discharge their debt by sacrificing their agency to some entity (concrete or abstract).

This is a common way to subjugate people and extract resources from them.  Some examples:

  • Christianity: Christianity defines many natural human emotions and actions as “sins” (i.e. things that accumulate debt), such that almost all Christians sin frequently.  Even those who follow all the rules have “original sin”.  Christianity allows people to discharge their debt by asking Jesus to bear their sins (thus becoming subservient to Jesus/God).
  • The Western education system: Western schools (and many non-Western schools) create unnatural standards of behavior that are hard for students to follow.  When students fail to meet these standards, they are told they deserve punishments including public humiliation and being poor as an adult.  School doesn’t give a way to fully discharge debts, leading to anxiety and depression in many students and former students, but people can partially discharge debt by admitting that they are in an important sense subservient to the education system (e.g. accepting domination from the more-educated boss in the workplace).
  • Effective altruism: The drowning child argument (promoted by effective altruists such as Peter Singer) argues that middle-class Americans have an obligation to sacrifice luxuries to save the lives of children in developing countries, or do something at least this effective (in practice, many effective altruists instead support animal welfare or existential risk organizations).  This is an unreasonably high standard; nearly no one actually sacrifices all their luxuries (living in poverty) to give away more money.  Effective altruism gives a way to discharge this debt: you can just donate 10% of your income to an effective charity (sacrificing some of your agency to it), or change your career to a more good-doing one.  (This doesn’t work for everyone, and many “hardcore EAs” continue to struggle with scrupulosity despite donating much more than 10% of their income or changing their career plans significantly, since they always could be doing more).
  • The rationalist community: I hesitate to write this section for a few reasons (specifically, it’s pretty close to home and is somewhat less clear given that some rationalists have usefully criticized some of the dynamics I’m complaining about).  But a subtext I see in the rationalist community says something like: “You’re biased so you’re likely to be wrong and make bad decisions that harm other people if you take actions in the world, and it’ll be your fault.  Also, the world is on fire and you’re one of the few people who knows about this, so it’s your responsibility to do something about it.  Luckily, you can discharge some of your debts by improving your own rationality, following the advice of high-level rationalists, and perhaps giving them money.”  That’s clearly an instance of this pattern; no one is unbiased, “high-level rationalists” included.  (It’s hard to say where exactly this subtext comes from, and I don’t think it’s anyone’s “fault”, but it definitely seems to exist; I’ve been affected by it myself, and I think it’s part of what causes akrasia in many rationalists.)

There are many more examples; I’m sure you can think of some.  Setting up a system like this has some effects:

  • Hypocrisy: Almost no one actually follows the standards, but they sometimes pretend they do.  Since standards are unreasonably high, they are enforced inconsistently, often against the most-vulnerable members of a group, while the less-vulnerable maintain the illusion that they are actually following the standards.
  • Self-violence: Buying into unreasonably high standards will make someone turn their mind against itself.  Their mind will split between the “righteous” part that is trying to follow and enforce the unreasonably high standards, and the “sinful” part that is covertly disobeying these standards in order to get what the mind actually wants (which is often in conflict with the standards).  Through neglect and self-violence, the “sinful” part of the mind develops into a shadow.  Self-hatred is a natural result of this process.
  • Distorted perception and cognition: The righteous part of the mind sometimes has trouble looking at ways in which the person is failing to meet standards (e.g. it will avoid looking at things that the person might be responsible for fixing).  Consciousness will dim when there’s risk of seeing that one is not meeting the standards (and sometimes also when there’s risk of seeing that others are not meeting the standards).  Concretely, one can imagine someone who gets lost surfing the internet to avoid facing some difficult work they’re supposed to do, or someone who avoids thinking about the ways in which their project is likely to fail.  Given the extent of the high standards and the debt that most people feel they are in, this will often lead to extremely distorted perception and cognition, such that coming out of it feels like waking from a dream.
  • Motivational problems: Working is one way to discharge debt, but working is less motivating if all products of your work go to debt-collectors rather than yourself.  The “sinful” part of the mind will resist work, as it expects to derive little benefit from it.
  • Fear: Accumulating lots of debt gives one the feeling that, at any time, debt-collectors could come and demand anything of you.  This causes the scrupulous to live in fear.  Sometimes, there isn’t even a concretely-identifiable entity they’re afraid of, but it’s clear that they’re afraid of something.

Systems involving unreasonably high standards could theoretically be justified if they were good coordination mechanisms.  But it seems implausible that they are.  Why not just make the de jure norms ones that people are actually likely to follow?  Surely a sufficient set of norms exists, since people are already following the de facto ones.  You can coordinate a lot without optimizing your coordination mechanism for putting everyone in debt!

I take the radical position that TAKING UNREASONABLY HIGH STANDARDS SERIOUSLY IS A REALLY BAD IDEA and ALL OF MY FRIENDS AND PERHAPS ALL HUMANS SHOULD STOP DOING IT.  Unreasonably high standards are responsible for a great deal of violence against life, epistemic problems, and horribleness in general.

(It’s important to distinguish having unreasonably high standards from having a preference ordering whose most-preferred state is impractical to attain; the second does not lead to the same problems unless there’s some way of obligating people to reach an unreasonably good state in the preference ordering.  Attaining a decent but non-maximally-preferred state should perhaps feel annoying or aesthetically displeasing, but not anxiety-inducing.)

My advice to the scrupulous: you are being scammed and you are giving your life away to scammers.  The debts that are part of this scam are fake, and you can safely ignore almost all of them since they won’t actually be enforced.  The best way to make the world better involves first refusing to be scammed, so that you can benefit from the products of your own labor (thereby developing intrinsic motivation to do useful things) instead of using them to pay imaginary debts, and so you can perceive the world accurately without fear.  You almost certainly have significant intrinsic motivation for helping others; you are more likely to successfully help them if your help comes from intrinsic motivation and abundance rather than fear and obligation.

Coordination isn’t hard

In discussions around existential risk, I sometimes hear the phrase “coordination is hard”.  Most of the time, the person saying this is using this as a reason to give up on strategies involving decentralized coordination and instead pursue more-centralized strategies, such as “a small number of people being extremely rational individuals and using other people as advisers and actuators”.  In this context, I think the phrase “coordination is hard” is very misleading.

Here are some examples of successful coordination:

  • Labor movements manage to hold strikes despite individual incentives to break the strike.
  • Open source software is usually created by volunteers, exists in large quantity, and is often of fairly high quality.  Some projects, such as Linux, have hundreds or thousands of contributors.
  • CFCs have been largely phased out globally due to their impact on the ozone layer.
  • Communities, in-person and online, develop discourse norms, which can implement significant functionality for the community (e.g. fact-checking, preventing people from unjustly harming others).
  • Some groups of people distributed among different countries (e.g. workers, moderates, various racial/ethnic groups, intellectuals) will feel affinity with each other and help each other out (e.g. intellectuals in the US might support immigration as a way of assisting intellectuals in other countries).
  • Militaries usually have significant decentralized coordination.  Soldiers usually fight bravely; defectors are relatively rare.
  • Many societies have a well-functioning justice system.  This system requires people to perform many different functions (judge, juror, witness, expert, …).

None of these examples are in perfect analogy with current existential risk issues, but given them, the generic statement “coordination is hard” looks silly.  Upon dismissing this generic statement, more interesting questions emerge:

  • When is coordination easy or hard?
  • How does ability to coordinate vary across different societies historically?
  • What different coordination strategies are there?  How do they work?  What are their strengths and weaknesses?

In many cases, humans seem to be better at coordinating than Homo economicus would predict; anyone concerned with the future of humanity ought to find this fact extremely interesting.

Naïve epistemology, savvy epistemology

[epistemic status: confident this distinction exists and is important, unsure about details/framing]

A contrast:

  • Naïve epistemology: simple models; mathematics and logic; “prima facie”; original seeing; “natural” descriptions of facts on the ground; “from the mouth of a babe”; not trusting others’ models/narratives (but often trusting their reports of basic observable facts); blunt
  • Savvy epistemology: responding to the dominant opinions in a social group such as expert consensus (agreeing with parts of it and/or disagreeing with parts of it); “this is a complex topic”; not wanting to omit considerations that others know about; making sure to be “informed” and “up to date”; educated; responsible; legitimate; sophisticated; subtle; trust or cynicism regarding others’ models/narratives

Naïve epistemology mostly models the world; savvy epistemology mostly models other people’s models of the world.  Naïve epistemology avoids the social field; savvy epistemology swims in it.

The virtues of naiveté

In the past year, I’ve strongly updated towards using more naïve epistemology than savvy epistemology.  Savvy epistemology is more informed in some ways, but is also more corrupt.  It will tend to be slanted by whoever has local political power, rather than being purely optimized for accuracy.  Some examples:

  • Tetlock’s book Superforecasting shows that some non-experts using relatively naïve methods can predict world events significantly better than savvier experts.  This indicates that these experts’ opinions are not very well optimized for accuracy (and are most likely optimized for something other than accuracy).
  • I am convinced by Robin Hanson’s argument that medical care has little association with health outcomes in the US.  The “savvy” responses to his “naïve” article (linked at the bottom of the page) seem to basically be misdirections intended to avoid taking seriously his main point (that cutting medical spending by 50% across the board would make little difference to health outcomes).
  • AI risk was discussed on LessWrong many years before the topic received serious mainstream attention, mostly from a naïve epistemology perspective.  The savvy expert view started out saying it wasn’t a serious problem (for various reasons), and has since drifted somewhat towards the LessWrong view.  From thinking extensively about AI alignment and talking with people over the past few years, I feel that I have gotten significant personal evidence in favor of the LessWrong view that the alignment problem is very difficult and that no one has a good idea of how to solve it.

The corruption of savvy epistemology does not make it useless: it usually contains a great deal of information not captured by naïve epistemology, and use of savvy epistemology allows for easier coordination with others using it.  But it does mean that it’s highly valuable to develop a naïve perspective in addition to a savvy one.

Why be afraid of naïve epistemology?

There are legitimate reasons to be afraid that someone taking naïve epistemology seriously will cause harm:

  • Uncooperative actions: Taking one’s own models seriously can lead one to take actions that are uncooperative with others who have different models, such as dishonesty or violence.  See The Unilateralist’s Curse.  The “social field” that savviness swims in is a coordination mechanism that, among other things, prevents people from taking some uncooperative actions.
  • Safety: Common recommendations for what to believe, though often false, at least offer safety in the medium term for most people.  Having and acting on unusual beliefs can be dangerous to one’s self and others, since at that point you’re not using a set of beliefs that’s already been optimized for medium-term safety.  (In the long term, safety from some risks such as large-scale wars and some medical issues might require nonconformist epistemics)
  • Disharmony: To the extent that words ostensibly about the world are used to coordinate actions, people using different words (e.g. those generated by naïve epistemology) can generate confusion about what actions the group is supposed to be taking.
  • Politics: Saying naïve things can disrupt politics, by e.g. offending allies or potential allies.  This can trigger a defensive reaction.
  • Legibility bias: Naïve models tend to be legible.  It can be tempting to make oversimplified models that don’t actually summarize the fiddly details well, leading to failures such as utopian totalitarianism.

Social technology for being naïve

I think that people who want good epistemics should have the affordance to use and act on naïve epistemology.  This requires addressing the problems in the previous section.  Here are some tools I use to give myself the affordance to do naïve epistemology without paying large costs:

  • Decision theory: Preferentially use naïve epistemology when making bets using one’s own resources, and use shared epistemology when making bets using others’ resources.  Detect “prisoner’s dilemma” or “unilateralist’s curse” situations (including epistemic ones) and make a good-faith effort to achieve mutual cooperation.  Develop and signal the intent to honestly inform others and to learn from others.  Take the true outside view and act as you would want others in your position to act.  (See also: Guided by the beauty of our weapons)
  • Testing things: If a naïve model implies some surprising prediction, it can be good to test it out (if it isn’t very costly).  If the prediction is confirmed, great; if not, then perhaps create a new naïve model that takes this new piece of evidence into account.
  • Separate magisteria: Calling forms of model-building “naïve” puts these models in a separate magisterium where it isn’t socially expected that they will be taken seriously, triggering less defensive reaction.  (I bet many people would be less receptive to this post if I had called it “world-modeling epistemology” rather than “naïve epistemology”).  Other ways of putting model-building in a separate magisterium: calling yourself “a contrarian acting on a weird inside view”; saying you’re “discussing weird ideas”; etc.
  • Translation: Some naïve models can eventually be translated into savvy arguments.  (In practice, not all of them can be translated without losing fidelity).
  • Arguing against savviness:  In general, internalizing arguments against “mainstream” opinions makes it psychologically and socially easier to do “non-mainstream” things.  This post itself contains arguments against savvy epistemology.  Getting arguments against specific savvy beliefs/worldviews can also be helpful.
  • Playing with identity: In general, some identities (“independent”, “truth-seeking”, “weird”) will create a local incentive to lean towards naïve epistemology,  while others (“responsible”, “official”, “educated”) will create a local incentive to lean towards savvy epistemology.  Someone who leans savvy might benefit by creating a context where they can embody a more naïve identity.

I would like it if more people had the affordance to develop a naïve epistemology, so that they can make important inferences that savvy epistemology obscures.  I don’t think that naïve epistemology is an “endpoint” of epistemology (it seems desirable to eventually develop epistemology that is fundamentally truth-tracking the way naïve epistemology is in addition to being highly informed and coordinated across people), but I expect it to be an important stepping stone for people currently embedded in savvy epistemology.

Beware defensibility

[epistemic status: confident this is a thing, not sure if I’m framing it right]

I’ve noticed that my writing is usually better when I write it quickly and don’t edit it as much.  Continuously adjusting it results in something blander.  I think this is about defensibility.  Some examples:

  • Academic papers are more defensible than blog posts.
  • Public relations speech is more defensible than private speech.
  • Proofs are more defensible than intuitive arguments.
  • Slowly-written things are more defensible than quickly-written things.
  • “Literal” speech is more defensible than “metaphorical” speech.

Defensibility is about resisting all possible attacks from some class of adversaries.  The more you expect your expression to be picked apart and used against you, the more defensible your expression will be.  Slowly-written things look more polished and the flaws stand out more, so they get nitpicked.

Defensibility can be good.  In mathematics, a defensible argument (i.e. a proof) is more likely to be correct.  In science, a defensible statistical result is more likely to replicate.  Defensible results become blocks of knowledge that others can (and sometimes must) build upon.

Defensibility can be bad.  Academic papers are usually worse at explaining things than blog posts.  PR is usually highly misleading.  Fully formal proofs are usually harder to understand than intuitive arguments.  Defensible art is bland.

Defensibility requires conformity.  If anything you say can and will be used against you, it is better to say the same things others are saying.  Defensible expressions happen in a shared ontology, such as formal logic, or the “ordinary official speech” ontology that Wikipedia uses.

Some expression should be defensible (against different classes of adversaries).  Some shouldn’t be.  It is virtuous to be flexible about defensibility.

Rationality techniques as patterns

[epistemic status: optimistic vision that currently seems highly useful]

In Christopher Alexander’s work, a pattern consists of:

  1. A way of perceiving an existing tension in a living system.
  2. A model for why the tension exists.
  3. A way of adapting the system in a way that resolves the tension.

Christopher Alexander describes hundreds of patterns for towns and buildings in A Pattern Language.  Consider pattern #126, SOMETHING ROUGHLY IN THE MIDDLE:

A public space without a middle is quite likely to stay empty.

if there is a reasonable area in the middle, intended for public use, it will be wasted unless there are trees, monuments, seats, fountains—a place where people can protect their backs, as easily as they can around the edge.

Therefore:

Between the natural paths which cross a public square or courtyard or piece of common land choose something to stand roughly in the middle: a fountain, a tree, a statue, a clock-tower with seats, a windmill, a bandstand.  Make it something which gives a strong and steady pulse to the square, drawing people in towards the center.  Leave it exactly where it falls between the paths; resist the impulse to put it exactly in the middle.

This pattern points out a tension: a public space without a middle will feel weird and people will avoid it.  Next, it gives a model for the tension: it’s because people want their back to be against something.  Finally, it gives a way to resolve this tension: put something roughly in the middle.

Patterns that work well with each other form a pattern language.  SOMETHING ROUGHLY IN THE MIDDLE works with SMALL PUBLIC SQUARES; along with the other patterns, they form a pattern language.

The initial tension will sometimes be diffuse and hard to pin down.  People might feel “off” when they’re in a large empty space but not have the words to say why.  They might not even consciously realize there is a problem and instead just have slightly elevated stress levels.  The tension becomes more pointed if it is explicitly pointed out and especially if there is a model for it.

Many of the rationality techniques discussed in the LessWrong-descended rationality community are patterns in this sense.  The technique of leaving a line of retreat is about noticing a tension (some possibility X seems hard to seriously consider) and resolving it by imagining a world in which X is true in detail (in order to allow yourself to compare the world where X is true and the world where X is false even-handedly).  The technique of finding a true rejection is about noticing a tension (people have arguments where they give false reasons for their beliefs and don’t actually change their minds when these reasons are refuted) and resolving it by urging people to find the actual historical reasons for their beliefs.  CFAR’s debugging techniques are also usually patterns of this form (they resolve some sort of tension within a person or between a person and their environment, usually reducing stress levels).

Resolving tensions fully brings to mind the image of a placid lake.  But resolving tensions doesn’t necessarily lead to placidity, because systems involving humans are alive.  Solving all of someone’s immediate problems might make them bored, and might make them want to help others.

Christopher Alexander praises the way that well-built towns and buildings bring out life in The Timeless Way of Building (I highly recommend reading the first 2 chapters):

There is one timeless way of building.

It is thousands of years old, and the same today as it has always been.

The great traditional buildings of the past, the villages and tents and temples in which man feels at home, have always been made by people who were very close to the center of this way.  It is not possible to make great buildings, or great towns, beautiful places, places where you feel yourself, places where you feel alive, except by following this way.  And, as you will see, this way will lead anyone who looks for it to buildings which are themselves as ancient in their form, as the trees and hills, and as our faces are.

It is a process through which the order of a building or a town grows out directly from the inner nature of the people, and the animals, and plants, and matter which are in it.

It is a process which allows the life inside a person, or a family, or a town, to flourish, openly, in freedom, so vividly that it gives birth, of its own accord, to the natural order which is needed to sustain this life.

But as things are, we have so far beset ourselves with rules, and concepts, and ideas of what must be done to make a building or a town alive, that we have become afraid of what will happen naturally, and convinced that we must work within a “system” and with “methods” since without them our surroundings will come tumbling down in chaos.

The thoughts and fears which feed these methods are illusions.

I find Christopher Alexander’s vision highly appealing.  Something happened in Western architecture that interfered with the timeless way of building; this probably had to do with the increase in degrees of freedom in the transition from pre-modern to modern societies.  Similar phenomena have happened in other domains.  Luckily, the way can be regained through the discipline of pattern languages, and eventually through dissolving disciplines and doing what is natural.  There are obvious parallels with Taoism.

I want pattern languages for improving people’s psychological functioning, for improving groups’ ability to work together, for improving models of the world, for making good institutions, for all sorts of things.  Groups of people are developing pattern languages in many different places, mostly without explicitly knowing what they are doing.  I think a more explicit understanding of pattern languages and knowledge of how past pattern languages were created will help to design better ones in the future.  Maybe with the right pattern languages (and with the right going-beyond-them), we can become much more alive, as individuals and as groups, and much more able to coordinate to accomplish very hard things.

The true outside view

[epistemic status: strong opinions weakly held, not very original]

The false outside view says that if you have a contrarian inside view, you’re probably wrong, so you should not act like you’re correct. The false outside view is driven by intuitions about social status and/or fear of being wrong.

The true outside view says that a great deal of good comes from contrarians pushing on their contrarian inside views (acting on them, talking about them with others, etc), so if you have one of those you should maybe do that, while additionally taking others’ perspectives seriously, testing your ideas often, being honest, avoiding some potentially-catastrophic unilateral actions, etc.

The true outside view follows from any halfway-decent decision theory (something like “updateless decision theory for humans”, as gestured at before).  This decision theory might look “epistemically modest” sometimes and “epistemically arrogant” other times, but these descriptions impose ego on something that by construction doesn’t have an ego.