Against unreasonably high standards

Consider the following procedure:

  1. Create unreasonably high standards that people are supposed to follow.
  2. Watch as people fail to meet them and thereby accumulate “debt”.
  3. Provide a way for people to discharge their debt by sacrificing their agency to some entity (concrete or abstract).

This is a common way to subjugate people and extract resources from them.  Some examples:

  • Christianity: Christianity defines many natural human emotions and actions as “sins” (i.e. things that accumulate debt), such that almost all Christians sin frequently.  Even those who follow all the rules have “original sin”.  Christianity allows people to discharge their debt by asking Jesus to bear their sins (thus becoming subservient to Jesus/God).
  • The Western education system: Western schools (and many non-Western schools) create unnatural standards of behavior that are hard for students to follow.  When students fail to meet these standards, they are told they deserve punishments ranging from public humiliation to poverty in adulthood.  School doesn’t offer a way to fully discharge these debts, leading to anxiety and depression in many students and former students, but people can partially discharge them by admitting that they are in an important sense subservient to the education system (e.g. accepting domination from a more-educated boss in the workplace).
  • Effective altruism: The drowning child argument (promoted by effective altruists such as Peter Singer) argues that middle-class Americans have an obligation to sacrifice luxuries to save the lives of children in developing countries, or do something at least this effective (in practice, many effective altruists instead support animal welfare or existential risk organizations).  This is an unreasonably high standard; nearly no one actually sacrifices all their luxuries (living in poverty) to give away more money.  Effective altruism gives a way to discharge this debt: you can just donate 10% of your income to an effective charity (sacrificing some of your agency to it), or change your career to a more good-doing one.  (This doesn’t work for everyone, and many “hardcore EAs” continue to struggle with scrupulosity despite donating much more than 10% of their income or changing their career plans significantly, since they could always be doing more.)
  • The rationalist community: I hesitate to write this section for a few reasons (specifically, it’s pretty close to home and is somewhat less clear given that some rationalists have usefully criticized some of the dynamics I’m complaining about).  But a subtext I see in the rationalist community says something like: “You’re biased so you’re likely to be wrong and make bad decisions that harm other people if you take actions in the world, and it’ll be your fault.  Also, the world is on fire and you’re one of the few people who knows about this, so it’s your responsibility to do something about it.  Luckily, you can discharge some of your debts by improving your own rationality, following the advice of high-level rationalists, and perhaps giving them money.”  That’s clearly an instance of this pattern; no one is unbiased, “high-level rationalists” included.  (It’s hard to say where exactly this subtext comes from, and I don’t think it’s anyone’s “fault”, but it definitely seems to exist; I’ve been affected by it myself, and I think it’s part of what causes akrasia in many rationalists.)

There are many more examples; I’m sure you can think of some.  Setting up a system like this has some effects:

  • Hypocrisy: Almost no one actually follows the standards, but they sometimes pretend they do.  Since standards are unreasonably high, they are enforced inconsistently, often against the most-vulnerable members of a group, while the less-vulnerable maintain the illusion that they are actually following the standards.
  • Self-violence: Buying into unreasonably high standards will make someone turn their mind against itself.  Their mind will split between the “righteous” part that is trying to follow and enforce the unreasonably high standards, and the “sinful” part that is covertly disobeying these standards in order to get what the mind actually wants (which is often in conflict with the standards).  Through neglect and self-violence, the “sinful” part of the mind develops into a shadow.  Self-hatred is a natural result of this process.
  • Distorted perception and cognition: The righteous part of the mind sometimes has trouble looking at ways in which the person is failing to meet standards (e.g. it will avoid looking at things that the person might be responsible for fixing).  Consciousness will dim when there’s risk of seeing that one is not meeting the standards (and sometimes also when there’s risk of seeing that others are not meeting the standards).  Concretely, one can imagine someone who gets lost surfing the internet to avoid facing some difficult work they’re supposed to do, or someone who avoids thinking about the ways in which their project is likely to fail.  Given the extent of the high standards and the debt that most people feel they are in, this will often lead to extremely distorted perception and cognition, such that coming out of it feels like waking from a dream.
  • Motivational problems: Working is one way to discharge debt, but working is less motivating if all products of your work go to debt-collectors rather than yourself.  The “sinful” part of the mind will resist work, as it expects to derive little benefit from it.
  • Fear: Accumulating lots of debt gives one the feeling that, at any time, debt-collectors could come and demand anything.  This causes the scrupulous to live in fear.  Sometimes there isn’t even a concretely-identifiable entity they’re afraid of, but it’s clear that they’re afraid of something.

Systems involving unreasonably high standards could theoretically be justified if they were good coordination mechanisms.  But it seems implausible that they are.  Why not just make the de jure norms ones that people are actually likely to follow?  Surely a sufficient set of norms exists, since people are already following the de facto ones.  You can coordinate a lot without optimizing your coordination mechanism for putting everyone in debt!

I take the radical position that TAKING UNREASONABLY HIGH STANDARDS SERIOUSLY IS A REALLY BAD IDEA and ALL OF MY FRIENDS AND PERHAPS ALL HUMANS SHOULD STOP DOING IT.  Unreasonably high standards are responsible for a great deal of violence against life, epistemic problems, and horribleness in general.

(It’s important to distinguish having unreasonably high standards from having a preference ordering whose most-preferred state is impractical to attain; the second does not lead to the same problems unless there’s some way of obligating people to reach an unreasonably good state in the preference ordering.  Attaining a decent but non-maximally-preferred state should perhaps feel annoying or aesthetically displeasing, but not anxiety-inducing.)

My advice to the scrupulous: you are being scammed and you are giving your life away to scammers.  The debts that are part of this scam are fake, and you can safely ignore almost all of them since they won’t actually be enforced.  The best way to make the world better involves first refusing to be scammed, so that you can benefit from the products of your own labor (thereby developing intrinsic motivation to do useful things) instead of using them to pay imaginary debts, and so you can perceive the world accurately without fear.  You almost certainly have significant intrinsic motivation for helping others; you are more likely to successfully help them if your help comes from intrinsic motivation and abundance rather than fear and obligation.

Coordination isn’t hard

In discussions around existential risk, I sometimes hear the phrase “coordination is hard”.  Most of the time, the person saying it is using it as a reason to give up on strategies involving decentralized coordination and instead pursue more-centralized strategies, such as “a small number of people being extremely rational individuals and using other people as advisers and actuators”.  In this context, I think the phrase “coordination is hard” is very misleading.

Here are some examples of successful coordination:

  • Labor movements manage to hold strikes despite individual incentives to break the strike.
  • Open source software is usually created by volunteers and is often both abundant and of fairly high quality.  Some projects, such as Linux, have hundreds or thousands of contributors.
  • CFCs have been largely phased out globally due to their impact on the ozone layer.
  • Communities, in-person and online, develop discourse norms, which can implement significant functionality for the community (e.g. fact-checking, preventing people from unjustly harming others).
  • Some groups of people distributed among different countries (e.g. workers, moderates, various racial/ethnic groups, intellectuals) will feel affinity with each other and help each other out (e.g. intellectuals in the US might support immigration as a way of assisting intellectuals in other countries).
  • Militaries usually have significant decentralized coordination.  Soldiers usually fight bravely; defectors are relatively rare.
  • Many societies have a well-functioning justice system.  This system requires people to perform many different functions (judge, juror, witness, expert, …).

None of these examples is a perfect analogy to current existential risk issues, but in light of them, the generic statement “coordination is hard” looks silly.  Upon dismissing this generic statement, more interesting questions emerge:

  • When is coordination easy or hard?
  • How does ability to coordinate vary across different societies historically?
  • What different coordination strategies are there?  How do they work?  What are their strengths and weaknesses?

In many cases, humans seem to be better at coordinating than Homo economicus would predict; anyone concerned with the future of humanity ought to find this fact extremely interesting.
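
To make the Homo economicus benchmark concrete, here is a minimal sketch of a one-shot public goods game (my own illustration; the parameter values are arbitrary).  A purely self-interested agent contributes nothing, since each contributed unit returns less than one unit to the contributor; human subjects in laboratory versions of this game nonetheless typically contribute a substantial fraction of their endowment, leaving everyone better off.

    # One-shot public goods game: each of n players receives an endowment,
    # chooses a contribution, and the pot is multiplied and split evenly.
    def payoff(contributions, multiplier=1.6, endowment=10.0):
        """Payoff to each player, given everyone's contributions."""
        n = len(contributions)
        pot = multiplier * sum(contributions)
        return [endowment - c + pot / n for c in contributions]

    # Homo economicus prediction: a contributed unit returns only
    # multiplier / n = 0.4 < 1 to the contributor, so everyone free-rides.
    print(payoff([0.0, 0.0, 0.0, 0.0]))  # [10.0, 10.0, 10.0, 10.0]

    # Contributing about half the endowment, as human subjects commonly do,
    # makes every player strictly better off.
    print(payoff([5.0, 5.0, 5.0, 5.0]))  # [13.0, 13.0, 13.0, 13.0]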

Naïve epistemology, savvy epistemology

[epistemic status: confident this distinction exists and is important, unsure about details/framing]

A contrast:

  • Naïve epistemology: simple models; mathematics and logic; “prima facie”; original seeing; “natural” descriptions of facts on the ground; “from the mouth of a babe”; not trusting others’ models/narratives (but often trusting their reports of basic observable facts); blunt
  • Savvy epistemology: responding to the dominant opinions in a social group such as expert consensus (agreeing with parts of it and/or disagreeing with parts of it); “this is a complex topic”; not wanting to omit considerations that others know about; making sure to be “informed” and “up to date”; educated; responsible; legitimate; sophisticated; subtle; trust or cynicism regarding others’ models/narratives

Naïve epistemology mostly models the world; savvy epistemology mostly models other people’s models of the world.  Naïve epistemology avoids the social field; savvy epistemology swims in it.

The virtues of naiveté

In the past year, I’ve strongly updated towards using more naïve epistemology than savvy epistemology.  Savvy epistemology is more informed in some ways, but is also more corrupt.  It will tend to be slanted by whoever has local political power, rather than being purely optimized for accuracy.  Some examples:

  • Tetlock’s book Superforecasting shows that some non-experts using relatively naïve methods can predict world events significantly better than savvier experts.  This indicates that these experts’ opinions are not very well optimized for accuracy (and are most likely optimized for something other than accuracy); the sketch after this list shows how forecast accuracy is scored in such tournaments.
  • I am convinced by Robin Hanson’s argument that medical care has little association with health outcomes in the US.  The “savvy” responses to his “naïve” article (linked at the bottom of the page) seem to basically be misdirections intended to avoid taking seriously his main point (that cutting medical spending by 50% across the board would make little difference to health outcomes).
  • AI risk was discussed on LessWrong many years before the topic received serious mainstream attention, mostly from a naïve epistemology perspective.  The savvy expert view started out saying it wasn’t a serious problem (for various reasons), and has since drifted somewhat towards the LessWrong view.  From thinking extensively about AI alignment and talking with people over the past few years, I feel that I have gotten significant personal evidence in favor of the LessWrong view that the alignment problem is very difficult and that no one has a good idea of how to solve it.
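
For context on the first bullet: Tetlock’s forecasting tournaments score predictions with the Brier score, the mean squared error between probability forecasts and 0/1 outcomes.  Here is a minimal sketch with invented forecasts, showing how a confident, well-calibrated forecaster beats one who defensibly hedges everything at 50%:

    def brier(forecasts, outcomes):
        """Mean squared error between probability forecasts and 0/1 outcomes.
        Lower is better; a perfect forecaster scores 0."""
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    outcomes = [1, 0, 0, 1]              # what actually happened
    calibrated = [0.8, 0.2, 0.3, 0.7]    # confident and well-calibrated
    hedged = [0.5, 0.5, 0.5, 0.5]        # maximally noncommittal
    print(brier(calibrated, outcomes))   # 0.065
    print(brier(hedged, outcomes))       # 0.25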

The corruption of savvy epistemology does not make it useless: it usually contains a great deal of information not captured by naïve epistemology, and use of savvy epistemology allows for easier coordination with others using it.  But it does mean that it’s highly valuable to develop a naïve perspective in addition to a savvy one.

Why be afraid of naïve epistemology?

There are legitimate reasons to be afraid that someone taking naïve epistemology seriously will cause harm:

  • Uncooperative actions: Taking one’s own models seriously can lead one to take actions that are uncooperative with others who have different models, such as dishonesty or violence.  See The Unilateralist’s Curse.  The “social field” that savviness swims in is a coordination mechanism that, among other things, prevents people from taking some uncooperative actions.
  • Safety: Common recommendations for what to believe, though often false, at least offer safety in the medium term for most people.  Having and acting on unusual beliefs can be dangerous to one’s self and others, since at that point you’re not using a set of beliefs that’s already been optimized for medium-term safety.  (In the long term, safety from some risks such as large-scale wars and some medical issues might require nonconformist epistemics.)
  • Disharmony: To the extent that words ostensibly about the world are used to coordinate actions, people using different words (e.g. those generated by naïve epistemology) can generate confusion about what actions the group is supposed to be taking.
  • Politics: Saying naïve things can disrupt politics, by e.g. offending allies or potential allies.  This can trigger a defensive reaction.
  • Legibility bias: Naïve models tend to be legible.  It can be tempting to make oversimplified models that don’t actually summarize the fiddly details well, leading to failures such as utopian totalitarianism.

Social technology for being naïve

I think that people who want good epistemics should have the affordance to use and act on naïve epistemology.  This requires addressing the problems in the previous section.  Here are some tools I use to give myself the affordance to do naïve epistemology without paying large costs:

  • Decision theory: Preferentially use naïve epistemology when making bets using one’s own resources, and use shared epistemology when making bets using others’ resources.  Detect “prisoner’s dilemma” or “unilateralist’s curse” situations (including epistemic ones) and make a good-faith effort to achieve mutual cooperation; a toy simulation of the unilateralist’s curse appears after this list.  Develop and signal the intent to honestly inform others and to learn from others.  Take the true outside view and act as you would want others in your position to act.  (See also: Guided by the beauty of our weapons)
  • Testing things: If a naïve model implies some surprising prediction, it can be good to test it out (if it isn’t very costly).  If the prediction is confirmed, great; if not, then perhaps create a new naïve model that takes this new piece of evidence into account.
  • Separate magisteria: Calling forms of model-building “naïve” puts these models in a separate magisterium where it isn’t socially expected that they will be taken seriously, triggering less of a defensive reaction.  (I bet many people would be less receptive to this post if I had called it “world-modeling epistemology” rather than “naïve epistemology”).  Other ways of putting model-building in a separate magisterium: calling yourself “a contrarian acting on a weird inside view”; saying you’re “discussing weird ideas”; etc.
  • Translation: Some naïve models can eventually be translated into savvy arguments.  (In practice, not all of them can be translated without losing fidelity).
  • Arguing against savviness:  In general, internalizing arguments against “mainstream” opinions makes it psychologically and socially easier to do “non-mainstream” things.  This post itself contains arguments against savvy epistemology.  Getting arguments against specific savvy beliefs/worldviews can also be helpful.
  • Playing with identity: In general, some identities (“independent”, “truth-seeking”, “weird”) will create a local incentive to lean towards naïve epistemology, while others (“responsible”, “official”, “educated”) will create a local incentive to lean towards savvy epistemology.  Someone who leans savvy might benefit by creating a context where they can embody a more naïve identity.
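
Here is the toy simulation of the unilateralist’s curse promised in the first bullet (my own sketch; the numbers are made up).  Each of n agents gets a noisy estimate of an action’s true value, and the action happens if any single agent judges it positive; as n grows, a net-harmful action gets taken almost surely, which is why detecting these situations and aiming for mutual cooperation matters.

    import random

    def unilateral_action_rate(n_agents, true_value=-1.0, noise=2.0, trials=10_000):
        """Fraction of trials in which at least one of n agents acts on a
        positive (but mistaken) estimate of a net-harmful action's value."""
        acted = 0
        for _ in range(trials):
            estimates = [random.gauss(true_value, noise) for _ in range(n_agents)]
            if max(estimates) > 0:  # someone concludes the action is net-positive
                acted += 1
        return acted / trials

    for n in [1, 5, 20]:
        print(n, unilateral_action_rate(n))
    # Roughly: a lone agent errs ~31% of the time; with 20 agents,
    # someone acts unilaterally in ~99.9% of trials.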

I would like it if more people had the affordance to develop a naïve epistemology, so that they can make important inferences that savvy epistemology obscures.  I don’t think that naïve epistemology is an “endpoint” of epistemology (it seems desirable to eventually develop epistemology that is fundamentally truth-tracking the way naïve epistemology is in addition to being highly informed and coordinated across people), but I expect it to be an important stepping stone for people currently embedded in savvy epistemology.

Beware defensibility

[epistemic status: confident this is a thing, not sure if I’m framing it right]

I’ve noticed that my writing is usually better when I write it quickly and don’t edit it as much.  Continuously adjusting it results in something blander.  I think this is about defensibility.  Some examples:

  • Academic papers are more defensible than blog posts.
  • Public relations speech is more defensible than private speech.
  • Proofs are more defensible than intuitive arguments.
  • Slowly-written things are more defensible than quickly-written things.
  • “Literal” speech is more defensible than “metaphorical” speech.

Defensibility is about resisting all possible attacks from some class of adversaries.  The more you expect your expression to be picked apart and used against you, the more defensible your expression will be.  Slowly-written things look more polished and the flaws stand out more, so they get nitpicked.

Defensibility can be good.  In mathematics, a defensible argument (i.e. a proof) is more likely to be correct.  In science, a defensible statistical result is more likely to replicate.  Defensible results become blocks of knowledge that others can (and sometimes must) build upon.

Defensibility can be bad.  Academic papers are usually worse at explaining things than blog posts.  PR is usually highly misleading.  Fully formal proofs are usually harder to understand than intuitive arguments.  Defensible art is bland.

Defensibility requires conformity.  If anything you say can and will be used against you, it is better to say the same things others are saying.  Defensible expressions happen in a shared ontology, such as formal logic, or the “ordinary official speech” ontology that Wikipedia uses.

Some expression should be defensible (against different classes of adversaries).  Some shouldn’t be.  It is virtuous to be flexible about defensibility.

Rationality techniques as patterns

[epistemic status: optimistic vision that currently seems highly useful]

In Christopher Alexander’s work, a pattern consists of:

  1. A way of perceiving an existing tension in a living system.
  2. A model for why the tension exists.
  3. A way of adapting the system in a way that resolves the tension.

Christopher Alexander describes hundreds of patterns for towns and buildings in A Pattern Language.  Consider pattern #126, SOMETHING ROUGHLY IN THE MIDDLE:

A public space without a middle is quite likely to stay empty.

if there is a reasonable area in the middle, intended for public use, it will be wasted unless there are trees, monuments, seats, fountains—a place where people can protect their backs, as easily as they can around the edge.

Therefore:

Between the natural paths which cross a public square or courtyard or piece of common land choose something to stand roughly in the middle: a fountain, a tree, a statue, a clock-tower with seats, a windmill, a bandstand.  Make it something which gives a strong and steady pulse to the square, drawing people in towards the center.  Leave it exactly where it falls between the paths; resist the impulse to put it exactly in the middle.

This pattern points out a tension: a public space without a middle will feel weird and people will avoid it.  Next, it gives a model for the tension: it’s because people want their back to be against something.  Finally, it gives a way to resolve this tension: put something roughly in the middle.

Patterns that work well with each other form a pattern language.  SOMETHING ROUGHLY IN THE MIDDLE works with SMALL PUBLIC SQUARES; along with the other patterns, they form a pattern language.

The initial tension will sometimes be diffuse and hard to pin down.  People might feel “off” when they’re in a large empty space but not have the words to say why.  They might not even consciously realize there is a problem and instead just have slightly elevated stress levels.  The tension becomes sharper once it is explicitly pointed out, and especially once there is a model for it.

Many of the rationality techniques discussed in the LessWrong-descended rationality community are patterns in this sense.  The technique of leaving a line of retreat is about noticing a tension (some possibility X seems hard to seriously consider) and resolving it by imagining a world in which X is true in detail (in order to allow yourself to compare the world where X is true and the world where X is false even-handedly).  The technique of finding a true rejection is about noticing a tension (people have arguments where they give false reasons for their beliefs and don’t actually change their minds when these reasons are refuted) and resolving it by urging people to find the actual historical reasons for their beliefs.  CFAR’s debugging techniques are also usually patterns of this form (they resolve some sort of tension within a person or between a person and their environment, usually reducing stress levels).
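
To make the mapping explicit, here is one way to render the three-part pattern structure as a small data structure, with leaving a line of retreat filled in (my own framing, not Alexander’s or CFAR’s notation):

    from dataclasses import dataclass

    @dataclass
    class Pattern:
        name: str
        tension: str      # a perceivable tension in a living system
        model: str        # a model for why the tension exists
        resolution: str   # an adaptation of the system that resolves it

    line_of_retreat = Pattern(
        name="Leaving a line of retreat",
        tension="some possibility X seems hard to seriously consider",
        model="flinching away from X prevents comparing the X-world and "
              "the not-X-world even-handedly",
        resolution="imagine, in detail, a world in which X is true",
    )

    # Patterns that work well with each other form a pattern language.
    pattern_language = [line_of_retreat]  # extend with further patterns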

Resolving tensions fully brings to mind the image of a placid lake.  But resolving tensions doesn’t necessarily lead to placidity, because systems involving humans are alive.  Solving all of someone’s immediate problems might make them bored, and might make them want to help others.

Christopher Alexander praises the way that well-built towns and buildings bring out life in The Timeless Way of Building (I highly recommend reading the first 2 chapters):

There is one timeless way of building.

It is thousands of years old, and the same today as it has always been.

The great traditional buildings of the past, the villages and tents and temples in which man feels at home, have always been made by people who were very close to the center of this way.  It is not possible to make great buildings, or great towns, beautiful places, places where you feel yourself, places where you feel alive, except by following this way.  And, as you will see, this way will lead anyone who looks for it to buildings which are themselves as ancient in their form, as the trees and hills, and as our faces are.

It is a process through which the order of a building or a town grows out directly from the inner nature of the people, and the animals, and plants, and matter which are in it.

It is a process which allows the life inside a person, or a family, or a town, to flourish, openly, in freedom, so vividly that it gives birth, of its own accord, to the natural order which is needed to sustain this life.

But as things are, we have so far beset ourselves with rules, and concepts, and ideas of what must be done to make a building or a town alive, that we have become afraid of what will happen naturally, and convinced that we must work within a “system” and with “methods” since without them our surroundings will come tumbling down in chaos.

The thoughts and fears which feed these methods are illusions.

I find Christopher Alexander’s vision highly appealing.  Something happened in Western architecture that interfered with the timeless way of building; this probably had to do with the increase in degrees of freedom in the transition from pre-modern to modern societies.  Similar phenomena have happened in other domains.  Luckily, the way can be regained through the discipline of pattern languages, and eventually through dissolving disciplines and doing what is natural.  There are obvious parallels with Taoism.

I want pattern languages for improving people’s psychological functioning, for improving groups’ ability to work together, for improving models of the world, for making good institutions, for all sorts of things.  Groups of people are developing pattern languages in many different places, mostly without explicitly knowing what they are doing.  I think a more explicit understanding of pattern languages and knowledge of how past pattern languages were created will help to design better ones in the future.  Maybe with the right pattern languages (and with the right going-beyond-them), we can become much more alive, as individuals and as groups, and much more able to coordinate to accomplish very hard things.

The true outside view

[epistemic status: strong opinions weakly held, not very original]

The false outside view says that if you have a contrarian inside view, you’re probably wrong, so you should not act like you’re correct. The false outside view is driven by intuitions about social status and/or fear of being wrong.

The true outside view says that a great deal of good comes from contrarians pushing on their contrarian inside views (acting on them, talking about them with others, etc), so if you have one of those you should maybe do that, while additionally taking others’ perspectives seriously, testing your ideas often, being honest, avoiding some potentially-catastrophic unilateral actions, etc.

The true outside view follows from any halfway-decent decision theory (something like “updateless decision theory for humans”, as gestured at before).  This decision theory might look “epistemically modest” sometimes and “epistemically arrogant” other times, but these descriptions impose ego on something that by construction doesn’t have an ego.