Consider the following procedure:
- Create unreasonably high standards that people are supposed to follow.
- Watch as people fail to meet them and thereby accumulate “debt”.
- Provide a way for people to discharge their debt by sacrificing their agency to some entity (concrete or abstract).
This is a common way to subjugate people and extract resources from them. Some examples:
- Christianity: Christianity defines many natural human emotions and actions as “sins” (i.e. things that accumulate debt), such that almost all Christians sin frequently. Even those who follow all the rules have “original sin”. Christianity allows people to discharge their debt by asking Jesus to bear their sins (thus becoming subservient to Jesus/God).
- The Western education system: Western schools (and many non-Western schools) create unnatural standards of behavior that are hard for students to follow. When students fail to meet these standards, they are told they deserve punishments including public humiliation and being poor as an adult. School doesn’t give a way to fully discharge debts, leading to anxiety and depression in many students and former students, but people can partially discharge debt by admitting that they are in an important sense subservient to the education system (e.g. accepting domination from the more-educated boss in the workplace).
- Effective altruism: The drowning child argument (promoted by effective altruists such as Peter Singer) argues that middle-class Americans have an obligation to sacrifice luxuries to save the lives of children in developing countries, or do something at least this effective (in practice, many effective altruists instead support animal welfare or existential risk organizations). This is an unreasonably high standard; nearly no one actually sacrifices all their luxuries (living in poverty) to give away more money. Effective altruism gives a way to discharge this debt: you can just donate 10% of your income to an effective charity (sacrificing some of your agency to it), or change your career to a more good-doing one. (This doesn’t work for everyone, and many “hardcore EAs” continue to struggle with scrupulosity despite donating much more than 10% of their income or changing their career plans significantly, since they always could be doing more).
- The rationalist community: I hesitate to write this section for a few reasons (specifically, it’s pretty close to home and is somewhat less clear given that some rationalists have usefully criticized some of the dynamics I’m complaining about). But a subtext I see in the rationalist community says something like: “You’re biased so you’re likely to be wrong and make bad decisions that harm other people if you take actions in the world, and it’ll be your fault. Also, the world is on fire and you’re one of the few people who knows about this, so it’s your responsibility to do something about it. Luckily, you can discharge some of your debts by improving your own rationality, following the advice of high-level rationalists, and perhaps giving them money.” That’s clearly an instance of this pattern; no one is unbiased, “high-level rationalists” included. (It’s hard to say where exactly this subtext comes from, and I don’t think it’s anyone’s “fault”, but it definitely seems to exist; I’ve been affected by it myself, and I think it’s part of what causes akrasia in many rationalists.)
There are many more examples; I’m sure you can think of some. Setting up a system like this has some effects:
- Hypocrisy: Almost no one actually follows the standards, but they sometimes pretend they do. Since standards are unreasonably high, they are enforced inconsistently, often against the most-vulnerable members of a group, while the less-vulnerable maintain the illusion that they are actually following the standards.
- Self-violence: Buying into unreasonably high standards will make someone turn their mind against itself. Their mind will split between the “righteous” part that is trying to follow and enforce the unreasonably high standards, and the “sinful” part that is covertly disobeying these standards in order to get what the mind actually wants (which is often in conflict with the standards). Through neglect and self-violence, the “sinful” part of the mind develops into a shadow. Self-hatred is a natural result of this process.
- Distorted perception and cognition: The righteous part of the mind sometimes has trouble looking at ways in which the person is failing to meet standards (e.g. it will avoid looking at things that the person might be responsible for fixing). Consciousness will dim when there’s risk of seeing that one is not meeting the standards (and sometimes also when there’s risk of seeing that others are not meeting the standards). Concretely, one can imagine someone who gets lost surfing the internet to avoid facing some difficult work they’re supposed to do, or someone who avoids thinking about the ways in which their project is likely to fail. Given the extent of the high standards and the debt that most people feel they are in, this will often lead to extremely distorted perception and cognition, such that coming out of it feels like waking from a dream.
- Motivational problems: Working is one way to discharge debt, but working is less motivating if all products of your work go to debt-collectors rather than yourself. The “sinful” part of the mind will resist work, as it expects to derive little benefit from it.
- Fear: Accumulating lots of debt gives one the feeling that, at any time, debt-collectors could come and demand anything of you. This causes the scrupulous to live in fear. Sometimes, there isn’t even a concretely-identifiable entity they’re afraid of, but it’s clear that they’re afraid of something.
Systems involving unreasonably high standards could theoretically be justified if they were good coordination mechanisms. But it seems implausible that they are. Why not just make the de jure norms ones that people are actually likely to follow? Surely a sufficient set of norms exists, since people are already following the de facto ones. You can coordinate a lot without optimizing your coordination mechanism for putting everyone in debt!
I take the radical position that TAKING UNREASONABLY HIGH STANDARDS SERIOUSLY IS A REALLY BAD IDEA and ALL OF MY FRIENDS AND PERHAPS ALL HUMANS SHOULD STOP DOING IT. Unreasonably high standards are responsible for a great deal of violence against life, epistemic problems, and horribleness in general.
(It’s important to distinguish having unreasonably high standards from having a preference ordering whose most-preferred state is impractical to attain; the second does not lead to the same problems unless there’s some way of obligating people to reach an unreasonably good state in the preference ordering. Attaining a decent but non-maximally-preferred state should perhaps feel annoying or aesthetically displeasing, but not anxiety-inducing.)
My advice to the scrupulous: you are being scammed and you are giving your life away to scammers. The debts that are part of this scam are fake, and you can safely ignore almost all of them since they won’t actually be enforced. The best way to make the world better involves first refusing to be scammed, so that you can benefit from the products of your own labor (thereby developing intrinsic motivation to do useful things) instead of using them to pay imaginary debts, and so you can perceive the world accurately without fear. You almost certainly have significant intrinsic motivation for helping others; you are more likely to successfully help them if your help comes from intrinsic motivation and abundance rather than fear and obligation.
18 thoughts on “Against unreasonably high standards”
I agree with a lot in this post, but I do think that perfectionism is a useful thing that many people underrate. You write a paper once but many people read it (similarly for talks). It’s worth holding yourself to an incredibly high standard for these, especially because e.g. people go to lots of talks and will only remember the ones that stood out. As an example, I recently spent 20 hours preparing a 20-minute talk and felt like it was worth it.
Thanks for this; I think it articulates a real problem that we need to guard against. Two thoughts:
What about war? That seems to me to be a case of very high standards: “The enemy is invading our country; we all need to devote 100% of our waking lives to throwing them out.” Yet it seems to me to be a case where the very high standards are justified & effective.
…If you agree with me on that, then I’d argue that EA would also be justified in having standards that high, because the problems it is dealing with are at least as serious as war and arguably more.
You write “It’s important to distinguish having unreasonably high standards from having a preference ordering whose most-preferred state is impractical to attain;”
Many EAs and rationalists explicitly and publicly take the latter option, AFAICT. Many don’t.
Some talk about “enormous opportunity to do good” and some talk about “our obligations to help those in need.” Would your concrete recommendation be to shift rhetoric to be more of the first quote and less of the second quote? (Keep in mind, people outside EA have sometimes criticized EA for saying things like that first quote. There is a strain of criticism that says that EA’s standards aren’t high enough!)
Devoting 100% of your waking life to throwing out the enemy is insane; you would neglect your own needs and die pretty quickly. (Even the metaphorical version of that is a bad idea; things like rest and play serve important cognitive functions that can help in wars.)
If a group of people is in danger of being killed by another group, they will be more motivated to fight, so somewhat-high requirements might actually be followed. (I think in practice militaries sometimes do the “unreasonably high standards” thing rather than the “reasonably high standards” thing, and this seems bad to me)
In practice EAs only have moderate intrinsic motivation to directly do good in ways they expect not to be rewarded for, and will develop motivation problems if they self-sacrifice too much. This seems correct to me; immediately martyring yourself is usually a stupid idea. It usually makes more sense to build capacity for yourself and people in your network who you like (often by doing good things and getting something in return), so you can do bigger things in the future.
In general, if for some goal X, intrinsic motivation works better for accomplishing X than unreasonably high standards do, then it doesn’t matter how important X is, it’s still better to use intrinsic motivation rather than unreasonably high standards. There’s no importance to “being justified in high standards” if having high standards is a silly idea in the first place.
In general I frown upon phrases like “our obligations to help those in need”, though probably the appropriateness of phrases like this depends on social context and such.
I don’t know what “strain of criticism” you’re referring to. Someone outside the movement probably thinks EA is optimizing in the wrong direction (e.g. individual rather than systematic change), not that its standards are too low.
I agree that the issue of how high our standards should be hinges on what the most effective level of standards is. I brought up the war example as a case where it seems like having very high standards is more effective than having medium or low standards. I agree that, unlike in war, people in EA don’t feel personally threatened and so are less likely to comply with high standards, and that this is a reason to have lower standards than in war. (Though oftentimes wars don’t personally threaten many of the soldiers. The US population wasn’t personally threatened in WWI or WWII, for example.)
At this point I think I’d want to start looking for empirical data. When people act altruistically, do they succeed more or less often when their standards are high? Etc.
…In the meantime, of course, I’m not about to go around casting judgment on anyone for not doing enough, at least not if they are doing more than the average for their society.
You could add a certain stereotype of military honor-culture (think drill sergeant out of movies) to this list without changing the rest of the post. I don’t personally know whether this would be representative of modern armies, or whether it’s effective compared to a less demanding culture. We do have reason to believe this kind of system should have worked better in militaries than in the other examples, due to the feedback loops being connected to material reality rather than just social reality.
(Speaking personally, not for my employer.)
I think there are some good points about motivation here, and I generally think examining how one’s motivation works and what’s happening to it over time is really important.
But I also think at least for some of us, a sense of responsibility is part of a sustainable motivation.
The part I don’t get is “The debts that are part of this scam are fake, and you can safely ignore almost all of them since they won’t actually be enforced.”
Let’s say I’m an effective altruist who wants to improve animal welfare. I know that billions of animals are raised in gruesome conditions every year. By donating to organizations that lobby for better conditions, I have some chance of reducing their horrible suffering. I will need to make decisions about how to trade off between my donations and other things I would like to do with my money. Those decisions will involve both altruism (if I wear myself down, I become less useful to those I want to help) and self-regard (I want to enjoy my life, and that requires spending some money.)
What part of this is fake? What part will not be enforced?
The fake part is where the idea of not doing enough to help is scary, not just unappealing. I don’t know how your psyche works, but in the past I’ve had mental imagery of undergoing lots of suffering in order to discharge my responsibilities to help others, because it would be so much easier than actually living up to my responsibilities to help others (by figuring out “the right thing to do” using rationality and then doing it). This is a counterproductive form of fear: there isn’t actually any enforcer who will harm me for not helping others, and imagining one was bad for my epistemics and motivation.
Someone who cares about animal welfare choosing to donate to organizations that lobby for better conditions might be choosing correctly, but it’s pretty likely they’re trying to discharge responsibility by picking a “straightforward” action that has clear standards, due to fear of what happens if they don’t satisfy their responsibilities. In practice, someone who does this is likely to be misled by organizations like ACE into doing things that are not especially likely to be effective.
I don’t know exactly what someone who cares about animal welfare but is running on intrinsic rather than responsibility-based motivation would do. They might investigate what happened with ACE and figure out the structural factors that led to this outcome. They might become curious about how to do epistemics in a world where charity evaluations are misleading. They might talk to Whole Foods and see how accurate their animal welfare labeling is, how much customers are willing to pay for meat from higher-welfare animals, and how much the different meats cost. They might become curious about abstract violence and its history (since the main thing making factory-farmed meat seem acceptable to people is the abstraction) and read Foucault or something. They might build community with other thoughtful people who care about animal welfare and start companies together to generate resources for this group of people and animal welfare projects they think are likely to work. They might infer that a different form of lobbying than the kind they originally imagined would be more effective and do that instead.
Yes, people with responsibility-based motivation could do any of these things, but to me they seem less likely to. I strongly suspect that the responsibility framing biases people towards straightforward ways of doing good that only require following simple rules, and away from less-straightforward ways that are higher-risk but also have higher expected payoff.
I think we’re talking about three rather than two things here:
1. Internal sense of responsibility, with the sense that the bad thing if you don’t act is that other people will suffer (“obligation framing”)
2. Fear of some external force, with the sense that the bad thing if you don’t appear to be doing your duty is that someone will punish you – I had actually not realized this was a thing, thank you for explaining it
3. Intrinsic motivation (“excited altruism”)
I agree that 3 is more likely than 1 to involve curiosity and innovation, but I also think people with a naturally curious/innovative approach who would otherwise be applying it to non-altruistic pursuits sometimes turn it to altruistic pursuits if they’re first motivated by a sense of responsibility.
> 2. Fear of some external force, with the sense that the bad thing if you don’t appear to be doing your duty is that someone will punish you – I had actually not realized this was a thing, thank you for explaining it
The social connotations of how we make claims about what motivations people commonly have can imply things about what motivations we think it’s ok for people to have. When enough people are (perhaps unintentionally) using enough subtext to communicate what sorts of actions they approve of, this causes a powerful internal experience of guilt, or of feeling gaslit about what you value. This is a powerful force worth fighting against, which is part of what the phrase “the debts that are part of this scam are fake” is doing.
The phrasing “fear of some external force” and “someone will punish you” socially imply that this force oughtn’t be able to influence a sensible self-grounded person, by making it sound like a force that comes from the outside and thereby has little social legitimacy. But the internal experience of being affected by this force is much stronger than this, and it is hard to convey how powerful it is explicitly; it seems like enough people understand that much on a gut level, given how well this piece has been received. I’d say that EAs outside the bay are subject to much less of this force than EAs and rationalists in the bay, so it’s natural that Julia wouldn’t think in terms of this force as much.
Most people work around this by being S1-averse to language that uses subtext to communicate that they ought to do something they don’t themselves approve of doing, and will usually respond to such language in a way that disincentivises it. However, EAs and especially bay rationalists tend to be better at S2 noticing language that more explicitly communicates that they ought to do something they don’t want to do, and much worse at S1 noticing when language does this, especially via a more implicit subtext. This leads to very slow feedback loops related to disincentivising this sort of thing.
My view? That this has had the interesting effect of making the bay rationalist community more driven by the subtler forms of the “create standards -> (watch people fail to meet them ->) let them discharge their social debt by subjugating themselves to an entity -> extract resources or social concessions from them” process than, say, Catholicism, which was already notorious for being guilt-driven.
I agree with Julia that invoking responsibility can help to attract curious/innovative thinkers, and also with Jessica that it can create a bunch of harmful effects by distorting motivation.
This creates a nasty tension.
In this context, it feels more correct to me to see the pledge as a helpful piece of machinery, giving a salient and not-too-expensive way to discharge the sense of debt and perhaps become freer to follow intrinsic motivation (but having had enough exposure to a sense of responsibility that that is likely to be more guiding of intrinsic motivation than before, at least for a slice of the population). This is somewhat close to Jessica’s original framing, but she casts it as part of a deliberate system to extract resources, where I see the sense of obligation as creating negatives (as well as positives), and the sanctioned way of relieving debt as mitigating those negatives.
Differential enforcement is likely to go along with the hypocrisy.
In R. A. Lafferty’s Dottie, there’s a description of discrimination against Catholic medical students – in the 1950s, I think. The medical school standards were impossible to follow without getting slack from the people in charge, and some people got slack and other people didn’t.
Also, Karen Horney (an early psychoanalyst) wrote about people developing irrationally high standards for themselves if they’d been neglected, abused, or pushed to mature faster than they could. The idea was that if a child felt that being a human wasn’t good enough, then they’d think they had to be always right, always good, always lucky, or whatever.
“Why not just make the de jure norms ones that people are actually likely to follow?”
Humans probably have an intrinsic tendency to follow de jure norms as weakly as possible, skirting the borderline of violating them and even occasionally explicitly violating them. So if you change the de jure norms to match the current de facto ones, then new de facto norms will develop, leaving people potentially worse off.
I generally sympathize with your point of view but I think there is some truth in this response.
This reads as a defense of guilt culture against reinterpretations and virtualizations of honor culture. I agree with your point insofar as guilt cultures have proven to be nicer places to live, but I retain some respect for honor cultures, like the Western military/education system, as effective ways of accomplishing isolated tasks. Wartime mobilization is a thing, and trading niceness for effectiveness in obtaining future niceness is natural. The problem lies in actually making the trade worthwhile, and here I’m skeptical that the non-military examples are validated by systems sufficiently influenced by evidence for them to work well enough. Perhaps something like an alternate version of EA built around war games rather than academia would work.
Good point about guilt culture vs. honor culture, I hadn’t seen that connection. I expect that honor cultures work better when they’re intentionally designed with norms that are not too hard to follow and mechanisms for making up for norm violations (e.g. see the section on saving face in Duncan’s Dragon Army post). Covert honor cultures (e.g. school) can go way off the rails since it’s hard to pay attention to the honor culture mechanisms and deal with them productively, leading to anxiety and depression.
Agree that military seems like the best use case for honor culture.
Famous book… ‘War is a Racket’
Fighting when invaded isn’t a high standard at all.
Historical wars are very frequently between people motivated by glory and people motivated by a gun to their backs. On the rare occasions when there are good guys, the latter have usually been the good guys.
also maybe sometimes against: unreasonably high standards for yourself.