There’s a pretty common analysis of human behavior that goes something like this:
“People claim that they want X. However, their actions are optimizing towards Y instead of X. If they really cared about X, they would do something else instead. Therefore, they actually want Y, and not X.”
This is revealed preference analysis. It’s quite useful, in that if people’s actions are effectively optimizing for Y and not X, then an agent-based model of the system will produce better predictions by predicting that people want Y and not X.
So, revealed preference analysis is great for analyzing a multi-agent system in equilibrium. However, it often has trouble predicting what people would do after a major change to the system.
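As a minimal sketch of this distinction (the options and utility numbers below are made up for illustration, not taken from anywhere in this post): a model that attributes the revealed goal Y to agents predicts their choices well within the current option set, but that same equilibrium data can't tell us what they would choose once the option set changes.

```python
# Toy sketch with hypothetical options and utilities: agents *say* they want X,
# but their choices among the currently available options maximize Y.

def choose(options, utility):
    """Predict the option an agent picks: the one with the highest utility."""
    return max(options, key=utility)

stated_x   = {"lecture": 2, "rote_drill": 1, "obedience_training": 0}  # what they say they want
revealed_y = {"lecture": 1, "rote_drill": 2, "obedience_training": 3}  # what their actions optimize

current_options = ["lecture", "rote_drill", "obedience_training"]

# In the current equilibrium, the revealed-preference model (Y) matches the
# observed choice; the stated-preference model (X) does not.
print(choose(current_options, revealed_y.get))  # obedience_training (observed)
print(choose(current_options, stated_x.get))    # lecture (not observed)

# After a major change -- a new option appears -- the two models diverge, and
# whether the Y-model keeps predicting well depends on *why* agents were
# optimizing Y, which equilibrium data alone did not pin down.
new_options = current_options + ["free_school"]
stated_x["free_school"], revealed_y["free_school"] = 3, 1
print(choose(new_options, stated_x.get))    # free_school
print(choose(new_options, revealed_y.get))  # obedience_training
```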
As an example, consider a conclusion Robin Hanson gives on school:
> School isn’t about learning “material,” school is about learning to accept workplace domination and ranking, and tolerating long hours of doing boring stuff exactly when and how you are told.
(Note that I don’t think Hanson is claiming things about what people “really want” in this particular post, although he does make such claims in other writing.)
Hanson correctly infers from the fact that most schools are highly authoritarian that school is effectively “about” learning to accept authoritarian work environments. We could make “about” more specific: the agents who determine what happens in schools (administrators, teachers, voters, parents, politicians, government employees) are currently taking actions that cause schools to be authoritarian, in a coordinated fashion, with few people visibly resisting this optimization.
This revealed preference analysis is highly useful. However, it leaves degrees of freedom open in what the agents terminally want. These degrees of freedom matter when predicting how those agents will act under different circumstances (their conditional revealed preferences). For example:
- Perhaps many of the relevant agents actually do want schools to help children learn, but were lied to about what forms of school are effective for learning. This would predict that, upon receiving credible evidence that free schools are more effective for learning while being less authoritarian, they would support free schools instead.
- Perhaps many of the relevant agents want school to be about learning, but find themselves in a grim trigger equilibrium where they expect to get punished for speaking out about the actual nature of school, and also to be punished for not punishing those who speak out. This would predict that, upon seeing enough examples of people speaking out and not being punished, they would join the new movement.
- Perhaps many of the relevant agents have very poor world models of their own, and must therefore navigate according to imitation and to “official reality” narratives, which constrain them to acting as if school is for learning. This would predict that, upon gaining much more information about the world and gaining experience in navigating it according to their models (rather than the official narratives), they would favor free schools over authoritarian schools.
It’s hard to tell which of these hypotheses (or other hypotheses) are true given only information about how people act in the current equilibrium. These hypotheses make conditional and counterfactual predictions: they predict what people would do, given different circumstances than their current ones.
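To make the observational equivalence concrete, here is a minimal sketch (the hypothesis names, flags, and predicted actions below are illustrative assumptions, not claims from the post): each hypothesis predicts the same action in the current equilibrium, and they only come apart when circumstances change.

```python
# Toy sketch: three illustrative hypotheses about what school-policy agents
# "really want".  All three predict the same behavior in the current
# equilibrium; they diverge only under counterfactual circumstances.

def predicted_action(hypothesis, informed=False, punishment_expected=True,
                     good_world_model=False):
    """Action each hypothesis predicts, given the agent's circumstances."""
    if hypothesis == "misinformed":        # wants learning, was lied to about what works
        return "free_school" if informed else "authoritarian_school"
    if hypothesis == "grim_trigger":       # wants learning, fears punishment for speaking out
        return "authoritarian_school" if punishment_expected else "free_school"
    if hypothesis == "official_reality":   # navigates by official narratives, not own models
        return "free_school" if good_world_model else "authoritarian_school"
    raise ValueError(f"unknown hypothesis: {hypothesis}")

hypotheses = ["misinformed", "grim_trigger", "official_reality"]

# Current equilibrium: identical predictions, so equilibrium data can't distinguish them.
print({h: predicted_action(h) for h in hypotheses})

# Counterfactual: credible evidence about free schools reaches the agents.
# Only the "misinformed" hypothesis predicts a change, so the intervention is informative.
print({h: predicted_action(h, informed=True) for h in hypotheses})

# A different counterfactual -- the expected punishment goes away -- instead
# distinguishes the "grim_trigger" hypothesis.
print({h: predicted_action(h, punishment_expected=False) for h in hypotheses})
```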
This is not to say that people’s stories about what they want are to be taken at face value; the gold standard for determining what people want is not what they say, but what they actually optimize for under various circumstances, including ones substantially different from present ones. (Obviously, their words can be evidence about their counterfactual actions, to the extent that they are imaginative and honest about the counterfactual scenarios.)
To conclude, I suggest the following heuristics:
- In analyzing an equilibrium, look mainly at what people actually optimize for with their actions, not what they say they’re optimizing for.
- In guessing what they “really want”, additionally imagine their actions in alternative scenarios where they e.g. have more information and more ability to coordinate with those who have similar opinions.
- Actually find data about these alternative scenarios, by e.g. actually informing people, or finding people who were informed and seeing how their actions changed.