[epistemic status: confident this distinction exists and is important, unsure about details/framing]
- Naïve epistemology: simple models; mathematics and logic; “prima facie”; original seeing; “natural” descriptions of facts on the ground; “from the mouth of a babe”; not trusting others’ models/narratives (but often trusting their reports of basic observable facts); blunt
- Savvy epistemology: responding to the dominant opinions in a social group such as expert consensus (agreeing with parts of it and/or disagreeing with parts of it); “this is a complex topic”; not wanting to omit considerations that others know about; making sure to be “informed” and “up to date”; educated; responsible; legitimate; sophisticated; subtle; trust or cynicism regarding others’ models/narratives
Naïve epistemology mostly models the world; savvy epistemology mostly models other people’s models of the world. Naïve epistemology avoids the social field; savvy epistemology swims in it.
The virtues of naiveté
In the past year, I’ve strongly updated towards using more naïve epistemology than savvy epistemology. Savvy epistemology is more informed in some ways, but is also more corrupt. It will tend to be slanted by whoever has local political power, rather than being purely optimized for accuracy. Some examples:
- Tetlock’s book Superforecasting shows that some non-experts using relatively naïve methods can predict world events significantly better than savvier experts. This indicates that these experts’ opinions are not very well optimized for accuracy (and are most likely optimized for something other than accuracy).
- I am convinced by Robin Hanson’s argument that medical care has little association with health outcomes in the US. The “savvy” responses to his “naïve” article (linked at the bottom of the page) seem to basically be misdirections intended to avoid taking seriously his main point (that cutting medical spending by 50% across the board would make little difference to health outcomes).
- AI risk was discussed on LessWrong many years before the topic received serious mainstream attention, mostly from a naïve epistemology perspective. The savvy expert view started out saying it wasn’t a serious problem (for various reasons), and has since drifted somewhat towards the LessWrong view. From thinking extensively about AI alignment and talking with people over the past few years, I feel that I have gotten significant personal evidence in favor of the LessWrong view that the alignment problem is very difficult and that no one has a good idea of how to solve it.
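The accuracy comparisons behind the first example rest on proper scoring rules: Tetlock's Good Judgment Project scored forecasters with the Brier score, which rewards calibrated confidence over safe hedging. A minimal sketch (this framing is mine, not the post's):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and binary outcomes.
    Lower is better; constantly forecasting 0.5 scores exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, well-calibrated forecaster beats a hedging one:
sharp = brier_score([0.9, 0.1, 0.8], [1, 0, 1])  # 0.02
vague = brier_score([0.5, 0.5, 0.5], [1, 0, 1])  # 0.25
```

Under this metric, "sophisticated" equivocation earns no credit; only beliefs that track outcomes do, which is part of why naïve forecasters can outscore savvy experts.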
The corruption of savvy epistemology does not make it useless: it usually contains a great deal of information not captured by naïve epistemology, and use of savvy epistemology allows for easier coordination with others using it. But it does mean that it’s highly valuable to develop a naïve perspective in addition to a savvy one.
Why be afraid of naïve epistemology?
There are legitimate reasons to be afraid that someone taking naïve epistemology seriously will cause harm:
- Uncooperative actions: Taking one’s own models seriously can lead one to take actions that are uncooperative with others who have different models, such as dishonesty or violence. See The Unilateralist’s Curse. The “social field” that savviness swims in is a coordination mechanism that, among other things, prevents people from taking some uncooperative actions.
- Safety: Common recommendations for what to believe, though often false, at least offer most people safety in the medium term. Having and acting on unusual beliefs can be dangerous to oneself and others, since one is no longer using a set of beliefs that has already been optimized for medium-term safety. (In the long term, safety from some risks, such as large-scale wars and some medical issues, might require nonconformist epistemics.)
- Disharmony: To the extent that words ostensibly about the world are used to coordinate actions, people using different words (e.g. those generated by naïve epistemology) can generate confusion about what actions the group is supposed to be taking.
- Politics: Saying naïve things can disrupt politics, by e.g. offending allies or potential allies. This can trigger a defensive reaction.
- Legibility bias: Naïve models tend to be legible. It can be tempting to make oversimplified models that don’t actually summarize the fiddly details well, leading to failures such as utopian totalitarianism.
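The Unilateralist's Curse mentioned above is a statistical effect that a short simulation can make concrete: when any one of several agents can unilaterally trigger an action based on a noisy private estimate of its value, the action gets taken whenever the *most optimistic* estimate is positive, so harmful actions become more likely as the group grows. A sketch with illustrative parameters (not from the post):

```python
import random

def unilateralist_curse(n_agents, true_value=-1.0, noise=2.0, trials=10000):
    """Each agent independently estimates the value of a shared action.
    The action is taken if ANY agent's estimate is positive. Returns the
    fraction of trials in which the (actually harmful) action is taken."""
    taken = 0
    for _ in range(trials):
        estimates = [random.gauss(true_value, noise) for _ in range(n_agents)]
        if max(estimates) > 0:  # one optimistic agent suffices
            taken += 1
    return taken / trials

# More independent actors make the harmful action more likely:
# unilateralist_curse(1) < unilateralist_curse(5) < unilateralist_curse(20)
```

This is why "make a good-faith effort to achieve mutual cooperation" (next section) matters even when one's own naïve estimate looks positive: conditioning on being the most optimistic person in the room is evidence of error, not of insight.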
Social technology for being naïve
I think that people who want good epistemics should have the affordance to use and act on naïve epistemology. This requires addressing the problems in the previous section. Here are some tools I use to give myself the affordance to do naïve epistemology without paying large costs:
- Decision theory: Preferentially use naïve epistemology when making bets using one’s own resources, and use shared epistemology when making bets using others’ resources. Detect “prisoner’s dilemma” or “unilateralist’s curse” situations (including epistemic ones) and make a good-faith effort to achieve mutual cooperation. Develop and signal the intent to honestly inform others and to learn from others. Take the true outside view and act as you would want others in your position to act. (See also: Guided by the beauty of our weapons)
- Testing things: If a naïve model implies some surprising prediction, it can be good to test it out (if it isn’t very costly). If the prediction is confirmed, great; if not, then perhaps create a new naïve model that takes this new piece of evidence into account.
- Separate magisteria: Calling forms of model-building “naïve” puts these models in a separate magisterium where it isn’t socially expected that they will be taken seriously, triggering less defensive reaction. (I bet many people would be less receptive to this post if I had called it “world-modeling epistemology” rather than “naïve epistemology”). Other ways of putting model-building in a separate magisterium: calling yourself “a contrarian acting on a weird inside view”; saying you’re “discussing weird ideas”; etc.
- Translation: Some naïve models can eventually be translated into savvy arguments. (In practice, not all of them can be translated without losing fidelity.)
- Arguing against savviness: In general, internalizing arguments against “mainstream” opinions makes it psychologically and socially easier to do “non-mainstream” things. This post itself contains arguments against savvy epistemology. Getting arguments against specific savvy beliefs/worldviews can also be helpful.
- Playing with identity: In general, some identities (“independent”, “truth-seeking”, “weird”) will create a local incentive to lean towards naïve epistemology, while others (“responsible”, “official”, “educated”) will create a local incentive to lean towards savvy epistemology. Someone who leans savvy might benefit by creating a context where they can embody a more naïve identity.
I would like it if more people had the affordance to develop a naïve epistemology, so that they can make important inferences that savvy epistemology obscures. I don’t think that naïve epistemology is an “endpoint” of epistemology (it seems desirable to eventually develop epistemology that is fundamentally truth-tracking the way naïve epistemology is in addition to being highly informed and coordinated across people), but I expect it to be an important stepping stone for people currently embedded in savvy epistemology.