High-precision claims may be refuted without being replaced with other high-precision claims

There’s a common criticism of theory-criticism that goes along these lines:

Well, sure, this theory isn’t exactly right. But it’s the best theory we have right now. Do you have a better theory? If not, you can’t really claim to have refuted the theory, can you?

This is wrong. This is falsification-resisting theory-apologism. Karl Popper would be livid.

The relevant reason why it’s wrong is that theories make high-precision claims. For example, the standard theory of arithmetic says 561+413=974. Not 975 or 973 or 97.4000001, but exactly 974. If arithmetic didn’t have this guarantee, math would look very different from how it currently looks (it would be necessary to account for possible small jumps in arithmetic operations).
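To make the guarantee concrete, here is a small Python illustration (standard language behavior; the comparison tolerance is an explicit choice, not from the original text): integer addition is exact, while floating-point addition is not, and code using floats must account for small jumps explicitly.

    # Integer arithmetic makes the high-precision claim: results are exact.
    assert 561 + 413 == 974              # exactly 974, never 974.0000001

    # Floating-point arithmetic lacks this guarantee, so code must account
    # for possible small jumps explicitly.
    import math
    print(0.1 + 0.2)                     # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)              # False
    print(math.isclose(0.1 + 0.2, 0.3))  # True, but only given a tolerance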

A single bit flip in the state of a computer process can crash the whole program. Similarly, high-precision theories rely on precise invariants, and even small violations of these invariants sink the theory’s claims.

To a first approximation, a computer either (a) almost always works (>99.99% probability of getting the right answer) or (b) doesn’t work (<0.01% probability of getting the right answer). There are edge cases, such as randomly crashing computers or computers with small floating-point errors. However, even a computer that crashes every few minutes functions precisely correctly in >99% of the seconds it runs.

If a computer makes random small errors 0.01% of the time in e.g. arithmetic operations, it’s not an almost-working computer; it’s a completely non-functioning computer that will crash almost immediately.
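A back-of-the-envelope calculation, using the error rate above and a nominal billion operations per second (a Python sketch; the throughput figure is an assumption), shows how quickly such a machine dies:

    # A processor doing 10**9 operations per second, each erring
    # independently with probability 0.0001 (the 0.01% above).
    error_rate = 0.0001
    ops_per_second = 10**9

    # Expected number of operations before the first error:
    print(1 / error_rate)  # 10000 ops -- roughly 10 microseconds of runtime

    # Probability that one full second of operations is error-free:
    print((1 - error_rate) ** ops_per_second)  # underflows to 0.0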

The claim that a given algorithm or circuit really adds two numbers is very precise. Even a single pair of numbers that it adds incorrectly refutes the claim, and very much risks making this algorithm/circuit useless. (The rest of the program would not be able to rely on guarantees, and would instead need to know the domain in which the algorithm/circuit functions; this would significantly complicate reasoning about correctness.)

Importantly, such a refutation does not need to come along with an alternative theory of what the algorithm/circuit does. To refute the claim that it adds numbers, it’s sufficient to show a single counterexample without suggesting an alternative. Quality assurance processes are primarily about identifying errors, not about specifying the behavior of non-functioning products.
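A minimal sketch of such a refutation (the function and its single bad entry are invented for illustration): finding one counterexample settles the claim, with no alternative account required.

    # A hypothetical "adder" with a single wrong entry.
    def claimed_adder(a, b):
        if (a, b) == (561, 413):
            return 975  # one faulty case, e.g. a bad lookup-table entry
        return a + b

    # Refuting the claim "this function adds" requires only one
    # counterexample, not a theory of what the function computes instead.
    def find_counterexample(limit=1000):
        for a in range(limit):
            for b in range(limit):
                if claimed_adder(a, b) != a + b:
                    return a, b, claimed_adder(a, b)
        return None

    print(find_counterexample())  # (561, 413, 975) -- claim refuted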

A Bayesian may argue that the refuter must have an alternative belief about the circuit. While this is true assuming the refuter is Bayesian, such a belief need not be high-precision. It may be a high-entropy distribution. And if the refuter is a human, they are not a Bayesian (that would take too much compute), and will instead have a vague representation of the circuit as “something doing some unspecified thing”, with some vague intuitions about what sorts of things are more likely than other things. In any case, the Bayesian criticism certainly doesn’t require the refuter to replace the claim about the circuit with an alternative high-precision claim; either a low-precision belief or a lack-of-belief will do.
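As a toy model of the Bayesian point (all numbers are illustrative): after seeing a counterexample, the posterior assigns zero to the high-precision claim and spreads the remaining mass thinly over unspecified alternatives.

    # Toy hypothesis space: "the circuit adds" vs. many unspecified
    # alternatives; priors and likelihoods are illustrative.
    prior = {"adds": 0.99}
    n_alternatives = 100
    for i in range(n_alternatives):
        prior[f"other_{i}"] = 0.01 / n_alternatives

    def likelihood_of_counterexample(h):
        # A counterexample is impossible if the circuit truly adds; suppose
        # each unspecified alternative assigns it some modest probability.
        return 0.0 if h == "adds" else 0.05

    unnorm = {h: p * likelihood_of_counterexample(h) for h, p in prior.items()}
    z = sum(unnorm.values())
    posterior = {h: p / z for h, p in unnorm.items()}

    print(posterior["adds"])        # 0.0: the high-precision claim is dead
    print(max(posterior.values()))  # 0.01: what remains is high-entropy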

The case of computer algorithms is particularly clear, but of course this applies elsewhere:

  • If there’s a single exception to conservation of energy, then a high percentage of modern physics theories completely break. The single exception may be sufficient to, for example, create perpetual motion machines. Physics, then, makes a very high-precision claim that energy is conserved, and a refuter of this claim need not supply an alternative physics.
  • If a text is claimed to be the word of God and totally literally true, then a single example of a definitely-wrong claim in the text is sufficient to refute the claim. It isn’t necessary to supply a better religion; the original text should lose any credit it was assigned for being the word of God.
  • If rational agent theory is a bad fit for effective human behavior, then the precise predictions of microeconomic theory (e.g. the option of trade never reducing expected utility for either actor, or the efficient market hypothesis being true) are almost certainly false. It isn’t necessary to supply an alternative theory of effective human behavior to reject these predictions.
  • If it is claimed philosophically that agents can only gain knowledge through sense-data, then a single example of an agent gaining knowledge without corresponding sense-data (e.g. mental arithmetic) is sufficient to refute the claim. It isn’t necessary to supply an alternative theory of how agents gain knowledge for this to refute the strongly empirical theory.
  • If it is claimed that hedonic utility is the only valuable thing, then a single example of a valuable thing other than hedonic utility is sufficient to refute the claim. It isn’t necessary to supply an alternative theory of value.

A theory that has been refuted remains contextually “useful” in a sense, but it’s the walking dead. It isn’t really true everywhere, and:

  • Machines believed to function on the basis of the theory cannot be trusted to be highly reliable
  • Exceptions to the theory can sometimes be manufactured at will (this is relevant in both security and philosophy)
  • The theory may make significantly worse predictions on average than a skeptical high-entropy prior or low-precision intuitive guesswork, due to being precisely wrong rather than imprecise
  • Generative intellectual processes will eventually discard it, preferring instead an alternative high-precision theory or low-precision intuitions or skepticism
  • The theory will go on doing damage through making false high-precision claims

The fact that false high-precision claims are generally more damaging than false low-precision claims is important ethically. High-precision claims are often used to ethically justify coercion, violence, and so on, where low-precision claims would have been insufficient. For example, imprisoning someone for a long time may be ethically justified if they definitely committed a serious crime, but is much less likely to be if the belief that they committed a crime is merely a low-precision guess, not validated by any high-precision checking machine. Likewise for psychiatry, which justifies incredibly high levels of coercion on the basis of precise-looking claims about different kinds of cognitive impairment and their remedies.

Therefore, I believe there is an ethical imperative to apply skepticism to high-precision claims, and to allow them to be falsified by evidence, even without knowing what the real truth is other than that it isn’t as the high-precision claim says it is.

On hiding the source of knowledge

I notice that when I write for a public audience, I usually present ideas in a modernist, skeptical, academic style; whereas the way I come up with ideas is usually in part by engaging in epistemic modalities that such a style has difficulty conceptualizing, or considers illegitimate, including:

  • Advanced introspection and self-therapy (including focusing and meditation)
  • Mathematical and/or analogical intuition applied everywhere with only spot checks (rather than rigorous proof) used for confirmation
  • Identity hacking, including virtue ethics, shadow-eating, and applied performativity theory
  • Altered states of mind, including psychotic and near-psychotic experiences
  • Advanced cynicism and conflict theory, including generalization from personal experience
  • Political radicalism and cultural criticism
  • Eastern mystical philosophy (esp. Taoism, Buddhism, Tantra)
  • Literal belief in self-fulfilling prophecies, illegible spiritual phenomena, etc., sometimes with decision-theoretic and/or naturalistic interpretations

This risks hiding where the knowledge actually came from. Someone could easily be misled into thinking they can do what I do, intellectually, just by being a skeptical academic.

I recall a conversation in which someone (call them A) commented that some other person (call them B) had developed some ideas, then afterwards found academic sources agreeing with these ideas (or at least seeming compatible), and cited these as sources in the blog-post write-ups of these ideas. Person A believed that this was importantly bad in that it hid where the actual ideas came from, and assigned credit for them to a system that did not actually produce the ideas.

On the other hand, citing academics that agree with you is helpful to someone who is relying on academic peer-review as part of their epistemology. And, similarly, offering a rigorous proof is helpful for convincing someone of a mathematical principle they aren’t already intuitively convinced of (in addition to constituting an extra check of this principle).

We can distinguish, then, the source of an idea from the presented epistemic justification of it. And the justificatory chain (to a skeptic) doesn’t have to depend on the source. So, there is a temptation to simply present the justificatory chain and hide the source, especially if the source is somehow embarrassing or delegitimized.

But this creates a distortion if people assume the justificatory chains are representative of the sources. Information consumers may find themselves in an environment where claims are thrown around with various justifications, but where they would have quite a lot of difficulty coming up with and checking similar claims.

And, a lot of the time, the source is important in the justification, because the source was the original reason for privileging the hypothesis. Many things can be partially rationally justified, but such partial justification is insufficient for credence unless something is also known about the source. (The problems of skepticism in philosophy relate in part to this: “but you have the intuition too, don’t you?” only works if the other person has the same intuition (and admits to it), and arguing without appeals to intuition is quite difficult.)

In addition, even if the idea is justified, the intuition itself is an artifact of value; knowing abstractly that “X” does not imply the actual ability to, in real situations, quickly derive the implications of “X”. And so, sharing the source of the original intuition is helpful to consumers, if it can be shared. Very general sources are even more valuable, since they allow for generation of new intuitions on the fly.

Unfortunately, many such sources can’t easily be shared. Some difficulties with doing so are essential and some are accidental. The essential difficulties have to do with the fact that teaching is hard; you can’t assume the student already has the mental prerequisites to learn whatever you are trying to teach, as there is significant variation between different minds. The accidental difficulties have to do with social stigma, stylistic limitations, embarrassment, politics, privacy of others, etc.

Some methods for attempting to share such intuitions may result in text that seems personal and/or poetic, and be out of place in a skeptical academic context. This is in large part because such text isn’t trying to justify itself by the skeptical academic standards, and is nevertheless attempting to communicate something.

Noticing this phenomenon has led me to better appreciate forewords and prefaces of books. These sections often discuss more of the messiness of idea-development than the body of the book does. There may be a nice stylistic way of doing something similar for blog posts; perhaps an extended bibliography that includes free-form text.

I don’t have a solution to this problem at the moment. However, I present this phenomenon as a problem, in the spirit of discussing problems before proposing solutions. I hope it is possible to reduce the accidental difficulties in sharing sources of knowledge, and to actually try at the essential difficulties, in a way that greatly increases the rate of interpersonal model-transfer.

On the ontological development of consciousness

This post is about what consciousness is, ontologically, and how ontologies that include consciousness develop.

The topic of consciousness is quite popular, and confusing, in philosophy. While I do not seek to fully resolve the philosophy of consciousness, I hope to offer an angle on the question I have not seen before. This angle is that of developmental ontology: how are “later” ontologies developed from “earlier” ontologies? I wrote on developmental ontology in a previous post, and this post can be thought of as an elaboration, which can be read on its own, and specifically tackles the problem of consciousness.

Much of the discussion of stabilization is heavily inspired by On the Origin of Objects, an excellent book on reference and ontology, to which I owe much of my ontological development. To the extent that I have made any philosophical innovation, it is in combining this book’s concepts with the minimum description length principle and with analytic philosophy of mind.

World-perception ontology

I’m going to write a sequence of statements, which each make sense in terms of an intuitive world-perception ontology.

  • There’s a real world outside of my head.
  • I exist and am intimately connected with, if not identical with, some body in this world.
  • I only see some of the world. What I can see is like what a camera placed where my eyes are would see.
  • The world contains objects. These objects have properties like shape, color, etc.
  • When I walk, it is me who moves, not everything around me. Most objects are not moving most of the time, even if they look like they’re moving in my visual field.
  • Objects, including my body, change and develop over time. Changes proceed, for the most part, in a continuous way, so e.g. object shapes and sizes rarely change, and teleportation doesn’t happen.

These all seem common-sensical; it would be strange to doubt them. However, achieving the ontology by which such statements are common-sensical is nontrivial. There are many moving parts here, which must be working in their places before the world seems as sensible as it is.

Let’s look at the “it is me who moves, not everything around me” point, because it’s critical. If you try shaking your head right now, you will notice that your visual field changes rapidly. An object (such as a computer screen) in your vision is going to move side-to-side (or top-to-bottom), from one side of your visual field to another.

However, despite this, there is an intuitive sense of the object not moving. So, there is a stabilization process involved. Image stabilization is an excellent analogy for this process (indeed, the brain could be said to engage in image stabilization in a literal sense).
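A minimal sketch of the compensation step (NumPy, with a 1-D “visual field” and the ego-motion estimate assumed to be given; real perception must infer it):

    import numpy as np

    # A 1-D "visual field": an object occupies positions 2-3.
    scene = np.array([0, 0, 1, 1, 0, 0, 0, 0])

    # The head turns, shifting the whole field by a known amount.
    head_shift = 2
    percept = np.roll(scene, -head_shift)      # raw percept: object "moves"
    stabilized = np.roll(percept, head_shift)  # compensate with the estimate

    print(percept)     # [1 1 0 0 0 0 0 0] -- object displaced in the field
    print(stabilized)  # [0 0 1 1 0 0 0 0] -- object restored: it didn't move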

The world-perception ontology is, much of the time, geocentric, rather than egocentric or heliocentric. If you walk, it usually seems like the ground is still and you are moving, rather than the ground moving while you’re still (egocentrism), or both you and the ground moving very quickly (heliocentrism). There are other cases such as vehicle interiors where what is stabilized is not the Earth, but the vehicle itself; and, “tearing” between this reference frame and the geocentric reference frame can cause motion sickness.

Notably, world-perception ontology must contain both (a) a material world and (b) “my perceptions of it”. Hence, the intuitive ontological split between material and consciousness. To take such a split to be metaphysically basic is to be a Descartes-like dualist. And the split is ontologically compelling enough that such a metaphysics can be tempting.

Pattern-only ontology

William James famously described the baby’s sense of the world as a “blooming, buzzing confusion”. The image presented is one of dynamism and instability, very different from world-perception ontology.

The baby’s ontology is closer to raw percepts than an adult’s is; it’s less developed, fewer things are stabilized, and so on. Babies generally haven’t learned object permanence; this is a stabilization that is only developed later.

The most basic ontology consists of raw percepts (which cannot even be considered “percepts” from within this ontology), not even including shapes. These percepts may be analogous to pixel-maps in the case of vision, or spectrograms in the case of hearing; I am unsure of these low-level details, and the rest of this post would still apply if the basic percepts were e.g. lines in vision. Shapes (which are higher-level percepts) must be recognized in the sea of percepts, in a kind of unsupervised learning.

The process of stabilization is intimately related to a process of pattern-detection. If you can detect patterns of shapes across time, you may reify such patterns as an object. (For example, a blue circle that is present in the visual field, and retains the same shape even as it moves around the field, or exits and re-enters, may be reified as a circular object). Such pattern-reification is analogous to folding a symmetric image in half: it allows the full image to be described using less information than was contained in the original image.

In general, the minimum description length principle says it is epistemically correct to prefer the shorter of two descriptions that fit the data: to posit few entities that explain many. Positing a small number of shapes to explain many basic percepts, or a small number of objects to explain a large number of shapes, are examples of this.
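Compressed size is a crude but concrete proxy for description length. A small Python check (the byte patterns are invented for illustration): a percept stream containing a repeated shape compresses far better than an unpatterned one, because the pattern need only be reified once.

    import random
    import zlib

    random.seed(0)

    # A "patterned" percept stream: one shape repeated many times.
    shape = bytes([3, 1, 4, 1, 5, 9, 2, 6])
    patterned = shape * 1000  # 8000 bytes

    # An unpatterned stream of the same length.
    unpatterned = bytes(random.randrange(256) for _ in range(len(patterned)))

    print(len(zlib.compress(patterned)))    # tens of bytes: reify once, reuse
    print(len(zlib.compress(unpatterned)))  # ~8000 bytes: nothing to reify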

From having read some texts on meditation (especially Mastering the Core Teachings of the Buddha), and having meditated myself, I believe that meditation can result in getting more in touch with pattern-only ontology, and that this is an intended result, as the pattern-only ontology necessarily contains two of Buddhism’s three characteristics of existence (specifically, impermanence and no-self).

To summarize: babies start from a confusing point, in which there are only low-level percepts; patterns are progressively recognized in these, developing an ontology that includes shapes and objects.

World-perception ontology results from stabilization

The thesis of this post may now be stated: world-perception ontology results from stabilizing a previous ontology that is itself closer to pattern-only ontology.

One of the most famous examples of stabilization in science is the movement from geocentrism to heliocentrism. Such stabilization explains many epicycles in terms of few cycles, by changing where the center is.

The move from egocentrism to geocentrism is quite analogous. An egocentric reference frame will contain many “epicycles”, which can be explained using fewer “cycles” in geocentrism.

These cycles are literal in the case of a person spinning around in a circle. In a pattern-only ontology (which is, necessarily, egocentric, for the same reason it doesn’t have a concept of self), that person will see around them shapes moving rapidly in the same direction. There are many motions to explain here. In a world-perception ontology, most objects around are not moving rapidly; rather, it is believed that the self is spinning.

So, the egocentric-to-geocentric shift is compelling for the same reason the geocentric-to-heliocentric shift is. It allows one to posit that there are few motions, instead of many motions. This makes percepts easier to explain.
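A toy version of the parameter count (NumPy; the scene and angle are invented for illustration): both models fit the two frames exactly, but one needs a single number where the other needs a hundred.

    import numpy as np

    # Toy scene: n landmarks around an observer who spins by angle theta.
    rng = np.random.default_rng(0)
    n, theta = 50, 0.1
    landmarks = rng.normal(size=(n, 2))
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    next_frame = landmarks @ rot.T  # what the spinning observer sees next

    # Egocentric model: every landmark gets its own displacement.
    egocentric_params = 2 * n  # 100 numbers
    # Geocentric model: landmarks are still; one angle explains everything.
    geocentric_params = 1

    print(egocentric_params, geocentric_params)  # MDL prefers "I am spinning"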

Consciousness in world-perception ontology

The upshot of what has been said so far is: the world-perception ontology results from Occamian symmetry-detection and stabilization starting from a pattern-only ontology (or, some intermediate ontology).

And, the world-perception ontology has conscious experience as a component. For, how else can what were originally perceptual patterns be explained, except by positing that there is a camera-like entity in the world (attached to some physical body) that generates such percepts?

The idea that consciousness doesn’t exist (which is asserted by some forms of eliminative materialism) doesn’t sit well with this picture. The ontological development that produced the idea of the material world, also produced the idea of consciousness, as a dual. And both parts are necessary to make sense of percepts. So, consciousness-eliminativism will continue to be unintuitive (and for good epistemic reasons!) until it can replace world-perception ontology with one that achieves percept-explanation that is at least as effective. And that looks to be difficult or impossible.

To conclude: the ontology that allows one to conceptualize the material world as existing and not shifting constantly, includes as part of it conscious perception, and could not function without including it. Without such a component, there would be no way to refactor rapidly shifting perceptual patterns into a stable outer world and a moving point-of-view contained in it.