On the falsifiability of hypercomputation, part 2: finite input streams

In part 1, I discussed the falsifiability of hypercomputation in a typed setting where putative oracles may be assumed to return natural numbers. In that setting, there are very powerful forms of hypercomputation (at least as powerful as each level of the arithmetic hierarchy) that are falsifiable.

However, as Vanessa Kosoy points out, this typed setting has difficulty applying to the real world, where agents may only observe a finite number of bits at once:

The problem with constructive halting oracles is, they assume the ability to output an arbitrary natural number. But, realistic agents can observe only a finite number of bits per unit of time. Therefore, there is no way to directly observe a constructive halting oracle. We can consider a realization of a constructive halting oracle in which the oracle outputs a natural number one digit at a time. The problem is, since you don’t know how long the number is, a candidate oracle might never stop producing digits. In particular, take any non-standard model of PA and consider an oracle that behaves accordingly. On some machines that don’t halt, such an oracle will claim they do halt, but when asked for the time it will produce an infinite stream of digits. There is no way to distinguish such an oracle from the real thing (without assuming axioms beyond PA).

This is an important objection. I will address it in this post by considering only oracles which return Booleans. In this setting, there is a form of hypercomputation that is falsifiable, although this hypercomputation is less powerful than a halting oracle.

Define a binary Turing machine to be a machine that outputs a Boolean (0 or 1) whenever it halts. Each binary Turing machine either halts and outputs 0, halts and outputs 1, or never halts.

Define an arbitration oracle to be a function that takes as input a specification of a binary Turing machine, and always outputs a Boolean in response. This oracle must always return 0 if the machine eventually outputs 0, and must always return 1 if the machine eventually outputs 1; it may decide arbitrarily if the machine never halts. Note that this can be emulated using a halting oracle, and is actually less powerful. (This definition is inspired by previous work in reflective oracles)

The hypothesis that a putative arbitration oracle (with the correct type signature, MachineSpec → Boolean) really is one is falsifiable. Here is why:

  1. Suppose for some binary Turing machine M that halts and returns 1, the oracle O wrongly has O(M) = 0. Then this can be proven by exhibiting M along with the number of steps required for the machine to halt.
  2. Likewise if M halts and returns 0, and the oracle O wrongly has O(M) = 1.
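To make this concrete, here is a minimal sketch (in Python) of the falsification search, assuming hypothetical interfaces: `enumerate_machines()` yields specifications of binary Turing machines, and `run_for_steps(machine, t)` returns 0 or 1 if the machine halts with that output within t steps, and None otherwise. These names are illustrative, not an existing library.

```python
import itertools

def search_for_falsification(candidate_oracle, enumerate_machines, run_for_steps):
    """Search for a witness that candidate_oracle is not an arbitration oracle.
    enumerate_machines and run_for_steps are assumed interfaces, not an existing library.
    If the candidate is not an arbitration oracle, some (machine, step bound) pair
    eventually witnesses this; if it is a genuine arbitration oracle, the search runs forever."""
    for t in itertools.count(1):                        # dovetail over step bounds
        for machine in itertools.islice(enumerate_machines(), t):
            output = run_for_steps(machine, t)          # 0, 1, or None (not halted yet)
            if output is not None and candidate_oracle(machine) != output:
                # The machine halted with `output`, but the oracle said otherwise.
                return (machine, t, output)
```

The returned witness can be checked by anyone: rerun the machine for t steps and compare its output with the oracle's answer.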

Since the property of some black-box being an arbitration oracle is falsifiable, we need only show at this point that there is no computable arbitration oracle. For this proof, assume (for the sake of contradiction) that O is a computable arbitration oracle.

Define a binary Turing machine N() := 1 − O(N). This definition requires quining, but this is acceptable for the usual reasons. Note that N always halts, as O always halts. Therefore we must have N() = O(N). However, we also have N() = 1 − O(N), a contradiction (as O(N) is a Boolean).

Therefore, there is no computable arbitration oracle.
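To illustrate the diagonal construction, here is a minimal sketch in which binary Turing machines are modeled as zero-argument Python callables rather than machine specifications (an assumption for illustration; the real construction uses quining on machine specifications).

```python
def diagonalize(O):
    """Given a purported computable arbitration oracle O, build the diagonal
    machine N with N() = 1 - O(N).
    Machines are modeled here as zero-argument callables (a simplifying assumption)."""
    def N():
        return 1 - O(N)   # N consults the oracle about itself and does the opposite
    return N

# Any computable candidate is falsified by its own diagonal machine.
# Example: a candidate oracle that always answers 0.
O = lambda machine: 0
N = diagonalize(O)
assert N() == 1    # N halts and outputs 1 ...
assert O(N) == 0   # ... but the oracle claimed 0: falsified.
```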

Higher hypercomputation?

At this point, it is established that there is a form of hypercomputation (specifically, arbitration oracles) that is falsifiable. But, is this universal? That is, is it possible that higher forms of hypercomputation are falsifiable in the same setting?

We can note that it’s possible to use an arbitration oracle to construct a model of PA, one statement at a time. To do this, first note that for any statement, it is possible to construct a binary Turing machine that returns 1 if the statement is provable, 0 if it is disprovable, and never halts if neither is the case. So we can iterate through all PA statements, and use an arbitration oracle to commit to that statement being true or false, on the basis of provability/disprovability given previous commitments, in a way that ensures that commitments are never contradictory (as long as PA itself is consistent). This is essentially the same construction idea as in the Demski prior over logical theories.
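A minimal sketch of this commitment procedure, assuming an `arbitration_oracle`, an `enumerate_sentences()` generator over all PA sentences, and a `proof_search_machine(commitments, phi)` helper that builds a binary Turing machine (here a zero-argument callable) returning 1 if the commitments prove phi, returning 0 if they prove its negation, and never halting otherwise. All of these names are hypothetical interfaces, not an existing library.

```python
def build_model_of_pa(arbitration_oracle, enumerate_sentences, proof_search_machine):
    """Commit to each PA sentence or its negation, one at a time, consulting the
    arbitration oracle about provability/disprovability from prior commitments.
    All three arguments are assumed/hypothetical interfaces."""
    commitments = ["PA"]                          # start from the axioms of PA
    for phi in enumerate_sentences():
        machine = proof_search_machine(commitments, phi)
        if arbitration_oracle(machine) == 1:
            commitments.append(phi)                      # committing to phi stays consistent
        else:
            commitments.append("not (" + phi + ")")      # committing to the negation stays consistent
        yield commitments[-1]
```

As long as PA is consistent, no step introduces a contradiction: a disprovable sentence is never committed to (the oracle must answer 0 on its proof-search machine), and likewise a provable sentence's negation is never committed to.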

Suppose there were some PA-definable property P that a putative oracle O (mapping naturals to Booleans) must have (e.g. the property of being a halting oracle, for some encoding of Turing machines as naturals). Then, conditional on the PA-consistency of the existence of an oracle with property P, we can use the above procedure to construct a model of PA + existence of O satisfying P (i.e. a theory that says what PA says and also contains a function symbol O that axiomatically satisfies P). For any PA-definable statement about this oracle, this procedure will, at some finite time, have made a commitment about this statement.

So, access to an arbitration oracle allows emulating any other PA-definable oracle, in a way that will not be falsified by PA. It follows that hypercomputation past the level of arbitration oracles is not falsifiable by a PA-reasoner who can access the oracle, as PA cannot rule out that it is actually looking at something produced by only arbitration-oracle levels of hypercomputation.

Moreover, giving the falsifier access to an arbitration oracle can’t increase the range of oracles that are falsifiable. This is because, for any oracle-property P, we may consider a corresponding property of an oracle-pair (which may be represented as a single oracle-property through interleaving), stating that the first oracle is an arbitration oracle and the second satisfies property P. This oracle-pair property is falsifiable iff the property P is falsifiable by a falsifier with access to an arbitration oracle: we may run a joint search for falsifications, in which one search tries to prove that the first oracle isn’t an arbitration oracle, while another tries to prove that the second oracle doesn’t satisfy P under the assumption that the first oracle is an arbitration oracle. Since the oracle-pair property is PA-definable, it is emulable by a Turing machine with access to an arbitration oracle, and so the pair property is unfalsifiable if it requires hypercomputation past arbitration oracles. But this implies that the original oracle property P is unfalsifiable by a falsifier with access to an arbitration oracle, if P requires hypercomputation past arbitration oracles.

So, arbitration oracles form a ceiling on what can be falsified unassisted, and also are unable to assist in falsifying higher levels of hypercomputation.

Conclusion

Given that arbitration oracles form a ceiling of computable falsifiability (in the setting considered here, which is distinct from the setting of the previous post), it may or may not be possible to define a logic that allows reasoning about levels of computation up to arbitration oracles, but which does not allow computation past arbitration oracles to be defined. Such a project could substantially clarify logical foundations for mathematics, computer science, and the empirical sciences.

On the falsifiability of hypercomputation

[ED NOTE: see Vanessa Kosoy’s comment here; this post assumes a setting in which the oracle may be assumed to return a standard natural.]

It is not immediately clear whether hypercomputers (i.e. objects that execute computations that Turing machines cannot) are even conceivable, hypothesizable, meaningful, clearly definable, and so on. They may be defined in the notation of Peano arithmetic or ZFC; however, this does not imply conceivability/hypothesizability/etc. For example, a formalist mathematician may believe that the continuum hypothesis does not have a meaningful truth value (as it is independent of ZFC), and likewise for some higher statements in the arithmetic hierarchy that are independent of Peano arithmetic and/or ZFC.

A famous and useful criterion of scientific hypotheses, proposed by Karl Popper, is that they are falsifiable. Universal laws (of the form “∀x. p(x)”) are falsifiable for testable p, as they can be proven false by exhibiting some x such that p(x) is false. In an oracle-free computational setting, the falsifiable hypotheses are exactly those in ∏₁ (i.e. of the form “∀n. p(n)” for natural n and primitive recursive p).
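As a minimal illustration, here is what a falsification search for a ∏₁ hypothesis looks like, with p an arbitrary computable predicate supplied by the caller:

```python
import itertools

def falsify_universal_law(p):
    """Search for a counterexample to the hypothesis "for all n, p(n)",
    where p is any computable (e.g. primitive recursive) predicate.
    Returns a counterexample if the hypothesis is false; runs forever otherwise."""
    for n in itertools.count():
        if not p(n):
            return n   # exhibiting this n falsifies the universal law
```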

However, ∏₁ hypotheses do not hypothesize hypercomputation; they hypothesize (computably checkable) universal laws of the naturals. To specify the falsifiability criterion for hypercomputers, we must introduce oracles.

Let a halting oracle be defined as a function O from Turing machine specifications to Booleans that outputs “true” on exactly those Turing machines which eventually halt. Then, we can ask: is the hypothesis that “O is a halting oracle” falsifiable?

We can immediately see that, if O ever outputs “false” for a Turing machine which does eventually halt, it is possible to exhibit a proof of this, by exhibiting both the Turing machine and the number of steps it takes to halt. On the other hand, if O ever outputs “true” for a Turing machine which never halts, it is not in general possible to prove this; to check such a proof in general would require solving the halting problem, which is uncomputable.

Therefore, the hypothesis that “O is a halting oracle” is not falsifiable in a computational setting with O as an oracle.

However, there is a different notion of halting oracle whose definition is falsifiable. Let a constructive halting oracle be defined as a function O which maps Turing machine-specifications to elements of the set {∅} ∪ ℕ (i.e. either a natural number or null), such that it returns ∅ on those Turing machines which never halt, and returns some natural on Turing machines that do halt, such that the machine halts by the number of steps given by that natural. This definition corresponds to the most natural definition of a halting oracle in Heyting arithmetic, a constructive variant of Peano Arithmetic.

We can see that:

  1. If there exists a machine M such that O(M) = ∅ and M halts, it is possible to prove that O is not a constructive halting oracle, by exhibiting M and the time step on which M halts.
  2. If there exists a machine M such that O(M) ≠ ∅ and M does not halt by O(M) time steps, it is possible to prove that O is not a constructive halting oracle, by exhibiting M.

Therefore, the hypothesis “O is a constructive halting oracle” is computably falsifiable.
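A minimal sketch of the corresponding falsification search, again under assumed interfaces: `enumerate_machines()` yields Turing machine specifications, `halts_within(machine, t)` reports whether the machine halts within t steps, and the candidate oracle returns either None (for ∅) or a natural number. These names are illustrative, not an existing library.

```python
import itertools

def falsify_constructive_halting_oracle(candidate_oracle, enumerate_machines, halts_within):
    """Search for evidence that candidate_oracle is not a constructive halting oracle.
    enumerate_machines and halts_within are assumed interfaces.
    Returns a certificate iff the candidate is invalid; runs forever on a genuine oracle."""
    for t in itertools.count(1):                        # dovetail over step bounds
        for machine in itertools.islice(enumerate_machines(), t):
            answer = candidate_oracle(machine)          # None means "never halts"
            if answer is None:
                if halts_within(machine, t):            # case 1: claimed non-halting, but halts
                    return ("claimed non-halting, but halts", machine, t)
            else:
                if not halts_within(machine, answer):   # case 2: does not halt by the claimed bound
                    return ("does not halt within the claimed bound", machine, answer)
```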

What about higher-level constructive halting oracles, corresponding to Σₙ in the Heyting Arithmetic interpretation of the arithmetic hierarchy? The validity of a constructive Σₙ-oracle is, indeed, falsifiable for arbitrary n, as shown in the appendix.

Therefore, the hypothesis that some black-box is a (higher-level) constructive halting oracle is falsifiable, in an idealized computational setting. It is, then, meaningful to speak of some black-box being a hypercomputer or not, on an account of meaningfulness at least as expansive as the falsifiability criterion.

This provides a kind of bridge between empiricism and rationalism. While rationalism may reason directly about the logical implications of halting oracles, empiricism is more skeptical about the meaningfulness of the hypothesis. However, by the argument given, an empiricism that accepts the meaningfulness of falsifiable statements must accept the meaningfulness of the hypothesis that some black-box O is a constructive halting oracle.

I think this is a fairly powerful argument that hypercomputation should not be ruled out a priori as “meaningless”, and should instead be considered a viable hypothesis a priori, even if it is not likely given other evidence about physics, anthropics, etc.

Appendix: higher-level halting oracles

We will now reason directly about the Heyting arithmetic hierarchy rather than dealing with Turing machines, for simplicity; the two presentations are logically equivalent. Σₙ₊₁ propositions can be written as ∃x₁∀y₁…∃xₙ∀yₙ.f(x₁, y₁, …, xₙ, yₙ) for some primitive-recursive f. The negation of this proposition (which is in ∏ₙ₊₁) is of the form ∀x₁∃y₁…∀xₙ∃yₙ.¬f(x₁, y₁, …, xₙ, yₙ).

An oracle O constructively deciding Σₙ₊₁ is most naturally interpreted as a function from a specification of f to (∃x₁∀y₁…∃xₙ∀yₙ.f(x₁, y₁, …, xₙ, yₙ)) ∨ (∀x₁∃y₁…∀xₙ∃yₙ.¬f(x₁, y₁, …, xₙ, yₙ)); that is, it decides whether the Σₙ₊₁ proposition is true or its ∏ₙ₊₁ negation is, and provides a witness either way.

What is a natural interpretation of the witness? A witness for Σₙ₊₁ maps y₁…yₙ to x₁…xₙ (and asserts f(x₁, y₁, …, xₙ, yₙ)), while a witness for ∏ₙ₊₁ maps x₁…xₙ to y₁…yₙ (and asserts ¬f(x₁, y₁, …, xₙ, yₙ)). (Note that the witness must satisfy regularity conditions, e.g. the x₁ returned by a Σₙ₊₁ witness must not depend on the witness’s input; we assume that even invalid witnesses still satisfy these regularity conditions, as it is easy to ensure they are satisfied by specifying the right witness type)

Now, we can ask four questions:

  1. Fix f; suppose the Σₙ₊₁ proposition is true, and an invalid Σₙ₊₁-witness is returned; then, is it possible to prove the oracle false?
  2. Fix f; suppose the ∏ₙ₊₁ proposition is true, and an invalid ∏ₙ₊₁-witness is returned; then, is it possible to prove the oracle false?
  3. Fix f; suppose the Σₙ₊₁ proposition is true, and a ∏ₙ₊₁-witness is returned; then, is it possible to prove the oracle false?
  4. Fix f; suppose the ∏ₙ₊₁ proposition is true, and a Σₙ₊₁-witness is returned; then, is it possible to prove the oracle false?

These conditions are necessary and sufficient for O’s correctness to be falsifiable, because O will satisfy one of the above 4 conditions for some f iff it is invalid.

First let’s consider question 1. Since the witness (call it g) is invalid, it maps some y₁…yₙ to some x₁…xₙ such that ¬f(x₁, y₁, …, xₙ, yₙ). We may thus prove the witness’s invalidity by exhibiting y₁…yₙ. So the answer is yes, and similarly for question 2.

Now for question 3. Let the witness be g. Since the Σₙ₊₁ proposition is true, there is some x₁ for which ∀y₁…∃xₙ∀yₙ.f(x₁, y₁, …, xₙ, yₙ). Now, we may feed x₁ into the witness g to get a y₁ for which the oracle asserts ∀x₂∃y₂…∀xₙ∃yₙ.¬f(x₁, y₁, …, xₙ, yₙ). (Note, g’s returned y₁ must not depend on x’s after x₁, by regularity, so we may set the rest of the x’s to 0)

We proceed recursively, yielding x₁…xₙ and y₁…yₙ for which f(x₁, y₁, …, xₙ, yₙ), and for which the oracle asserts ¬f(x₁, y₁, …, xₙ, yₙ), hence proving the oracle invalid (through exhibiting these x’s and y’s). So we may answer question 3 with a “yes”.
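A sketch of the check behind this answer: once the refuter exhibits x₁…xₙ found as above, anyone can verify the refutation computably, since f is primitive recursive and, by regularity, a single call to the witness recovers the same y₁…yₙ. The names below are illustrative assumptions.

```python
def check_refutation_of_pi_witness(f, pi_witness, xs):
    """Verify that the exhibited xs refute the Pi-witness, which asserts not-f.
    f and pi_witness are assumed callables: f is a computable predicate of the
    interleaved arguments, and pi_witness maps (x_1, ..., x_n) to (y_1, ..., y_n)."""
    ys = pi_witness(xs)
    interleaved = [v for pair in zip(xs, ys) for v in pair]   # x1, y1, ..., xn, yn
    return f(*interleaved)   # True means f holds where the witness asserted not-f
```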

For question 4, the proof proceeds similarly, except we start by getting x₁ from the witness. The answer is, then, also “yes”.

Therefore, O’s validity as a constructive Σₙ₊₁-oracle is falsifiable.

Philosophical self-ratification

“Ratification” is defined as “the act or process of ratifying something (such as a treaty or amendment) : formal confirmation or sanction”. Self-ratification, then, is assigning validity to one’s self. (My use of the term “self-ratification” follows philosophical usage in analysis of causal decision theory)

At first this seems like a trivial condition. It is, indeed, easy to write silly sentences such as “This sentence is true and also the sky is green”, which are self-ratifying. However, self-ratification combined with other ontological and epistemic coherence conditions is a much less trivial condition, which I believe to be quite important for philosophical theory-development and criticism.

I will walk through some examples.

Causal decision theory

Formal studies of causal decision theory run into a problem with self-ratification. Suppose some agent A is deciding between two actions, L and R. Suppose the agent may randomize their action, and that their payoff equals their believed probability that they take the action other than the one they actually take. (For example, if the agent takes action L with 40% probability and actually takes action R, the agent’s payoff is 0.4)

If the agent believes they will take action L with 30% probability, then, if they are a causal decision theorist, they will take action L with 100% probability, because that leads to 0.7 payoff instead of 0.3 payoff. But, if they do so, this invalidates their original belief that they will take action L with 30% probability. Thus, the agent’s belief that they will take action L with 30% probability is not self-ratifying: the fact of the agent having this belief leads to the conclusion that they take action L with 100% probability, not 30%, which contradicts the original belief.

The only self-ratifying belief is that the agent will take each action with 50% probability; this way, both actions yield equal expected utility, so a policy of 50/50 randomization is compatible with causal decision theory, and this policy ratifies the original belief.
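A minimal sketch of this example, showing why only the 50% belief ratifies itself (the game and payoffs are exactly as described above; the function name is my own):

```python
def cdt_best_response(belief_p_L):
    """CDT best response given a belief about one's own probability of taking L.
    The payoff for taking an action is the believed probability of the other action."""
    payoff_L = 1 - belief_p_L   # believed probability of R
    payoff_R = belief_p_L       # believed probability of L
    if payoff_L > payoff_R:
        return 1.0              # take L with certainty
    if payoff_R > payoff_L:
        return 0.0              # take R with certainty
    return belief_p_L           # indifferent: any mixture, including the believed one

print(cdt_best_response(0.3))   # 1.0 -- contradicts the 30% belief
print(cdt_best_response(0.5))   # 0.5 -- the belief ratifies itself
```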

Genetic optimism

(This example is due to Robin Hanson’s “Uncommon Priors Require Origin Disputes”.)

Suppose Oscar and Peter are brothers. Oscar is more optimistic than Peter. Oscar comes to believe that the reason he is more optimistic is due to inheriting a gene that inflates beliefs about positive outcomes, whereas Peter did not inherit this same gene.

Oscar’s belief-set is now not self-ratifying. He believes the cause of his belief that things will go well to be a random gene, not correlation with reality. This means that, according to his own beliefs, his optimism is untrustworthy.

Low-power psychological theories

Suppose a psychological researcher, Beth, believes that humans are reinforcement-learning stimulus-response machines, and that such machines are incapable of reasoning about representations of the world. She presents a logical specification of stimulus-response machines that she believes applies to all humans. (For similar real-world theories, see: Behaviorism, Associationism, Perceptual Control Theory)

However, a logical implication of Beth’s beliefs is that she herself is a stimulus-response machine, and incapable of reasoning about world-representations. Thus, she cannot consistently believe that her specification of stimulus-response machines is likely to be an accurate, logically coherent representation of humans. Her belief-set, then, fails to self-ratify, on the basis that it assigns to herself a level of cognitive power insufficient to come to know that her belief-set is true.

Moral realism and value drift

Suppose a moral theorist, Valerie, believes:

  • Societies’ moral beliefs across history follow a random walk, not directed anywhere.
  • Her own moral beliefs, for the most part, follow society’s beliefs.
  • There is a true morality which is stable and unchanging.
  • Almost all historical societies’ moral beliefs are terribly, terribly false.

From these it follows that, absent further evidence, the moral beliefs of Valerie’s society should not be expected to be more accurate (according to estimation of the objective morality that Valerie believes exists) than the average moral beliefs across historical societies, since there is no moral progress in expectation. However, this implies that the moral beliefs of her own society are likely to be terribly, terribly false. Therefore, Valerie’s adoption of her society’s beliefs would imply that her own moral beliefs are likely to be terribly, terribly false: a failure of self-ratification.

Trust without honesty

Suppose Larry is a blogger who reads other blogs. Suppose Larry believes:

  • The things he reads in other blogs are, for the most part, true (~90% likely to be correct).
  • He’s pretty much the same as other bloggers; there is a great degree of subjunctive dependence between his own behavior and other bloggers’ behaviors (including their past behaviors).

Due to the first belief, he concludes that lying in his own blog is fine, as there’s enough honesty out there that some additional lies won’t pose a large problem. So he starts believing that he will lie and therefore his own blog will contain mostly falsehoods (~90%).

However, an implication of his similarity to other bloggers is that other bloggers will reason similarly, and lie in their own blog posts. Since this applies to past behavior as well, a further implication is that the things he reads in other blogs are, for the most part, false. Thus the belief-set, and his argument for lying, fail to self-ratify.

(I presented a similar example in “Is Requires Ought”.)

Mental nonrealism

Suppose Phyllis believes that the physical world exists, but that minds don’t exist. That is, there are not entities that are capable of observation, thought, etc. (This is a rather simple, naive formulation of eliminative materialism)

Her reason for this belief is that she has studied physics, and believes that physics is sufficient to explain everything, such that there is no reason to additionally posit the existence of minds.

However, if she were arguing for the accuracy of her beliefs about physics, she would have difficulty arguing except in terms of e.g. physicists making and communicating observations, theorists having logical thoughts, her reading and understanding physics books, etc.

Thus, her belief that minds don’t exist fails to self-ratify. It would imply that she lacks evidential basis for belief in the accuracy of physics. (On the other hand, she may be able to make up for this by coming up with a non-mentalistic account for how physics can come to be “known”, though this is difficult, as it is not clear what there is that could possibly have knowledge. Additionally, she could believe that minds exist but are somehow “not fundamental”, in that they are determined by physics; however, specifying how they are determined by physics requires assuming they exist at all and have properties in the first place.)

Conclusion

I hope the basic picture is clear by now. Agents have beliefs, and some of these beliefs imply beliefs about the trustworthiness of their own beliefs, primarily due to the historical origins of the beliefs (e.g. psychology, society, history). When the belief-set implies that it itself is untrustworthy (being likely to be wrong), there is a failure of self-ratification. Thus, self-ratification, rather than being a trivial condition, is quite nontrivial when combined with other coherence conditions.

Why would self-ratification be important? Simply put, a non-self-ratifying belief set cannot be trustworthy; if it were trustworthy then it would be untrustworthy, which shows untrustworthiness by contradiction. Thus, self-ratification points to a rich set of philosophical coherence conditions that may be neglected if one is only paying attention to surface-level features such as logical consistency.

Self-ratification as a philosophical coherence condition points at naturalized epistemology being an essential philosophical achievement. While epistemology may possibly start non-naturalized, as it gains self-consciousness of the fact of its embeddedness in a natural world, such self-consciousness imposes additional self-ratification constraints.

Using self-ratification in practice often requires flips between treating one’s self as a subject and as an object. This kind of dual self-consciousness is quite interesting and is a rich source of updates to both self-as-subject beliefs and self-as-object beliefs.

Taking coherence conditions including self-ratification to be the only objective conditions of epistemic justification is a coherentist theory of justification; note that coherentists need not believe that all “justified” belief-sets are likely to be true (and indeed, such a belief would be difficult to hold given the possibility of coherent belief-sets very different from one’s own and from each other).

Appendix: Proof by contradiction is consistent with self-ratification

There is a possible misinterpretation of self-ratification that says: “You cannot assume a belief to be true in the course of refuting it; the assumption would then fail to self-ratify”.

Classical logic permits proof-by-contradiction, indicating that this interpretation is wrong. The thing that a proof by contradiction does is show that some other belief-set (not the belief-set held by the arguer) fails to self-ratify (and indeed, self-invalidates). If the arguer actually believed in the belief-set that they are showing to be self-invalidating, then, indeed, that would be a self-ratification problem for the arguer. However, the arguer’s belief is that some proposition P implies not-P, not that P is true, so this does not present a self-ratification problem.

High-precision claims may be refuted without being replaced with other high-precision claims

There’s a common criticism of theory-criticism which goes along the lines of:

Well, sure, this theory isn’t exactly right. But it’s the best theory we have right now. Do you have a better theory? If not, you can’t really claim to have refuted the theory, can you?

This is wrong. This is falsification-resisting theory-apologism. Karl Popper would be livid.

The relevant reason why it’s wrong is that theories make high-precision claims. For example, the standard theory of arithmetic says 561+413=974. Not 975 or 973 or 97.4000001, but exactly 974. If arithmetic didn’t have this guarantee, math would look very different from how it currently looks (it would be necessary to account for possible small jumps in arithmetic operations).

A single bit flip in the state of a computer process can crash the whole program. Similarly, high-precision theories rely on precise invariants, and even small violations of these invariants sink the theory’s claims.

To a first approximation, a computer either (a) almost always works (>99.99% probability of getting the right answer) or (b) doesn’t work (<0.01% probability of getting the right answer). There are edge cases such as randomly crashing computers or computers with small floating point errors. However, even a computer that crashes every few minutes functions very precisely correctly in >99% of seconds that it runs.

If a computer makes random small errors 0.01% of the time in e.g. arithmetic operations, it’s not an almost-working computer, it’s a completely non-functioning computer, that will crash almost immediately.

The claim that a given algorithm or circuit really adds two numbers is very precise. Even a single pair of numbers that it adds incorrectly refutes the claim, and very much risks making this algorithm/circuit useless. (The rest of the program would not be able to rely on guarantees, and would instead need to know the domain in which the algorithm/circuit functions; this would significantly complicate the reasoning about correctness)

Importantly, such a refutation does not need to come along with an alternative theory of what the algorithm/circuit does. To refute the claim that it adds numbers, it’s sufficient to show a single counterexample without suggesting an alternative. Quality assurance processes are primarily about identifying errors, not about specifying the behavior of non-functioning products.
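As a sketch of that point: refuting the claim that a candidate function adds numbers only requires exhibiting one failing pair, with no account of what the function does instead. Random testing is just one way to search for such a pair; the names below are illustrative, not a real QA tool.

```python
import random

def refute_adder_claim(candidate_add, trials=10000):
    """Search for a single counterexample to the claim that candidate_add adds numbers.
    candidate_add is any callable claimed to add two numbers (an assumed interface).
    Returns (a, b, wrong_output) if a counterexample is found; None means no refutation
    was found, which is not a proof of correctness."""
    for _ in range(trials):
        a, b = random.randrange(10**6), random.randrange(10**6)
        if candidate_add(a, b) != a + b:
            return (a, b, candidate_add(a, b))   # counterexample: the claim is refuted
    return None
```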

A Bayesian may argue that the refuter must have an alternative belief about the circuit. While this is true assuming the refuter is Bayesian, such a belief need not be high-precision. It may be a high-entropy distribution. And if the refuter is a human, they are not a Bayesian (that would take too much compute), and will instead have a vague representation of the circuit as “something doing some unspecified thing”, with some vague intuitions about what sorts of things are more likely than other things. In any case, the Bayesian criticism certainly doesn’t require the refuter to replace the claim about the circuit with an alternative high-precision claim; either a low-precision belief or a lack-of-belief will do.

The case of computer algorithms is particularly clear, but of course this applies elsewhere:

  • If there’s a single exception to conservation of energy, then a high percentage of modern physics theories completely break. The single exception may be sufficient to, for example, create perpetual motion machines. Physics, then, makes a very high-precision claim that energy is conserved, and a refuter of this claim need not supply an alternative physics.
  • If a text is claimed to be the word of God and totally literally true, then a single example of a definitely-wrong claim in the text is sufficient to refute the claim. It isn’t necessary to supply a better religion; the original text should lose any credit it was assigned for being the word of God.
  • If rational agent theory is a bad fit for effective human behavior, then the precise predictions of microeconomic theory (e.g. the option of trade never reducing expected utility for either actor, or the efficient market hypothesis being true) are almost certainly false. It isn’t necessary to supply an alternative theory of effective human behavior to reject these predictions.
  • If it is claimed philosophically that agents can only gain knowledge through sense-data, then a single example of an agent gaining knowledge without corresponding sense-data (e.g. mental arithmetic) is sufficient to refute the claim. It isn’t necessary to supply an alternative theory of how agents gain knowledge for this to refute the strongly empirical theory.
  • If it is claimed that hedonic utility is the only valuable thing, then a single example of a valuable thing other than hedonic utility is sufficient to refute the claim. It isn’t necessary to supply an alternative theory of value.

A theory that has been refuted remains contextually “useful” in a sense, but it’s the walking dead. It isn’t really true everywhere, and:

  • Machines believed to function on the basis of the theory cannot be trusted to be highly reliable
  • Exceptions to the theory can sometimes be manufactured at will (this is relevant in both security and philosophy)
  • The theory may make significantly worse predictions on average than a skeptical high-entropy prior or low-precision intuitive guesswork, due to being precisely wrong rather than imprecise
  • Generative intellectual processes will eventually discard it, preferring instead an alternative high-precision theory or low-precision intuitions or skepticism
  • The theory will go on doing damage through making false high-precision claims

The fact that false high-precision claims are generally more damaging than false low-precision claims is important ethically. High-precision claims are often used to ethically justify coercion, violence, and so on, where low-precision claims would have been insufficient. For example, imprisoning someone for a long time may be ethically justified if they definitely committed a serious crime, but is much less likely to be if the belief that they committed a crime is merely a low-precision guess, not validated by any high-precision checking machine. Likewise for psychiatry, which justifies incredibly high levels of coercion on the basis of precise-looking claims about different kinds of cognitive impairment and their remedies.

Therefore, I believe there is an ethical imperative to apply skepticism to high-precision claims, and to allow them to be falsified by evidence, even without knowing what the real truth is other than that it isn’t as the high-precision claim says it is.

On hiding the source of knowledge

I notice that when I write for a public audience, I usually present ideas in a modernist, skeptical, academic style; whereas, the way I come up with ideas is usually in part by engaging in epistemic modalities that such a style has difficulty conceptualizing or considers illegitimate, including:

  • Advanced introspection and self-therapy (including focusing and meditation)
  • Mathematical and/or analogical intuition applied everywhere with only spot checks (rather than rigorous proof) used for confirmation
  • Identity hacking, including virtue ethics, shadow-eating, and applied performativity theory
  • Altered states of mind, including psychotic and near-psychotic experiences
  • Advanced cynicism and conflict theory, including generalization from personal experience
  • Political radicalism and cultural criticism
  • Eastern mystical philosophy (esp. Taoism, Buddhism, Tantra)
  • Literal belief in self-fulfilling prophecies, illegible spiritual phenomena, etc, sometimes with decision-theoretic and/or naturalistic interpretations

This risks hiding where the knowledge actually came from. Someone could easily be mistaken into thinking they can do what I do, intellectually, just by being a skeptical academic.

I recall a conversation I had where someone (call them A) commented that some other person (call them B) had developed some ideas, then afterwards found academic sources agreeing with these ideas (or at least, seeming compatible), and cited these as sources in the blog post write-ups of these ideas. Person A believed that this was importantly bad in that it hides where the actual ideas came from, and assigned credit for them to a system that did not actually produce the ideas.

On the other hand, citing academics that agree with you is helpful to someone who is relying on academic peer-review as part of their epistemology. And, similarly, offering a rigorous proof is helpful for convincing someone of a mathematical principle they aren’t already intuitively convinced of (in addition to constituting an extra check of this principle).

We can distinguish, then, the source of an idea from the presented epistemic justification of it. And the justificatory chain (to a skeptic) doesn’t have to depend on the source. So, there is a temptation to simply present the justificatory chain, and hide the source. (Especially if the source is somehow embarrassing or delegitimized)

But, this creates a distortion, if people assume the justificatory chains are representative of the source. Information consumers may find themselves in an environment where claims are thrown around with various justifications, but where they would have quite a lot of difficulty coming up with and checking similar claims.

And, a lot of the time, the source is important in the justification, because the source was the original reason for privileging the hypothesis. Many things can be partially rationally justified without such partial justification being sufficient for credence, without also knowing something about the source. (The problems of skepticism in philosophy in part relate to this: “but you have the intuition too, don’t you?” only works if the other person has the same intuition (and admits to it), and arguing without appeals to intuition is quite difficult)

In addition, even if the idea is justified, the intuition itself is an artifact of value; knowing abstractly that “X” does not imply the actual ability to, in real situations, quickly derive the implications of “X”. And so, sharing the source of the original intuition is helpful to consumers, if it can be shared. Very general sources are even more valuable, since they allow for generation of new intuitions on the fly.

Unfortunately, many such sources can’t easily be shared. Some difficulties with doing so are essential and some are accidental. The essential difficulties have to do with the fact that teaching is hard; you can’t assume the student already has the mental prerequisites to learn whatever you are trying to teach, as there is significant variation between different minds. The accidental difficulties have to do with social stigma, stylistic limitations, embarrassment, politics, privacy of others, etc.

Some methods for attempting to share such intuitions may result in text that seems personal and/or poetic, and be out of place in a skeptical academic context. This is in large part because such text isn’t trying to justify itself by the skeptical academic standards, and is nevertheless attempting to communicate something.

Noticing this phenomenon has led me to more appreciate forewords and prefaces of books. These sections often discuss more of the messiness of idea-development than the body of the book does. There may be a nice stylistic way of doing something similar for blog posts; perhaps, an extended bibliography that includes free-form text.

I don’t have a solution to this problem at the moment. However, I present this phenomenon as a problem, in the spirit of discussing problems before proposing solutions. I hope it is possible to reduce the accidental difficulties in sharing sources of knowledge, and actually-try on the essential difficulties, in a way that greatly increases the rate of interpersonal model-transfer.

On the ontological development of consciousness

This post is about what consciousness is, ontologically, and how ontologies that include consciousness develop.

The topic of consciousness is quite popular, and confusing, in philosophy. While I do not seek to fully resolve the philosophy of consciousness, I hope to offer an angle on the question I have not seen before. This angle is that of developmental ontology: how are “later” ontologies developed from “earlier” ontologies? I wrote on developmental ontology in a previous post, and this post can be thought of as an elaboration, which can be read on its own, and specifically tackles the problem of consciousness.

Much of the discussion of stabilization is heavily inspired by On the Origin of Objects, an excellent book on reference and ontology, to which I owe much of my ontological development. To the extent that I have made any philosophical innovation, it is in combining this book’s concepts with the minimum-description-length principle, and analytic philosophy of mind.

World-perception ontology

I’m going to write a sequence of statements, which each make sense in terms of an intuitive world-perception ontology.

  • There’s a real world outside of my head.
  • I exist and am intimately connected with, if not identical with, some body in this world.
  • I only see some of the world. What I can see is like what a camera placed at the point my eyes are can see.
  • The world contains objects. These objects have properties like shape, color, etc.
  • When I walk, it is me who moves, not everything around me. Most objects are not moving most of the time, even if they look like they’re moving in my visual field.
  • Objects, including my body, change and develop over time. Changes proceed, for the most part, in a continuous way, so e.g. object shapes and sizes rarely change, and teleportation doesn’t happen.

These all seem common-sensical; it would be strange to doubt them. However, achieving the ontology by which such statements are common-sensical is nontrivial. There are many moving parts here, which must be working in their places before the world seems as sensible as it is.

Let’s look at the “it is me who moves, not everything around me” point, because it’s critical. If you try shaking your head right now, you will notice that your visual field changes rapidly. An object (such as a computer screen) in your vision is going to move side-to-side (or top-to-bottom), from one side of your visual field to another.

However, despite this, there is an intuitive sense of the object not moving. So, there is a stabilization process involved. Image stabilization (example here) is an excellent analogy for this process (indeed, the brain could be said to engage in image stabilization in a literal sense).

The world-perception ontology is, much of the time, geocentric, rather than egocentric or heliocentric. If you walk, it usually seems like the ground is still and you are moving, rather than the ground moving while you’re still (egocentrism), or both you and the ground moving very quickly (heliocentrism). There are other cases such as vehicle interiors where what is stabilized is not the Earth, but the vehicle itself; and, “tearing” between this reference frame and the geocentric reference frame can cause motion sickness.

Notably, world-perception ontology must contain both (a) a material world and (b) “my perceptions of it”. Hence, the intuitive ontological split between material and consciousness. To take such a split to be metaphysically basic is to be a Descartes-like dualist. And the split is ontologically compelling enough that such a metaphysics can be tempting.

Pattern-only ontology

William James famously described the baby’s sense of the world as a “blooming, buzzing confusion”. The image presented is one of dynamism and instability, very different from world-perception ontology.

The baby’s ontology is closer to raw percepts than an adult’s is; it’s less developed, fewer things are stabilized, and so on. Babies generally haven’t learned object permanence; this is a stabilization that is only developed later.

The most basic ontology consists of raw percepts (which cannot even be considered “percepts” from within this ontology), not even including shapes; these percepts may be analogous to pixel-maps in the case of vision, or spectrograms in the case of hearing, but I am unsure of these low-level details, and the rest of this post would still apply if the basic percepts were e.g. lines in vision. Shapes (which are higher-level percepts) must be recognized in the sea of percepts, in a kind of unsupervised learning.

The process of stabilization is intimately related to a process of pattern-detection. If you can detect patterns of shapes across time, you may reify such patterns as an object. (For example, a blue circle that is present in the visual field, and retains the same shape even as it moves around the field, or exits and re-enters, may be reified as a circular object). Such pattern-reification is analogous to folding a symmetric image in half: it allows the full image to be described using less information than was contained in the original image.

In general, the minimum description length principle says it is epistemically correct to posit fewer objects to explain many. And, positing a small number of shapes to explain many basic percepts, or a small number of objects to explain a large number of shapes, are examples of this.

From having read some texts on meditation (especially Mastering the Core Teachings of the Buddha), and having meditated myself, I believe that meditation can result in getting more in-touch with pattern-only ontology, and that this is an intended result, as the pattern-only ontology necessarily contains two of the three characteristics (specifically, impermanence and no-self).

To summarize: babies start from a confusing point, where there are low-level percepts, and patterns progressively recognized in them, which develops ontology including shapes and objects.

World-perception ontology results from stabilization

The thesis of this post may now be stated: world-perception ontology results from stabilizing a previous ontology that is itself closer to pattern-only ontology.

One of the most famous examples of stabilization in science is the movement from geocentrism to heliocentrism. Such stabilization explains many epicycles in terms of few cycles, by changing where the center is.

The move from egocentrism to geocentrism is quite analogous. An egocentric reference frame will contain many “epicycles”, which can be explained using fewer “cycles” in geocentrism.

These cycles are literal in the case of a person spinning around in a circle. In a pattern-only ontology (which is, necessarily, egocentric, for the same reason it doesn’t have a concept of self), that person will see around them shapes moving rapidly in the same direction. There are many motions to explain here. In a world-perception ontology, most objects around are not moving rapidly; rather, it is believed that the self is spinning.

So, the egocentric-to-geocentric shift is compelling for the same reason the geocentric-to-heliocentric shift is. It allows one to posit that there are few motions, instead of many motions. This makes percepts easier to explain.

Consciousness in world-perception ontology

The upshot of what has been said so far is: the world-perception ontology results from Occamian symmetry-detection and stabilization starting from a pattern-only ontology (or, some intermediate ontology).

And, the world-perception ontology has conscious experience as a component. For, how else can what were originally perceptual patterns be explained, except by positing that there is a camera-like entity in the world (attached to some physical body) that generates such percepts?

The idea that consciousness doesn’t exist (which is asserted by some forms of eliminative materialism) doesn’t sit well with this picture. The ontological development that produced the idea of the material world, also produced the idea of consciousness, as a dual. And both parts are necessary to make sense of percepts. So, consciousness-eliminativism will continue to be unintuitive (and for good epistemic reasons!) until it can replace world-perception ontology with one that achieves percept-explanation that is at least as effective. And that looks to be difficult or impossible.

To conclude: the ontology that allows one to conceptualize the material world as existing and not shifting constantly, includes as part of it conscious perception, and could not function without including it. Without such a component, there would be no way to refactor rapidly shifting perceptual patterns into a stable outer world and a moving point-of-view contained in it.

Is requires ought

The thesis of this post is: “Each ‘is’ claim relies implicitly or explicitly on at least one ‘ought’ claim.”

I will walk through a series of arguments that suggest that this claim is true, and then flesh out the picture towards the end.

(note: I discovered after writing this post that my argument is similar to Cuneo’s argument for moral realism; I present it anyway in the hope that it is additionally insightful)

Epistemic virtue

There are epistemic virtues, such as:

  • Try to have correct beliefs.
  • When you’re not sure about something, see if there’s a cheap way to test it.
  • Learn to distinguish between cases where you (or someone else) is rationalizing, versus when you/they are offering actual reasons for belief.
  • Notice logical inconsistencies in your beliefs and reflect on them.
  • Try to make your high-level beliefs accurately summarize low-level facts.

These are all phrased as commands, which are a type of ought claim. Yet, they all help someone who follows them to have more accurate beliefs.

Indeed, it is hard to imagine how someone who does not (explicitly or implicitly) follow rules like these could come to have accurate beliefs. There are many ways to end up in lala land, and guidelines are essential for staying on the path.

So, “is” claims that rely on the speaker of the claim having epistemic virtue to be taken seriously, rely on the “ought” claims of epistemic virtue itself.

Functionalist theory of mind

The functionalist theory of mind is “the doctrine that what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the system of which it is a part.” For example, according to functionalism, for myself to have a world-representing mind, part of my brain must be performing the function of representing the world.

I will not here argue for the functionalist theory of mind, and instead will assume it to be true.

Consider the following “is” claim: “There is a plate on my desk.”

I believe this claim to be true. But why? I see a plate on my desk. But what does that mean?

Phenomenologically, I have the sense that there is a round object on my desk, and that this object is a plate. But it seems that we are now going in a loop.

Here’s an attempt at a way out. “My visual system functions to present me with accurate information about the objects around me. I believe it to be functioning well. And I believe my phenomenological sense of there being a plate on my desk to be from my visual system. Therefore, there is a plate on my desk.”

Well, this certainly relies on a claim of “function”. That’s not an “ought” claim about me, but it is similar (and perhaps identical) to an “ought” claim about my visual system: that presenting me with information about objects is what my visual system ought to do.

Things get hairy when examining the second sentence. “I believe it to be functioning well.” Why do I believe that?

I can consider evidence like “my visual system, along with my other sensory modalities, presents me with a coherent world that has few anomalies.” That’s a complex claim, and checking it requires things like checking my memories of how coherent the world my senses present to me is, which is again relying on the parts of my mind to perform their functions.

I can’t doubt my mind except by using my mind. And using my mind requires, at least tentatively, accepting claims like “my visual system is there for presenting me with accurate information about the objects around me.”

Indeed, even making sense of a claim such as “there is a plate on my desk” requires me to use some intuition-reliant faculty I have of mapping words to concepts; without trust in such a faculty, the claim is meaningless.

I, therefore, cannot make meaningful “is” claims without at the same time using at least some parts of my mind as tools, applying “ought” claims to them.

Social systems

Social systems, such as legal systems, academic disciplines, and religions, contain “ought” claims. Witnesses ought to be allowed to say what they saw. Judges ought to weigh the evidence presented. People ought not to murder each other. Mathematical proofs ought to be checked by peers before being published.

Many such oughts are essential for the system’s epistemology. If the norms of mathematics do not include “check proofs for accuracy” and so on, then there is little reason to believe the mathematical discipline’s “is” claims such as “Fermat’s last theorem is true.”

Indeed, it is hard for claims such as “Fermat’s last theorem is true” to even be meaningful without oughts. For, there are oughts involved in interpreting mathematical notation, and in resolving verbal references to theorems. Such as, “the true meaning of ‘+’ is integer addition, which can be computed using the following algorithm.”

Without mathematical “ought”s, “Fermat’s last theorem is true” isn’t just a doubtful claim, it’s a meaningless one, which is not even wrong.

Language itself can be considered as a social system. When people misuse language (such as by lying), their statements cannot be taken seriously, and sometimes can’t even be interpreted as having meaning.

(A possible interpretation of Baudrillard’s simulacrum theory is that level 1 is when there are sufficient “ought”s both to interpret claims and to ensure that they are true for the most part; level 2 is when there are sufficient “ought”s to meaningfully interpret claims but not to ensure that they are true; level 3 is when “ought”s are neither sufficient to interpret claims nor to ensure that they are true, but are sufficient for claims to superficially look like meaningful ones; and level 4 is where “ought”s are not even sufficient to ensure that claims superficially look meaningful.)

Nondualist epistemology

One might say to the arguments so far:

“Well, certainly, my own ‘is’ claims require some entities, each of which may be a past iteration of myself, a part of my mind, or another person, to be following oughts, in order for my claims to be meaningful and/or correct. But, perhaps such oughts do not apply to me, myself, here and now.”

However, such a self/other separation is untenable.

Suppose I am a mathematical professor, who is considering performing academic fraud, to ensure that false theorems end up in journals. If I corrupt the mathematical process, then I cannot, in the future, rely on the claims of mathematical journals to be true. Additionally, if others are behaving similarly to me, then my own decision to corrupt the process is evidence that others also decide to corrupt the process. Some of these others are in the past; my own decision to corrupt the process is evidence that my own mathematical knowledge is false, as it is evidence that those before me have decided similarly. So, my own mathematical “is” claims rely on myself following mathematical “ought” claims.

(More precisely, both evidential decision theory and functional decision theory have a notion by which present decisions can have past consequences, including past consequences affecting the accuracy of presently-available information)

Indeed, the idea of corrupting the mathematical process would be horrific to most good mathematicians, in a quasi-religious way. These mathematicians’ own ability to take their work seriously enough to attain rigor depends on such a quasi-religious respect for the mathematical discipline.

Nondualist epistemology cannot rely on a self/other boundary by which decisions made in the present moment have no effects on the information available in the present moment. Lying to similar agents, thus, undermines both the meaningfulness and the truth of one’s own beliefs.

Conclusion

I will summarize the argument thusly:

  • Each “is” claim may or may not be justified.
  • An “is” claim is only justified if the system producing the claim is functioning well at the epistemology of this claim.
  • Specifically, an “is” claim that you make is justified only if some system you are part of is functioning well at the epistemology of that claim. (You are the one making the claim, after all, so the system must include the you who makes the claim)
  • That system (that you are part of) can only function well at the epistemology of that claim if you have some function in that system and you perform that function satisfactorily. (Functions of wholes depend on functions of parts; even if all you do is listen for a claim and repeat it, that is a function)
  • Therefore, an “is” claim that you make is justified only if you have some specific function and you expect to perform that function satisfactorily.
  • If a reasonable agent expects itself to perform some function satisfactorily, then according to that agent, that agent ought to perform that function satisfactorily.
  • Therefore, if you are a reasonable agent who accepts the argument so far, you believe that your “is” claims are only justified if you have oughts.

The second-to-last point is somewhat subtle. If I use a fork as a tool, then I am applying an “ought” to the fork; I expect it ought to function as an eating utensil. Similar to using another person as a tool (alternatively “employee” or “service worker”), giving them commands and expecting that they ought to follow them. If my own judgments functionally depend on myself performing some function, then I am using myself as a tool (expecting myself to perform that function). To avoid self-inconsistency between myself-the-tool-user and myself-the-tool, I must accept an ought, which is that I ought to satisfactorily perform the tool-function I am expecting myself to perform; if I do not accept that ought, I must drop any judgment whose justification requires me to perform the function generating this ought.

It is possible to make a similar argument about meaningfulness; the key point is that the meaningfulness of a claim depends on the functioning of an interpretive system that this claim is part of. To fail to follow the oughts implied by the meaningfulness of one’s statements is not just to be wrong, but to collapse into incoherence.

Certainly, this argument does not imply that all “ought”s can be derived from “is”es. In particular, an agent may have degrees of freedom in how it performs its functions satisfactorily, or in doing things orthogonal to performing its functions. What the argument suggests instead is that each “is” depends on at least one “ought”, which itself may depend on an “is”, in a giant web of interdependence.

There are multiple possible interdependent webs (multiple possible mind designs, multiple possible social systems), such that a different web could have instead come into existence, and our own web may evolve into any one of a number of future possibilities. Though, we can only reason about hypothetical webs from our own actual one.

Furthermore, it is difficult to conceive of what it would mean for the oughts being considered to be “objective”; indeed, an implication of the argument is that objectivity itself depends on oughts, at least some of which must be pre-objective or simultaneous with objectivity.

Related, at least some of those oughts that are necessary as part of the constitution of “is”, must themselves be pre-“is” or simultaneous with “is”, and thus must not themselves depend on already-constituted “is”es. A possible candidate for such an ought is: “organize!” For the world to produce a map without already containing one, it must organize itself into a self-representing structure, from a position of not already being self-representing. (Of course, here I am referring to the denotation of “organize!”, which is a kind of directed motion, rather than to the text “organize!”; the text cannot itself have effective power outside the context of a text-interpretation system)

One can, of course, sacrifice epistemology, choosing to lie and to confuse one’s self, in ways that undermine both the truth and meaningfulness of one’s own “is” claims.

But, due to the anthropic principle, we (to be a coherent “we” that can reason) are instead at an intermediate point of a process that does not habitually make such decisions, or one which tends to correct them. A process that made such decisions without correcting them would result in rubble, not reason. (And whether our own process results in rubble or reason in the future is, in part, up to us, as we are part of this process)

And so, when we are a we that can reason, we accept at least those oughts that our own reason depends on, while acknowledging the existence of non-reasoning processes that do not.

Truth-telling is aggression in zero-sum frames

If you haven’t seen The Invention Of Lying, watch some of this clip (1 minute long).

If you’re like most people, this will induce a cringe reaction. The things these people are saying, while true, are rude and would ordinarily be interpreted as socially aggressive.

In a world where white lies (and hiding things for the sake of politeness) are normalized, such truth-telling is highly unusual. One automatically suspects the motives of the truth-teller. Maybe the waiter is saying “I’m embarrassed I work here” in order to manipulate the others by garnering pity. Maybe the woman is saying the man is unattractive in order to lower his self-esteem and gain advantage over him.

These interpretations are false in the world of The Invention Of Lying, because everyone talks that way. So, revealing such information does not indicate any special plot going on, it’s just the thing to do.

In our world, revealing such information does (usually) indicate a special plot, because it is so unusual. It’s erratic, and quite possibly dangerous.

Special social plots are usually interpreted as aggressive. It’s as if the game has reached an equilibrium state, and out-of-equilibrium actions are surprise attacks.

Wiio’s law states: “communication usually fails, except by accident”. The equilibrium of the game is for no communication to happen. Breaks in the game allow real communication, something most hope for but rarely find.

If we adopt a frame that says that unusual social plots are actions that are against someone (which is a zero-sum frame), this leads to the conclusion that truth-telling is aggression, as it is necessarily part of an unusual social plot.

Non-zero-sum frames, of course, usually interpret truth-telling positively: it contributes to a shared information commons, which helps just about everyone, with few exceptions. People are often capable of switching to non-zero-sum frames in natural emergency situations, but such situations are rare.

To transition from a zero-sum frame to a non-zero-sum frame, from normalized lying to normalized truth-telling, requires a special social plot involving unusual truth-telling, because such a transition almost never happens by default.

Such plots are always acts of aggression, when interpreted from within a zero-sum frame. And this concern is not without merit. When lying is built into the system, and so is punishment for actions labeled as “lying”, punishment of an ordinary instance of lying (a likely result of uncareful truth-telling) isn’t part of a functional behavioral control system; it’s a random act of scapegoating.

And so, there is, in practice, a limit on the rate at which truth will be told, because truth-telling uncovers local norm-violations (which are normal), leading to scapegoating. And people who fear this (or who detect an unusual social plot happening and reflexively oppose it) will coordinate to suppress truth-telling.

Metaphorical extensions and conceptual figure-ground inversions

Consider the following sentence: “A glacier is a river of ice.”

This is metaphorical. In some sense, a glacier isn’t actually a river. A “literal” river has flowing liquid water, not ice.

Let a river(1) be defined to be an archetypal flowing-water river. A glacier isn’t a river(1). Rather, a glacier shares some structure in common with a river(1). We may define river(2) to mean some broader category, of things that flow like a river(1), such as:

  • A glacier
  • A flowing of earth matter in a landslide
  • A flowing of chemicals down an incline in a factory

and so on.

The concept of river(2) is formed by metaphorically extending river(1). It is, in fact, easier to explain the concept of river(2) by first clearly delineating what a river(1) is. A child will have trouble grasping the metaphorical language of “a glacier is a river of ice” until they understand what a river(1) is, such that the notion of flow that generates river(2) can be pointed to with concrete examples.

Formally, we could think of metaphorical extensions in terms of generative probabilistic models: river(2) is formed by taking some generator behind river(1) (namely, the generator of flowing substance) and applying it elsewhere. But, such formalization isn’t necessary to get the idea intuitively. See also the picture theory of language; language draws pictures in others’ minds, and those pictures are formed generatively/recursively out of different structures; see also generative grammar, the idea that sentences are formed out of lawful recursive structures.
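To make the “generator” idea slightly more concrete, here is a minimal sketch in Python. It is only an illustration under my own assumptions: the names (Scene, flows, is_river_1, is_river_2) are hypothetical, and the sketch drops the probabilistic part entirely, treating the shared generator as a simple predicate rather than a full generative model.

```python
# Minimal illustrative sketch: metaphorical extension as reuse of a generator.
# All names here are hypothetical; this is not an established formalism.
from dataclasses import dataclass

@dataclass
class Scene:
    substance: str        # what is (or isn't) flowing
    moves_downhill: bool  # directed, gravity-driven motion
    is_liquid_water: bool

def flows(scene: Scene) -> bool:
    """The abstracted generator behind river(1): directed downhill motion of a substance."""
    return scene.moves_downhill

def is_river_1(scene: Scene) -> bool:
    """river(1): the archetypal flowing-water river."""
    return scene.is_liquid_water and flows(scene)

def is_river_2(scene: Scene) -> bool:
    """river(2): anything produced by applying the flow generator, whatever the substance."""
    return flows(scene)

glacier = Scene(substance="ice", moves_downhill=True, is_liquid_water=False)
creek = Scene(substance="water", moves_downhill=True, is_liquid_water=True)

assert not is_river_1(glacier)                   # a glacier isn't literally a river
assert is_river_2(glacier)                       # but it shares the flow generator
assert is_river_1(creek) and is_river_2(creek)   # river(1) is a special case of river(2)
```

The point of the sketch is just the structural relationship: river(2) is defined by keeping the generator (flows) and discarding the constraint that the flowing substance be liquid water.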

Is ice a form of water?

Consider the sentence: “Ice is a form of water.”

What does that mean? Suppose that, by definition, ice is frozen water. Then, the sentence is tautological.

However, the sentence may be new information to a child. What’s going on?

Suppose the child has seen liquid water, which we will call water(1). The child has also seen ice, in the form of ice cubes; call the ice of ice cubes ice(1). It is new information to this child that ice(1) is a form of water(1). Concretely, you can get ice(1) by reducing the temperature of water(1) sufficiently and waiting.

At some point, water(1) is metaphorically extended into water(2) to include ice, liquid water, and water vapor. Tautologically, ice(1) is water(2). It is not strange for someone to say “The tank contains water, and some of it is frozen.” However, the water(1) concept is still sometimes used, as in the sentence “Water is a liquid.”

The water/ice example is, in many ways, much like the river/glacier example (and not just because both are about liquid/solid water): water(1) is metaphorically extended into water(2).

(An etymology question: why do we say that ice is a form of water, not that water is a form of ice? A philosophy question: what would be the difference between the two?)

While I’ve focused on extensions, other conceptual/metaphorical refinements are also possible. For example, the preformal concept of temperature (temperature(1)), which means approximately “things that feel hot, make things melt/boil, and heat up nearby things”, is refined into the physics definition of temperature(2) as “average kinetic energy per molecule”.
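To make that physics gloss concrete (a standard kinetic-theory relation, not something stated above): for a monatomic ideal gas, temperature(2) is tied to the average translational kinetic energy per molecule by

\[ \langle E_k \rangle = \tfrac{3}{2} k_B T, \]

where \(k_B\) is Boltzmann’s constant and \(T\) is the absolute temperature. Temperature(1) remains the felt, preformal notion; temperature(2) is what the \(T\) in this formula measures.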

Figure-ground inversion

A special case of metaphorical extension is a figure-ground inversion. Consider the following statements:

  • All is nature (naturalism).
  • All is material (materialism).
  • All is physical (physicalism).
  • All is mental (idealism).
  • All is God (pantheism).
  • All is one (monism).
  • All is meaningless (nihilism).

Let’s examine naturalism first. A child has a preformal concept of nature (nature(1)) from concrete acquaintance with trees, forests, wild animals, rocks, oceans, etc. Nature(1) doesn’t include plastic, computers, thoughts, etc.

According to naturalism, all is nature. But, clearly, not all is nature(1). Trivially, plastic isn’t nature(1).

However, it is possible to see an important sense in which all is nature(2) (things produced by the same causal laws that produce nature(1)). After all, even humans are animals (note, this is also an extension!), and the activities of humans, including the production of artifacts such as plastic, are the activities of animals, which happen according to the causal laws of the universe.

Naturalism is a kind of figure-ground inversion. We start with nature(1), initially constituting a particular part of reality (trees, rocks, etc). Then, nature(1) is metaphorically extended into nature(2), a “universal generator” that encompasses all of reality, such that even plastic is a form of nature(2). What starts as a figure against the ground becomes the ground in which all figures exist.

And, even after performing this extension, the nature(1) concept remains useful, for delineating what it delineates. While (according to naturalism) all is nature(2), some nature(2) is natural(1), while other nature(2) is unnatural(1). In fact, the nature(2) concept is mostly only useful for pointing at the way in which everything can be generated by metaphorically extending nature(1); after this extension happens, nature(2) is simply the totality, and does not need to be delineated from anything else.

Similarly, materialism extends material(1) (wood, brick, stone, water, etc) to material(2) (things that have substance and occupy space) such that material(2) encompasses all of reality. Notably, materialism is, in a sense, compatible with naturalism, in that perhaps all of reality can be formed out of the nature(2) generator, and all of reality can be formed out of the material(2) generator.

The other cases are left as exercises for the reader.

(thanks to Cassandra McClure for coming up with the terminology of figure-ground inversions applied to concepts)

Dialogue on Appeals to Consequences

[note: the following is essentially an expanded version of this LessWrong comment on whether appeals to consequences are normative in discourse. I am exasperated that this is even up for debate, but I figure that making the argumentation here explicit is helpful]

Carter and Quinn are discussing charitable matters in the town square, with a few onlookers.

Carter: “So, this local charity, People Against Drowning Puppies (PADP), is nominally opposed to drowning puppies.”

Quinn: “Of course.”

Carter: “And they said they’d saved 2170 puppies last year, whereas their total spending was $1.2 million, so they estimate they save one puppy per $553.”

Quinn: “Sounds about right.”

Carter: “So, I actually checked with some of their former employees, and if what they say and my corresponding calculations are right, they actually only saved 138 puppies.”

Quinn: “Hold it right there. Regardless of whether that’s true, it’s bad to say that.”

Carter: “That’s an appeal to consequences, well-known to be a logical fallacy.”

Quinn: “Is that really a fallacy, though? If saying something has bad consequences, isn’t it normative not to say it?”

Carter: “Well, for my own personal decisionmaking, I’m broadly a consequentialist, so, yes.”

Quinn: “Well, it follows that appeals to consequences are valid.”

Carter: “It isn’t logically valid. If saying something has bad consequences, that doesn’t make it false.”

Quinn: “But it is decision-theoretically compelling, right?”

Carter: “In theory, if it could be proven, yes. But, you haven’t offered any proof, just a statement that it’s bad.”

Quinn: “Okay, let’s discuss that. My argument is: PADP is a good charity. Therefore, they should be getting more donations. Saying that they didn’t save as many puppies as they claimed they did, in public (as you just did), is going to result in them getting fewer donations. Therefore, your saying that they didn’t save as many puppies as they claimed to is bad, and is causing more puppies to drown.”

Carter: “While I could spend more effort to refute that argument, I’ll initially note that you only took into account a single effect (people donating less to PADP) and neglected other effects (such as people having more accurate beliefs about how charities work).”

Quinn: “Still, you have to admit that my case is plausible, and that some onlookers are convinced.”

Carter: “Yes, it’s plausible, in that I don’t have a full refutation, and my models have a lot of uncertainty. This gets into some complicated decision theory and sociological modeling. I’m afraid we’ve gotten sidetracked from the relatively clear conversation, about how many puppies PADP saved, to a relatively unclear one, about the decision theory of making actual charity effectiveness clear to the public.”

Quinn: “Well, sure, we’re into the weeds now, but this is important! If it’s actually bad to say what you said, it’s important that this is widely recognized, so that we can have fewer… mistakes like that.”

Carter: “That’s correct, but I feel like I might be getting trolled. Anyway, I think you’re shooting the messenger: when I started criticizing PADP, you turned around and made the criticism about me saying that, directing attention against PADP’s possible fraudulent activity.”

Quinn: “You still haven’t refuted my argument. If you don’t do so, I win by default.”

Carter: “I’d really rather that we just outlaw appeals to consequences, but, fine, as long as we’re here, I’m going to do this, and it’ll be a learning experience for everyone involved. First, you said that PADP is a good charity. Why do you think this?”

Quinn: “Well, I know the people there and they seem nice and hardworking.”

Carter: “But, they said they saved over 2000 puppies last year, when they actually only saved 138, indicating some important dishonesty and ineffectiveness going on.”

Quinn: “Allegedly, according to your calculations. Anyway, saying that is bad, as I’ve already argued.”

Carter: “Hold up! We’re in the middle of evaluating your argument that saying that is bad! You can’t use the conclusion of this argument in the course of proving it! That’s circular reasoning!”

Quinn: “Fine. Let’s try something else. You said they’re being dishonest. But, I know them, and they wouldn’t tell a lie, consciously, although it’s possible that they might have some motivated reasoning, which is totally different. It’s really uncivil to call them dishonest like that. If everyone did that with the willingness you had to do so, that would lead to an all-out rhetorical war…”

Carter: “God damn it. You’re making another appeal to consequences.”

Quinn: “Yes, because I think appeals to consequences are normative.”

Carter: “Look, at the start of this conversation, your argument was that saying PADP only saved 138 puppies is bad.”

Quinn: “Yes.”

Carter: “And now you’re in the course of arguing that it’s bad.”

Quinn: “Yes.”

Carter: “Whether it’s bad is a matter of fact.”

Quinn: “Yes.”

Carter: “So we have to be trying to get the right answer, when we’re determining whether it’s bad.”

Quinn: “Yes.”

Carter: “And, while appeals to consequences may be decision theoretically compelling, they don’t directly bear on the facts.”

Quinn: “Yes.”

Carter: “So we shouldn’t have appeals to consequences in conversations about whether the consequences of saying something are bad.”

Quinn: “Why not?”

Carter: “Because we’re trying to get to the truth.”

Quinn: “But aren’t we also trying to avoid all-out rhetorical wars, and puppies drowning?”

Carter: “If we want to do those things, we have to do them by getting to the truth.”

Quinn: “The truth, according to your opinion-”

Carter: “God damn it, you just keep trolling me, so we never get to discuss the actual facts. God damn it. Fuck you.”

Quinn: “Now you’re just spouting insults. That’s really irresponsible, given that I just accused you of doing something bad, and causing more puppies to drown.”

Carter: “You just keep controlling the conversation by OODA looping faster than me, though. I can’t refute your argument, because you appeal to consequences again in the middle of the refutation. And then we go another step down the ladder, and never get to the truth.”

Quinn: “So what do you expect me to do? Let you insult well-reputed animal welfare workers by calling them dishonest?”

Carter: “Yes! I’m modeling the PADP situation using decision-theoretic models, which require me to represent the knowledge states and optimization pressures exerted by different agents (both conscious and unconscious), including when these optimization pressures are towards deception, and even when this deception is unconscious!”

Quinn: “Sounds like a bunch of nerd talk. Can you speak more plainly?”

Carter: “I’m modeling the actual facts of how PADP operates and how effective they are, not just how well-liked the people are.”

Quinn: “Wow, that’s a strawman.”

Carter: “Look, how do you think arguments are supposed to work, exactly? Whoever is best at claiming that their opponent’s argumentation is evil wins?”

Quinn: “Sure, isn’t that the same thing as who’s making better arguments?”

Carter: “If we argue by proving our statements are true, we reach the truth, and thereby reach the good. If we argue by proving that each other’s argumentation is evil, we reach neither the truth nor the good.”

Quinn: “In this case, though, we’re talking about drowning puppies. Surely, the good in this case is causing fewer puppies to drown, and directing more resources to the people saving them.”

Carter: “That’s under contention, though! If PADP is lying about how many puppies they’re saving, they’re making the epistemology of the puppy-saving field worse, leading to fewer puppies being saved. And, they’re taking money away from the next-best-looking charity, which is probably more effective if, unlike PADP, they’re not lying.”

Quinn: “How do you know that, though? How do you know the money wouldn’t go to things other than saving drowning puppies if it weren’t for PADP?”

Carter: “I don’t know that. My guess is that the money might go to other animal welfare charities that claim high cost-effectiveness.”

Quinn: “PADP is quite effective, though. Even if your calculations are right, they save about one puppy per $8,700. That’s pretty good.”

Carter: “That’s not even that impressive, but even if their direct work is relatively effective, they’re destroying the epistemology of the puppy-saving field by lying. So effectiveness basically caps out there instead of getting better due to better epistemology.”

Quinn: “What an exaggeration. There are lots of other charities that have misleading marketing (which is totally not the same thing as lying). PADP isn’t singlehandedly destroying anything, except instances of puppies drowning.”

Carter: “I’m beginning to think that the difference between us is that I’m anti-lying, whereas you’re pro-lying.”

Quinn: “Look, I’m only in favor of lying when it has good consequences. That makes me different from pro-lying scoundrels.”

Carter: “But you have really sloppy reasoning about whether lying, in fact, has good consequences. Your arguments for doing so, when you lie, are made of Swiss cheese.”

Quinn: “Well, I can’t deductively prove anything about the real world, so I’m using the most relevant considerations I can.”

Carter: “But you’re using reasoning processes that systematically protect certain cached facts from updates, and use these cached facts to justify not updating. This was very clear when you used outright circular reasoning, to use the cached fact that denigrating PADP is bad, to justify terminating my argument that it wasn’t bad to denigrate them. Also, you said the PADP people were nice and hardworking as a reason I shouldn’t accuse them of dishonesty… but, the fact that PADP saved far fewer puppies than they claimed actually casts doubt on those facts, and the relevance of them to PADP’s effectiveness. You didn’t update when I first told you that fact, you instead started committing rhetorical violence against me.”

Quinn: “Hmm. Let me see if I’m getting this right. So, you think I have false cached facts in my mind, such as PADP being a good charity.”

Carter: “Correct.”

Quinn: “And you think those cached facts tend to protect themselves from being updated.”

Carter: “Correct.”

Quinn: “And you think they protect themselves from updates by generating bad consequences of making the update, such as fewer people donating to PADP.”

Carter: “Correct.”

Quinn: “So you want to outlaw appeals to consequences, so facts have to get acknowledged, and these self-reinforcing loops go away.”

Carter: “Correct.”

Quinn: “That makes sense from your perspective. But, why should I think my beliefs are wrong, and that I have lots of bad self-protecting cached facts?”

Carter: “If everyone were as willing as you to lie, the history books would be full of convenient stories, the newspapers would be parts of the matrix, the schools would be teaching propaganda, and so on. You’d have no reason to trust your own arguments that speaking the truth is bad.”

Quinn: “Well, I guess that makes sense. Even though I lie in the name of good values, not everyone agrees on values or beliefs, so they’ll lie to promote their own values according to their own beliefs.”

Carter: “Exactly. So you should expect that, as a reflection of your lying to the world, the world lies back to you. So your head is full of lies, like the ‘PADP is effective and run by good people’ one.”

Quinn: “Even if that’s true, what could I possibly do about it?”

Carter: “You could start by not making appeals to consequences. When someone is arguing that a belief of yours is wrong, listen to the argument at the object level, instead of jumping to the question of whether saying the relevant arguments out loud is a good idea, which is a much harder question.”

Quinn: “But how do I prevent actually bad consequences from happening?”

Carter: “If your head is full of lies, you can’t really trust ad-hoc object-level arguments against speech, like ‘saying PADP didn’t save very many puppies is bad because PADP is a good charity’. You can instead think about what discourse norms lead to the truth being revealed, and which lead to it being obscured. We’ve seen, during this conversation, that appeals to consequences tend to obscure the truth. And so, if we share the goal of reaching the truth together, we can agree not to do those.”

Quinn: “That still doesn’t answer my question. What about things that are actually bad, like privacy violations?”

Carter: “It does seem plausible that there should be some discourse norms that protect privacy, so that some facts aren’t revealed, if such norms have good consequences overall. Perhaps some topics, such as individual people’s sex lives, are considered to be banned topics (in at least some spaces), unless the person consents.”

Quinn: “Isn’t that an appeal to consequences, though?”

Carter: “Not really. Deciding what privacy norms are best requires thinking about consequences. But, once those norms have been decided on, it is no longer necessary to prove that privacy violations are bad during discussions. There’s a simple norm to appeal to, which says some things are out of bounds for discussion. And, these exceptions can be made without allowing appeals to consequences in full generality.”

Quinn: “Okay, so we still have something like appeals to consequences at the level of norms, but not at the level of individual arguments.”

Carter: “Exactly.”

Quinn: “Does this mean I have to say a relevant true fact, even if I think it’s bad to say it?”

Carter: “No. Those situations happen frequently, and while some radical honesty practitioners try not to suppress any impulse to say something true, this practice is probably a bad idea for a lot of people. So, of course you can evaluate consequences in your head before deciding to say something.”

Quinn: “So, in summary: if we’re going to have suppression of some facts being said out loud, we should have that through either clear norms designed with consequences (including consequences for epistemology) in mind, or individuals deciding not to say things, but otherwise our norms should be protecting true speech, and outlawing appeals to consequences.”

Carter: “Yes, that’s exactly right! I’m glad we came to agreement on this.”