On the falsifiability of hypercomputation, part 2: finite input streams

In part 1, I discussed the falsifiability of hypercomputation in a typed setting where putative oracles may be assumed to return natural numbers. In this setting, there are very powerful forms of hypercomputation (at least as powerful as each level of the arithmetic hierarchy) that are falsifiable.

However, as Vanessa Kosoy points out, this typed setting has difficulty applying to the real world, where agents may only observe a finite number of bits at once:

The problem with constructive halting oracles is, they assume the ability to output an arbitrary natural number. But, realistic agents can observe only a finite number of bits per unit of time. Therefore, there is no way to directly observe a constructive halting oracle. We can consider a realization of a constructive halting oracle in which the oracle outputs a natural number one digit at a time. The problem is, since you don’t know how long the number is, a candidate oracle might never stop producing digits. In particular, take any non-standard model of PA and consider an oracle that behaves accordingly. On some machines that don’t halt, such an oracle will claim they do halt, but when asked for the time it will produce an infinite stream of digits. There is no way to distinguish such an oracle from the real thing (without assuming axioms beyond PA).

This is an important objection. I will address it in this post by considering only oracles which return Booleans. In this setting, there is a form of hypercomputation that is falsifiable, although this hypercomputation is less powerful than a halting oracle.

Define a binary Turing machine to be a machine that outputs a Boolean (0 or 1) whenever it halts. Each binary Turing machine either halts and outputs 0, halts and outputs 1, or never halts.

Define an arbitration oracle to be a function that takes as input a specification of a binary Turing machine, and always outputs a Boolean in response. This oracle must always return 0 if the machine eventually outputs 0, and must always return 1 if the machine eventually outputs 1; it may decide arbitrarily if the machine never halts. Note that this can be emulated using a halting oracle, and is actually less powerful. (This definition is inspired by previous work on reflective oracles.)
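
To fix ideas, here is a minimal sketch of the type signatures involved, in Python. The particular encoding of a machine specification as a step-bounded simulator is an assumption made for illustration, not part of the definition above.

```python
from typing import Callable, Optional

# Hypothetical encoding: a binary Turing machine is represented by a function
# that, given a step budget, returns 0 or 1 if the machine has halted with
# that output within the budget, and None if it has not halted yet.
MachineSpec = Callable[[int], Optional[int]]

# An arbitration oracle always returns a Boolean (0 or 1). Its contract: if
# the machine ever halts with output b, the oracle must return b; if the
# machine never halts, either answer is acceptable.
ArbitrationOracle = Callable[[MachineSpec], int]
```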

The hypothesis that a putative arbitration oracle (with the correct type signature, MachineSpec → Boolean) really is one is falsifiable. Here is why (a sketch of the corresponding check appears after the list):

  1. Suppose for some binary Turing machine M that halts and returns 1, the oracle O wrongly has O(M) = 0. Then this can be proven by exhibiting M along with the number of steps required for the machine to halt.
  2. Likewise if M halts and returns 0, and the oracle O wrongly has O(M) = 1.
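
As a concrete illustration of points 1 and 2, here is a minimal sketch of the falsification check and the accompanying search, using the hypothetical machine encoding from the earlier sketch; `enumerate_machine` is an assumed enumeration of machine specifications. The search halts exactly when the putative oracle is wrong about some machine that actually halts.

```python
from itertools import count

def check_falsification(oracle, machine, steps):
    """A claimed counterexample is (machine, steps): the machine halts within
    `steps` steps with some output b, while the putative oracle answers 1 - b."""
    result = machine(steps)             # 0, 1, or None (not yet halted)
    if result is None:
        return False                    # not a valid witness: machine has not halted yet
    return oracle(machine) != result    # valid witness iff the oracle disagrees

def search_for_falsification(oracle, enumerate_machine):
    """Dovetail over (machine index, step bound) pairs. This loop terminates
    iff the hypothesis "oracle is an arbitration oracle" is false."""
    for bound in count(1):
        for i in range(bound):
            if check_falsification(oracle, enumerate_machine(i), bound):
                return i, bound
```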

Since the property of some black box being an arbitration oracle is falsifiable, it remains only to show that there is no computable arbitration oracle. For this proof, assume (for the sake of contradiction) that O is a computable arbitration oracle.

Define a binary Turing machine N() := 1 - O(N). This definition requires quining, but this is acceptable for the usual reasons (e.g. by the recursion theorem). Note that N always halts, as O always halts. Therefore we must have N() = O(N). However, also N() = 1 - O(N), a contradiction (as O(N) is a Boolean).
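
The diagonal construction can be sketched directly, with Python's ability to let a function refer to itself standing in for the quining step; `oracle` here is any putative computable arbitration oracle.

```python
def diagonalize(oracle):
    """Build the diagonal machine N with N() = 1 - O(N); self-reference plays
    the role of quining."""
    def N(steps):
        # N ignores its step budget and halts immediately with 1 - O(N).
        return 1 - oracle(N)
    return N

# If `oracle` were a correct computable arbitration oracle, then for N above
# we would need oracle(N) == N(0) == 1 - oracle(N), which no Boolean satisfies.
# (In practice, a concrete oracle that simulates its argument would simply
# recurse forever here, which is another way of seeing that it cannot be a
# total computable arbitration oracle.)
```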

Therefore, there is no computable arbitration oracle.

Higher hypercomputation?

At this point, it is established that there is a form of hypercomputation (specifically, arbitration oracles) that is falsifiable. But, is this universal? That is, is it possible that higher forms of hypercomputation are falsifiable in the same setting?

We can note that it’s possible to use an arbitration oracle to construct a model of PA, one statement at a time. To do this, first note that for any statement, it is possible to construct a binary Turing machine that returns 1 if the statement is provable, 0 if it is disprovable, and never halts if neither is the case. So we can iterate through all PA statements, and use an arbitration oracle to commit to that statement being true or false, on the basis of provability/disprovability given previous commitments, in a way that ensures that commitments are never contradictory (as long as PA itself is consistent). This is essentially the same construction idea as in the Demski prior over logical theories.
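
A sketch of this construction, under the assumption of a hypothetical bounded proof-search primitive `proves(axioms, phi, steps)` (not a real library function) and the machine encoding used earlier:

```python
def extend_theory(arbitrate, sentences, proves):
    """Commit to each PA sentence in turn, consulting an arbitration oracle.

    `sentences` enumerates all PA sentences; `proves(axioms, phi, steps)` is a
    hypothetical bounded proof search over PA plus the extra axioms. Assuming
    PA is consistent, the resulting commitments never become contradictory.
    """
    commitments = []
    for phi in sentences:
        def machine(steps, phi=phi, axioms=tuple(commitments)):
            # Halts with 1 if phi becomes provable within the step budget,
            # with 0 if its negation does; otherwise it has not halted yet.
            if proves(axioms, phi, steps):
                return 1
            if proves(axioms, ("not", phi), steps):
                return 0
            return None
        # The oracle must answer correctly whenever phi is provable or
        # disprovable from the commitments so far; otherwise its answer is
        # arbitrary, and either commitment preserves consistency.
        commitments.append(phi if arbitrate(machine) else ("not", phi))
    return commitments
```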

Suppose there were some PA-definable property P that a putative oracle O (mapping naturals to Booleans) must have (e.g. the property of being a halting oracle, for some encoding of Turing machines as naturals). Then, conditional on the PA-consistency of the existence of an oracle with property P, we can use the above procedure to construct a model of PA + existence of O satisfying P (i.e. a theory that says what PA says and also contains a function symbol O that axiomatically satisfies P). For any PA-definable statement about this oracle, this procedure will, at some finite time, have made a commitment about this statement.

So, access to an arbitration oracle allows emulating any other PA-definable oracle, in a way that will not be falsified by PA. It follows that hypercomputation past the level of arbitration oracles is not falsifiable by a PA-reasoner who can access the oracle, as PA cannot rule out that it is actually looking at something produced by only arbitration-oracle levels of hypercomputation.

Moreover, giving the falsifier access to an arbitration oracle can’t increase the range of oracles that are falsifiable. This is because, for any oracle-property P, we may consider a corresponding property on an oracle-pair (which may be represented by a single oracle-property through interleaving), stating that the first oracle is an arbitration oracle and the second satisfies property P. This oracle-pair property is falsifiable iff the property P is falsifiable by a falsifier with access to an arbitration oracle: we may run a joint search for falsifications, simultaneously trying to prove that the first oracle isn’t an arbitration oracle and that the second oracle doesn’t satisfy P under the assumption that the first is an arbitration oracle. Since the oracle-pair property is PA-definable, it is emulable by a Turing machine with access to an arbitration oracle, so the pair property is unfalsifiable if it requires hypercomputation past the level of arbitration oracles. But this implies that the original oracle property P is unfalsifiable by a falsifier with access to an arbitration oracle, if P requires hypercomputation past the level of arbitration oracles.
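
For concreteness, the interleaving mentioned here can be thought of as packing two oracles on the naturals into a single one; a minimal sketch:

```python
def interleave(oracle_a, oracle_b):
    """Represent a pair of oracles (naturals -> Booleans) as a single oracle:
    even inputs query the first oracle, odd inputs query the second."""
    def combined(n):
        return oracle_a(n // 2) if n % 2 == 0 else oracle_b(n // 2)
    return combined
```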

So, arbitration oracles form a ceiling on what can be falsified unassisted, and also are unable to assist in falsifying higher levels of hypercomputation.

Conclusion

Given that arbitration oracles form a ceiling of computable falsifiability (in the setting considered here, which is distinct from the setting of the previous post), it may or may not be possible to define a logic that allows reasoning about levels of computation up to arbitration oracles, but which does not allow computation past arbitration oracles to be defined. Such a project could substantially clarify logical foundations for mathematics, computer science, and the empirical sciences.

[EDIT: cousin_it pointed out that Scott Aaronson’s consistent guessing problem is identical to the problem solved by arbitration oracles.]

On the falsifiability of hypercomputation

[ED NOTE: see Vanessa Kosoy’s comment here; this post assumes a setting in which the oracle may be assumed to return a standard natural.]

It is not immediately clear whether hypercomputers (i.e. objects that execute computations that Turing machines cannot) are even conceivable, hypothesizable, meaningful, clearly definable, and so on. They may be defined in the notation of Peano arithmetic or ZFC; however, this does not imply conceivability/hypothesizability/etc. For example, a formalist mathematician may believe that the continuum hypothesis does not have a meaningful truth value (as it is independent of ZFC), and likewise for some higher statements in the arithmetic hierarchy that are independent of Peano arithmetic and/or ZFC.

A famous and useful criterion of scientific hypotheses, proposed by Karl Popper, is that they are falsifiable. Universal laws (of the form “∀x. p(x)”) are falsifiable for testable p, as they can be proven false by exhibiting some x such that p(x) is false. In an oracle-free computational setting, the falsifiable hypotheses are exactly those in ∏₁ (i.e. of the form “∀n. p(n)” for natural n and primitive recursive p).
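
A ∏₁ hypothesis is falsified by exhibiting a counterexample, which a simple unbounded search will find exactly when one exists; a minimal sketch:

```python
from itertools import count

def falsify_universal(p):
    """Search for a counterexample to the universal law "for all n, p(n)".
    The loop halts exactly when the hypothesis is false; exhibiting the
    returned n refutes the law."""
    for n in count():
        if not p(n):
            return n
```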

However, ∏₁ hypotheses do not hypothesize hypercomputation; they hypothesize (computably checkable) universal laws of the naturals. To specify the falsifiability criterion for hypercomputers, we must introduce oracles.

Let a halting oracle be defined as a function O that maps Turing-machine specifications to Booleans and outputs “true” on exactly those Turing machines which eventually halt. Then, we can ask: is the hypothesis that “O is a halting oracle” falsifiable?

We can immediately see that, if O ever outputs “false” for a Turing machine which does eventually halt, it is possible to exhibit a proof of this, by exhibiting both the Turing machine and the number of steps it takes to halt. On the other hand, if O ever outputs “true” for a Turing machine which never halts, it is not in general possible to prove this; to check such a proof in general would require solving the halting problem, which is uncomputable.

Therefore, the hypothesis that “O is a halting oracle” is not falsifiable in a computational setting with O as an oracle.

However, there is a different notion of halting oracle whose definition is falsifiable. Let a constructive halting oracle be defined as a function O that maps Turing-machine specifications to elements of the set {∅} ∪ ℕ (i.e. either a natural number or null), returning ∅ on those Turing machines which never halt, and returning, on Turing machines that do halt, some natural such that the machine halts within that number of steps. This definition corresponds to the most natural definition of a halting oracle in Heyting arithmetic, a constructive variant of Peano arithmetic.

We can see that:

  1. If there exists a machine M such that O(M) = ∅ and M halts, it is possible to prove that O is not a constructive halting oracle, by exhibiting M and the time step on which M halts.
  2. If there exists a machine M such that O(M) ≠ ∅ and M does not halt by O(M) time steps, it is possible to prove that O is not a constructive halting oracle, by exhibiting M.

Therefore, the hypothesis “O is a constructive halting oracle” is computably falsifiable.
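
Here is a minimal sketch of the corresponding check, again representing a machine (hypothetically) as a step-bounded simulator that reports whether it has halted within a given budget; `oracle` returns either None (for ∅) or a claimed bound on the halting time.

```python
def check_constructive_falsification(oracle, machine, witness_steps):
    """Check a claimed refutation of "oracle is a constructive halting oracle".

    Case 1: the oracle answers None (claims the machine never halts), but the
    machine in fact halts within `witness_steps` steps.
    Case 2: the oracle answers a natural k, but the machine has not halted
    within k steps.
    """
    claim = oracle(machine)
    if claim is None:
        return machine(witness_steps) is not None   # case 1
    return machine(claim) is None                   # case 2
```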

What about higher-level constructive halting oracles, corresponding to Σₙ in the Heyting arithmetic interpretation of the arithmetic hierarchy? The validity of a constructive Σₙ-oracle is, indeed, falsifiable for arbitrary n, as shown in the appendix.

Therefore, the hypothesis that some black-box is a (higher-level) constructive halting oracle is falsifiable, in an idealized computational setting. It is, then, meaningful to speak of some black-box being a hypercomputer or not, on an account of meaningfulness at least as expansive as the falsifiability criterion.

This provides a kind of bridge between empiricism and rationalism. While rationalism may reason directly about the logical implications of halting oracles, empiricism is more skeptical about the meaningfulness of the hypothesis. However, by the argument given, an empiricism that accepts the meaningfulness of falsifiable statements must accept the meaningfulness of the hypothesis that some black-box O is a constructive halting oracle.

I think this is a fairly powerful argument that hypercomputation should not be ruled out a priori as “meaningless”, and should instead be considered a viable hypothesis a priori, even if it is not likely given other evidence about physics, anthropics, etc.

Appendix: higher-level halting oracles

For simplicity, we will now reason directly about the Heyting arithmetic hierarchy rather than dealing with Turing machines, though these are logically equivalent. Σₙ₊₁ propositions can be written as ∃x₁∀y₁…∃xₙ∀yₙ.f(x₁, y₁, …, xₙ, yₙ) for some primitive-recursive f. The negation of this proposition (which is in ∏ₙ₊₁) is of the form ∀x₁∃y₁…∀xₙ∃yₙ.¬f(x₁, y₁, …, xₙ, yₙ).

An oracle O constructively deciding Σₙ₊₁ is most naturally interpreted as a function from a specification of f to (∃x₁∀y₁…∃xₙ∀yₙ.f(x₁, y₁, …, xₙ, yₙ)) ∨ (∀x₁∃y₁…∀xₙ∃yₙ.¬f(x₁, y₁, …, xₙ, yₙ)); that is, it decides whether the Σₙ₊₁ proposition is true or its negated ∏ₙ₊₁ counterpart is, and provides a witness either way.

What is a natural interpretation of the witness? A witness for Σₙ₊₁ maps y₁…yₙ to x₁…xₙ (and asserts f(x₁, y₁, …, xₙ, yₙ)), while a witness for ∏ₙ₊₁ maps x₁…xₙ to y₁…yₙ (and asserts ¬f(x₁, y₁, …, xₙ, yₙ)). (Note that the witness must satisfy regularity conditions, e.g. the x₁ returned by a Σₙ₊₁ witness must not depend on the witness’s input; we assume that even invalid witnesses still satisfy these regularity conditions, as it is easy to ensure they are satisfied by specifying the right witness type.)
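
A rough type-level sketch of these witnesses in Python (purely illustrative; the encodings are assumptions, and the regularity conditions are only recorded as comments):

```python
from typing import Callable, Sequence, Tuple

# A Sigma-witness maps (y_1, ..., y_n) to (x_1, ..., x_n) and asserts
# f(x_1, y_1, ..., x_n, y_n); regularity: x_i may depend only on y_1..y_{i-1}.
SigmaWitness = Callable[[Sequence[int]], Sequence[int]]

# A Pi-witness maps (x_1, ..., x_n) to (y_1, ..., y_n) and asserts the
# negation; regularity: y_i may depend only on x_1..x_i.
PiWitness = Callable[[Sequence[int]], Sequence[int]]

# The oracle's answer: which side it claims ("sigma" or "pi"), plus the witness.
OracleAnswer = Tuple[str, Callable[[Sequence[int]], Sequence[int]]]
```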

Now, we can ask four questions:

  1. Fix f; suppose the Σₙ₊₁ proposition is true, and an invalid Σₙ₊₁-witness is returned; then, is it possible to prove the oracle false?
  2. Fix f; suppose the ∏ₙ₊₁ proposition is true, and an invalid ∏ₙ₊₁-witness is returned; then, is it possible to prove the oracle false?
  3. Fix f; suppose the Σₙ₊₁ proposition is true, and a ∏ₙ₊₁-witness is returned; then, is it possible to prove the oracle false?
  4. Fix f; suppose the ∏ₙ₊₁ proposition is true, and a Σₙ₊₁-witness is returned; then, is it possible to prove the oracle false?

These conditions are jointly necessary and sufficient for O’s correctness to be falsifiable, because O will fall under one of the above four cases for some f iff it is invalid.

First let’s consider question 1. Since the witness (call it g) is invalid, it maps some y₁…yₙ to some x₁…xₙ such that ¬f(x₁, y₁, …, xₙ, yₙ). We may thus prove the witness’s invalidity by exhibiting y₁…yₙ. So the answer is yes, and similarly for question 2.

Now for question 3. Let the witness be g. Since the Σₙ₊₁ proposition is true, there is some x₁ for which ∀y₁…∃xₙ∀yₙ.f(x₁, y₁, …, xₙ, yₙ). Now, we may feed x₁ into the witness g to get a y₁ for which the oracle asserts ∀x₂∃y₂…∀xₙ∃yₙ.¬f(x₁, y₁, …, xₙ, yₙ). (Note that g’s returned y₁ must not depend on x’s after x₁, by regularity, so we may set the rest of the x’s to 0.)

We proceed recursively, yielding x₁…xₙ and y₁…yₙ for which f(x₁, y₁, …, xₙ, yₙ), and for which the oracle asserts ¬f(x₁, y₁, …, xₙ, yₙ), hence proving the oracle invalid (through exhibiting these x’s and y’s). So we may answer question 3 with a “yes”.

For question 4, the proof proceeds similarly, except we start by getting x₁ from the witness. The answer is, then, also “yes”.
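
A sketch of the recursive extraction for question 3 (question 4 is symmetric). Here `g` is the returned ∏ₙ₊₁-witness, and `choose_x(xs, ys)` is a hypothetical stand-in for the fact that, the Σₙ₊₁ side being true, a suitable next x always exists given the play so far; this choice is not in general computable, but the resulting tuple is a computably checkable refutation once found.

```python
def refute_pi_witness(g, choose_x, f, n):
    """Extract x_1..x_n and y_1..y_n such that f holds of the interleaved
    tuple while the Pi-witness g asserts its negation."""
    xs, ys = [], []
    for i in range(n):
        xs.append(choose_x(xs, ys))           # truth of the Sigma side supplies x_{i+1}
        padded = xs + [0] * (n - len(xs))     # later x's don't matter, by regularity
        ys.append(g(padded)[i])               # read off y_{i+1} from the witness
    assert f(xs, ys)                          # i.e. f(x_1, y_1, ..., x_n, y_n) holds
    return xs, ys                             # exhibiting these falsifies the oracle
```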

Therefore, O’s validity as a constructive Σₙ₊₁-oracle is falsifiable.

Philosophical self-ratification

“Ratification” is defined as “the act or process of ratifying something (such as a treaty or amendment): formal confirmation or sanction”. Self-ratification, then, is assigning validity to oneself. (My use of the term “self-ratification” follows philosophical usage in the analysis of causal decision theory.)

At first this seems like a trivial condition. It is, indeed, easy to write silly sentences such as “This sentence is true and also the sky is green”, which are self-ratifying. However, self-ratification combined with other ontological and epistemic coherence conditions is a much less trivial condition, which I believe to be quite important for philosophical theory-development and criticism.

I will walk through some examples.

Causal decision theory

Formal studies of causal decision theory run into a problem with self-ratification. Suppose some agent A is deciding between two actions, L and R. Suppose the agent may randomize their action, and that their payoff equals their believed probability that they take the action other than the one they actually take. (For example, if the agent takes action L with 40% probability and actually takes action R, the agent’s payoff is 0.4)

If the agent believes they will take action L with 30% probability, then, if they are a causal decision theorist, they will take action L with 100% probability, because that leads to 0.7 payoff instead of 0.3 payoff. But, if they do so, this invalidates their original belief that they will take action L with 30% probability. Thus, the agent’s belief that they will take action L with 30% probability is not self-ratifying: the fact of the agent having this belief leads to the conclusion that they take action L with 100% probability, not 30%, which contradicts the original belief.

The only self-ratifying belief is that the agent will take each action with 50% probability; this way, both actions yield equal expected utility, so a policy of 50/50 randomization is compatible with causal decision theory, and this policy ratifies the original belief.
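
A minimal numeric sketch of why 50/50 is the unique self-ratifying belief, assuming the payoff structure of the example above:

```python
def best_response(p_L):
    """Given a believed probability p_L of taking action L, return the causal
    decision theorist's best-response probability of taking L. The payoff for
    an action equals the believed probability of the *other* action."""
    payoff_L = 1 - p_L   # believed probability of taking R
    payoff_R = p_L       # believed probability of taking L
    if payoff_L > payoff_R:
        return 1.0
    if payoff_L < payoff_R:
        return 0.0
    return p_L           # indifferent: any mixture, including p_L, is a best response

# A belief is self-ratifying when it is a fixed point of best_response:
# best_response(0.3) == 1.0, so 30% is not self-ratifying;
# best_response(0.5) == 0.5, so 50/50 is.
```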

Genetic optimism

(This example is due to Robin Hanson’s “Uncommon Priors Require Origin Disputes”.)

Suppose Oscar and Peter are brothers. Oscar is more optimistic than Peter. Oscar comes to believe that the reason he is more optimistic is due to inheriting a gene that inflates beliefs about positive outcomes, whereas Peter did not inherit this same gene.

Oscar’s belief-set is now not self-ratifying. He believes that the cause of his belief that things will go well is a random gene, not correlation with reality. This means that, according to his own beliefs, his optimism is untrustworthy.

Low-power psychological theories

Suppose a psychological researcher, Beth, believes that humans are reinforcement-learning stimulus-response machines, and that such machines are incapable of reasoning about representations of the world. She presents a logical specification of stimulus-response machines that she believes applies to all humans. (For similar real-world theories, see: Behaviorism, Associationism, Perceptual Control Theory)

However, a logical implication of Beth’s beliefs is that she herself is a stimulus-response machine, and incapable of reasoning about world-representations. Thus, she cannot consistently believe that her specification of stimulus-response machines is likely to be an accurate, logically coherent representation of humans. Her belief-set, then, fails to self-ratify, on the basis that it assigns to herself a level of cognitive power insufficient to come to know that her belief-set is true.

Moral realism and value drift

Suppose a moral theorist, Valerie, believes:

  • Societies’ moral beliefs across history follow a random walk, not directed anywhere.
  • Her own moral beliefs, for the most part, follow society’s beliefs.
  • There is a true morality which is stable and unchanging.
  • Almost all historical societies’ moral beliefs are terribly, terribly false.

From these it follows that, absent further evidence, the moral beliefs of Valerie’s society should not be expected to be more accurate (according to estimation of the objective morality that Valerie believes exists) than the average moral beliefs across historical societies, since there is no moral progress in expectation. However, this implies that the moral beliefs of her own society are likely to be terribly, terribly false. Therefore, Valerie’s adoption of her society’s beliefs would imply that her own moral beliefs are likely to be terribly, terribly false: a failure of self-ratification.

Trust without honesty

Suppose Larry is a blogger who reads other blogs. Suppose Larry believes:

  • The things he reads in other blogs are, for the most part, true (~90% likely to be correct).
  • He’s pretty much the same as other bloggers; there is a great degree of subjunctive dependence between his own behavior and other bloggers’ behaviors (including their past behaviors).

Due to the first belief, he concludes that lying in his own blog is fine, as there’s enough honesty out there that some additional lies won’t pose a large problem. So he starts believing that he will lie and therefore his own blog will contain mostly falsehoods (~90%).

However, an implication of his similarity to other bloggers is that other bloggers will reason similarly, and lie in their own blog posts. Since this applies to past behavior as well, a further implication is that the things he reads in other blogs are, for the most part, false. Thus the belief-set, and his argument for lying, fail to self-ratify.

(I presented a similar example in “Is Requires Ought”.)

Mental nonrealism

Suppose Phyllis believes that the physical world exists, but that minds don’t exist. That is, there are no entities that are capable of observation, thought, etc. (This is a rather simple, naive formulation of eliminative materialism.)

Her reason for this belief is that she has studied physics, and believes that physics is sufficient to explain everything, such that there is no reason to additionally posit the existence of minds.

However, if she were arguing for the accuracy of her beliefs about physics, she would have difficulty arguing except in terms of e.g. physicists making and communicating observations, theorists having logical thoughts, her reading and understanding physics books, etc.

Thus, her belief that minds don’t exist fails to self-ratify. It would imply that she lacks evidential basis for belief in the accuracy of physics. (On the other hand, she may be able to make up for this by coming up with a non-mentalistic account for how physics can come to be “known”, though this is difficult, as it is not clear what there is that could possibly have knowledge. Additionally, she could believe that minds exist but are somehow “not fundamental”, in that they are determined by physics; however, specifying how they are determined by physics requires assuming they exist at all and have properties in the first place.)

Conclusion

I hope the basic picture is clear by now. Agents have beliefs, and some of these beliefs imply beliefs about the trustworthiness of their own beliefs, primarily due to the historical origins of the beliefs (e.g. psychology, society, history). When the belief-set implies that it itself is untrustworthy (being likely to be wrong), there is a failure of self-ratification. Thus, self-ratification, rather than being a trivial condition, is quite nontrivial when combined with other coherence conditions.

Why would self-ratification be important? Simply put, a non-self-ratifying belief set cannot be trustworthy; if it were trustworthy then it would be untrustworthy, which shows untrustworthiness by contradiction. Thus, self-ratification points to a rich set of philosophical coherence conditions that may be neglected if one is only paying attention to surface-level features such as logical consistency.

Self-ratification as a philosophical coherence condition points at naturalized epistemology being an essential philosophical achievement. While epistemology may possibly start non-naturalized, as it gains self-consciousness of the fact of its embeddedness in a natural world, such self-consciousness imposes additional self-ratification constraints.

Using self-ratification in practice often requires flips between treating one’s self as a subject and as an object. This kind of dual self-consciousness is quite interesting and is a rich source of updates to both self-as-subject beliefs and self-as-object beliefs.

Taking coherence conditions including self-ratification to be the only objective conditions of epistemic justification is a coherentist theory of justification; note that coherentists need not believe that all “justified” belief-sets are likely to be true (and indeed, such a belief would be difficult to hold given the possibility of coherent belief-sets very different from one’s own and from each other).

Appendix: Proof by contradiction is consistent with self-ratification

There is a possible misinterpretation of self-ratification that says: “You cannot assume a belief to be true in the course of refuting it; the assumption would then fail to self-ratify”.

Classical logic permits proof-by-contradiction, indicating that this interpretation is wrong. The thing that a proof by contradiction does is show that some other belief-set (not the belief-set held by the arguer) fails to self-ratify (and indeed, self-invalidates). If the arguer actually believed in the belief-set that they are showing to be self-invalidating, then, indeed, that would be a self-ratification problem for the arguer. However, the arguer’s belief is that some proposition P implies not-P, not that P is true, so this does not present a self-ratification problem.