Modeling naturalized decision problems in linear logic

The following is a model of a simple decision problem (namely, the 5 and 10 problem) in linear logic. Basic familiarity with linear logic is assumed (enough to know what it means to say linear logic is a resource logic), although knowing all the operators isn’t necessary.

The 5 and 10 problem is, simply, a choice between taking a 5 dollar bill and a 10 dollar bill, with the 10 dollar bill valued more highly.

While the problem itself is trivial, the main theoretical issue is in modeling counterfactuals. If you took the 10 dollar bill, what would have happened if you had taken the 5 dollar bill? If your source code is fixed, then there isn’t a logically coherent possible world where you took the 5 dollar bill.

I became interested in using linear logic to model decision problems due to noticing a structural similarity between linear logic and the real world, namely irreversibility. A vending machine may, in linear logic, be represented as a proposition “$1 → CandyBar”, encoding the fact that $1 may be exchanged for a candy bar, being consumed in the process. Since the $1 is consumed, the operation is irreversible. Additionally, there may be multiple options offered, e.g. “$1 → Gumball”, such that only one option may be taken. (Note that I am using “→” as notation for linear implication.)

This is a good fit for real-world decision problems, where e.g. taking the $10 bill precludes also taking the $5 bill. Modeling decision problems using linear logic may, then, yield insights regarding the sense in which counterfactuals do or don’t exist.

First try: just the decision problem

As a first try, let’s simply try to translate the logic of the 5 and 10 situation into linear logic. We assume logical atoms named “Start”, “End”, “$5”, and “$10”. Respectively, these represent: the state of being at the start of the problem, the state of being at the end of the problem, having $5, and having $10.

To represent that we have the option of taking either bill, we assume the following implications:

TakeFive : Start → End ⊗ $5

TakeTen : Start → End ⊗ $10

The “⊗” operator can be read as “and” in the sense of “I have a book and some cheese on the table”; it combines multiple resources into a single linear proposition.

So, the above implications state that it is possible, starting from the start state, to end up in the end state, yielding $5 if you took the five dollar bill, and $10 if you took the 10 dollar bill.

The agent’s goal is to prove “Start → End ⊗ $X”, for X as high as possible. Clearly, “TakeTen” is a solution for X = 10. Assuming the logic is consistent, no better proof is possible. By the Curry-Howard isomorphism, the proof represents a computational strategy for acting in the world, namely, taking the $10 bill.
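
The resource-consumption reading can be sketched in ordinary code. Below is a minimal, illustrative model (not a proof checker): linear propositions are entries in a multiset, and TakeFive/TakeTen each consume “Start” exactly once, so only one of them can ever fire. The State class and its method names are inventions for this sketch, not standard notation.

```python
# Illustrative sketch: linear propositions as a multiset of resources.
# A linear implication consumes its antecedent; a spent resource is gone.
from collections import Counter

class State:
    def __init__(self, *resources):
        self.resources = Counter(resources)

    def apply(self, consumed, produced):
        # If the antecedent is absent (already consumed), the rule cannot fire.
        if not self.resources[consumed]:
            raise ValueError(f"resource {consumed!r} not available")
        self.resources[consumed] -= 1
        self.resources.update(produced)

s = State("Start")
s.apply("Start", ["End", "$10"])        # TakeTen : Start -> End ⊗ $10
assert s.resources["$10"] == 1

# The other option is now gone: Start was consumed, so TakeFive fails.
try:
    s.apply("Start", ["End", "$5"])     # TakeFive : Start -> End ⊗ $5
    blocked = False
except ValueError:
    blocked = True
assert blocked
```

The irreversibility is exactly the vending-machine intuition: once Start is spent on TakeTen, no sequence of further rule applications recovers it.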

Second try: source code determining action

The above analysis is utterly trivial. What makes the 5 and 10 problem nontrivial is naturalizing it, to the point where the agent is a causal entity similar to the environment. One way to model the agent being a causal entity is to assume that it has source code.

Let “M” be a Turing machine specification. Let “Ret(M, x)” represent the proposition that M returns x. Note that, if M never halts, then Ret(M, x) is not true for any x.

How do we model the fact that the agent’s action is produced by a computer program? What we would like to be able to assume is that the agent’s action is equal to the output of some machine M. To do this, we need to augment the TakeFive/TakeTen actions to yield additional data:

TakeFive : Start → End ⊗ $5 ⊗ ITookFive

TakeTen : Start → End ⊗ $10 ⊗ ITookTen

The ITookFive / ITookTen propositions are a kind of token assuring that the agent (“I”) took five or ten. (Both of these are interpreted as classical propositions, so they may be duplicated or deleted freely).

How do we relate these propositions to the source code, M? We will say that M must agree with whatever action the agent took:

MachineFive : ITookFive → Ret(M, “Five”)

MachineTen : ITookTen → Ret(M, “Ten”)

These operations yield, from the fact that “I” have taken five or ten, that the source code “M” eventually returns a string identical with this action. Thus, these encode the assumption that “my source code is M”, in the sense that my action always agrees with M’s.

Operationally speaking, after the agent has taken 5 or 10, the agent can be assured of the mathematical fact that M returns the same action. (This is relevant in more complex decision problems, such as twin prisoner’s dilemma, where the agent’s utility depends on mathematical facts about what values different machines return.)

Importantly, the agent can’t use MachineFive/MachineTen to know what action M takes before actually taking the action. Otherwise, the agent could take the opposite of the action they know they will take, causing a logical inconsistency. The above construction would not work if the machine were only run for a finite number of steps before being forced to return an answer; that would lead to the agent being able to know what action it will take, by running M for that finite number of steps.

This model naturally handles cases where M never halts; if the agent never executes either TakeFive or TakeTen, then it can never execute either MachineFive or MachineTen, and so cannot be assured of Ret(M, x) for any x; indeed, if the agent never takes any action, then Ret(M, x) isn’t true for any x, as that would imply that the agent eventually takes action x.
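
The ordering constraint, that Ret(M, x) only becomes available after acting, can be sketched as follows. The encoding (a multiset for linear resources, a set for freely duplicable classical facts) is an illustrative assumption for this sketch, not part of the formalism.

```python
# Sketch of the second-try model: taking an action consumes Start and
# yields a classical token (ITookTen); only while holding the token can
# the agent derive the corresponding fact about its source code M.
from collections import Counter

state = Counter({"Start": 1})
classical_facts = set()   # classical propositions: freely duplicable/deletable

def take_ten():
    # TakeTen : Start -> End ⊗ $10 ⊗ ITookTen
    assert state["Start"], "Start already consumed"
    state["Start"] -= 1
    state.update({"End": 1, "$10": 1})
    classical_facts.add("ITookTen")

def machine_ten():
    # MachineTen : ITookTen -> Ret(M, "Ten"); usable only after acting.
    assert "ITookTen" in classical_facts, "cannot learn Ret(M, _) before acting"
    classical_facts.add('Ret(M, "Ten")')

take_ten()
machine_ten()
assert 'Ret(M, "Ten")' in classical_facts
```

Calling machine_ten() before take_ten() fails the token check, mirroring the point above: the agent cannot use the fact of its source code being M until it has actually acted.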

Interpreting the counterfactuals

At this point, it’s worth discussing the sense in which counterfactuals do or do not exist. Let’s first discuss the simpler case, where there is no assumption about source code.

First, from the perspective of the logic itself, only one of TakeFive or TakeTen may be evaluated. There cannot be both a fact of the matter about what happens if the agent takes five, and a fact of the matter about what happens if the agent takes ten. This is because even defining both facts at once requires re-using the Start proposition.

So, from the perspective of the logic, there aren’t counterfactuals; only one operation is actually run, and what “would have happened” if the other operation were run is undefinable.

On the other hand, there is an important sense in which the proof system contains counterfactuals. In constructing a linear logic proof, different choices may be made. Given “Start” as an assumption, I may prove “End ⊗ $5” by executing TakeFive, or “End ⊗ $10” by executing TakeTen, but not both.

Proof systems are, in general, systems of rules for constructing proofs, which leave quite a lot of freedom in which proofs are constructed. By the Curry-Howard isomorphism, the freedom in how the proofs are constructed corresponds to freedom in how the agent behaves in the real world; using TakeFive in a proof has the effect, if executed, of actually (irreversibly) taking the $5 bill.

So, we can say, by reasoning about the proof system, that if TakeFive is run, then $5 will be yielded, and if TakeTen is run, then $10 will be yielded, and only one of these may be run.

The logic itself says there can’t be a fact of the matter about both what happens if 5 is taken and if 10 is taken. On the other hand, the proof system says that both proofs that get $5 by taking 5, and proofs that get $10 by taking 10, are possible.

How to interpret this difference? One way is by asserting that the logic is about the territory, while the proof system is about the map; so, counterfactuals are represented in the map, even though the map itself asserts that there is only a singular territory.

And, importantly, the map doesn’t represent the entire territory; it’s a proof system for reasoning about the territory, not the territory itself. The map may, thus, be “looser” than the territory, allowing more possibilities than could possibly be actually realized.

What prevents the map from drawing out logical implications to the point where it becomes clear that only one action may possibly be taken? Given the second-try setup, the agent simply cannot use the fact of their source code being M, until actually taking the action; thus, no amount of drawing implications can conclude anything about the relationship between M and the agent’s action. In addition to this, reasoning about M itself becomes harder the longer M runs, i.e. the longer the agent is waiting to make the decision; so, simply reasoning about the map, without taking actions, need not conclude anything about which action will be taken, leaving both possibilities live until one is selected.

Conclusion

This approach aligns significantly with the less-formal descriptions given of subjective implication decision theory and counterfactual nonrealism. Counterfactuals aren’t real, in the sense of being definable after the relevant action has been taken; rather, an agent in a state of uncertainty about which action it will take may consider multiple possibilities as freely selectable, even if they are assured that their selection will be equal to the output of some computer program.

The linear logic formalization increases my confidence in this approach, by providing a very precise notion of the sense in which the counterfactuals do and don’t exist, which would be hard to make precise without similar formalism.

I am, at this point, less worried about the problems with counterfactual nonrealism (such as global accounting) than I was when I wrote the post, and more worried about the problems of policy-dependent source code (which requires the environment to be an ensemble of deterministic universes, rather than a single one), such that I have updated towards counterfactual nonrealism as a result of this analysis, although I am still not confident.

Overall, I find linear logic quite promising for modeling embedded decision problems from the perspective of an embedded agent, as it builds critical facts such as non-reversibility into the logic itself.

Appendix: spurious counterfactuals

The following describes the problem of spurious counterfactuals in relation to the model.

Assume the second-try setup. Suppose the agent becomes assured that Ret(M, “Five”); that is, that M returns the action “Five”. From this, it is provable that the agent may, given Start, attain the linear logic proposition 0, by taking action “Ten” and then running MachineTen to get Ret(M, “Ten”), which yields inconsistency with Ret(M, “Five”). From 0, anything follows, e.g. $1000000, by the principle of explosion.

If the agent is maximizing guaranteed utility, then they will take the $10 bill, to be assured of the highest utility possible. So, it cannot be the case that the agent can be correctly assured that they will take action five, as that would lead to them taking a different action.

If, on the other hand, the agent would have provably taken the $5 bill upon receiving the assurance (say, because they notice that taking the $10 bill could result in the worst possible utility), then there is a potential issue with this assurance being a self-fulfilling prophecy. But, if the agent is constructing proofs (plans for action) so as to maximize guaranteed utility, this will not occur.
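
As a toy illustration of this last point: an agent that ranks the concrete plans by guaranteed utility takes the $10 bill regardless of any circulating assurance, so an assurance that it takes “Five” can never be correct, let alone self-fulfilling. The plan names and payoffs below are illustrative assumptions.

```python
# A guaranteed-utility maximizer over concrete plans (illustrative sketch).
def choose(plans):
    # Pick the plan with the highest guaranteed payoff.
    return max(plans, key=lambda p: plans[p])

plans = {"Five": 5, "Ten": 10}
assert choose(plans) == "Ten"   # so an assurance Ret(M, "Five") would be false
```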

This solution is essentially the same as the one given in the paper on UDT with a known search order.

Topological metaphysics: relating point-set topology and locale theory

The following is an informal exposition of some mathematical concepts from Topology via Logic, with special attention to philosophical implications. Those seeking more technical detail should simply read the book.

There are, roughly, two ways of doing topology:

  • Point-set topology: Start with a set of points. Consider a topology as a set of subsets of these points which are “open”, where open sets must satisfy some laws.
  • Locale theory: Start with a set of opens (similar to propositions), which are closed under some logical operators (especially “and” and “or”), and satisfy logical relations.

What laws are satisfied?

  • For point-set topology: The empty set and the full set must both be open; finite intersections and infinite unions of opens must be open.
  • For locale theory: “True” and “false” must be opens; the opens must be closed under finite “and” and infinite “or”; and some logical equivalences must be satisfied, such that “and” and “or” work as expected.

Roughly, open sets and opens both correspond to verifiable propositions. If X and Y are both verifiable, then both “X or Y” and “X and Y” are verifiable; and, indeed, even countably infinite disjunctions of verifiable statements are verifiable, by exhibiting the particular statement in the disjunction that is verified as true.

What’s the philosophical interpretation of the difference between point-set topology and locale theory, then?

  • Point-set topology corresponds to the theory of possible worlds. There is a “real state of affairs”, which can be partially known about. Open sets are “events” that are potentially observable (verifiable). Ontology comes before epistemology. Possible worlds are associated with classical logic and classical probability/utility theory.
  • Locale theory corresponds to the theory of situation semantics. There are facts that are true in a particular situation, which have logical relations with each other. The first three lines of Wittgenstein’s Tractatus Logico-Philosophicus are: “The world is everything that is the case. / The world is the totality of facts, not of things. / The world is determined by the facts, and by these being all the facts.” Epistemology comes before ontology. Situation semantics is associated with intuitionistic logic and Jeffrey-Bolker utility theory (recently discussed by Abram Demski).

Thus, they correspond to fairly different metaphysics. Can these different metaphysics be converted to each other?

  • Converting from point-set topology to locale theory is easy. The opens are, simply, the open sets; their logical relations (and/or) are determined by set operations (intersection/union). They automatically satisfy the required laws.
  • To convert from locale theory to point-set topology, construct possible worlds as sets of opens (which must be logically coherent, e.g. the set of opens can’t include “A and B” without including “A”), which are interpreted as the set of opens that are true of that possible world. The open sets of the topology correspond with the opens, as sets of possible worlds which contain the open.
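
For a tiny finite example, this construction can be run directly: take the opens of a three-point topology, forget the points, and recover them as logically coherent sets of opens (completely prime filters: upward closed, closed under finite meets, and containing a joinand of any join they contain). The helper names below are ours, not standard notation.

```python
# Locale -> points sketch: recover the points of a topology on {0, 1, 2}
# from its opens alone, as completely prime filters of opens.
from itertools import chain, combinations

# Opens of a topology on {0, 1, 2}, as frozensets (meet = &, join = |).
opens = [frozenset(s) for s in [(), (0,), (0, 1), (0, 2), (0, 1, 2)]]

def is_point(F):
    F = set(F)
    # A coherent world makes "true" true and "false" false.
    if frozenset() in F or frozenset({0, 1, 2}) not in F:
        return False
    for a in F:                       # upward closed
        for b in opens:
            if a <= b and b not in F:
                return False
    for a in F:                       # closed under finite meets
        for b in F:
            if (a & b) not in F:
                return False
    for a in F:                       # completely prime: a join is in F
        parts = [b for b in opens if b < a]   # only via one of its joinands
        if parts and frozenset().union(*parts) == a and not any(b in F for b in parts):
            return False
    return True

candidates = chain.from_iterable(combinations(opens, r) for r in range(len(opens) + 1))
points = [set(F) for F in candidates if is_point(F)]
assert len(points) == 3   # one recovered filter per point of the original space
```

The three recovered filters are exactly the sets {opens containing x} for x = 0, 1, 2: the possible worlds are reconstructed from the logic of the opens alone.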

From assumptions about possible worlds and possible observations of it, it is possible to derive a logic of observations; from assumptions about the logical relations of different propositions, it is possible to consider a set of possible worlds and interpretations of the propositions as world-properties.

Metaphysically, we can consider point-set topology as ontology-first, and locale theory as epistemology-first. Point-set topology starts with possible worlds, corresponding to Kantian noumena; locale theory starts with verifiable propositions, corresponding to Kantian phenomena.

While the interpretation of a given point-set topology as a locale is trivial, the interpretation of a locale theory as a point-set topology is less so. What this construction yields is a way of getting from observations to possible worlds. From the set of things that can be known (and knowable logical relations between these knowables), it is possible to conjecture a consistent set of possible worlds and ways those knowables relate to the possible worlds.

Of course, the true possible worlds may be finer-grained than this consistent set; however, it cannot be coarser-grained, or else the same possible world would result in different observations. No finer potentially-observable (verifiable or falsifiable) distinctions may be made between possible worlds than the ones yielded by this transformation; making finer distinctions risks positing unreferenceable entities in a self-defeating manner.

How much extra ontological reach does this transformation yield? If the locale has a countable basis, then the point-set topology may have an uncountable point-set (specifically, of the same cardinality as the reals). The continuous can, then, be constructed from the discrete, as the underlying continuous state of affairs that could generate any given possibly-infinite set of discrete observations.

In particular, the reals may be constructed from a locale based on open intervals whose beginning/end are rational numbers. That is: a real r may be represented as a set of (a, b) pairs where a and b are rational, and a < r < b. The locale whose basis is rational-delimited open intervals (whose elements are countable unions of such open intervals, and which specifies logical relationships between them, e.g. conjunction) yields the point-set topology of the reals. (Note that, although including all countable unions of basis elements would make the locale uncountable, it is possible to weaken the notion of locale to only require unions of recursively enumerable sets, which preserves countability.)
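
This representation can be made concrete with exact rational arithmetic: a real such as √2 is encoded not as a numeric value but as a membership test on rational-delimited intervals (a, b). The function below is an illustrative sketch of that encoding, using only rational comparisons.

```python
# Represent sqrt(2) by the set of rational open intervals containing it.
from fractions import Fraction as Q

def contains_sqrt2(a, b):
    # (a, b) contains sqrt(2) iff a < sqrt(2) < b, i.e. (a < 0 or a^2 < 2)
    # and (b > 0 and b^2 > 2); exact rational arithmetic, no floats.
    lower_ok = a < 0 or a * a < 2
    upper_ok = b > 0 and b * b > 2
    return lower_ok and upper_ok

assert contains_sqrt2(Q(1), Q(3, 2))        # 1 < sqrt(2) < 3/2
assert not contains_sqrt2(Q(3, 2), Q(2))    # sqrt(2) < 3/2

# Narrowing rational intervals pin down sqrt(2) to any desired precision:
assert contains_sqrt2(Q(141421, 100000), Q(141422, 100000))
```

An irrational point of the continuum is thus fully determined by a countable family of discrete, verifiable facts, which is exactly the sense in which the continuous is constructed from the discrete.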

If metaphysics may be defined as the general framework bridging between ontology and epistemology, then the conversions discussed provide a metaphysics: a way of relating that-which-could-be to that-which-can-be-known.

I think this relationship is quite interesting and clarifying. I find it useful in my own present philosophical project, in terms of relating subject-centered epistemology to possible centered worlds. Ontology can reach further than epistemology, and topology provides mathematical frameworks for modeling this.

That this construction yields continuous from discrete is an added bonus, which should be quite helpful in clarifying the relation between the mental and physical. Mental phenomena must be at least partially discrete for logical epistemology to be applicable; meanwhile, physical theories including Newtonian mechanics and standard quantum theory posit that physical reality is continuous, consisting of particle positions or a wave function. Thus, relating discrete epistemology to continuous ontology is directly relevant to philosophy of science and theory of mind.

Two Alternatives to Logical Counterfactuals

The following is a critique of the idea of logical counterfactuals. The idea of logical counterfactuals has appeared in previous agent foundations research, especially at MIRI. “Impossible possible worlds” have been considered elsewhere in the literature; see the SEP article on impossible worlds for a summary.

I will start by motivating the problem, which also gives an account for what a logical counterfactual is meant to be.

Suppose you learn about physics and find that you are a robot. You learn that your source code is “A”. You also believe that you have free will; in particular, you may decide to take either action X or action Y. In fact, you take action X. Later, you simulate “A” and find, unsurprisingly, that when you give it the observations you saw up to deciding to take action X or Y, it outputs action X. However, you, at the time, had the sense that you could have taken action Y instead. You want to be consistent with your past self, so you want to, at this later time, believe that you could have taken action Y at the time. If you could have taken Y, then you do take Y in some possible world (which still satisfies the same laws of physics). In this possible world, it is the case that “A” returns Y upon being given those same observations. But, the output of “A” when given those observations is a fixed computation, so you now need to reason about a possible world that is logically incoherent, given your knowledge that “A” in fact returns X. This possible world is, then, a logical counterfactual: a “possible world” that is logically incoherent.

To summarize: a logical counterfactual is a notion of “what would have happened” had you taken a different action after seeing your source code, and in that “what would have happened”, the source code must output a different action than what you actually took; hence, this “what would have happened” world is logically incoherent.

It is easy to see that this idea of logical counterfactuals is unsatisfactory. For one, no good account of them has yet been given. For two, there is a sense in which no account could be given; reasoning about logically incoherent worlds can only be so extensive before running into logical contradiction.

To extensively refute the idea, it is necessary to provide an alternative account of the motivating problem(s) which dispenses with the idea. Even if logical counterfactuals are unsatisfactory, the motivating problem(s) remain.

I now present two alternative accounts: counterfactual nonrealism, and policy-dependent source code.

Counterfactual nonrealism

According to counterfactual nonrealism, there is no fact of the matter about what “would have happened” had a different action been taken. There is, simply, the sequence of actions you take, and the sequence of observations you get. At the time of taking an action, you are uncertain about what that action is; hence, from your perspective, there are multiple possibilities.

Given this uncertainty, you may consider material conditionals: if I take action X, will consequence Q necessarily follow? An action may be selected on the basis of these conditionals, such as by determining which action results in the highest guaranteed expected utility if that action is taken.

This is basically the approach taken in my post on subjective implication decision theory. It is also the approach taken by proof-based UDT.

The material conditionals are ephemeral, in that at a later time, the agent will know that they could only have taken a certain action (assuming they knew their source code before taking the action), due to having had longer to think by then; hence, all the original material conditionals will be vacuously true. The apparent nondeterminism is, then, only due to the epistemic limitation of the agent at the time of making the decision, a limitation not faced by a later version of the agent (or an outside agent) with more computation power.

This leads to a sort of relativism: what is undetermined from one perspective may be determined from another. This makes global accounting difficult: it’s hard for one agent to evaluate whether another agent’s action is any good, because the two agents have different epistemic states, resulting in different judgments on material conditionals.

A problem that comes up is that of “spurious counterfactuals” (analyzed in the linked paper on proof-based UDT). An agent may become sure of its own action before that action is taken. Upon being sure of that action, the agent will know the material implication that, if they take a different action, something terrible will happen (this material implication is vacuously true). Hence the agent may take the action they were sure they would take, making the original certainty self-fulfilling. (There are technical details with how the agent becomes certain having to do with Löb’s theorem).

The most natural decision theory resulting in this framework is timeless decision theory (rather than updateless decision theory). This is because the agent updates on what they know about the world so far, and considers the material implications of themselves taking a certain action; these implications include logical implications if the agent knows their source code. Note that timeless decision theory is dynamically inconsistent in the counterfactual mugging problem.

Policy-dependent source code

A second approach is to assert that one’s source code depends on one’s entire policy, rather than only one’s actions up to seeing one’s source code.

Formally, a policy is a function mapping an observation history to an action. It is distinct from source code, in that the source code specifies the implementation of the policy in some programming language, rather than itself being a policy function.

Logically, it is impossible for the same source code to generate two different policies. There is a fact of the matter about what action the source code outputs given an observation history (assuming the program halts). Hence there is no way for two different policies to be compatible with the same source code.

Let’s return to the robot thought experiment and re-analyze it in light of this. After the robot has seen that their source code is “A” and taken action X, the robot considers what would have happened if they had taken action Y instead. However, if they had taken action Y instead, then their policy would, trivially, have to be different from their actual policy, which takes action X. Hence, their source code would be different. Hence, they would not have seen that their source code is “A”.

Instead, if the agent were to take action Y upon seeing that their source code is “A”, their source code must be something else, perhaps “B”. Hence, which action the agent would have taken depends directly on their policy’s behavior upon seeing that the source code is “B”, and indirectly on the entire policy (as source code depends on policy).
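
This dependence can be caricatured in a few lines of code, with a stand-in “compiler” mapping each distinct policy to distinct source code. The function source_of and the two policies are illustrative inventions, not part of any formal framework.

```python
# Policy-dependent source code, as a toy sketch: which code you "have"
# is a function of your whole policy.
def source_of(policy):
    # Stand-in for compilation: distinct policies get distinct code.
    return "A" if policy("observe source") == "X" else "B"

policy_x = lambda obs: "X"   # the policy that takes X upon seeing its code
policy_y = lambda obs: "Y"   # the policy that takes Y upon seeing its code

assert source_of(policy_x) == "A"
# The counterfactual "took Y after seeing code A" is incoherent here:
# any policy that takes Y has source code "B", and so would have seen
# "B", not "A".
assert source_of(policy_y) == "B"
```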

We see, then, that the original thought experiment encodes a reasoning error. The later agent wants to ask what would have happened if they had taken a different action after knowing their source code; however, the agent neglects that such a policy change would have resulted in seeing different source code! Hence, there is no need to posit a logically incoherent possible world.

The reasoning error came about due to using a conventional, linear notion of interactive causality. Intuitively, what you see up to time t depends only on your actions before time t. However, policy-dependent source code breaks this condition. What source code you see that you have depends on your entire policy, not just what actions you took up to seeing your source code. Hence, reasoning under policy-dependent source code requires abandoning linear interactive causality.

The most natural decision theory resulting from this approach is updateless decision theory, rather than timeless decision theory, as it is the entire policy that the counterfactual is on.

Conclusion

Before very recently, my philosophical approach had been counterfactual nonrealism. However, I am now more compelled by policy-dependent source code, after having analyzed it. I believe this approach fixes the main problem of counterfactual nonrealism, namely relativism making global accounting difficult. It also fixes the inherent dynamic inconsistency problems that TDT has relative to UDT (which are related to the relativism).

I believe the re-analysis I have provided of the thought experiment motivating logical counterfactuals is sufficient to refute the original interpretation, and thus to de-motivate logical counterfactuals.

The main problem with policy-dependent source code is that, since it violates linear interactive causality, analysis is correspondingly more difficult. Hence, there is further work to be done in considering simplified environment classes where possible simplifying assumptions (including linear interactive causality) can be made. It is critical, though, that the linear interactive causality assumption not be used in analyzing cases of an agent learning their source code, as this results in logical incoherence.

What is metaphysical free will?

This is an attempt to explain metaphysical free will. This serves to explain metaphysics in general.

First: on the distinction between subject-properties and object-properties. The subject-object relation holds between some subject and some object. For example, a person might be a subject looking at a table, which is an object. Objects are, roughly, entities that could potentially be beheld by some subject.

Metaphysical free will is a property of subjects rather than objects. This will make more sense if I first contrast it with object-properties.

Objects can be defined by some properties: location, color, temperature, and so on. These properties yield testable predictions. Objects that are hot will be painful to touch, for example.

Object properties are best-defined when they are closely connected with testable predictions. The logical positivist program, though ultimately unsuccessful, is quite effective when applied to defining object properties. Similarly, the falsificationist program is successful in clarifying the meaning of a variety of scientific hypotheses in terms of predictions.

Intuitively, free will has to do with the ability of someone to choose from one of multiple options. This implies a kind of unpredictability, at least from the perspective of the one making the choice.

Hence, there is a tension in considering free will as an object-property, in that object properties are about predictable relations, whereas free will is about choice. (Probabilistic randomness would not help much either, as e.g. taking an action with 50% probability does not match the intuitive notion of choice.)

The most promising attempts to define free will as an object-property are within the physicalist school that includes Gary Drescher and Daniel Dennett. These define choice in terms of optimization: selection of the best action from a list of options, based upon anticipated consequences. This remains an object-property, because it yields a testable prediction: that the chosen action will be the one that is predicted to lead to the best consequences (and if the agent is well-informed, one that actually will). Drescher calls this “mechanical choice”.
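
A minimal sketch of such mechanical choice, with illustrative options, predictions, and an assumed utility function (none of which come from Drescher or Dennett themselves):

```python
# "Mechanical choice" as an object-property: select the option whose
# anticipated consequence is rated best (illustrative sketch).
def mechanical_choice(options, predict, utility):
    return max(options, key=lambda o: utility(predict(o)))

options = ["take $5", "take $10"]
predict = {"take $5": 5, "take $10": 10}.get   # anticipated consequences
chosen = mechanical_choice(options, predict, utility=lambda x: x)
assert chosen == "take $10"   # the testable prediction: best-rated option is chosen
```

This is what makes it an object-property: the behavior of such a chooser is predictable from its predictions and its utility function.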

I will now contrast object-properties (including mechanical choice) with subject-properties.

The distinction between subjects and objects is, to a significant extent, grammatical. Subjects do things, objects have things done to them. “I repaired the table with some glue.”

It is easy to detect notions of choice in ordinary language. “I could have gone to the store but I chose not to”; “you don’t have to do all that work”; “this software has so many options and capabilities”.

Functional definitions of objects are often defined in terms of the capabilities the subject has in using the object. For example, an axe can (roughly) be defined as an object that can be swung to hit another object and create a rift.

The desiderata of products, including software, are about usability. The desire is for an object that can be used in a number of ways.

Moral language, too, refers to capabilities. What one should do depends on what one can do; see Ought implies Can.

We could say, then, that this sort of subjunctive language is tied with orienting towards reality in a certain way. The orientation is, specifically, about noticing the capabilities that one’s self (and perhaps others) have, and communicating about these capabilities. I find that replacing the word “metaphysics” with the word “orientation” is often illuminating.

When this orientation is coupled with language, the language describes itself as between observation and action. That is: we talk as if we may take action on the basis of our speech. Thus, our language refers to, among other things, our capabilities, which are decision-relevant. This is in contrast to thinking of language as a side effect, or as an action in itself.

This could be studied in AI terms. An AI may be programmed to assume it has control of “its action”, and may have a model of what the consequences of various actions are, which correspond to its capabilities. From the AI’s perspective, it has a choice among multiple actions, hence in a sense “believing in metaphysical free will”. To program an AI to take effective actions, it isn’t sufficient for it to develop a model of what is; it must also develop a model of what could be made to happen. (The AI may, like a human, generate verbal reports of its capabilities, and select actions on the basis of these verbal reports.)

Even relatively objective ways of orienting towards reality notice capabilities. I’ve already noted the phenomenon of functional definitions. If you look around, you will see many objects, and you will also likely notice affordances: ways these objects may be used. It may seem that these affordances inhere in the objects, although it would be more precise to say that affordances exist in the subject-object relationship rather than the object itself, as they depend on the subject.

Metaphysics isn’t directly an object of scientific study, but can be seen in the scientific process itself, in the way that one must comport one’s self towards reality to do science. This comportment includes tool usage, logic, testing, observation, recording, abstraction, theorizing, and so on. The language scientists use in the course of their scientific study, and their communication about the results, reveals this metaphysics.

(Yes, recordings of scientific practice may be subject to scientific study, but interpreting the raw data of the recordings as e.g. “testing” requires a theory bridging between the objective recorded data and whatever “testing” is, where “testing” is naively a type of intentional action)

Upon noticing choice in one’s metaphysics, one may choose to philosophize on it, to see if it holds up to consistency checks. If the metaphysics leads to inconsistencies, then it should be modified or discarded.

The most obvious possible source of inconsistency is in the relation between the metaphysical “I” and the physical body. If the “I” is identical with one’s own physical body, then metaphysical properties of the self, such as freedom of choice, must be physical properties, leading to the usual problems.

If, on the other hand, the “I” is not identical with one’s physical body, then it must be explained why the actions and observations of the “I” so much align with the actions of the body; the mind-body relation must be clarified.

Another issue is akrasia; sometimes it seems that the mind decides to take an action but the body does not move accordingly. Thus, free will may be quite partial, even if it exists.

I’ve written before about reconciliation between metaphysical free will and the predictions of physics. I believe this account is better than the others I have seen, although nowhere near complete.

It is worth contrasting the position of believing in metaphysical free will with its opposite. For example, in the Bhagavad Gita, Krishna states that the wise do not identify with the doer:

All actions are performed by the gunas of prakriti. Deluded by identification with the ego, a person thinks, “I am the doer.” But the illumined man or woman understands the domain of the gunas and is not attached. Such people know that the gunas interact with each other; they do not claim to be the doer.

Bhagavad Gita, Easwaran translation, ch. 3, 27-28

In this case the textual “I” is dissociated from the “doer” which takes action. Instead, the “I” is more like a placeholder in a narrative created by natural mental processes (gunas), not an agent in itself. (The interpretation here is not entirely clear, as Krishna also gives commands to Arjuna)

This specific discussion of metaphysical free will generalizes to metaphysics in general. Metaphysics deals with the basic entities/concepts associated with reality, subjects, and objects. It is contrasted with physics, which deals with objects, generalizing from observable properties of them (and the space they exist in and so on) to lawful theories.

To summarize metaphysical free will:

  • We talk in ways that imply that we and others have capabilities and make choices.
  • This way of talking is possible and sufficiently-motivated because of the way we comport ourselves towards reality, noticing our capabilities.
  • Effective AIs should similarly be expected to model their own capabilities as distinct from the present state of the world.
  • It is difficult to coherently identify these capabilities we talk as if we have, with physical properties of our bodies.
  • Therefore, it may be a reasonable (at least provisional) assumption that the capabilities we have are not physical properties of our bodies, and are metaphysical.
  • The implications of this assumption can be philosophically investigated, to build out a more coherent account, or to find difficulties in doing so.
  • There are ways of critiquing metaphysical free will. The assumption may lead to contradictions, with observations, well-supported scientific theories, and so on.

The absurdity of un-referenceable entities

Whereof one cannot speak, thereof one must be silent.

Ludwig Wittgenstein, Tractatus Logico-Philosophicus

Some criticism of my post on physicalism is that it discusses reference, not the world. To quote one comment: “I consider references to be about agents, not about the world.” To quote another: “Remember, you have only established that indexicality is needed for reference, ie. semantic, not that it applies to entities in themselves” and also “you need to show that standpoints are ontologically fundamental, not just epistemically or semantically.” A post containing answers says: “However, everyone already kind of knows the we can’t definitely show the existence of any objective reality behind our observations and that we can only posit it.” (Note, I don’t mean to pick on these commentators, they’re expressing a very common idea)

These criticisms could be rephrased in this way:

“You have shown limits on what can be referenced. However, that in no way shows limits on the world itself. After all, there may be parts of the world that cannot be referenced.”

This sounds compelling at first: wouldn’t it be strange to think that properties of the world can be deduced from properties of human reference?

But, a slight amount of further reflection betrays the absurdity involved in asserting the possible existence of un-referenceable entities. “Un-referenceable entities” is, after all, a reference.

A statement such as “there exist things that cannot be referenced” is comically absurd, in that it refers to things in the course of denying their referenceability.

We may say, then, that it is not the case that there exist things that cannot be referenced. The assumption that this is the case leads to contradiction.

I believe this sort of absurdity is quite related to Kantian philosophy. Kant distinguished phenomena (appearances) from noumena (things-in-themselves), and asserted that through observation and understanding we can only understand phenomena, not noumena. Quoting Kant:

Appearances, to the extent that as objects they are thought in accordance with the unity of the categories, are called phaenomena. If, however, I suppose there to be things that are merely objects of the understanding and that, nevertheless, can be given to an intuition, although not to sensible intuition, then such things would be called noumena.

Critique of Pure Reason, Chapter III

Kant at least grants that noumena are given to some “intuition”, though not a sensible intuition. This is rather less ridiculous than asserting un-referenceability.

It is ironic that the noumena-like entity being hypothesized in the present case (the physical world) would, by Kant's criterion, be considered a scientific entity, a phenomenon.

Part of the absurdity in saying that the physical world may be un-referenceable is that it is at odds with the claim that physics is known through observation and experimentation. After all, un-referenceable observations and experimental results are of no use in science; they couldn't make their way into theories. So the shadow of the world that can be known (and known about) by science is limited to the referenceable. The un-referenceable may, at best, be inferred (although, of course, this statement is absurd in referring to the un-referenceable).

It’s easy to make fun of this idea of un-referenceable entities (infinitely more ghostly than ghosts), but it’s worth examining what is compelling about this (absurd) position, to see what, if anything, can be salvaged.

From a modern perspective, we can see things that a pre-modern perspective cannot conceptualize. For example, we know about gravitational lensing, quantum entanglement, Cesium, and so on. It seems that, from our perspective, these things-in-themselves did not appear in the pre-modern phenomenal world. While they had influence, they did not appear in a way clear enough for a concept to be developed.

We may believe it is, then, normative for the pre-moderns to accept, in humility, that there are things-in-themselves they lack the capacity to conceptualize. And we may, likewise, admit this of the modern perspective, in light of the likelihood of future scientific advances.

However, conceptualizability is not the same as referenceability. Things can be pointed to that don’t yet have clear concepts associated with them, such as the elusive phenomena seen in dreams.

In this case, pre-moderns may point to modern phenomena as “those things that will be phenomena in 500 years”. We can talk about those things our best theories don’t conceptualize that will be conceptualized later. And this is a kind of reference; it travels through space-time to access phenomena not immediately present.

This reference is vague, in that it doesn’t clearly define what things are modern phenomena, and also doesn’t allow one to know ahead of time what these phenomena are. But it’s finitely vague, in contrast to the infinite vagueness of “un-referenceable entities”. It’s at least possible to imagine accessing them, by e.g. becoming immortal and living until modern times.

Any case that our current condition (e.g. modernity) cannot know about something can be translated into a reference: a reference to that which we cannot know on account of our conditions, but could know under other imaginable conditions. This is unsurprising, given that any account of something existing outside our understanding must refer to that thing outside our understanding.

My critique of an un-referenceable physical world is quite similar to Nietzsche's critique of Kant's unknowable noumena. Nietzsche wrote:

The “thing-in-itself” nonsensical. If I remove all the relationships, all the “properties,” all the “activities” of a thing, the thing does not remain over; because thingness has only been invented by us owing to the requirements of logic, thus with the aim of defining, communication (to bind together the multiplicity of relationships, properties, activities).

Will to Power, sec. 558

I continue to be struck by the irony of the transition from physical phenomena to physical noumena. Kant’s positing of a realm of noumena was, perhaps, motivated by a kind of humility, a kind of respect for morality, an appeasement of theological elements in society, while still making a place for thinking-for-one’s-self, science, and so on, in a separate magisterium that can’t collide with the noumenal realm.

Any idea, whether it’s God, Physics, or Objectivity, can disconnect from the human cognitive faculty that relates ideas to the world of experience, and remain as a mere signifier, which persists as a form of unfalsifiable control. When Physics and Objectivity take on theological significance (as they do in modern times), a move analogous to Kant’s will place them in an un-falsifiable noumenal realm, with the phenomenal realm being the subjective and/or intersubjective. This is extremely ironic.

Puzzles for physicalists

The following is a list of puzzles that are hard to answer within a broadly-physicalist, objective paradigm. I believe critical agentialism can answer these better than competing frameworks; indeed, I developed it through contemplation on these puzzles, among others. This post will focus on the questions, though, rather than the answers. (Some of the answers can be found in the linked post)

In a sense what I have done is located “anomalies” relative to standard accounts, and concentrated more attention on these anomalies, attempting to produce a theory that explains them, without ruling out its ability to explain those things the standard account already explains well.

Indexicality

(This section would be philosophical plagiarism if I didn’t cite On the Origin of Objects.)

Indexicals are phrases whose interpretation depends on the speaker’s standpoint, such as “my phone” or “the dog over there”. It is common to treat indexicals as a kind of shorthand: “my phone” is shorthand for “the phone belonging to Jessica Taylor”, and “the dog over there” is shorthand for “the dog existing at coordinates 37.856570, -122.284176”. This expansion allows indexicals to be accounted for within an objective, standpoint-independent frame.

However, even these expanded references aren’t universally unique. In a very large universe, there may be a twin Earth which also has a dog at coordinates 37.856570, -122.284176. As computer scientists will find obvious, specifying spatial coordinates requires a number of bits logarithmic in the amount of space addressed. These globally unique identifiers get more and more unwieldy the more space is addressed.
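The logarithmic claim can be checked directly: to uniquely address n distinguishable locations takes about log2(n) bits, so the identifier length grows slowly but without bound as more space is addressed.

```python
import math

# Bits needed to uniquely address n distinguishable locations: ceil(log2(n)).
def address_bits(n_locations):
    return math.ceil(math.log2(n_locations))

print(address_bits(2**32))  # prints 32: enough for ~4 billion locations
print(address_bits(2**64))  # prints 64: doubling the bits squares the space addressed
```

Conversely, any fixed-length reference can only be globally unique within a bounded region; in a large enough universe, it collides.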

Since we don’t expand out references enough to be sure they’re globally unique, our use of them couldn’t depend on such global uniqueness. An accounting of how we refer to things, therefore, cannot posit any causally-effective standpoint-independent frame that assigns semantics.

Indeed, the trouble of globally unique references can also be seen by studying physics itself. Physical causality is spatially local; a particle affects nearby particles, and there’s a speed-of-light limitation. For spatial references to be effective (e.g. to connect to observation and action), they have to themselves “move through” local space-and-time.

This is a bit like the problem of having a computer refer to itself. A computer may address computers by IP address. The IP address “127.0.0.1” always refers to this computer. These references can be resolved even without an Internet connection. It would be totally unnecessary and unwieldy for a computer to refer to itself (e.g. for the purpose of accessing files) through a globally-unique IP address, resolved through Internet routing.
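The loopback point can be made concrete. This sketch (standard library only) binds a server to 127.0.0.1 and connects to it from the same machine; the name is resolved and the bytes delivered entirely locally, with no Internet routing involved.

```python
import socket

# "127.0.0.1" is resolved on this machine itself; the exchange below works
# even with no Internet connection.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

client = socket.create_connection(("127.0.0.1", port))
conn, _ = server.accept()
client.sendall(b"hello, self")
print(conn.recv(64).decode())        # prints "hello, self"

for s in (client, conn, server):
    s.close()
```

The address is deictic: every machine that resolves 127.0.0.1 reaches *itself*, not some globally unique host.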

Studying enough examples like these (real and hypothetical) leads to the conclusion that indexicality (and more specifically, deixis) is fundamental, and that even spatial references that appear to be globally unique are resolved deictically.

How does this relate to physics? It means references to “the objective world” or “the physical world” must also be resolved indexically, from some standpoint. Paying attention to how these references are resolved is critical.

The experimental results you see are the ones in front of you. You can’t see experimental results that don’t, through spatio-temporal information flows, make it to you. Thus, references to the physical which go through discussing “the thing causing experimental predictions” or “the things experiments failed to falsify” are resolved in a standpoint-dependent way.

It could be argued that physical law is standpoint-independent, because it is, symmetrically, true at each point in space-time. However, this excludes virtual standpoints (e.g. existing in a computer simulation), and additionally, this only means the laws are standpoint-independent, not the contents of the world, the things described by the laws.

Pre-reduction references

(For previous work, see “Reductive Reference”.)

Indexicality by itself undermines view-from-nowhere mythology, but perhaps not physicalism itself. What presents a greater challenge for physicalism is the problem of pre-reduced references (which are themselves deictic).

Let’s go back to the twin Earth thought experiment. Suppose we are in pre-chemistry times. We still know about water. We know water through our interactions with it. Later, chemistry will find that water has a particular chemical formula.

In pre-chemistry times, it cannot be known whether the formula is H2O, XYZ, etc, and these formulae are barely symbolically meaningful. If we discover that water is H2O, we will, after-the-fact, define “water” to mean H2O; if we discover that water is XYZ, we will, after-the-fact, define “water” to mean XYZ.

Looking back, it’s clear that “water” has to be H2O, but this couldn’t have been clear at the time. Pre-chemistry, “water” doesn’t yet have a physical definition; a physical definition is assigned later, which rationalizes previous use of the word “water” into a physicalist paradigm.

A philosophical account of reductionism needs to be able to discuss how this happens. To do this, it needs to be able to discuss the ontological status of entities such as “water” (pre-chemistry) that do not yet have a physical definition. In this intermediate state, the philosophy is talking about two entities, pre-reduced entities and physics, and considering various bridgings between them. So the intermediate state needs to contain entities that are not yet conceptualized physically.

A possible physicalist objection is that, while it may be a provisional truth that water is definitionally the common drinkable liquid found in rivers and so on, it is ultimately true that water is H2O, and so physicalism is ultimately true. (This is very similar to the two truths doctrine in Buddhism).

Now, expanding out this position requires an account of the relation between provisional and ultimate truth. Even if such an account could be provided, it would appear that, in our current state, we must accept it as provisionally true that some mental entities (e.g. imagination) do not have physical definitions, since a good-enough account has not yet been provided. And we must have a philosophy that can grapple with this provisional state of affairs, and judge possible bridgings as fitting/unfitting.

Moreover, there has never been a time without provisional definition. So this idea of ultimate truth functions as a sort of utopia, which is either never achieved, or is only achieved after very great advances in philosophy, science, and so on. The journey is, then, more important than the destination, and to even approach the destination, we need an ontology that can describe and usably function within the journeying process; this ontology will contain provisional definitions.

The broader point here is that, even if we have the idea of “ultimate truth”, that idea isn’t meaningful (in terms of observations, actions, imaginations, etc) to a provisional perspective, unless somehow the provisional perspective can conceptualize the relation between itself and the ultimate truth. And, if the ultimate truth contains all provisional truths (as is true if forgetting is not epistemically normative), the ultimate truth needs to conceptualize this as well.

Epistemic status of physics

Consider the question: “Why should I believe in physics?” The conventional answer is: “Because it predicts experimental results.” Someone who can observe these experimental results can, thus, have epistemic justification for belief in physics.

This justificatory chain implies that there are cognitive actors (such as persons or social processes) that can do experiments and see observations. These actors are therefore, in a sense, agents.

A physicalist philosophical paradigm should be able to account for epistemic justifications of physics, or else it fails to self-ratify. So the paradigm needs to account for observers (and perhaps specifically active observers), who are the ones having epistemic justification for belief in physics.

Believing in observers leads to the typical mind-body problems. Disbelieving in observers fails to self-ratify. (Whenever a physicalist says “an observation is X physical entity”, it can be asked why X counts as an observation of the sort that is epistemically compelling; the answer to this question must bridge the mental and the physical, e.g. by saying the brain is where epistemic cognition happens. And saying “you know your observations are the things processed in this brain region because of physics” is circular.)

What mind-body problems? There are plenty.

Anthropics

The anthropic principle states, roughly, that epistemic agents must believe that the universe contains epistemic agents. Else, they would believe themselves not to exist.

The language of physics, on its own, doesn’t have the machinery to say what an observer is. Hence, anthropics is a philosophical problem.

The standard way of thinking about anthropics (e.g. SSA/SIA) is to consider the universe from a view-from-nowhere, and then assume that “my” body is in some way sampled “randomly” from this viewed-from-nowhere universe, such that I proceed to get observations (e.g. visual) from this body.

This is already pretty wonky. Indexicality makes the view-from-nowhere problematic. And the idea that “I” am “randomly” placed into a body is a rather strange metaphysics (when and where does this event happen?).

But perhaps the most critical issue is that the physicalist anthropic paradigm assumes it’s possible to take a physical description of the universe (e.g. as an equation) and locate observers in it.

There are multiple ways of considering doing so, and perhaps the best is functionalism, which will be discussed later. However, I’ll note that a subjectivist paradigm can easily find at least one observer: I’m right here right now.

This requires some explaining. Say you’re lost in an amusement park. There are about two ways of thinking about this:

  1. You don’t know where you are, but you know where the entrance is.
  2. You don’t know where the entrance is, but you know where you are.

Relatively speaking, 1 is an “objective” (relatively standpoint-independent) answer, and 2 is a “subjective” (relatively standpoint-dependent) answer.

2 has the intuitive advantage that you can point to yourself, but not to the entrance. This is because pointing is deictic.

Even while being lost, you can still find your way around locally. You might know where the Ferris wheel is, or the food stand, or your backpack. And so you can make a local map, which has not been placed relative to the entrance. This map is usable despite its disconnection from a global reference frame.

Anthropics seems to be saying something similar to (1). The idea is that I, initially, don’t know “where I am” in the universe. But, the deictic critique applies to anthropics as it applies to the amusement park case. I know where I am, I’m right here. I know where the Earth is, it’s under me. And so on.

This way of locating (at least one) observer works independent of ability to pick out observers given a physical description of the universe. Rather than finding myself relative to physics, I find physics relative to me.

Of course, the subjectivist framework has its own problems, such as difficulty finding other observers. So there is a puzzle here.

Tool use and functionalism

Functionalism is perhaps the current best answer as to how to locate observers in physics. Before discussing functionalism, though, I’ll discuss tools.

What’s a hammer? It’s a thing you can swing to apply lots of force to something at once. Hammers can be made of many physical materials, such as stone, iron, or wood. It’s about the function, not the substance.

The definition I gave refers to a “you” who can swing the hammer. Who is the “you”? Well, that’s standpoint-dependent. Someone without arms can’t use a conventional hammer to apply lots of force. The definition relativizes to the potential user. (Yes, a person without arms may say conventional hammers are hammers due to social convention, but this social convention is there because conventional hammers work for most people, so it still relativizes to a population.)

Let’s talk about functionalism now. Functionalism is based on the idea of multiple realizability: that a mind can be implemented on many different substrates. A mind is defined by its functions rather than its substrate. This idea is very familiar to computer programmers, who can hide implementation details behind an interface, and don’t need to care about hardware architecture for the most part.

This brings us back to tools. The definition I gave of “hammer” is an interface: it says how it can be used (and what effects it should create upon being used).
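In programming terms, the hammer definition above is literally an interface. This hypothetical sketch (class names are mine, for illustration) shows the same function realized by different substrates, with users depending only on the interface:

```python
from abc import ABC, abstractmethod

# A function-level definition ("can be swung to apply force"), independent
# of substrate: multiple realizability as an interface.
class Hammer(ABC):
    @abstractmethod
    def strike(self, target: str) -> str: ...

class StoneHammer(Hammer):
    def strike(self, target):
        return f"stone head drives {target}"

class IronHammer(Hammer):
    def strike(self, target):
        return f"iron head drives {target}"

def drive_nail(hammer: Hammer) -> str:
    # The caller depends only on the interface (the function), never the material.
    return hammer.strike("nail")

print(drive_nail(StoneHammer()))  # prints "stone head drives nail"
print(drive_nail(IronHammer()))   # prints "iron head drives nail"
```

Note that the interface is specified from the standpoint of a potential caller, which is the point being made about hammers.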

What sort of functions does a mind have? Observation, prediction, planning, modeling, acting, and so on. Now, the million-dollar question: Who is (actually or potentially) using it for these functions?

There are about three different answers to this:

  1. The mind itself. I use my mind for functions including planning and observation. It functions as a mind as long as I can use it this way.
  2. Someone or something else. A corporation, a boss, a customer, the government. Someone or something who wants to use another mind for some purpose.
  3. It’s objective. Things have functions or not independent of the standpoint.

I’ll note that 1 and 2 are both standpoint-dependent, thus subjectivist. They can’t be used to locate minds in physics; there would have to be some starting point, of having someone/something intending to use a mind for something.

3 is interesting. However, we now have a disanalogy from the hammer case, where we could identify some potential user. It’s also rather theological, in saying the world has an observer-independent telos. I find the theological implications of functionalism to be quite interesting and even inspiring, but that still doesn’t help physicalism, because physicalist ontology doesn’t contain standpoint-independent telos. We could, perhaps, say that physicalism plus theism yields objective functionalism. And this requires adding a component beyond the physical equation of the universe, if we wish to find observers in it.

Causality versus logic

Causality contains the idea that things “could” go one way or another. Else, causal claims reduce to claims about state; there wouldn’t be a difference between “if X, then Y” and “X causes Y”.

Pearlian causality makes this explicit; causal relations are defined in terms of interventions, which come from outside the causal network itself.

The ontology of physics itself is causal. It is asserted, not just that some state will definitely follow some previous state, but that there are dynamics that push previous states to new states, in a necessary way. (This is clear in the case of dynamical systems)

Indeed, since experiments may be thought of as interventions, it is entirely sensible that a physical theory that predicts the results of these interventions must be causal.

These “coulds” have a difficult status in relation to logic. Someone who already knows the initial state of a system can logically deduce its eventual state. To them, there is inevitability, and no logically possible alternative.

It appears that, while “could”s exist from the standpoint of an experimenter, they do not exist from the standpoint of someone capable of predicting the experimenter, such as Laplace’s demon.

This is not much of a problem if we’ve already accepted fundamental deixis and rejected the view-from-nowhere. But it is a problem for those who haven’t.

Trying to derive decision-theoretic causality from physical causality results in causal decision theory, which is known to have a number of bugs, due to its reliance on hypothetical extra-physical interventions.

An alternative is to try to develop a theory of “logical causality”, by which some logical facts (such as “the output of my decision process”, assuming you know your source code) can cause others. However, this is oxymoronic, because logic does not contain the affordance for intervention. Logic contains the affordance for constructing and checking proofs. It does not contain the affordance for causing 3+4 to equal 8. A sufficiently good reasoner can immediately see that “3+4=8” runs into contradiction; there is no way to construct a possible world in which 3+4=8.
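The asymmetry between checking and intervening is visible in a proof assistant: the affordances offered are constructing and refuting proofs, not making 3+4 equal 8. In Lean, for instance:

```lean
-- Logic affords proof construction and checking, not intervention:
-- we can prove 3 + 4 = 7 and refute 3 + 4 = 8, but nothing in the
-- system lets us "cause" arithmetic to come out otherwise.
example : 3 + 4 = 7 := by decide
example : 3 + 4 ≠ 8 := by decide
```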

Hence, it is hard to say that “coulds” exist in a standpoint-independent way. We may, then, accept standpoint-dependence of causation (as I do), or reject causation entirely.

Conclusion

My claim isn’t that physicalism is false, or that there don’t exist physicalist answers to these puzzles. My claim, rather, is that these puzzles are at least somewhat difficult, and that sufficient contemplation on them will destabilize many forms of physicalism. The current way I answer these puzzles is through a critical agential framework, but other ways of answering them are possible as well.

A conversation on theory of mind, subjectivity, and objectivity

I recently had a Twitter conversation with Roko Mijic. I believe it contains ideas that a wider philosophical/rationalist audience may find valuable, and so I include here a transcript (quoted with permission).


Jessica: There are a number of “runs on top of” relations in physicalism:

  • mind runs on top of body
  • discrete runs on top of continuous
  • choice runs on top of causality

My present philosophy inverts the metaphysical order: mind/discrete/choice is more basic.

This is less of a problem than it first appears, because mind/discrete/choice can conceptualize, hypothesize, and learn about body/continuous/causality, and believe in an “effectively runs on” relation between the two.

In contrast, starting from body/continuous/causality has trouble with getting to mind/discrete/choice as even being conceptualizable, hence tending towards eliminativism.

Roko: Eliminativism has a good track record though.

Jessica: Nah, it can’t account for what an “observation” is so can’t really explain observations.

Roko: I don’t really see a problem here. It makes perfect sense within a reductionist or eliminativist paradigm for a robot to have some sensors and to sense its environment. You don’t need a soul, or god, or strong free will, or objective person-independent values for that.

Jessica: Subjective Occam’s razor (incl. Solomonoff induction) says I should adopt the explanation that best explains my observations. Eliminativism can’t really say what “my” means here. If it believed in “my observations” it would believe in consciousness.

It has to do some ontological reshuffling around what “observations” are that, I think, undermines the case for believing in physics in the first place, which is that it explains my observations.

Roko: It means the observations that are caused by sensors plugged into the hardware that your algorithm instance is running on.

Jessica: That means “my algorithm instance” exists. Sounds like a mental entity. Can’t really have those under eliminativism (but can under functionalism etc).

Roko: I don’t want to eliminate my mental instance from my philosophy, that would be kind of ridiculous.

Jessica: Well, yes, so eliminativism is false. I understand eliminativism to mean there is only physical, no mental. Believing mental runs on physical could be functionalism, property dualism, or some other non-eliminativist position.

Roko: I think it makes more sense to think of mental things as existing subjectively (i.e. if they belong to you) and physical things as existing objectively. I definitely think that dualism is making a mistake in thinking of objectively-existing mental things.

Jessica: I don’t think this objective/subjective dichotomy works out. I haven’t seen a good positive case, and my understanding of deixis leads me to believe that references to the objective must be resolved subjectively. See also On the Origin of Objects.

Basically I don’t see how we can, in a principled way, have judgments like “X exists but only subjectively, not objectively”. It would appear that by saying “X exists” I am asserting that X is an existent object (i.e. I’m saying something objective).

See also Thomas Nagel’s The View From Nowhere. Spoiler alert: there isn’t a view from nowhere, it’s an untenable concept.

Roko: My sensation of the flavor of chocolate exists but only subjectively.

Jessica: We’re now talking about the sensation of the flavor of the chocolate though. Is this really that different from talking about “that car over there”? I don’t see how some entities can, in a principled way, be classified as objective and some as subjective.

Like, in talking about “X” I’m porting something in my mental world-representation into the discursive space. I don’t at all see how to classify some of these portings as objective and some as subjective.

See also writing on the difficulty of the fact/opinion distinction.

Roko: It’s not actually the flavor “of” the chocolate though. It’s the sensation of flavor that your brain generates for you only, in response to certain nerve stimuli.

> I don’t see how some entities can, in a principled way, be classified as objective and some as subjective.

It’s very easy actually. Subjectives are the things that you cannot possibly be mistaken about, the “I think therefore I am’s”.

No deceiving demon can fool you into thinking that you’re experiencing the taste of chocolate, the color purple, or an orgasm. No deceiving demon can fool you into thinking that you’re visualizing the number 4.

Jessica: I don’t think this is right. The thought follows the experience. There can be mistranslations along the way. This might seem like a pedantic point but we’re talking about linguistic subjective statements so it’s relevant.

Translating the subjective into words can introduce errors. It’s at least as hard as, say, adding small numbers. So your definition means “1+1=2” is also subjective.

Roko: I think that it’s reasonable to see small number math instances as subjectives. I can see 3 pens. I can conceive of 3 dots, that’s a subjective thing. It’s in the same class as seeing red or smelling a rose.

[continuing from the deceiving demon thread] These are the things that are inherently part of your instance or mind. The objective, on the other hand, is always somewhat uncertain and inferred. Things are out there and they send signals to you. But you are inferring their existence.

Jessica: Okay, I agree with this sort of mental/outside-mental distinction, and you can define subjective/objective to mean that. This certainly doesn’t bring in other connotations of the objective, such as view-from-nowhere or observer-independence; I can be wrong about indexicals too.

Roko: Well it happens to be a property of our world that when different people infer the shape of the objective (i.e. draw maps), they always converge. This is what being in a shared reality means.

I mean they always converge if they follow the right principles, e.g. complexity priors, and those same principles are the ones that allow us to successfully manipulate reality via actions. That’s what the objective world out there is.

Jessica: Two reasons they could converge:

  1. Symmetry (this explains math)
  2. Existence of same entities (e.g. landmarks)

I’m fine with calling 1 observer-independent. Problem: your view of, and references to, 2, depend on your standpoint. Because of deixis.

Obvious deictic references are things like “the car over there” or “the room I’m in”. It is non-obvious but, I think, true, that all physical references are deictic. Which makes sense because physical causality is deictic (locally causal and symmetric).

Even “the Great Wall of China” refers to the Great Wall of China on our Earth. It couldn’t refer to the one on the twin Earth. And the people on twin Earth have “the Great Wall of China” refer to the one on the twin Earth, not ours.

At the same time, maps created starting from different places can be patched together, in a collage. However, pasting these together requires taking into account the standpoint-dependence of the individual maps being pasted together.

And at no point does this pasting-together result in a view from nowhere. It might seem that way because it keeps getting bigger and more zoomed-out. But at each individual time it’s finite.

Roko: Yes this is all nice but I think the point where we get to hard questions is when we think about mental phenomena that I would classify as subjectives as being part of the objective reality.

This is the petrl.org problem, or @reducesuffering worrying about whether plankton or insects “really do” have subjective experiences etc

Jessica: In my view “my observation” is an extremely deictic reference, to something maximally here-and-now, such that there isn’t any stabilization to do. Intermediate maps paste these extremely deictic maps together into less-deictic, but still deictic, maps. It never gets non-deictic.

It’s hard to pin down intersubjectively precisely because it’s so deictic. I can’t really port my here-and-now to your here-and-now without difficulty.

Subjective implication decision theory in critical agentialism

This is a follow-up to a previous post on critical agentialism, to explore the straightforward decision-theoretic consequences. I call this subjective implication decision theory, since the agent is looking at the logical implications of their decision according to their beliefs.

We already covered observable action-consequences. Since these are falsifiable, they have clear semantics in the ontology. So we will in general assume observable rewards, as in reinforcement learning, while leaving unobservable goals for later work.

Now let’s look at a sequence of decision theory problems. We will assume, as before, the existence of some agent that falsifiably believes itself to run on at least one computer, C.

5 and 10

Assume the agent is before a table containing a 5 dollar bill and a 10 dollar bill. The agent will decide which dollar bill to take. Thereafter, the agent will receive a reward signal: 5 if the 5 dollar bill is taken, and 10 if the 10 dollar bill is taken.

The agent may have the following beliefs about action-consequences: “If I take action 5, then I will get 5 reward. If I take action 10, then I will get 10 reward.” These beliefs follow directly from the problem description. Notably, the beliefs include beliefs about actions that might not actually be taken; it is enough that these actions are possible for their consequences to be falsifiable.

Now, how do we translate these beliefs about action-consequences into decisions? The most straightforward way to do so is to select the policy that is believed to return the most reward. (This method is ambiguous under conditions of partial knowledge, though that is not a problem for 5 and 10).

This method (which I will call “subjective implication decision theory”) yields the action 10 in this case.

This is all extremely straightforward. We directly translated the problem description into a set of beliefs about action consequences. And these beliefs, along with the rule of subjective implication decision theory, yield an optimal action.

The difficulty of 5 and 10 comes when the problem is naturalized. The devil is in the details: how to naturalize the problem? The previous post examined a case of both external and internal physics, compatible with free will. There is no obvious obstacle to translating these physical beliefs to the 5 and 10 case: the dollar bills may be hypothesized to follow physical laws, as may the computer C.

Realistically, the agent should assume that the proximate cause of the selection of the dollar bill is not their action, but C’s action. Recall that the agent falsifiably believes it runs on C, in the sense that its observations/actions necessarily equal C’s.

Now, “I run on C” implies in particular: “If I select ‘pick up the 5 dollar bill’ at time t, then C does. If I select ‘pick up the 10 dollar bill’ at time t, then C does.” And the assumption that C controls the dollar bill implies: “If C selects ‘pick up the 5 dollar bill’ at time t, then the 5 dollar bill will be held at some time between t and t+k”, and also for the 10 dollar bill (for some k that is an upper bound of the time it takes for the dollar bill to be picked up). Together, these beliefs imply: “If I select ‘pick up the 5 dollar bill’ at time t, then the 5 dollar bill will be held at some time between t and t+k”, and likewise for the 10 dollar bill. At this point, the agent’s beliefs include ones quite similar to the ones in the non-naturalized case, and so subjective implication decision theory selects the 10 dollar bill.
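The chain of beliefs above can be sketched as a short program. This encoding is my own, purely illustrative (the function names and action labels are assumptions, not from the problem statement): each falsifiable belief becomes a function, and the decision rule picks the action with the highest believed reward.

```python
# Minimal sketch of subjective implication decision theory on naturalized
# 5 and 10. The encoding is an illustrative assumption.

def my_action_to_C_action(action):
    # "I run on C": if I select an action at time t, then C does.
    return action

def C_action_to_outcome(c_action):
    # "C controls the dollar bill": C's selection determines which bill is held.
    return {"take5": 5, "take10": 10}[c_action]

def believed_reward(action):
    # Compose the two beliefs into a belief about action-consequences.
    return C_action_to_outcome(my_action_to_C_action(action))

def subjective_implication_decision(actions):
    # Select the action believed to yield the most reward.
    return max(actions, key=believed_reward)

print(subjective_implication_decision(["take5", "take10"]))  # take10
```

The composition step mirrors the prose: beliefs about “my action” chain through beliefs about C into beliefs about rewards, recovering the non-naturalized case.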

Twin prisoner’s dilemma

Consider an agent that believes itself to run on computer C. It also believes there is another computer, C’, which has identical initial state and dynamics to C.

Each computer will output an action; the agent will receive 10 reward if C’ cooperates, plus 1 reward if C defects (and 0 additional reward if C cooperates).

As in 5 and 10, the agent believes: “If I cooperate, C cooperates. If I defect, C defects.” However, this does not specify the behavior of C’ as a function of the agent’s action.

It can be noted at this point that, because the agent believes C’ has identical initial state and dynamics to C, the agent believes (falsifiably) that C’ must output the same actions as C on each time step, as long as C and C’ receive identical observations. Since, in this setup, observations are assumed to be equal until C receives the reward (with C’ perhaps receiving a different reward), these beliefs imply: “If I cooperate, C’ cooperates. If I defect, C’ defects”.

In total we now have: “If I cooperate, C and C’ both cooperate. If I defect, C and C’ both defect”. Thus the agent believes itself to be straightforwardly choosing between a total reward of 10 for cooperation, and a total of 1 reward for defection. And so subjective implication decision theory cooperates.
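Under these beliefs, the choice reduces to a direct payoff comparison. A minimal sketch, assuming the payoff encoding stated above (10 for C’ cooperating, plus 1 for C defecting):

```python
# Twin prisoner's dilemma as believed by the agent: C copies "my action",
# and C' copies C. Encoding is illustrative.

def believed_total_reward(my_action):
    c_action = my_action        # "If I cooperate, C cooperates; if I defect, C defects."
    c_prime_action = c_action   # identical initial state, dynamics, and observations
    reward = 0
    if c_prime_action == "cooperate":
        reward += 10
    if c_action == "defect":
        reward += 1
    return reward

best = max(["cooperate", "defect"], key=believed_total_reward)
print(best, believed_total_reward("cooperate"), believed_total_reward("defect"))
# cooperate 10 1
```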

Note that this comes apart from the conventional interpretation of CDT, which considers interventions on C’s action, rather than on “my action”. CDT’s hypothesized intervention updates C but not C’, as C and C’ are physically distinct.

Newcomb’s problem

This is very much similar to twin prisoner’s dilemma. The agent may falsifiably believe: “The Predictor filled box A with $1,000,000 if and only if I will choose only box A.” From here it is straightforward to derive that the agent believes: “If I choose to take only box A, then I will have $1,000,000. If I choose to take both boxes, then I will have $1,000.” Hence subjective implication decision theory selects only box A.
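As a sketch, the believed payouts can be computed directly; the dollar amounts follow the problem statement, and the rest of the encoding is my own illustrative assumption:

```python
# Newcomb's problem as believed by the agent: the Predictor filled box A
# with $1,000,000 iff "I will choose only box A". Box B always holds $1,000.

def believed_payout(choice):
    # The prediction equals my choice, so box A is full iff I one-box.
    box_a = 1_000_000 if choice == "one-box" else 0
    box_b = 1_000
    if choice == "one-box":
        return box_a
    return box_a + box_b

best_choice = max(["one-box", "two-box"], key=believed_payout)
print(best_choice)  # one-box
```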

The usual dominance argument for selecting both boxes does not apply. The agent is not considering interventions on C’s action, but rather on “my action”, which is falsifiably predicted to be identical with C’s action.

Counterfactual mugging

In this problem, a Predictor flips a coin; if the coin is heads, the Predictor asks the agent for $10 (and the agent may or may not give it); if the coin is tails, the Predictor gives the agent $1,000,000 iff the Predictor predicts the agent would have given $10 in the heads case.

We run into a problem with translating this to a critical agential ontology. Since both branches don’t happen in the same world, it is not possible to state the Predictor’s accuracy as a falsifiable statement, as it relates two incompatible branches.

To avoid this problem, we will say that the Predictor predicts the agent’s behavior ahead of time, before flipping the coin. This prediction is not told to the agent in the heads case.

Now, the agent falsifiably believes the following:

  • If the coin is heads, then the Predictor’s prediction is equal to my choice.
  • If the coin is tails, then I get $1,000,000 if the Predictor’s prediction is that I’d give $10, otherwise $0.
  • If the coin is heads, then I get $0 if I don’t give the predictor $10, and -$10 if I do give the predictor $10.

From the last point, it is possible to show that, after the agent observes heads, the agent believes they get $0 if they don’t give $10, and -$10 if they do give $10. So subjective implication decision theory doesn’t pay.

This may present a dynamic inconsistency, in that the agent’s decision does not agree with what they would previously have wished they would decide. Let us examine this.

In a case where the agent chooses their action before the coin flip, the agent believes that, if they will pay up, the Predictor will predict this, and likewise for not paying up. Therefore, the agent believes they will get $1,000,000 if they decide to pay up and then the coin comes up tails.

If the agent weights the heads/tails branches evenly, then the agent will decide to pay up. This presents a dynamic inconsistency.
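The inconsistency can be exhibited numerically. This sketch assumes the agent weights the heads/tails branches evenly, as above; the encoding is illustrative:

```python
# Counterfactual mugging: compare the decision made before the coin flip
# with the decision made after observing heads.

def ex_ante_value(policy_pays):
    # Before the flip: the Predictor's prediction equals my (pre-committed) choice.
    heads_value = -10 if policy_pays else 0
    tails_value = 1_000_000 if policy_pays else 0
    return 0.5 * heads_value + 0.5 * tails_value

def ex_post_value_given_heads(pays_now):
    # After observing heads: the tails branch is off the table; paying just loses $10.
    return -10 if pays_now else 0

print(ex_ante_value(True) > ex_ante_value(False))  # True: commit to paying
print(ex_post_value_given_heads(True) > ex_post_value_given_heads(False))  # False: don't pay
```

The two comparisons disagree, which is exactly the dynamic inconsistency described above.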

My sense is that this inconsistency should be resolved by considering theories of identity other than closed individualism. That is, it seems possible that the abstraction of receiving an observation and taking an action on each time step, while having a linear lifetime, is not a good-enough fit for the counterfactual mugging problem to achieve dynamic consistency.

Conclusion

It seems that subjective implication decision theory agrees with timeless decision theory and evidential decision theory on the problems considered, while diverging from causal decision theory and functional decision theory.

I consider this a major advance, in that the ontology is more cleanly defined than the ontology of timeless decision theory, which considers interventions on logical facts. It is not at all clear what it means to “intervene on a logical fact”; the ontology of logic does not natively contain the affordance of intervention. The motivation for considering logical interventions was the belief that the agent is identical with some computation, such that its actions are logical facts. Critical agential ontology, on the other hand, does not say the agent is identical with any computation, but rather that the agent effectively runs on some computer (which implements some computation), while still being metaphysically distinct. Thus, we need not consider “logical counterfactuals” directly; rather, we consider subjective implications, and consider whether these subjective implications are consistent with the agent effectively running on some computer.

To handle cases such as counterfactual mugging in a dynamically consistent way (similar to functional decision theory), I believe that it will be necessary to consider agents outside the closed-individualist paradigm, in which one is assumed to have a linear lifetime with memory and observations/actions on each time step. However, I have not yet explored this direction.

[ED NOTE: After the time of writing I realized subjective implication decision theory, being very similar to proof-based UDT, has problems with spurious counterfactuals by default, but can similarly avoid these problems by “playing chicken with the universe”, i.e. taking some action it has proven it will not take.]

A critical agential account of free will, causation, and physics

This is an account of free choice in a physical universe. It is very much relevant to decision theory and philosophy of science. It is largely metaphysical, in terms of taking certain things to be basically real and examining what can be defined in terms of these things.

The starting point of this account is critical and agential. By agential, I mean that the ontology I am using is from the point of view of an agent: a perspective that can, at the very least, receive observations, have cognitions, and take actions. By critical, I mean that this ontology involves uncertain conjectures subject to criticism, such as criticism of being logically incoherent or incompatible with observations. This is very much in a similar spirit to critical rationalism.

Close attention will be paid to falsifiability and refutation, principally for ontological purposes, and secondarily for epistemic purposes. Falsification conditions specify the meanings of laws and entities relative to the perspective of some potentially falsifying agent. While the agent may believe in unfalsifiable entities, falsification conditions will serve to precisely pin down that which can be precisely pinned down.

I have only seen “agential” used in the philosophical literature in the context of agential realism, a view I do not understand well enough to comment on. I was tempted to use “subjective”; however, while subjects have observations, they do not necessarily have the ability to take actions. Thus I believe “agential” has a more concordant denotation.

You’ll note that my notion of “agent” already assumes one can take actions. Thus, a kind of free will is taken as metaphysically basic. This presupposition may cause problems later. However, I will try to show that, if careful attention is paid, the obvious problems (such as contradiction with determinism) can be avoided.

The perspective in this post can be seen as starting from agency, defining consequences in terms of agency, and defining physics in terms of consequences. In contrast, the most salient competing decision theory views (including framings of CDT, EDT, and FDT) define agency in terms of consequences (“expected utility maximization”), and consequences in terms of physics (“counterfactuals”). So I am rebasing the ontological stack, turning it upside-down. This is less absurd than it first appears, as will become clear.

(For simplicity, assume observations and actions are both symbols taken from some finite alphabet.)

Naive determinism

Let’s first, within a critical agential ontology, disprove some very basic forms of determinism.

Let A be some action. Consider the statement: “I will take action A”. An agent believing this statement may falsify it by taking any action B not equal to A. Therefore, this statement does not hold as a law. It may be falsified at will.

Let f() be some computable function returning an action. Consider the statement: “I will take action f()”. An agent believing this statement may falsify it by taking an action B not equal to f(). Note that, since the agent is assumed to be able to compute things, f() may be determined. So, indeed, this statement does not hold as a law, either.

This contradicts a certain strong formulation of naive determinism: the idea that one’s action is necessarily determined by some known, computable function.
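The diagonalization argument above can be run as a program. A minimal sketch, assuming a two-element action alphabet (the names are mine, purely illustrative):

```python
# Refutation-at-will: an agent that can compute f() can always take a
# different action, falsifying "I will take action f()".

ACTIONS = ["A", "B"]  # finite action alphabet (an assumption of this sketch)

def f():
    # Any computable prediction of the agent's action.
    return "A"

def diagonalizing_agent():
    predicted = f()  # the agent computes f() ...
    # ... and then takes any action not equal to the prediction.
    return next(a for a in ACTIONS if a != predicted)

print(diagonalizing_agent() != f())  # True: the purported law is falsified
```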

Action-consequences

But wait, what about physics? To evaluate what physical determinism even means, we need to translate physics into a critical agential ontology. However, before we turn to physics, we will first consider action-consequences, which are easier to reason about.

Consider the statement: “If I take action A, I will immediately thereafter observe O.” This statement is falsifiable, which means that if it is false, there is some policy the agent can adopt that will falsify it. Specifically, the agent may adopt the policy of taking action A. If the agent will, in fact, not observe O after taking this action, then the agent will learn this, falsifying the statement. So the statement is falsifiable.

Finite conjunctions of falsifiable statements are themselves falsifiable. Therefore, the conjunction “If I take action A, I will immediately thereafter observe O; if I take action B, I will immediately thereafter observe P” is, likewise, falsifiable.

Thus, the agent may have falsifiable beliefs about observable consequences of actions. This is a possible starting point for decision theory: actions having consequences is already assumed in the ontology of VNM utility theory.

Falsification and causation

Now, the next step is to account for physics. Luckily, the falsificationist paradigm was designed around demarcating scientific hypotheses, such that it naturally describes physics.

Interestingly, falsificationism takes agency (in terms of observations, computation, and action) as more basic than physics. For a thing to be falsifiable, it must be able to be falsified by some agent, seeing some observation. And the word able implies freedom.

Let’s start with some basic Popperian logic. Let f be some testable function (say, connected to a computer terminal) taking in a natural number and returning a Boolean. Consider the hypothesis: “For all x, f(x) is true”. This statement is falsifiable: if it’s false, then there exists some action-sequence an agent can take (typing x into the terminal, one digit at a time) that will prove it to be false.

The given hypothesis is a kind of scientific law. It specifies a regularity in the environment.

Note that there is a “bridge condition” at play here. That bridge condition is that the function f is, indeed, connected to the terminal, such that the agent’s observations of f are trustworthy. In a sense, the bridge condition specifies what f is, from the agent’s perspective; it allows the agent to locate f as opposed to some other function.

Let us now consider causal hypotheses. We already considered action-consequences. Now let us extend this analysis to reasoning about causation between external entities.

Consider the hypothesis: “If the match is struck, then it will alight immediately”. This hypothesis is falsifiable by an agent who is able to strike the match. If the hypothesis is false, then the agent may refute it by choosing to strike the match and then seeing the result. However, an agent who is unable to strike the match cannot falsify it. (Of course, this assumes the agent may see whether the match is alight after striking it.)

Thus, we are defining causality in terms of agency. The falsification conditions for a causal hypothesis refer to the agent’s abilities. This seems somewhat wonky at first, but it is quite similar to Pearlian causality, which defines causation in terms of metaphysically-real interventions. This order of definition radically reframes the apparent paradox of determinism vs. free will, by defining the conditions of determinism (causality) in terms of potential action.

External physics

Let us now continue, proceeding to more universal physics. Consider the law of gravity, according to which a dropped object will accelerate downward at a near-constant rate. How might we port this law into an agential ontology?

Here is how we will assume the agent interacts with gravity. The agent will choose some natural number as the height of an object. Thereafter, the object will fall, while a camera will record the height of the object at each natural-number time expressed in milliseconds, to the nearest natural-number millimeter from the ground. The agent may observe a printout of the camera data afterwards.

Logically, constant gravity implies, and is implied by, a particular quadratic formula for the height of the object as a function of the object’s starting height and the amount of time that has passed. This formula implies the content of the printout, as a function of the chosen height. So, the agent may falsify constant gravity (in the observable domain) by choosing an object-height, placing an object at that height, letting it fall, and checking the printout, which will show the law of constant gravity to be false, if the law in fact does not hold for objects dropped at that height (to the observed level of precision).
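As a sketch of this falsification procedure, in the setup’s units of milliseconds and millimeters (the numbers and encoding here are illustrative assumptions, not part of the setup):

```python
# Falsification check for domain-limited constant gravity.

G = 0.00981  # mm per ms^2, approximately 9.81 m/s^2

def predicted_height(h0_mm, t_ms):
    # The quadratic formula implied by constant gravity (before impact),
    # rounded to the camera's millimeter precision.
    return max(0, round(h0_mm - 0.5 * G * t_ms ** 2))

def falsifies_constant_gravity(h0_mm, printout):
    # printout: list of (time_ms, observed_height_mm) pairs from the camera.
    # The law is refuted if any observation disagrees with the prediction.
    return any(observed != predicted_height(h0_mm, t) for t, observed in printout)

# A printout consistent with constant gravity does not falsify the law:
consistent = [(t, predicted_height(1000, t)) for t in range(0, 100, 10)]
print(falsifies_constant_gravity(1000, consistent))  # False
```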

Universal constant gravity is not similarly falsifiable by this agent, because this agent may only observe this given experimental setup. However, a domain-limited law, stating that the law of constant gravity holds for all possible object-heights in this setup, up to the camera’s precision, is falsifiable.

It may seem that I am being incredibly pedantic about what a physical law is and what the falsification conditions are; however, I believe this level of pedantry is necessary for critically examining the notion of physical determinism to a high-enough level of rigor to check interaction with free will.

Internal physics

We have, so far, considered the case of an agent falsifying a physical law that applies to an external object. To check interaction with free will, we must interpret physical law applied to the agent’s internals, on which the agent’s cognition is, perhaps, running in a manner similar to software.

Let’s consider the notion that the agent itself is “running on” some Turing machine. We will need to specify precisely what such “running on” means.

Let C be the computer that the agent is considering whether it is running on. C has, at each time, a tape-state, a Turing machine state, an input, and an output. The input is attached to a sensor (such as a camera), and the output is attached to an actuator (such as a motor).

For simplicity, let us say that the history of tapes, states, inputs, and outputs is saved, such that it can be queried at a later time.

We may consider the hypothesis that C, indeed, implements the correct dynamics for a given Turing machine specification. These dynamics imply a relation between future states and past states. An agent may falsify these dynamics by checking the history and seeing if the dynamics hold.

Note that, because some states or tapes may be unreachable, it is not possible to falsify the hypothesis that C implements correct dynamics starting from unreachable states. Rather, only behavior following from reachable states may be checked.

Now, let us think on an agent considering whether they “run on” this computer C. The agent may be assumed to be able to query the history of C, such that it may itself falsify the hypothesis that C implements Turing machine specification M, and other C-related hypotheses as well.

Now, we can already name some ways that “I run on C” may be falsified:

  • Perhaps there is a policy I may adopt, and a time t, such that if I implement this policy, I will observe O at time t, but C will observe something other than O at time t.
  • Perhaps there is a policy I may adopt, and a time t, such that if I implement this policy, I will take action A at time t, but C will take an action other than A at time t.

The agent may prove these falsification conditions by adopting a given policy until some time t, and then observing C’s observation/action at time t, compared to their own observation/action.

I do not argue that the converse of these conditions exhausts what it means that “I run on C”. However, they at least restrict the possibility space by a very large amount. For the falsification conditions given to not hold, the observations and behavior of C must be identical with the agent’s own observations and behavior, for all possible policies the agent may adopt.
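These falsification conditions amount to a history comparison. A minimal sketch, with histories encoded as lists of (observation, action) pairs (an assumed encoding, not from the text):

```python
# Sketch of the "I effectively run on C" falsification conditions: under an
# adopted policy, compare my observation/action history with C's, step by step.

def falsifies_effectively_runs_on(my_history, c_history):
    # The hypothesis is refuted if, at some time t, my observation or action
    # differs from C's observation or action at time t.
    return any(mine != cs for mine, cs in zip(my_history, c_history))

mine = [("see_table", "take10"), ("see_reward", "noop")]
print(falsifies_effectively_runs_on(mine, list(mine)))                # False
print(falsifies_effectively_runs_on(mine, [("see_table", "take5")]))  # True
```

Note that the check passes for any computer whose history matches, which is why the agent can consistently hypothesize itself to effectively run on multiple computers at once.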

I will name the hypothesis with the above falsification conditions: “I effectively run on C”. This conveys that these conditions may not be exhaustive, while still being quite specific, and relating to effects between the agent and the environment (observations and actions).

Note that the agent can hypothesize itself to effectively run on multiple computers! The conditions for effectively running on one computer do not contradict the conditions for effectively running on another computer. This naturally handles cases of identical physical instantiations of a single agent.

At this point, we have an account of an agent who:

  • Believes they have observations and take free actions
  • May falsifiably hypothesize physical law
  • May falsifiably hypothesize that some computer implements a Turing machine specification
  • May falsifiably hypothesize that they themselves effectively run on some computer

I have not yet shown that this account is consistent. There may be paradoxes. However, this at least represents the subject matter covered in a unified critical agential ontology.

Paradoxes sought and evaluated

Let us now seek out paradox. We showed before that the hypothesis “I take action f()” may be refuted at will, and therefore does not hold as a necessary law. We may suspect that “I effectively run on C” runs into similar problems.

Self-contradiction

Remember that, for the “I effectively run on C” hypothesis to be falsified, it must be falsified at some time, at which the agent’s observation/action comes apart from C’s. In the “I take action f()” case, we had the agent simulate f() in order to take an opposite action. However, C need not halt, so the agent cannot simulate C until halting. Instead, the agent may select some time t, and run C for t steps. But, by the time the agent has simulated C for t steps, the time is already past t, and so the agent may not contradict C’s behavior at time t, by taking an opposite action. Rather, the agent only knows what C does at time t at some time later than t, and only their behavior after this time may depend on this knowledge.

So, this paradox is avoided by the fact that the agent cannot contradict its own action before knowing it, but cannot know it before taking it.

We may also try to create a paradox by assuming an external super-fast computer runs a copy of C in parallel, and feeds this copy’s action on subjective time-step t into the original C’s observation before time t; this way, the agent may observe its action before it takes it. However, now the agent’s action is dependent on its observation, and so the external super-fast computer must decide which observation to feed into the parallel C. The external computer cannot know what C will do before producing this observation, and so this attempt at a paradox cannot stand without further elaboration.

We see, now, that if free will and determinism are compatible, it is due to limitations on the agent’s knowledge. The agent, knowing it runs on C, cannot thereby determine what action it takes at time t, until a later time. And the initial attempt to provide this knowledge externally fails.

Downward causation

Let us now consider a general criticism of functionalist views, which is that of downward causation: if a mental entity (such as observation or action) causes a physical entity, doesn’t that either mean that the mental entity is physical, or that physics is not causally closed?

Recall that we have defined causation in terms of the agent’s action possibilities. It is straightforwardly the case, then, that the agent’s action at time t causes changes in the environment.

But, what of the physical cause? Perhaps it is also the case that C’s action at time t causes changes in the environment. If so, there is a redundancy, in that the change in the environment is caused both by the agent’s action and by C’s action. We will examine this possible redundancy to find potential conflicts.

To consider ways that C’s action may change the environment, we must consider how the agent may intervene on C’s action. Let us say we are concerned with C’s action at time t. Then we may consider the agent at some time u < t taking an action that will cause C’s action at time t to be over-written. For example, the agent may consider programming an external circuit that will interact with C’s circuit (“its circuit”).

However, if the agent performs this intervention, then the agent’s action at time t has no influence on C’s action at time t. This is because C’s action is, necessarily, equal to the value chosen at time u. (Note that this lack of influence means that the agent does not effectively run on C, for the notion of “effectively run on” considered! However, the agent may be said to effectively run on C with one exception.)

So, there is no apparent way to set up a contradiction between these interventions. If the agent decides early (at time u) to determine C’s action at time t, then that decision causes C’s action at time t; if the agent does not do so, then the agent’s decision at time t causes C’s action at time t; and these are mutually exclusive. Hence, there is not an apparent problem with redundant causality.

Epiphenomenalism

It may be suspected that the agent I take to be real is epiphenomenal. Perhaps all may be explained in a physicalist ontology, with no need to posit that there exists an agent that has observations and takes actions. (This is a criticism levied at some views on consciousness; my notion of metaphysically-real observations is similar enough to consciousness that these criticisms are potentially applicable.)

The question in regards to explanatory power is: what is being explained, in terms of what? My answer is: observations are being explained, in terms of hypotheses that may be falsified by action/observations.

An eliminativist perspective denies the agent’s observations, and thus fails to explain what ought to be explained, in my view. However, eliminativists will typically believe that “scientific observation” is possible, and seek to explain scientific observations.

A relevant point to make here is that the notion of scientific observation assumes there is some scientific process happening that has observations. Indeed, the scientific method includes actions, such as testing, which rely on the scientific process taking actions. Thus, scientific processes may be considered as agents in the sense I am using the term.

My view is that erasing the agency of both individual scientists, and of scientific processes, puts the ontological and epistemic status of physics on shaky ground. It is hard to say why one should believe in physics, except in terms of it explaining observations, including experimental observations that require taking actions. And it is hard to say what it means for a physical hypothesis to be true, with no reference to how the hypothesis connects with observation and action.

In any case, the specter of epiphenomenalism presents no immediate paradox, and I believe that it does not succeed as a criticism.

Comparison to Gary Drescher’s view

I will now compare my account to Gary Drescher’s view. I have found Drescher’s view to be both particularly systematic and compelling, and to be quite similar to the views of other relevant philosophers such as Daniel Dennett and Eliezer Yudkowsky. Therefore, I will compare and contrast my view with Drescher’s. This will dispel the illusion that I am not saying anything new.

Notably, Drescher makes a similar observation to mine on Pearl: “Pearl’s formalism models free will rather than mechanical choice.”

Quoting section 5.3 of Good and Real:

Why did it take that action? In pursuit of what goal was the action selected? Was that goal achieved? Would the goal have been achieved if the machine had taken this other action instead? The system includes the assertion that if the agent were to do X, then Y would (probably) occur; is that assertion true? The system does not include the assertion that if it were to do P, Q would probably occur; is that omitted assertion true? Would the system have taken some other action just now if it had included that assertion? Would it then have better achieved its goals?

Insofar as such questions are meaningful and answerable, the agent makes choices in at least the sense that the correctness of its actions with respect to its designated goals is analyzable. That is to say, there can be means-end connections between its actions and its goals: its taking an action for the sake of a goal can make sense. And this is so despite the fact that everything that will happen (including every action taken and every goal achieved or not) is inalterably determined once the system starts up. Accordingly, I propose to call such an agent a choice machine.

Drescher is defining conditions of choice and agency in terms of whether the decisions “make sense” with respect to some goal, in terms of means-end connections. This is an “outside” view of agency, in contrast with my “inside” view. That is, it says a thing is an agent when its actions connect with some goal, and when the internal logic of that thing takes this connection into account.

This is in contrast to my view, which takes agency to be metaphysically basic, and defines physical outside views (and indeed, physics itself) in terms of agency.

My view would disagree with Drescher’s on the “inalterably determined” assertion. In an earlier chapter, Drescher describes a deterministic block-universe view. This view-from-nowhere implies that future states are determinable from past states. In contrast, the view I present here rejects views-from-nowhere, instead taking the view of some agent in the universe, from whose perspective the future course is not already determined (as already argued in examinations of paradox).

Note that these disagreements are principally about metaphysics and ontology, rather than scientific predictions. I am unlikely to predict the results of scientific experiments differently from Drescher on account of this view, but am likely to account for the scientific process, causation, choice, and so on in different language, and using a different base model.

Conclusion and further research

I believe the view I have presented to be superior to competing views on multiple fronts, most especially its systematic logical and philosophical coherence. I do not make the full case for this in this post, but take the first step of explicating the basic ontology and how it accounts for phenomena that any adequate view must account for.

An obvious next step is to tackle decision theory. Both Bayesianism and VNM decision theory are quite concordant with critical agential ontology, in that they propose coherence conditions on agents, which can be taken as criticisms. Naturalistic decision theory involves reconciling choice with physics, and so a view that already includes both is a promising starting point.

Multi-agent systems are quite important as well. The view presented so far is near-solipsistic, in that there is a single agent who conceptualizes the world. It will need to be defined what it means for there to be “other” agents. Additionally, “aggregative” agents, such as organizations, are important to study, including in terms of what it means for a singular agent to participate in an aggregative agent. “Standardized” agents, such as hypothetical skeptical mathematicians or philosophers, are also worthy subjects of study; these standardized agents are relevant in reasoning about argumentation and common knowledge. Also, while the discussion so far has been in terms of closed individualism, alternative identity views such as empty individualism and open individualism are worth considering from a critical agential perspective.

Other areas of study include naturalized epistemology and philosophy of mathematics. The view so far is primarily ontological, secondarily epistemological. With the ontology in place, epistemology can be more readily explored.

I hope to explore the consequences of this metaphysics further, in multiple directions. Even if I ultimately abandon it, it will have been useful to develop a coherent view leading to an illuminating refutation.

On the falsifiability of hypercomputation, part 2: finite input streams

In part 1, I discussed the falsifiability of hypercomputation in a typed setting where putative oracles may be assumed to return natural numbers. In this setting, there are very powerful forms of hypercomputation (at least as powerful as each level of the arithmetical hierarchy) that are falsifiable.

However, as Vanessa Kosoy points out, this typed setting has difficulty applying to the real world, where agents may only observe a finite number of bits at once:

The problem with constructive halting oracles is, they assume the ability to output an arbitrary natural number. But, realistic agents can observe only a finite number of bits per unit of time. Therefore, there is no way to directly observe a constructive halting oracle. We can consider a realization of a constructive halting oracle in which the oracle outputs a natural number one digit at a time. The problem is, since you don’t know how long the number is, a candidate oracle might never stop producing digits. In particular, take any non-standard model of PA and consider an oracle that behaves accordingly. On some machines that don’t halt, such an oracle will claim they do halt, but when asked for the time it will produce an infinite stream of digits. There is no way to distinguish such an oracle from the real thing (without assuming axioms beyond PA).

This is an important objection. I will address it in this post by considering only oracles which return Booleans. In this setting, there is a form of hypercomputation that is falsifiable, although this hypercomputation is less powerful than a halting oracle.

Define a binary Turing machine to be a machine that outputs a Boolean (0 or 1) whenever it halts. Each binary Turing machine either halts and outputs 0, halts and outputs 1, or never halts.
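
As a concrete toy, the three behaviors can be modeled in Python, with a machine represented as a generator that yields while computing and returns its output bit if it halts (the encoding and all names here are illustrative assumptions, not part of the formalism):

```python
# Toy model: a binary Turing machine is a Python generator that yields
# while computing and returns 0 or 1 if it ever halts.

def halts_with(machine, max_steps):
    """Run `machine` for up to `max_steps` steps.
    Returns its output bit if it halts within the budget, else None."""
    gen = machine()
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration as stop:
            return stop.value  # the bit the machine output on halting
    return None  # did not halt within the budget

def returns_one():   # halts and outputs 1
    return 1
    yield            # unreachable; marks this as a generator function

def loops_forever(): # never halts
    while True:
        yield
```

Under this model, `halts_with(returns_one, 10)` observes a halt with output 1, while `loops_forever` exhausts any step budget.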

Define an arbitration oracle to be a function that takes as input a specification of a binary Turing machine, and always outputs a Boolean in response. This oracle must always return 0 if the machine eventually outputs 0, and must always return 1 if the machine eventually outputs 1; it may decide arbitrarily if the machine never halts. Note that this can be emulated using a halting oracle, and is actually less powerful. (This definition is inspired by previous work on reflective oracles.)

The hypothesis that a putative arbitration oracle (with the correct type signature, MachineSpec → Boolean) really is one is falsifiable. Here is why:

  1. Suppose for some binary Turing machine M that halts and returns 1, the oracle O wrongly has O(M) = 0. Then this can be proven by exhibiting M along with the number of steps required for the machine to halt.
  2. Likewise if M halts and returns 0, and the oracle O wrongly has O(M) = 1.
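
The two cases above can be checked mechanically: a falsification certificate is just a pair (M, t), and verifying it only requires running M for t steps and comparing the result with the oracle’s answer. A minimal Python sketch (the machine encoding and all names are assumptions for illustration):

```python
# Check a falsification certificate (machine, steps) against a putative
# arbitration oracle.  Machines are modeled as Python generators that
# yield while running and return their output bit if they halt.

def run_for(machine, steps):
    gen = machine()
    for _ in range(steps):
        try:
            next(gen)
        except StopIteration as stop:
            return stop.value        # machine halted with this bit
    return None                      # still running after `steps` steps

def certificate_falsifies(oracle, machine, steps):
    """True iff `machine` halts within `steps` steps with a bit
    different from the oracle's answer on it."""
    bit = run_for(machine, steps)
    return bit is not None and oracle(machine) != bit

def outputs_one():                   # halts and outputs 1
    return 1
    yield

bad_oracle = lambda m: 0             # wrongly answers 0 on every machine
```

Here `certificate_falsifies(bad_oracle, outputs_one, 5)` succeeds, exhibiting case 1.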

Since the property of some black-box being an arbitration oracle is falsifiable, we need only show at this point that there is no computable arbitration oracle. For this proof, assume (for the sake of contradiction) that O is a computable arbitration oracle.

Define a binary Turing machine N() := 1 - O(N). This definition requires quining, but that is acceptable for the usual reasons. Note that N always halts, as O always halts. Therefore we must have N() = O(N). However, we also have N() = 1 - O(N), a contradiction (as O(N) is a Boolean).

Therefore, there is no computable arbitration oracle.
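
This diagonalization can be run concretely against any particular computable candidate; in the Python sketch below, passing N itself to the candidate stands in for the quining step (names are illustrative assumptions):

```python
# For a computable candidate oracle O, build N() := 1 - O(N) and expose
# the contradiction: N halts, yet its output differs from O's claim.

def diagonalize(candidate_oracle):
    def N():
        return 1 - candidate_oracle(N)  # N consults the candidate about itself
    claimed = candidate_oracle(N)       # the bit the candidate says N outputs
    actual = N()                        # the bit N actually outputs (N halts)
    return claimed, actual
```

For the candidate that always answers 0, `diagonalize` returns claimed 0 but actual 1; for the candidate that always answers 1, claimed 1 but actual 0. In every case the claim and the actual output disagree.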

Higher hypercomputation?

At this point, it is established that there is a form of hypercomputation (specifically, arbitration oracles) that is falsifiable. But, is this universal? That is, is it possible that higher forms of hypercomputation are falsifiable in the same setting?

We can note that it’s possible to use an arbitration oracle to construct a model of PA, one statement at a time. To do this, first note that for any statement, it is possible to construct a binary Turing machine that returns 1 if the statement is provable, 0 if it is disprovable, and never halts if neither is the case. So we can iterate through all PA statements, and use an arbitration oracle to commit to that statement being true or false, on the basis of provability/disprovability given previous commitments, in a way that ensures that commitments are never contradictory (as long as PA itself is consistent). This is essentially the same construction idea as in the Demski prior over logical theories.
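
The commitment loop can be illustrated with a decidable stand-in for PA. In the Python toy below, statements are propositional atoms and “provable from the commitments so far” means true in every remaining model; because this is decidable, the arbitration-oracle step here is just a computable choice, so the sketch illustrates only the loop’s structure, not the hypercomputation (all names are illustrative assumptions):

```python
# Toy commitment procedure: iterate through statements, committing each
# to true or false so that the commitments stay jointly consistent.
from itertools import product

ATOMS = ["p", "q", "r"]

def models(commitments):
    """All truth assignments over ATOMS consistent with the commitments."""
    out = []
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(v[a] == tv for a, tv in commitments.items()):
            out.append(v)
    return out

def commit_all():
    commitments = {}
    for atom in ATOMS:
        ms = models(commitments)
        if all(m[atom] for m in ms):        # "provable": forced true
            commitments[atom] = True
        elif all(not m[atom] for m in ms):  # "disprovable": forced false
            commitments[atom] = False
        else:                               # neither: the oracle may choose
            commitments[atom] = True
    return commitments
```

Each pass either commits to a forced value or makes an arbitrary choice, and the resulting commitments remain jointly consistent (some model always survives).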

Suppose there were some PA-definable property P that a putative oracle O (mapping naturals to Booleans) must have (e.g. the property of being a halting oracle, for some encoding of Turing machines as naturals). Then, conditional on the PA-consistency of the existence of an oracle with property P, we can use the above procedure to construct a model of PA + existence of O satisfying P (i.e. a theory that says what PA says and also contains a function symbol O that axiomatically satisfies P). For any PA-definable statement about this oracle, this procedure will, at some finite time, have made a commitment about this statement.

So, access to an arbitration oracle allows emulating any other PA-definable oracle, in a way that will not be falsified by PA. It follows that hypercomputation past the level of arbitration oracles is not falsifiable by a PA-reasoner who can access the oracle, as PA cannot rule out that it is actually looking at something produced by only arbitration-oracle levels of hypercomputation.

Moreover, giving the falsifier access to an arbitration oracle can’t increase the range of oracles that are falsifiable. This is because, for any oracle-property P, we may consider a corresponding property on an oracle-pair (which may be represented by a single oracle-property through interleaving), stating that the first oracle is an arbitration oracle and the second satisfies property P. This oracle-pair property is falsifiable iff P is falsifiable by a falsifier with access to an arbitration oracle: we may run a joint search for falsifications, which simultaneously tries to prove that the first oracle isn’t an arbitration oracle, and tries to prove that the second oracle doesn’t satisfy P under the assumption that the first is an arbitration oracle. Since the oracle-pair property is PA-definable, it is emulable by a Turing machine with access to an arbitration oracle, and so the pair property is unfalsifiable if it requires hypercomputation past arbitration oracles. But this implies that the original property P is unfalsifiable by a falsifier with access to an arbitration oracle, if P requires hypercomputation past arbitration oracles.

So, arbitration oracles form a ceiling on what can be falsified unassisted, and also are unable to assist in falsifying higher levels of hypercomputation.

Conclusion

Given that arbitration oracles form a ceiling of computable falsifiability (in the setting considered here, which is distinct from the setting of the previous post), it may or may not be possible to define a logic that allows reasoning about levels of computation up to arbitration oracles, but which does not allow computation past arbitration oracles to be defined. Such a project could substantially clarify logical foundations for mathematics, computer science, and the empirical sciences.