What is metaphysical free will?

This is an attempt to explain metaphysical free will. In doing so, it also serves to illustrate metaphysics in general.

First: on the distinction between subject-properties and object-properties. The subject-object relation holds between some subject and some object. For example, a person might be a subject looking at a table, which is an object. Objects are, roughly, entities that could potentially be beheld by some subject.

Metaphysical free will is a property of subjects rather than objects. This will make more sense if I first contrast it with object-properties.

Objects can be defined by some properties: location, color, temperature, and so on. These properties yield testable predictions. Objects that are hot will be painful to touch, for example.

Object properties are best-defined when they are closely connected with testable predictions. The logical positivist program, though ultimately unsuccessful, is quite effective when applied to defining object properties. Similarly, the falsificationist program is successful in clarifying the meaning of a variety of scientific hypotheses in terms of predictions.

Intuitively, free will has to do with the ability of someone to choose from one of multiple options. This implies a kind of unpredictability, at least from the perspective of the one making the choice.

Hence, there is a tension in considering free will as an object-property, in that object-properties are about predictable relations, whereas free will is about choice. (Probabilistic randomness would not help much either, as e.g. taking an action with 50% probability does not match the intuitive notion of choice)

The most promising attempts to define free will as an object-property are within the physicalist school that includes Gary Drescher and Daniel Dennett. These define choice in terms of optimization: selection of the best action from a list of options, based upon anticipated consequences. This remains an object-property, because it yields a testable prediction: that the chosen action will be the one that is predicted to lead to the best consequences (and if the agent is well-informed, one that actually will). Drescher calls this “mechanical choice”.
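“Mechanical choice” in this sense can be sketched as code. This is a minimal illustration, not an implementation from Drescher or Dennett; the options, prediction model, and evaluation function are all made-up assumptions.

```python
# A hedged sketch of "mechanical choice": select the action whose
# anticipated consequence evaluates best. All names and values here
# are illustrative, not drawn from any particular text.

def mechanical_choice(options, predict, evaluate):
    """Return the option whose predicted consequence scores highest."""
    return max(options, key=lambda action: evaluate(predict(action)))

# Toy model: taking the umbrella is predicted to keep us dry.
options = ["take umbrella", "leave umbrella"]
predict = {"take umbrella": "dry", "leave umbrella": "wet"}.get
evaluate = {"dry": 1.0, "wet": 0.0}.get

chosen = mechanical_choice(options, predict, evaluate)
```

Note the testable prediction built into the definition: the chosen action is exactly the one predicted to lead to the best consequences, which is what makes this an object-property.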

I will now contrast object-properties (including mechanical choice) with subject-properties.

The distinction between subjects and objects is, to a significant extent, grammatical. Subjects do things, objects have things done to them. “I repaired the table with some glue.”

It is easy to detect notions of choice in ordinary language. “I could have gone to the store but I chose not to”; “you don’t have to do all that work”; “this software has so many options and capabilities”.

Functional definitions of objects are often given in terms of the capabilities the subject has in using the object. For example, an axe can (roughly) be defined as an object that can be swung to hit another object and create a rift.

The desiderata of products, including software, are about usability. The desire is for an object that can be used in a number of ways.

Moral language, too, refers to capabilities. What one should do depends on what one can do; see Ought implies Can.

We could say, then, that this sort of subjunctive language is tied to orienting towards reality in a certain way. The orientation is, specifically, about noticing the capabilities that one’s self (and perhaps others) have, and communicating about these capabilities. I find that replacing the word “metaphysics” with the word “orientation” is often illuminating.

When this orientation is coupled with language, the language describes itself as between observation and action. That is: we talk as if we may take action on the basis of our speech. Thus, our language refers to, among other things, our capabilities, which are decision-relevant. This is in contrast to thinking of language as a side effect, or as an action in itself.

This could be studied in AI terms. An AI may be programmed to assume it has control of “its action”, and may have a model of what the consequences of various actions are, which correspond to its capabilities. From the AI’s perspective, it has a choice among multiple actions, hence in a sense “believing in metaphysical free will”. To program an AI to take effective actions, it isn’t sufficient for it to develop a model of what is; it must also develop a model of what could be made to happen. (The AI may, like a human, generate verbal reports of its capabilities, and select actions on the basis of these verbal reports)
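The distinction between a model of “what is” and a model of “what could be made to happen” can be made concrete. The following is a hedged sketch (the state, actions, and transition model are illustrative assumptions, not any specific AI system):

```python
# An agent models not only the current state ("what is") but also a
# transition function over its available actions ("what could be made
# to happen") -- a model of its capabilities.

def transition(state, action):
    """The agent's model of what each action would make happen."""
    if action == "flip_switch":
        return {"light": "on" if state["light"] == "off" else "off"}
    return dict(state)  # "wait" leaves the state unchanged

current_state = {"light": "off"}
actions = ["flip_switch", "wait"]

# From the agent's perspective, multiple futures are reachable: a minimal
# sense in which it has a choice among actions.
reachable = {a: transition(current_state, a) for a in actions}

def goal(state):
    return state["light"] == "on"

# Effective action requires the capability model, not just the state:
chosen = next(a for a in actions if goal(transition(current_state, a)))
```

The point of the sketch is that `reachable` is not derivable from `current_state` alone; the agent needs the extra structure of `transition` to act effectively.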

Even relatively objective ways of orienting towards reality notice capabilities. I’ve already noted the phenomenon of functional definitions. If you look around, you will see many objects, and you will also likely notice affordances: ways these objects may be used. It may seem that these affordances inhere in the objects, although it would be more precise to say that affordances exist in the subject-object relationship rather than the object itself, as they depend on the subject.

Metaphysics isn’t directly an object of scientific study, but can be seen in the scientific process itself, in the way that one must comport one’s self towards reality to do science. This comportment includes tool usage, logic, testing, observation, recording, abstraction, theorizing, and so on. The language scientists use in the course of their scientific study, and their communication about the results, reveals this metaphysics.

(Yes, recordings of scientific practice may be subject to scientific study, but interpreting the raw data of the recordings as e.g. “testing” requires a theory bridging between the objective recorded data and whatever “testing” is, where “testing” is naively a type of intentional action)

Upon noticing choice in one’s metaphysics, one may choose to philosophize on it, to see if it holds up to consistency checks. If the metaphysics leads to inconsistencies, then it should be modified or discarded.

The most obvious possible source of inconsistency is in the relation between the metaphysical “I” and the physical body. If the “I” is identical with one’s own physical body, then metaphysical properties of the self, such as freedom of choice, must be physical properties, leading to the usual problems.

If, on the other hand, the “I” is not identical with one’s physical body, then it must be explained why the actions and observations of the “I” so much align with the actions of the body; the mind-body relation must be clarified.

Another issue is akrasia; sometimes it seems that the mind decides to take an action but the body does not move accordingly. Thus, free will may be quite partial, even if it exists.

I’ve written before about reconciliation between metaphysical free will and the predictions of physics. I believe this account is better than the others I have seen, although nowhere near complete.

It is worth contrasting the position of believing in metaphysical free will with its opposite. For example, in the Bhagavad Gita, Krishna states that the wise do not identify with the doer:

All actions are performed by the gunas of prakriti. Deluded by identification with the ego, a person thinks, “I am the doer.” But the illumined man or woman understands the domain of the gunas and is not attached. Such people know that the gunas interact with each other; they do not claim to be the doer.

Bhagavad Gita, Easwaran translation, ch. 3, 27-28

In this case the textual “I” is dissociated from the “doer” which takes action. Instead, the “I” is more like a placeholder in a narrative created by natural mental processes (gunas), not an agent in itself. (The interpretation here is not entirely clear, as Krishna also gives commands to Arjuna)

This specific discussion of metaphysical free will generalizes to metaphysics in general. Metaphysics deals with the basic entities/concepts associated with reality, subjects, and objects. It is contrasted with physics, which deals with objects, generalizing from observable properties of them (and the space they exist in and so on) to lawful theories.

To summarize metaphysical free will:

  • We talk in ways that imply that we and others have capabilities and make choices.
  • This way of talking is possible and sufficiently motivated because of the way we comport ourselves towards reality, noticing our capabilities.
  • Effective AIs should similarly be expected to model their own capabilities as distinct from the present state of the world.
  • It is difficult to coherently identify these capabilities, which we talk as if we have, with physical properties of our bodies.
  • Therefore, it may be a reasonable (at least provisional) assumption that the capabilities we have are not physical properties of our bodies, and are metaphysical.
  • The implications of this assumption can be philosophically investigated, to build out a more coherent account, or to find difficulties in doing so.
  • There are ways of critiquing metaphysical free will. The assumption may lead to contradictions, with observations, well-supported scientific theories, and so on.

The absurdity of un-referenceable entities

Whereof one cannot speak, thereof one must be silent.

Ludwig Wittgenstein, Tractatus Logico-Philosophicus

Some criticism of my post on physicalism is that it discusses reference, not the world. To quote one comment: “I consider references to be about agents, not about the world.” To quote another: “Remember, you have only established that indexicality is needed for reference, ie. semantic, not that it applies to entities in themselves” and also “you need to show that standpoints are ontologically fundamental, not just epistemically or semantically.” A post containing answers says: “However, everyone already kind of knows the we can’t definitely show the existence of any objective reality behind our observations and that we can only posit it.” (Note, I don’t mean to pick on these commenters; they’re expressing a very common idea)

These criticisms could be rephrased in this way:

“You have shown limits on what can be referenced. However, that in no way shows limits on the world itself. After all, there may be parts of the world that cannot be referenced.”

This sounds compelling at first: wouldn’t it be strange to think that properties of the world can be deduced from properties of human reference?

But, a slight amount of further reflection betrays the absurdity involved in asserting the possible existence of un-referenceable entities. “Un-referenceable entities” is, after all, a reference.

A statement such as “there exist things that cannot be referenced” is comically absurd, in that it refers to things in the course of denying their referenceability.

We may say, then, that it is not the case that there exist things that cannot be referenced. The assumption that this is the case leads to contradiction.

I believe this sort of absurdity is quite related to Kantian philosophy. Kant distinguished phenomena (appearances) from noumena (things-in-themselves), and asserted that through observation and understanding we can only understand phenomena, not noumena. Quoting Kant:

Appearances, to the extent that as objects they are thought in accordance with the unity of the categories, are called phaenomena. If, however, I suppose there to be things that are merely objects of the understanding and that, nevertheless, can be given to an intuition, although not to sensible intuition, then such things would be called noumena.

Critique of Pure Reason, Chapter III

Kant at least grants that noumena are given to some “intuition”, though not a sensible intuition. This is rather less ridiculous than asserting un-referenceability.

It is ironic that the noumena-like entity being hypothesized in the present case (the physical world) would, by Kant’s criterion, be considered a scientific entity, a phenomenon.

Part of the absurdity in saying that the physical world may be un-referenceable is that it is at odds with the claim that physics is known through observation and experimentation. After all, un-referenceable observations and experimental results are of no use in science; they couldn’t make their way into theories. So the shadow of the world that can be known (and known about) by science is limited to the referenceable. The un-referenceable may, at best, be inferred (although, of course, this statement is absurd in referring to the un-referenceable).

It’s easy to make fun of this idea of un-referenceable entities (infinitely more ghostly than ghosts), but it’s worth examining what is compelling about this (absurd) position, to see what, if anything, can be salvaged.

From a modern perspective, we can see things that a pre-modern perspective cannot conceptualize. For example, we know about gravitational lensing, quantum entanglement, Cesium, and so on. It seems that, from our perspective, these things-in-themselves did not appear in the pre-modern phenomenal world. While they had influence, they did not appear in a way clear enough for a concept to be developed.

We may believe it is, then, normative for the pre-moderns to accept, in humility, that there are things-in-themselves they lack the capacity to conceptualize. And we may, likewise, admit this of the modern perspective, in light of the likelihood of future scientific advances.

However, conceptualizability is not the same as referenceability. Things can be pointed to that don’t yet have clear concepts associated with them, such as the elusive phenomena seen in dreams.

In this case, pre-moderns may point to modern phenomena as “those things that will be phenomena in 500 years”. We can talk about those things our best theories don’t conceptualize that will be conceptualized later. And this is a kind of reference; it travels through space-time to access phenomena not immediately present.

This reference is vague, in that it doesn’t clearly define what things are modern phenomena, and also doesn’t allow one to know ahead of time what these phenomena are. But it’s finitely vague, in contrast to the infinite vagueness of “un-referenceable entities”. It’s at least possible to imagine accessing them, by e.g. becoming immortal and living until modern times.

A case that our current condition (e.g. modernity) cannot know about something can be translated into a reference: a reference to that which we cannot know on account of our conditions but could know under other imaginable conditions. This is, indeed, unsurprising, given that any account of something existing outside our understanding must refer to that thing outside our understanding.

My critique of an un-referenceable physical world is quite similar to Nietzsche’s critique of Kant’s unknowable noumena. Nietzsche wrote:

The “thing-in-itself” nonsensical. If I remove all the relationships, all the “properties,” all the “activities” of a thing, the thing does not remain over; because thingness has only been invented by us owing to the requirements of logic, thus with the aim of defining, communication (to bind together the multiplicity of relationships, properties, activities).

Will to Power, sec. 558

I continue to be struck by the irony of the transition from physical phenomena to physical noumena. Kant’s positing of a realm of noumena was, perhaps, motivated by a kind of humility, a kind of respect for morality, an appeasement of theological elements in society, while still making a place for thinking-for-one’s-self, science, and so on, in a separate magisterium that can’t collide with the noumenal realm.

Any idea, whether it’s God, Physics, or Objectivity, can disconnect from the human cognitive faculty that relates ideas to the world of experience, and remain as a mere signifier, which persists as a form of unfalsifiable control. When Physics and Objectivity take on theological significance (as they do in modern times), a move analogous to Kant’s will place them in an un-falsifiable noumenal realm, with the phenomenal realm being the subjective and/or intersubjective. This is extremely ironic.

Puzzles for physicalists

The following is a list of puzzles that are hard to answer within a broadly-physicalist, objective paradigm. I believe critical agentialism can answer these better than competing frameworks; indeed, I developed it through contemplation on these puzzles, among others. This post will focus on the questions, though, rather than the answers. (Some of the answers can be found in the linked post)

In a sense what I have done is located “anomalies” relative to standard accounts, and concentrated more attention on these anomalies, attempting to produce a theory that explains them, without ruling out its ability to explain those things the standard account already explains well.

Indexicality

(This section would be philosophical plagiarism if I didn’t cite On the Origin of Objects.)

Indexicals are phrases whose interpretation depends on the speaker’s standpoint, such as “my phone” or “the dog over there”. It is common to treat indexicals as a kind of shorthand: “my phone” is shorthand for “the phone belonging to Jessica Taylor”, and “the dog over there” is shorthand for “the dog existing at coordinates 37.856570, -122.284176”. This expansion allows indexicals to be accounted for within an objective, standpoint-independent frame.

However, even these expanded references aren’t universally unique. In a very large universe, there may be a twin Earth which also has a dog at coordinates 37.856570, -122.284176. As computer scientists will find obvious, specifying spatial coordinates requires a number of bits logarithmic in the amount of space addressed. These globally unique identifiers get more and more unwieldy the more space is addressed.
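The logarithmic growth of identifiers is easy to verify; a short sketch (the function name is mine, and the location counts are arbitrary examples):

```python
import math

# Bits needed to give every one of n distinct locations a unique address.
# The count grows logarithmically in the size of the addressed space.

def address_bits(num_locations):
    """Minimum bits required to uniquely address num_locations places."""
    return math.ceil(math.log2(num_locations))

# Doubling the addressed space adds only one bit per identifier...
small = address_bits(1024)   # 10 bits
double = address_bits(2048)  # 11 bits
# ...but identifiers still grow without bound as the space does,
# which is what makes globally unique references unwieldy.
```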

Since we don’t expand out references enough to be sure they’re globally unique, our use of them couldn’t depend on such global uniqueness. An accounting of how we refer to things, therefore, cannot posit any causally-effective standpoint-independent frame that assigns semantics.

Indeed, the trouble with globally unique references can also be seen by studying physics itself. Physical causality is spatially local; a particle affects nearby particles, and there’s a speed-of-light limitation. For spatial references to be effective (e.g. to connect to observation and action), they have to themselves “move through” local space-and-time.

This is a bit like the problem of having a computer refer to itself. A computer may address computers by IP address. The IP address “127.0.0.1” always refers to this computer. These references can be resolved even without an Internet connection. It would be totally unnecessary and unwieldy for a computer to refer to itself (e.g. for the purpose of accessing files) through a globally-unique IP address, resolved through Internet routing.
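The loopback convention can be checked directly with Python’s standard ipaddress module; a small sketch (the non-loopback address is an arbitrary example):

```python
import ipaddress

# "127.0.0.1" is reserved as a loopback address: it deictically refers
# to "this computer" and resolves locally, with no Internet routing.
local = ipaddress.ip_address("127.0.0.1")

# A globally routable address (arbitrary example) is not self-referential
# in this way; resolving it goes through the global routing infrastructure.
remote = ipaddress.ip_address("93.184.216.34")
```

The same string, “127.0.0.1”, names a different machine depending on which machine utters it, which is exactly the deictic behavior at issue.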

Studying enough examples like these (real and hypothetical) leads to the conclusion that indexicality (and more specifically, deixis) is fundamental, and that even spatial references that appear to be globally unique are resolved deictically.

How does this relate to physics? It means references to “the objective world” or “the physical world” must also be resolved indexically, from some standpoint. Paying attention to how these references are resolved is critical.

The experimental results you see are the ones in front of you. You can’t see experimental results that don’t, through spatio-temporal information flows, make it to you. Thus, references to the physical which go through discussing “the thing causing experimental predictions” or “the things experiments failed to falsify” are resolved in a standpoint-dependent way.

It could be argued that physical law is standpoint-independent, because it is, symmetrically, true at each point in space-time. However, this excludes virtual standpoints (e.g. existing in a computer simulation), and additionally, this only means the laws are standpoint-independent, not the contents of the world, the things described by the laws.

Pre-reduction references

(For previous work, see “Reductive Reference”.)

Indexicality by itself undermines view-from-nowhere mythology, but perhaps not physicalism itself. What presents a greater challenge for physicalism is the problem of pre-reduced references (which are themselves deictic).

Let’s go back to the twin Earth thought experiment. Suppose we are in pre-chemistry times. We still know about water. We know water through our interactions with it. Later, chemistry will find that water has a particular chemical formula.

In pre-chemistry times, it cannot be known whether the formula is H2O, XYZ, etc., and these formulae are barely symbolically meaningful. If we discover that water is H2O, we will, after-the-fact, define “water” to mean H2O; if we discover that water is XYZ, we will, after-the-fact, define “water” to mean XYZ.

Looking back, it’s clear that “water” has to be H2O, but this couldn’t have been clear at the time. Pre-chemistry, “water” doesn’t yet have a physical definition; a physical definition is assigned later, which rationalizes previous use of the word “water” into a physicalist paradigm.

A philosophical account of reductionism needs to be able to discuss how this happens. To do this, it needs to be able to discuss the ontological status of entities such as “water” (pre-chemistry) that do not yet have a physical definition. In this intermediate state, the philosophy is talking about two entities, pre-reduced entities and physics, and considering various bridgings between them. So the intermediate state needs to contain entities that are not yet conceptualized physically.

A possible physicalist objection is that, while it may be a provisional truth that water is definitionally the common drinkable liquid found in rivers and so on, it is ultimately true that water is H2O, and so physicalism is ultimately true. (This is very similar to the two truths doctrine in Buddhism).

Now, expanding out this account requires providing an account of the relation between provisional and ultimate truth. Even if such an account could be provided, it would appear that, in our current state, we must accept it as provisionally true that some mental entities (e.g. imagination) do not have physical definitions, since a good-enough account has not yet been provided. And we must have a philosophy that can grapple with this provisional state of affairs, and judge possible bridgings as fitting/unfitting.

Moreover, there has never been a time without provisional definition. So this idea of ultimate truth functions as a sort of utopia, which is either never achieved, or is only achieved after very great advances in philosophy, science, and so on. The journey is, then, more important than the destination, and to even approach the destination, we need an ontology that can describe and usably function within the journeying process; this ontology will contain provisional definitions.

The broader point here is that, even if we have the idea of “ultimate truth”, that idea isn’t meaningful (in terms of observations, actions, imaginations, etc) to a provisional perspective, unless somehow the provisional perspective can conceptualize the relation between itself and the ultimate truth. And, if the ultimate truth contains all provisional truths (as is true if forgetting is not epistemically normative), the ultimate truth needs to conceptualize this as well.

Epistemic status of physics

Consider the question: “Why should I believe in physics?” The conventional answer is: “Because it predicts experimental results.” Someone who can observe these experimental results can, thus, have epistemic justification for belief in physics.

This justificatory chain implies that there are cognitive actors (such as persons or social processes) that can do experiments and see observations. These actors are therefore, in a sense, agents.

A physicalist philosophical paradigm should be able to account for epistemic justifications of physics, or else it fails to self-ratify. So the paradigm needs to account for observers (and perhaps specifically active observers), who are the ones having epistemic justification for belief in physics.

Believing in observers leads to the typical mind-body problems. Disbelieving in observers fails to self-ratify. (Whenever a physicalist says “an observation is X physical entity”, it can be asked why X counts as an observation of the sort that is epistemically compelling; the answer to this question must bridge the mental and the physical, e.g. by saying the brain is where epistemic cognition happens. And saying “you know your observations are the things processed in this brain region because of physics” is circular.)

What mind-body problems? There are plenty.

Anthropics

The anthropic principle states, roughly, that epistemic agents must believe that the universe contains epistemic agents. Else, they would believe themselves not to exist.

The language of physics, on its own, doesn’t have the machinery to say what an observer is. Hence, anthropics is a philosophical problem.

The standard way of thinking about anthropics (e.g. SSA/SIA) is to consider the universe from a view-from-nowhere, and then assume that “my” body is in some way sampled “randomly” from this viewed-from-nowhere universe, such that I proceed to get observations (e.g. visual) from this body.

This is already pretty wonky. Indexicality makes the view-from-nowhere problematic. And the idea that “I” am “randomly” placed into a body is a rather strange metaphysics (when and where does this event happen?).

But perhaps the most critical issue is that the physicalist anthropic paradigm assumes it’s possible to take a physical description of the universe (e.g. as an equation) and locate observers in it.

There are multiple ways of considering doing so, and perhaps the best is functionalism, which will be discussed later. However, I’ll note that a subjectivist paradigm can easily find at least one observer: I’m right here right now.

This requires some explaining. Say you’re lost in an amusement park. There are about two ways of thinking about this:

  1. You don’t know where you are, but you know where the entrance is.
  2. You don’t know where the entrance is, but you know where you are.

Relatively speaking, 1 is an “objective” (relatively standpoint-independent) answer, and 2 is a “subjective” (relatively standpoint-dependent) answer.

2 has the intuitive advantage that you can point to yourself, but not to the entrance. This is because pointing is deictic.

Even while being lost, you can still find your way around locally. You might know where the Ferris wheel is, or the food stand, or your backpack. And so you can make a local map, which has not been placed relative to the entrance. This map is usable despite its disconnection from a global reference frame.

Anthropics seems to be saying something similar to (1). The idea is that I, initially, don’t know “where I am” in the universe. But, the deictic critique applies to anthropics as it applies to the amusement park case. I know where I am, I’m right here. I know where the Earth is, it’s under me. And so on.

This way of locating (at least one) observer works independent of ability to pick out observers given a physical description of the universe. Rather than finding myself relative to physics, I find physics relative to me.

Of course, the subjectivist framework has its own problems, such as difficulty finding other observers. So there is a puzzle here.

Tool use and functionalism

Functionalism is perhaps the current best answer as to how to locate observers in physics. Before discussing functionalism, though, I’ll discuss tools.

What’s a hammer? It’s a thing you can swing to apply lots of force to something at once. Hammers can be made of many physical materials, such as stone, iron, or wood. It’s about the function, not the substance.

The definition I gave refers to a “you” who can swing the hammer. Who is the “you”? Well, that’s standpoint-dependent. Someone without arms can’t use a conventional hammer to apply lots of force. The definition relativizes to the potential user. (Yes, a person without arms may say conventional hammers are hammers due to social convention, but this social convention is there because conventional hammers work for most people, so it still relativizes to a population.)

Let’s talk about functionalism now. Functionalism is based on the idea of multiple realizability: that a mind can be implemented on many different substrates. A mind is defined by its functions rather than its substrate. This idea is very familiar to computer programmers, who can hide implementation details behind an interface, and don’t need to care about hardware architecture for the most part.

This brings us back to tools. The definition I gave of “hammer” is an interface: it says how it can be used (and what effects it should create upon being used).
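The interface idea can be sketched directly. Below, a hedged illustration of multiple realizability (the class names, the `strike` method, and the efficiency numbers are all invented for the example): the same `Hammer` interface is realized by different substrates, and users depend only on the interface.

```python
from typing import Protocol

class Hammer(Protocol):
    """The interface: what a hammer can be used for, not what it's made of."""
    def strike(self, force: float) -> float:
        """Deliver force to a target; returns the force transmitted."""
        ...

class StoneHammer:
    def strike(self, force: float) -> float:
        return force * 0.9   # made-up efficiency; it still hammers

class IronHammer:
    def strike(self, force: float) -> float:
        return force * 0.95  # different substrate, same function

def drive_nail(hammer: Hammer) -> float:
    # The user depends only on the interface, not the implementation.
    return hammer.strike(10.0)
```

Note, though, that the sketch quietly assumes a user (`drive_nail` is called by someone); the interface is relative to a potential user, which is the standpoint-dependence discussed above.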

What sort of functions does a mind have? Observation, prediction, planning, modeling, acting, and so on. Now, the million-dollar question: Who is (actually or potentially) using it for these functions?

There are about three different answers to this:

  1. The mind itself. I use my mind for functions including planning and observation. It functions as a mind as long as I can use it this way.
  2. Someone or something else. A corporation, a boss, a customer, the government. Someone or something who wants to use another mind for some purpose.
  3. It’s objective. Things have functions or not independent of the standpoint.

I’ll note that 1 and 2 are both standpoint-dependent, thus subjectivist. They can’t be used to locate minds in physics; there would have to be some starting point, of having someone/something intending to use a mind for something.

3 is interesting. However, we now have a disanalogy from the hammer case, where we could identify some potential user. It’s also rather theological, in saying the world has an observer-independent telos. I find the theological implications of functionalism to be quite interesting and even inspiring, but that still doesn’t help physicalism, because physicalist ontology doesn’t contain standpoint-independent telos. We could, perhaps, say that physicalism plus theism yields objective functionalism. And this requires adding a component beyond the physical equation of the universe, if we wish to find observers in it.

Causality versus logic

Causality contains the idea that things “could” go one way or another. Else, causal claims reduce to claims about state; there wouldn’t be a difference between “if X, then Y” and “X causes Y”.

Pearlian causality makes this explicit; causal relations are defined in terms of interventions, which come from outside the causal network itself.
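The intervention/observation distinction can be illustrated with a toy confounded model (the model itself is an invented example, not from Pearl): conditioning on X=1 differs from intervening with do(X=1), because the intervention severs X from its causes.

```python
import random

# Toy causal model: a confounder Z causes both X and Y.
# Observing X=1 is evidence about Z (and hence Y); forcing X=1 is not.

random.seed(0)

def sample(do_x=None):
    z = random.random() < 0.5          # confounder, fair coin
    x = z if do_x is None else do_x    # do(X) severs X from its cause Z
    y = z                              # Y depends only on Z
    return x, y

# Observational: among samples where X happens to be 1, Y is always 1,
# because X=1 implies Z=1 in this model.
obs = [y for x, y in (sample() for _ in range(10000)) if x]

# Interventional: under do(X=1), Y stays at its base rate (about 50%),
# since forcing X tells us nothing about Z.
intv = [y for _, y in (sample(do_x=True) for _ in range(10000))]
```

The intervention (`do_x`) comes from outside the model's own variables, which is the Pearlian point: causal structure shows up only relative to possible interventions.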

The ontology of physics itself is causal. It is asserted, not just that some state will definitely follow some previous state, but that there are dynamics that push previous states to new states, in a necessary way. (This is clear in the case of dynamical systems)

Indeed, since experiments may be thought of as interventions, it is entirely sensible that a physical theory that predicts the results of these interventions must be causal.

These “coulds” have a difficult status in relation to logic. Someone who already knows the initial state of a system can logically deduce its eventual state. To them, there is inevitability, and no logically possible alternative.

It appears that, while “could”s exist from the standpoint of an experimenter, they do not exist from the standpoint of someone capable of predicting the experimenter, such as Laplace’s demon.

This is not much of a problem if we’ve already accepted fundamental deixis and rejected the view-from-nowhere. But it is a problem for those who haven’t.

Trying to derive decision-theoretic causality from physical causality results in causal decision theory, which is known to have a number of bugs, due to its reliance on hypothetical extra-physical interventions.

An alternative is to try to develop a theory of “logical causality”, by which some logical facts (such as “the output of my decision process”, assuming you know your source code) can cause others. However, this is oxymoronic, because logic does not contain the affordance for intervention. Logic contains the affordance for constructing and checking proofs. It does not contain the affordance for causing 3+4 to equal 8. A sufficiently good reasoner can immediately see that “3+4=8” runs into contradiction; there is no way to construct a possible world in which 3+4=8.

Hence, it is hard to say that “coulds” exist in a standpoint-independent way. We may, then, accept standpoint-dependence of causation (as I do), or reject causation entirely.

Conclusion

My claim isn’t that physicalism is false, or that there don’t exist physicalist answers to these puzzles. My claim, rather, is that these puzzles are at least somewhat difficult, and that sufficient contemplation on them will destabilize many forms of physicalism. The current way I answer these puzzles is through a critical agential framework, but other ways of answering them are possible as well.

A conversation on theory of mind, subjectivity, and objectivity

I recently had a Twitter conversation with Roko Mijic. I believe it contains ideas that a wider philosophical/rationalist audience may find valuable, so I include a transcript here (quoted with permission).


Jessica: There are a number of “runs on top of” relations in physicalism:

  • mind runs on top of body
  • discrete runs on top of continuous
  • choice runs on top of causality

My present philosophy inverts the metaphysical order: mind/discrete/choice is more basic.

This is less of a problem than it first appears, because mind/discrete/choice can conceptualize, hypothesize, and learn about body/continuous/causality, and believe in an “effectively runs on” relation between the two.

In contrast, starting from body/continuous/causality has trouble with getting to mind/discrete/choice as even being conceptualizable, hence tending towards eliminativism.

Roko: Eliminativism has a good track record though.

Jessica: Nah, it can’t account for what an “observation” is so can’t really explain observations.

Roko: I don’t really see a problem here. It makes perfect sense within a reductionist or eliminativist paradigm for a robot to have some sensors and to sense its environment. You don’t need a soul, or god, or strong free will, or objective person-independent values for that.

Jessica: Subjective Occam’s razor (incl. Solomonoff induction) says I should adopt the explanation that best explains my observations. Eliminativism can’t really say what “my” means here. If it believed in “my observations” it would believe in consciousness.

It has to do some ontological reshuffling around what “observations” are that, I think, undermines the case for believing in physics in the first place, which is that it explains my observations.

Roko: It means the observations that are caused by sensors plugged into the hardware that your algorithm instance is running on.

Jessica: That means “my algorithm instance” exists. Sounds like a mental entity. Can’t really have those under eliminativism (but can under functionalism etc).

Roko: I don’t want to eliminate my mental instance from my philosophy, that would be kind of ridiculous.

Jessica: Well, yes, so eliminativism is false. I understand eliminativism to mean there is only physical, no mental. Believing mental runs on physical could be functionalism, property dualism, or some other non-eliminativist position.

Roko: I think it makes more sense to think of mental things as existing subjectively (i.e. if they belong to you) and physical things as existing objectively. I definitely think that dualism is making a mistake in thinking of objectively-existing mental things.

Jessica: I don’t think this objective/subjective dichotomy works out. I haven’t seen a good positive case, and my understanding of deixis leads me to believe that references to the objective must be resolved subjectively. See also On the Origin of Objects.

Basically I don’t see how we can, in a principled way, have judgments like “X exists but only subjectively, not objectively”. It would appear that by saying “X exists” I am asserting that X is an existent object (i.e. I’m saying something objective).

See also Thomas Nagel’s The View From Nowhere. Spoiler alert: there isn’t a view from nowhere, it’s an untenable concept.

Roko: My sensation of the flavor of chocolate exists but only subjectively.

Jessica: We’re now talking about the sensation of the flavor of the chocolate though. Is this really that different from talking about “that car over there”? I don’t see how some entities can, in a principled way, be classified as objective and some as subjective.

Like, in talking about “X” I’m porting something in my mental world-representation into the discursive space. I don’t at all see how to classify some of these portings as objective and some as subjective.

See also writing on the difficulty of the fact/opinion distinction.

Roko: It’s not actually the flavor “of” the chocolate though. It’s the sensation of flavor that your brain generates for you only, in response to certain nerve stimuli.

> I don’t see how some entities can, in a principled way, be classified as objective and some as subjective.

It’s very easy actually. Subjectives are the things that you cannot possibly be mistaken about, the “I think therefore I am’s”.

No deceiving demon can fool you into thinking that you’re experiencing the taste of chocolate, the color purple, or an orgasm. No deceiving demon can fool you into thinking that you’re visualizing the number 4.

Jessica: I don’t think this is right. The thought follows the experience. There can be mistranslations along the way. This might seem like a pedantic point but we’re talking about linguistic subjective statements so it’s relevant.

Translating the subjective into words can introduce errors. It’s at least as hard as, say, adding small numbers. So your definition means “1+1=2” is also subjective.

Roko: I think that it’s reasonable to see small number math instances as subjectives. I can see 3 pens. I can conceive of 3 dots, that’s a subjective thing. It’s in the same class as seeing red or smelling a rose.

[continuing from the deceiving demon thread] These are the things that are inherently part of your instance or mind. The objective, on the other hand, is always somewhat uncertain and inferred. Things are out there and they send signals to you. But you are inferring their existence.

Jessica: Okay, I agree with this sort of mental/outside-mental distinction, and you can define subjective/objective to mean that. This certainly doesn’t bring in other connotations of the objective, such as view-from-nowhere or observer-independence; I can be wrong about indexicals too.

Roko: Well it happens to be a property of our world that when different people infer the shape of the objective (i.e. draw maps), they always converge. This is what being in a shared reality means.

I mean they always converge if they follow the right principles, e.g. complexity priors, and those same principles are the ones that allow us to successfully manipulate reality via actions. That’s what the objective world out there is.

Jessica: Two reasons they could converge:

  1. Symmetry (this explains math)
  2. Existence of same entities (e.g. landmarks)

I’m fine with calling 1 observer-independent. Problem: your view of, and references to, 2, depend on your standpoint. Because of deixis.

Obvious deictic references are things like “the car over there” or “the room I’m in”. It is non-obvious but, I think, true, that all physical references are deictic. Which makes sense because physical causality is deictic (locally causal and symmetric).

Even “the Great Wall of China” refers to the Great Wall of China on our Earth. It couldn’t refer to the one on the twin Earth. And the people on twin Earth have “the Great Wall of China” refer to the one on the twin Earth, not ours.

At the same time, maps created starting from different places can be patched together, in a collage. However, pasting these together requires taking into account the standpoint-dependence of the individual maps being pasted together.

And at no point does this pasting-together result in a view from nowhere. It might seem that way because it keeps getting bigger and more zoomed-out. But at each individual time it’s finite.

Roko: Yes this is all nice but I think the point where we get to hard questions is when we think about mental phenomena that I would classify as subjectives as being part of the objective reality.

This is the petrl.org problem, or @reducesuffering worrying about whether plankton or insects “really do” have subjective experiences etc

Jessica: In my view “my observation” is an extremely deictic reference, to something maximally here-and-now, such that there isn’t any stabilization to do. Intermediate maps paste these extremely deictic maps together into less-deictic, but still deictic, maps. It never gets non-deictic.

It’s hard to pin down intersubjectively precisely because it’s so deictic. I can’t really port my here-and-now to your here-and-now without difficulty.

Subjective implication decision theory in critical agentialism

This is a follow-up to a previous post on critical agentialism, to explore the straightforward decision-theoretic consequences. I call this subjective implication decision theory, since the agent is looking at the logical implications of their decision according to their beliefs.

We already covered observable action-consequences. Since these are falsifiable, they have clear semantics in the ontology. So we will in general assume observable rewards, as in reinforcement learning, while leaving unobservable goals for later work.

Now let’s look at a sequence of decision theory problems. We will assume, as before, the existence of some agent that falsifiably believes itself to run on at least one computer, C.

5 and 10

Assume the agent is before a table containing a 5 dollar bill and a 10 dollar bill. The agent will decide which dollar bill to take. Thereafter, the agent will receive a reward signal: 5 if the 5 dollar bill is taken, and 10 if the 10 dollar bill is taken.

The agent may have the following beliefs about action-consequences: “If I take action 5, then I will get 5 reward. If I take action 10, then I will get 10 reward.” These beliefs follow directly from the problem description. Notably, the beliefs include beliefs about actions that might not actually be taken; it is enough that these actions are possible for their consequences to be falsifiable.

Now, how do we translate these beliefs about action-consequences into decisions? The most straightforward way to do so is to select the policy that is believed to return the most reward. (This method is ambiguous under conditions of partial knowledge, though that is not a problem for 5 and 10).

This method (which I will call “subjective implication decision theory”) yields the action 10 in this case.

This is all extremely straightforward. We directly translated the problem description into a set of beliefs about action consequences. And these beliefs, along with the rule of subjective implication decision theory, yield an optimal action.
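As a toy sketch (the variable names are hypothetical), the decision rule is simply an argmax over believed action-consequences:

```python
# Beliefs as a map from possible actions to the reward each is believed to imply.
beliefs = {"take_5": 5, "take_10": 10}

def subjective_implication_decision(beliefs):
    """Select the action whose believed implication yields the most reward."""
    return max(beliefs, key=beliefs.get)

chosen = subjective_implication_decision(beliefs)  # "take_10"
```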

The difficulty of 5 and 10 comes when the problem is naturalized. The devil is in the details: how to naturalize the problem? The previous post examined a case of both external and internal physics, compatible with free will. There is no obvious obstacle to translating these physical beliefs to the 5 and 10 case: the dollar bills may be hypothesized to follow physical laws, as may the computer C.

Realistically, the agent should assume that the proximate cause of the selection of the dollar bill is not their action, but C’s action. Recall that the agent falsifiably believes it runs on C, in the sense that its observations/actions necessarily equal C’s.

Now, “I run on C” implies in particular: “If I select ‘pick up the 5 dollar bill’ at time t, then C does. If I select ‘pick up the 10 dollar bill’ at time t, then C does.” And the assumption that C controls the dollar bill implies: “If C selects ‘pick up the 5 dollar bill’ at time t, then the 5 dollar bill will be held at some time between t and t+k“, and also for the 10 dollar bill (for some k that is an upper bound on the time it takes for the dollar bill to be picked up). Together, these beliefs imply: “If I select ‘pick up the 5 dollar bill’ at time t, then the 5 dollar bill will be held at some time between t and t+k“, and likewise for the 10 dollar bill. At this point, the agent’s beliefs include ones quite similar to those in the non-naturalized case, and so subjective implication decision theory selects the 10 dollar bill.
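The chaining of beliefs above can be sketched as a composition of two implication tables, one from the “I run on C” hypothesis and one from the physical hypothesis about C and the bills (all labels hypothetical):

```python
# "I run on C": my selections are believed to coincide with C's.
i_to_c = {"select_5": "C_selects_5", "select_10": "C_selects_10"}

# C's selection is believed to cause the corresponding bill being held by t+k.
c_to_world = {"C_selects_5": "5_held_by_t+k", "C_selects_10": "10_held_by_t+k"}

# Composing the two falsifiable beliefs recovers direct action-consequences,
# just as in the non-naturalized case.
composed = {action: c_to_world[c_action] for action, c_action in i_to_c.items()}
```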

Twin prisoner’s dilemma

Consider an agent that believes itself to run on computer C. It also believes there is another computer, C’, which has identical initial state and dynamics to C.

Each computer will output an action; the agent will receive 10 reward if C’ cooperates (0 if C’ defects), plus 1 reward if C defects.

As in 5 and 10, the agent believes: “If I cooperate, C cooperates. If I defect, C defects.” However, this does not specify the behavior of C’ as a function of the agent’s action.

It can be noted at this point that, because the agent believes C’ has identical initial state and dynamics to C, the agent believes (falsifiably) that C’ must output the same actions as C on each time step, as long as C and C’ receive identical observations. Since, in this setup, observations are assumed to be equal until C receives the reward (with C’ perhaps receiving a different reward), these beliefs imply: “If I cooperate, C’ cooperates. If I defect, C’ defects”.

In total we now have: “If I cooperate, C and C’ both cooperate. If I defect, C and C’ both defect”. Thus the agent believes itself to be straightforwardly choosing between a total reward of 10 for cooperation, and a total of 1 reward for defection. And so subjective implication decision theory cooperates.
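A minimal sketch of this reasoning, assuming both computers run the same (toy) source code and receive the same observation:

```python
def agent_code(observation):
    """Shared source run by both C and C' (a toy stand-in)."""
    return "cooperate"

def reward(my_action, twin_action):
    # 10 if the twin cooperates, plus 1 if I defect (per the problem setup).
    return (10 if twin_action == "cooperate" else 0) + (1 if my_action == "defect" else 0)

# Identical initial state + dynamics + observations => identical actions on C and C'.
obs = "start"
action_C = agent_code(obs)
action_C_prime = agent_code(obs)

# So the believed menu is: both cooperate (total 10) vs. both defect (total 1).
both_cooperate = reward("cooperate", "cooperate")
both_defect = reward("defect", "defect")
```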

Note that this comes apart from the conventional interpretation of CDT, which considers interventions on C’s action, rather than on “my action”. CDT’s hypothesized intervention updates C but not C’, as C and C’ are physically distinct.

Newcomb’s problem

This is very much similar to twin prisoner’s dilemma. The agent may falsifiably believe: “The Predictor filled box A with $1,000,000 if and only if I will choose only box A.” From here it is straightforward to derive that the agent believes: “If I choose to take only box A, then I will have $1,000,000. If I choose to take both boxes, then I will have $1,000.” Hence subjective implication decision theory selects only box A.

The usual dominance argument for selecting both boxes does not apply. The agent is not considering interventions on C’s action, but rather on “my action”, which is falsifiably predicted to be identical with C’s action.
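The believed implications and the resulting choice can be sketched as follows (the dollar amounts come from the problem statement; the encoding is an illustrative assumption):

```python
def newcomb_outcome(my_choice):
    """Believed implications: the Predictor's fill matches my (predicted) choice."""
    box_A = 1_000_000 if my_choice == "one_box" else 0
    box_B = 1_000
    return box_A if my_choice == "one_box" else box_A + box_B

beliefs = {choice: newcomb_outcome(choice) for choice in ("one_box", "two_box")}
best = max(beliefs, key=beliefs.get)   # one-boxing: $1,000,000 beats $1,000
```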

Counterfactual mugging

In this problem, a Predictor flips a coin; if the coin is heads, the Predictor asks the agent for $10 (and the agent may or may not give it); if the coin is tails, the Predictor gives the agent $1,000,000 iff the Predictor predicts the agent would have given $10 in the heads case.

We run into a problem with translating this to a critical agential ontology. Since both branches don’t happen in the same world, it is not possible to state the Predictor’s accuracy as a falsifiable statement, as it relates two incompatible branches.

To avoid this problem, we will say that the Predictor predicts the agent’s behavior ahead of time, before flipping the coin. This prediction is not told to the agent in the heads case.

Now, the agent falsifiably believes the following:

  • If the coin is heads, then the Predictor’s prediction is equal to my choice.
  • If the coin is tails, then I get $1,000,000 if the Predictor’s prediction is that I’d give $10, otherwise $0.
  • If the coin is heads, then I get $0 if I don’t give the predictor $10, and -$10 if I do give the predictor $10.

From the last point, it is possible to show that, after the agent observes heads, the agent believes they get $0 if they don’t give $10, and -$10 if they do give $10. So subjective implication decision theory doesn’t pay.

This may present a dynamic inconsistency, in that the agent’s decision does not agree with what they would previously have wished they would decide. Let us examine this.

In a case where the agent chooses their action before the coin flip, the agent believes that, if they will pay up, the Predictor will predict this, and likewise for not paying up. Therefore, the agent believes they will get $1,000,000 if they decide to pay up and then the coin comes up tails.

If the agent weights the heads/tails branches evenly, then the agent will decide to pay up. This presents a dynamic inconsistency.
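The inconsistency can be made concrete in a toy calculation (assuming, per the problem, even weighting of the two branches and a perfectly accurate Predictor):

```python
def ex_post_value(pay):
    """After observing heads: paying loses $10, not paying loses nothing."""
    return -10 if pay else 0

def ex_ante_value(pay):
    """Before the flip: the Predictor's prediction matches the chosen policy,
    so a paying policy wins the tails prize."""
    heads = -10 if pay else 0
    tails = 1_000_000 if pay else 0
    return 0.5 * heads + 0.5 * tails

# Ex post the agent refuses to pay; ex ante it would commit to paying.
refuses = ex_post_value(True) < ex_post_value(False)
would_commit = ex_ante_value(True) > ex_ante_value(False)
```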

My sense is that this inconsistency should be resolved by considering theories of identity other than closed individualism. That is, it seems possible that the abstraction of receiving an observation and taking an action on each time step, while having a linear lifetime, is not a good-enough fit for the counterfactual mugging problem to achieve dynamic consistency.

Conclusion

It seems that subjective implication decision theory agrees with timeless decision theory on the problems considered, while diverging from causal decision theory, evidential decision theory, and functional decision theory.

I consider this a major advance, in that the ontology is more cleanly defined than the ontology of timeless decision theory, which considers interventions on logical facts. It is not at all clear what it means to “intervene on a logical fact”; the ontology of logic does not natively contain the affordance of intervention. The motivation for considering logical interventions was the belief that the agent is identical with some computation, such that its actions are logical facts. Critical agential ontology, on the other hand, does not say the agent is identical with any computation, but rather that the agent effectively runs on some computer (which implements some computation), while still being metaphysically distinct. Thus, we need not consider “logical counterfactuals” directly; rather, we consider subjective implications, and consider whether these subjective implications are consistent with the agent effectively running on some computer.

To handle cases such as counterfactual mugging in a dynamically consistent way (similar to functional decision theory), I believe that it will be necessary to consider agents outside the closed-individualist paradigm, in which one is assumed to have a linear lifetime with memory and observations/actions on each time step. However, I have not proceeded exploring in this direction yet.

[ED NOTE: After the time of writing I realized subjective implication decision theory, being very similar to proof-based UDT, has problems with spurious counterfactuals by default, but can similarly avoid these problems by “playing chicken with the universe”, i.e. taking some action it has proven it will not take.]

A critical agential account of free will, causation, and physics

This is an account of free choice in a physical universe. It is very much relevant to decision theory and philosophy of science. It is largely metaphysical, in terms of taking certain things to be basically real and examining what can be defined in terms of these things.

The starting point of this account is critical and agential. By agential, I mean that the ontology I am using is from the point of view of an agent: a perspective that can, at the very least, receive observations, have cognitions, and take actions. By critical, I mean that this ontology involves uncertain conjectures subject to criticism, such as criticism of being logically incoherent or incompatible with observations. This is very much in a similar spirit to critical rationalism.

Close attention will be paid to falsifiability and refutation, principally for ontological purposes, and secondarily for epistemic purposes. Falsification conditions specify the meanings of laws and entities relative to the perspective of some potentially falsifying agent. While the agent may believe in unfalsifiable entities, falsification conditions will serve to precisely pin down that which can be precisely pinned down.

I have only seen “agential” used in the philosophical literature in the context of agential realism, a view I do not understand well enough to comment on. I was tempted to use “subjective”; however, while subjects have observations, they do not necessarily have the ability to take actions. Thus I believe “agential” has a more concordant denotation.

You’ll note that my notion of “agent” already assumes one can take actions. Thus, a kind of free will is taken as metaphysically basic. This presupposition may cause problems later. However, I will try to show that, if careful attention is paid, the obvious problems (such as contradiction with determinism) can be avoided.

The perspective in this post can be seen as starting from agency, defining consequences in terms of agency, and defining physics in terms of consequences. In contrast, the most salient competing decision theory views (including framings of CDT, EDT, and FDT) define agency in terms of consequences (“expected utility maximization”), and consequences in terms of physics (“counterfactuals”). So I am rebasing the ontological stack, turning it upside-down. This is less absurd than it first appears, as will become clear.

(For simplicity, assume observations and actions are both symbols taken from some finite alphabet.)

Naive determinism

Let’s first, within a critical agential ontology, disprove some very basic forms of determinism.

Let A be some action. Consider the statement: “I will take action A”. An agent believing this statement may falsify it by taking any action B not equal to A. Therefore, this statement does not hold as a law. It may be falsified at will.

Let f() be some computable function returning an action. Consider the statement: “I will take action f()”. An agent believing this statement may falsify it by taking an action B not equal to f(). Note that, since the agent is assumed to be able to compute things, f() may be determined. So, indeed, this statement does not hold as a law, either.

This contradicts a certain strong formulation of naive determinism: the idea that one’s action is necessarily determined by some known, computable function.
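The diagonalization argument can be sketched directly (a two-symbol action alphabet is assumed for illustration):

```python
def f():
    """A known, computable prediction of the agent's action."""
    return "A"

def agent_policy(predicted=f):
    """The agent computes f() and takes any other action,
    refuting 'I will take action f()' at will."""
    alphabet = ["A", "B"]
    prediction = predicted()
    return next(a for a in alphabet if a != prediction)
```

Since the agent can compute f() before acting, the predicted action is always available to be contradicted.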

Action-consequences

But wait, what about physics? To evaluate what physical determinism even means, we need to translate physics into a critical agential ontology. However, before we turn to physics, we will first consider action-consequences, which are easier to reason about.

Consider the statement: “If I take action A, I will immediately thereafter observe O.” For this statement to be falsifiable means that, if it is false, there is some policy the agent can adopt that will falsify it. Specifically, the agent may adopt the policy of taking action A. If the agent will, in fact, not observe O after taking this action, then the agent will learn this, falsifying the statement. So the statement is falsifiable.

Finite conjunctions of falsifiable statements are themselves falsifiable. Therefore, the conjunction “If I take action A, I will immediately thereafter observe O; if I take action B, I will immediately thereafter observe P” is, likewise, falsifiable.

Thus, the agent may have falsifiable beliefs about observable consequences of actions. This is a possible starting point for decision theory: actions having consequences is already assumed in the ontology of VNM utility theory.

Falsification and causation

Now, the next step is to account for physics. Luckily, the falsificationist paradigm was designed around demarcating scientific hypotheses, such that it naturally describes physics.

Interestingly, falsificationism takes agency (in terms of observations, computation, and action) as more basic than physics. For a thing to be falsifiable, it must be able to be falsified by some agent, seeing some observation. And the word able implies freedom.

Let’s start with some basic Popperian logic. Let f be some testable function (say, connected to a computer terminal) taking in a natural number and returning a Boolean. Consider the hypothesis: “For all x, f(x) is true”. This statement is falsifiable: if it’s false, then there exists some action-sequence an agent can take (typing x into the terminal, one digit at a time) that will prove it to be false.
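The asymmetry between refutation and verification can be sketched as a search for counterexamples, under the assumption that testing f is just a matter of querying it (the bound is an illustrative stand-in for the agent’s finite testing):

```python
def falsify(f, bound=1000):
    """Search for a counterexample to 'for all x, f(x) is true'.

    Returns a refuting x if one is found within the bound, else None.
    A None result corroborates, but never verifies, the universal law.
    """
    for x in range(bound):
        if not f(x):
            return x
    return None

survives = falsify(lambda x: x >= 0)      # None: the law survives testing
refuted = falsify(lambda x: x != 37)      # 37: a single counterexample refutes it
```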

The given hypothesis is a kind of scientific law. It specifies a regularity in the environment.

Note that there is a “bridge condition” at play here. That bridge condition is that the function f is, indeed, connected to the terminal, such that the agent’s observations of f are trustworthy. In a sense, the bridge condition specifies what f is, from the agent’s perspective; it allows the agent to locate f as opposed to some other function.

Let us now consider causal hypotheses. We already considered action-consequences. Now let us extend this analysis to reasoning about causation between external entities.

Consider the hypothesis: “If the match is struck, then it will alight immediately”. This hypothesis is falsifiable by an agent who is able to strike the match. If the hypothesis is false, then the agent may refute it by choosing to strike the match and then seeing the result. However, an agent who is unable to strike the match cannot falsify it. (Of course, this assumes the agent may see whether the match is alight after striking it)

Thus, we are defining causality in terms of agency. The falsification conditions for a causal hypothesis refer to the agent’s abilities. This seems somewhat wonky at first, but it is quite similar to Pearlian causality, which defines causation in terms of metaphysically-real interventions. This order of definition radically reframes the apparent paradox of determinism vs. free will, by defining the conditions of determinism (causality) in terms of potential action.

External physics

Let us now continue, proceeding to more universal physics. Consider the law of gravity, according to which a dropped object will accelerate downward at a near-constant rate. How might we port this law into an agential ontology?

Here is the assumption about how the agent interacts with gravity. The agent will choose some natural number as the height of an object. Thereafter, the object will fall, while a camera will record the height of the object at each natural-number time expressed in milliseconds, to the nearest natural-number millimeter from the ground. The agent may observe a printout of the camera data afterwards.

Logically, constant gravity implies, and is implied by, a particular quadratic formula for the height of the object as a function of the object’s starting height and the amount of time that has passed. This formula implies the content of the printout, as a function of the chosen height. So, the agent may falsify constant gravity (in the observable domain) by choosing an object-height, placing an object at that height, letting it fall, and checking the printout, which will show the law of constant gravity to be false, if the law in fact does not hold for objects dropped at that height (to the observed level of precision).
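As a toy check (the units and parameters are assumptions for illustration: heights in millimeters, times in milliseconds, g ≈ 9.8e-3 mm/ms²):

```python
def predicted_height_mm(h0_mm, t_ms, g=9.8e-3):
    """Constant gravity implies h(t) = h0 - (1/2) g t^2,
    floored at the ground and rounded to the camera's mm precision."""
    return max(0, round(h0_mm - 0.5 * g * t_ms ** 2))

def falsified(h0_mm, printout):
    """Compare the camera printout {t_ms: height_mm} against the law."""
    return any(predicted_height_mm(h0_mm, t) != h for t, h in printout.items())

# A printout consistent with constant gravity does not falsify the law...
consistent = {0: 1000, 100: 951, 200: 804}
# ...while a single deviating reading refutes it in this domain.
deviating = {100: 1000}
```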

Universal constant gravity is not similarly falsifiable by this agent, because this agent may only observe this given experimental setup. However, a domain-limited law, stating that the law of constant gravity holds for all possible object-heights in this setup, up to the camera’s precision, is falsifiable.

It may seem that I am being incredibly pedantic about what a physical law is and what the falsification conditions are; however, I believe this level of pedantry is necessary for critically examining the notion of physical determinism to a high-enough level of rigor to check interaction with free will.

Internal physics

We have, so far, considered the case of an agent falsifying a physical law that applies to an external object. To check interaction with free will, we must interpret physical law applied to the agent’s internals, on which the agent’s cognition is, perhaps, running in a manner similar to software.

Let’s consider the notion that the agent itself is “running on” some Turing machine. We will need to specify precisely what such “running on” means.

Let C be the computer that the agent is considering whether it is running on. C has, at each time, a tape-state, a Turing machine state, an input, and an output. The input is attached to a sensor (such as a camera), and the output is attached to an actuator (such as a motor).

For simplicity, let us say that the history of tapes, states, inputs, and outputs is saved, such that it can be queried at a later time.

We may consider the hypothesis that C, indeed, implements the correct dynamics for a given Turing machine specification. These dynamics imply a relation between future states and past states. An agent may falsify these dynamics by checking the history and seeing if the dynamics hold.
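A sketch of such a check, with a deliberately trivial machine specification (the encoding of configurations as state/tape/head triples is an illustrative choice):

```python
def tm_step(state, tape, head, delta):
    """One step of a Turing machine specification given by delta:
    (state, read_symbol) -> (new_state, written_symbol, head_move)."""
    symbol = tape.get(head, 0)
    new_state, write, move = delta[(state, symbol)]
    new_tape = dict(tape)
    new_tape[head] = write
    return new_state, new_tape, head + move

def dynamics_hold(history, delta):
    """Falsifiably check that each recorded configuration follows from the last."""
    for before, after in zip(history, history[1:]):
        if tm_step(*before, delta) != after:
            return False
    return True

# A toy spec: in state "s", always write 1 and move right, staying in "s".
delta = {("s", 0): ("s", 1, 1)}
ok_history = [("s", {}, 0), ("s", {0: 1}, 1), ("s", {0: 1, 1: 1}, 2)]
```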

Note that, because some states or tapes may be unreachable, it is not possible to falsify the hypothesis that C implements correct dynamics starting from unreachable states. Rather, only behavior following from reachable states may be checked.

Now, let us think on an agent considering whether they “run on” this computer C. The agent may be assumed to be able to query the history of C, such that it may itself falsify the hypothesis that C implements Turing machine specification M, and other C-related hypotheses as well.

Now, we can already name some ways that “I run on C” may be falsified:

  • Perhaps there is a policy I may adopt, and a time t, such that if I implement this policy, I will observe O at time t, but C will observe something other than O at time t.
  • Perhaps there is a policy I may adopt, and a time t, such that if I implement this policy, I will take action A at time t, but C will take an action other than A at time t.

The agent may test these falsification conditions by adopting a given policy until some time t, and then comparing C’s observation/action at time t with their own.

I do not argue that the converse of these conditions exhausts what it means that “I run on C”. However, they at least restrict the possibility space by a very large amount. For the falsification conditions given not to hold, the observations and behavior of C must be identical with the agent’s own observations and behavior, for all possible policies the agent may adopt.

I will name the hypothesis with the above falsification conditions: “I effectively run on C”. This conveys that these conditions may not be exhaustive, while still being quite specific, and relating to effects between the agent and the environment (observations and actions).
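The falsification conditions amount to a trace-equality check (the encoding of traces as per-time-step observation/action pairs is an illustrative assumption):

```python
def effectively_runs_on(agent_trace, c_trace):
    """'I effectively run on C' survives testing, under the policy that produced
    these traces, iff the observation/action streams coincide at every time."""
    return agent_trace == c_trace

# Traces as lists of (observation, action) per time step.
agent = [("o1", "a1"), ("o2", "a2")]
c_same = [("o1", "a1"), ("o2", "a2")]
c_diff = [("o1", "a1"), ("o2", "b2")]
```

A refutation requires only one time step at which the streams come apart; non-refutation must hold for every policy the agent might adopt.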

Note that the agent can hypothesize itself to effectively run on multiple computers! The conditions for effectively running on one computer do not contradict the conditions for effectively running on another computer. This naturally handles cases of identical physical instantiations of a single agent.

At this point, we have an account of an agent who:

  • Believes they have observations and take free actions
  • May falsifiably hypothesize physical law
  • May falsifiably hypothesize that some computer implements a Turing machine specification
  • May falsifiably hypothesize that they themselves effectively run on some computer

I have not yet shown that this account is consistent. There may be paradoxes. However, this at least represents the subject matter covered in a unified critical agential ontology.

Paradoxes sought and evaluated

Let us now seek out paradox. We showed before that the hypothesis “I take action f()” may be refuted at will, and therefore does not hold as a necessary law. We may suspect that “I effectively run on C” runs into similar problems.

Self-contradiction

Remember that, for the “I effectively run on C” hypothesis to be falsified, it must be falsified at some time, at which the agent’s observation/action comes apart from C’s. In the “I take action f()” case, we had the agent simulate f() in order to take the opposite action. However, C need not halt, so the agent cannot simulate C to completion. Instead, the agent may select some time t and run C for t steps. But by the time the agent has simulated C for t steps, the time is already past t, so the agent cannot contradict C’s behavior at time t by taking an opposite action. Rather, the agent only learns what C does at time t at some time later than t, and only their behavior after this later time may depend on this knowledge.

So, this paradox is avoided by the fact that the agent cannot contradict its own action before knowing it, but cannot know it before taking it.
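The timing obstruction can be made concrete with a toy sketch. Everything here is an illustrative assumption (the unit simulation cost in particular), not part of the original argument’s machinery:

```python
# Toy illustration of why the diagonalization fails: simulating C for
# t+1 steps costs the agent at least t+1 steps of its own time, so the
# agent learns C's action at time t only at some time strictly after t.

def simulate_C(steps):
    """Stand-in for running computer C; returns C's action at each step."""
    return ["act%d" % i for i in range(steps)]

def attempt_contradiction(t, sim_cost_per_step=1):
    # Agent simulates C through step t to learn C's action at time t...
    c_actions = simulate_C(t + 1)
    time_now = (t + 1) * sim_cost_per_step  # ...which costs at least t+1 steps.
    # The agent can only act on this knowledge at time_now > t:
    return time_now > t, c_actions[t]

too_late, known_action = attempt_contradiction(5)
print(too_late)  # True: too late to contradict C's action at time 5
```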

We may also try to create a paradox by assuming an external super-fast computer runs a copy of C in parallel, and feeds this copy’s action on subjective time-step t into the original C’s observation before time t; this way, the agent may observe its action before it takes it. However, now the agent’s action is dependent on its observation, and so the external super-fast computer must decide which observation to feed into the parallel C. The external computer cannot know what C will do before producing this observation, and so this attempt at a paradox cannot stand without further elaboration.
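The difficulty facing the external computer can be phrased as a fixed-point problem: it must find an observation o such that the copy of C, fed o, produces o back as its action. A hypothetical sketch (the contradicting behavior of `c_copy` is my own illustrative assumption) shows that no such fixed point need exist:

```python
# The external super-fast computer must find an observation o such that
# feeding o to a copy of C yields the action o back (a fixed point).
# If C simply contradicts its observed "predicted action", none exists.

def c_copy(observation):
    # Hypothetical copy of C that acts contrary to what it observes.
    return "B" if observation == "A" else "A"

has_fixed_point = any(c_copy(o) == o for o in ["A", "B"])
print(has_fixed_point)  # False: no observation the external computer can feed
```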

We see, now, that if free will and determinism are compatible, it is due to limitations on the agent’s knowledge. The agent, knowing it runs on C, cannot thereby determine what action it takes at time t, until a later time. And the initial attempt to provide this knowledge externally fails.

Downward causation

Let us now consider a general criticism of functionalist views, which is that of downward causation: if a mental entity (such as observation or action) causes a physical entity, doesn’t that either mean that the mental entity is physical, or that physics is not causally closed?

Recall that we have defined causation in terms of the agent’s action possibilities. It is straightforwardly the case, then, that the agent’s action at time t causes changes in the environment.

But, what of the physical cause? Perhaps it is also the case that C’s action at time t causes changes in the environment. If so, there is a redundancy, in that the change in the environment is caused both by the agent’s action and by C’s action. We will examine this possible redundancy to find potential conflicts.

To consider ways that C’s action may change the environment, we must consider how the agent may intervene on C’s action. Let us say we are concerned with C’s action at time t. Then we may consider the agent at some time u < t taking an action that will cause C’s action at time t to be over-written. For example, the agent may consider programming an external circuit that will interact with C’s circuit (“its circuit”).

However, if the agent performs this intervention, then the agent’s action at time t has no influence on C’s action at time t. This is because C’s action is, necessarily, equal to the value chosen at time u. (Note that this lack of influence means that the agent does not effectively run on C, for the notion of “effectively run on” considered! However, the agent may be said to effectively run on C with one exception.)

So, there is no apparent way to set up a contradiction between these interventions. If the agent decides early (at time u) to determine C’s action at time t, then that decision causes C’s action at time t; if the agent does not do so, then the agent’s decision at time t causes C’s action at time t; and these are mutually exclusive. Hence, there is not an apparent problem with redundant causality.
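The two mutually exclusive cases can be sketched schematically. The names below are hypothetical illustrations, not the post’s own machinery:

```python
# Schematic sketch of the two mutually exclusive causal cases:
# either an earlier intervention (set at time u) fixes C's action at
# time t, or the agent's own time-t decision does.

def c_action_at_t(override, agent_decision_at_t):
    """override: action fixed at earlier time u by the external circuit,
    or None if no such intervention was made."""
    if override is not None:
        return override  # the time-u intervention causes C's action at t
    return agent_decision_at_t  # the agent's time-t decision causes it

print(c_action_at_t(None, "left"))     # "left": caused at time t
print(c_action_at_t("right", "left"))  # "right": caused at time u
```

In the second case, the agent does not effectively run on C (at time t, C’s action no longer tracks the agent’s), which matches the parenthetical caveat above.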

Epiphenomenalism

It may be suspected that the agent I take to be real is epiphenomenal. Perhaps all may be explained in a physicalist ontology, with no need to posit that there exists an agent that has observations and takes actions. (This is a criticism levied at some views on consciousness; my notion of metaphysically-real observations is similar enough to consciousness that these criticisms are potentially applicable)

The question regarding explanatory power is: what is being explained, in terms of what? My answer is: observations are being explained, in terms of hypotheses that may be falsified by actions/observations.

An eliminativist perspective denies the agent’s observations, and thus fails to explain what ought to be explained, in my view. However, eliminativists will typically believe that “scientific observation” is possible, and seek to explain scientific observations.

A relevant point to make here is that the notion of scientific observation assumes there is some scientific process happening that has observations. Indeed, the scientific method includes steps, such as experimental testing, that require the scientific process to take actions. Thus, scientific processes may be considered as agents in the sense I am using the term.

My view is that erasing the agency of both individual scientists, and of scientific processes, puts the ontological and epistemic status of physics on shaky ground. It is hard to say why one should believe in physics, except in terms of it explaining observations, including experimental observations that require taking actions. And it is hard to say what it means for a physical hypothesis to be true, with no reference to how the hypothesis connects with observation and action.

In any case, the specter of epiphenomenalism presents no immediate paradox, and I believe that it does not succeed as a criticism.

Comparison to Gary Drescher’s view

I will now compare my account to Gary Drescher’s view. I have found Drescher’s view to be both particularly systematic and compelling, and to be quite similar to the views of other relevant philosophers such as Daniel Dennett and Eliezer Yudkowsky. Therefore, I will compare and contrast my view with Drescher’s. This will dispel the illusion that I am not saying anything new.

Notably, Drescher makes a similar observation to mine on Pearl: “Pearl’s formalism models free will rather than mechanical choice.”

Quoting section 5.3 of Good and Real:

Why did it take that action? In pursuit of what goal was the action selected? Was that goal achieved? Would the goal have been achieved if the machine had taken this other action instead? The system includes the assertion that if the agent were to do X, then Y would (probably) occur; is that assertion true? The system does not include the assertion that if it were to do P, Q would probably occur; is that omitted assertion true? Would the system have taken some other action just now if it had included that assertion? Would it then have better achieved its goals?

Insofar as such questions are meaningful and answerable, the agent makes choices in at least the sense that the correctness of its actions with respect to its designated goals is analyzable. That is to say, there can be means-end connections between its actions and its goals: its taking an action for the sake of a goal can make sense. And this is so despite the fact that everything that will happen (including every action taken and every goal achieved or not) is inalterably determined once the system starts up. Accordingly, I propose to call such an agent a choice machine.

Drescher is defining conditions of choice and agency in terms of whether the decisions “make sense” with respect to some goal, in terms of means-end connections. This is an “outside” view of agency, in contrast with my “inside” view. That is, it says a thing is an agent when its actions connect with some goal, and when the internal logic of that thing takes this connection into account.

This is in contrast to my view, which takes agency to be metaphysically basic, and defines physical outside views (and indeed, physics itself) in terms of agency.

My view would disagree with Drescher’s on the “inalterably determined” assertion. In an earlier chapter, Drescher describes a deterministic block-universe view. This view-from-nowhere implies that future states are determinable from past states. In contrast, the view I present here rejects views-from-nowhere, instead taking the view of some agent in the universe, from whose perspective the future course is not already determined (as already argued in examinations of paradox).

Note that these disagreements are principally about metaphysics and ontology, rather than scientific predictions. I am unlikely to predict the results of scientific experiments differently from Drescher on account of this view, but am likely to account for the scientific process, causation, choice, and so on in different language, and using a different base model.

Conclusion and further research

I believe the view I have presented to be superior to competing views on multiple fronts, most especially logical/philosophical systematic coherence. I do not make the full case for this in this post, but take the first step, of explicating the basic ontology and how it accounts for phenomena that are critically necessary to account for.

An obvious next step is to tackle decision theory. Both Bayesianism and VNM decision theory are quite concordant with critical agential ontology, in that they propose coherence conditions on agents, which can be taken as criticisms. Naturalistic decision theory involves reconciling choice with physics, and so a view that already includes both is a promising starting point.

Multi-agent systems are quite important as well. The view presented so far is near-solipsistic, in that there is a single agent who conceptualizes the world. It will need to be defined what it means for there to be “other” agents. Additionally, “aggregative” agents, such as organizations, are important to study, including in terms of what it means for a singular agent to participate in an aggregative agent. “Standardized” agents, such as hypothetical skeptical mathematicians or philosophers, are also worthy subjects of study; these standardized agents are relevant in reasoning about argumentation and common knowledge. Also, while the discussion so far has been in terms of closed individualism, alternative identity views such as empty individualism and open individualism are worth considering from a critical agential perspective.

Other areas of study include naturalized epistemology and philosophy of mathematics. The view so far is primarily ontological, secondarily epistemological. With the ontology in place, epistemology can be more readily explored.

I hope to explore the consequences of this metaphysics further, in multiple directions. Even if I ultimately abandon it, it will have been useful to develop a coherent view leading to an illuminating refutation.