“Infohazard” is a predominantly conflict-theoretic concept

Nick Bostrom writes about “information hazards”, or “infohazards”:

Information hazards are risks that arise from the dissemination or the potential dissemination of true information that may cause harm or enable some agent to cause harm. Such hazards are often subtler than direct physical threats, and, as a consequence, are easily overlooked. They can, however, be important. This paper surveys the terrain and proposes a taxonomy.

The paper considers both cases of (a) the information causing harm directly, and (b) the information enabling some agent to cause harm.

The main point I want to make is: cases of information being harmful are easier to construct when different agents’ interests/optimizations are misaligned; when agents’ interests are aligned, infohazards still exist, but they’re weirder edge cases.  Therefore, “infohazard” being an important concept is Bayesian evidence for misalignment of interests/optimizations, which is better modeled by conflict theory than by mistake theory.

Most of the infohazard types in Bostrom’s paper involve conflict and/or significant misalignment between different agents’ interests:

1.  Data hazard: followed by discussion of a malicious user of technology (adversarial)

2.  Idea hazard: also followed by discussion of a malicious user of technology (adversarial)

3.  Attention hazard: followed by a discussion including the word “adversary” (adversarial)

4.  Template hazard: follows discussion of competing firms copying each other (adversarial)

5.  Signaling hazard: follows discussion of people avoiding revealing their properties to others, followed by discussion of crackpots squeezing out legitimate research (adversarial)

6.  Evocation hazard: follows discussion of activation of psychological processes through presentation (ambiguously adversarial, non-VNM)

7.  Enemy hazard: by definition adversarial

8.  Competitiveness hazard: by definition adversarial

9.  Intellectual property hazard: by definition adversarial

10. Commitment hazard: follows discussion of commitments in adversarial situations (adversarial)

11. Knowing-too-much hazard: followed by discussion of political suppression of information (adversarial)

12. Norm hazard: followed by discussion of driving on sides of road, corruption, and money (includes adversarial situations)

13. Information asymmetry hazard: followed by discussion of “market for lemons” (adversarial)

14. Unveiling hazard: followed by discussion of iterated prisoner’s dilemma (misalignment of agents’ interests)

15. Recognition hazard: followed by discussion of avoiding common knowledge about a fart (non-VNM, non-adversarial, ambiguous whether this is a problem on net)

16. Ideological hazard: followed by discussion of true-but-misleading information resulting from someone starting with irrational beliefs (non-VNM, non-adversarial, not a strong argument against generally spreading information)

17. Distraction and temptation hazards: followed by discussion of TV watching (non-VNM, though superstimuli are ambiguously adversarial)

18. Role model hazard: followed by discussion of copycat suicides (non-VNM, non-adversarial, ambiguous whether this is a problem on net)

19. Biasing hazard: followed by discussion of double-blind experiments (non-VNM, non-adversarial)

20. De-biasing hazard: follows discussion of individual biases helping society (misalignment of agents’ interests)

21. Neuropsychological hazard: followed by discussion of limitations of memory architecture (non-VNM, non-adversarial)

22. Information-burying hazard: follows discussion of irrelevant information making relevant information harder to find (non-adversarial, though uncompelling as an argument against sharing relevant information)

23. Psychological reaction hazard: follows discussion of people being disappointed (non-VNM, non-adversarial)

24. Belief-constituted value hazard: defined as a psychological issue (non-VNM, non-adversarial)

25. Disappointment hazard: subset of psychological reaction hazard (non-VNM, non-adversarial, ambiguous whether this is a problem on net)

26. Spoiler hazard: followed by discussion of movies and TV being less fun when the outcome is known (non-VNM, non-adversarial, ambiguous whether this is a problem on net)

27. Mindset hazard: followed by discussion of cynicism and atrophy of spirit (non-VNM, non-adversarial, ambiguous whether this is a problem on net)

28. Embarrassment hazard: followed by discussion of self-image and competition between firms (non-VNM, includes adversarial situations)

29. Information system hazard: follows discussion of viruses and other inputs to programs that cause malfunctioning (includes adversarial situations)

30. Information infrastructure failure hazard: definition mentions cyber attacks (adversarial)

31. Information infrastructure misuse hazard: follows discussion of Stalin reading emails, followed by discussion of unintentional misuse (includes adversarial situations)

32. Robot hazard: followed by discussion of a robot programmed to launch missiles under some circumstances (adversarial)

33. Artificial intelligence hazard: followed by discussion of AI outcompeting and manipulating humans (adversarial)

Of these 33 types, 14 are unambiguously adversarial, 3 include adversarial situations, 2 are ambiguously adversarial, and 2 include significant misalignment of interests between different agents.  The remaining 12 generally involve non-VNM behavior, although there is one case (information-burying hazard) where the agent in question might be a utility maximizer (though this type of hazard is not an argument against sharing relevant information).  I have tagged several of these as “ambiguous whether this is a problem on net”, to indicate the lack of a strong argument that the information in question (e.g. disappointing information) is actually bad for the receiver on net.

Simply counting examples in the paper isn’t a particularly strong argument, however.  Perhaps the examples have been picked through a biased process.  Here I’ll present some theoretical arguments.

There is a standard argument that the value of information is non-negative: no rational agent can, from its own perspective, expect to be harmed by learning anything.  I will present this argument here.

Let’s say the actual state of the world is W \in \mathcal{W}, and the agent will take some action A \in \mathcal{A}.  The agent’s utility will be u(W, A) \in \mathbb{R}.  The agent starts with a distribution over W, P(W).  Additionally, the agent has the option of observing an additional fact m(W) \in \mathcal{O}, which it will in the general case not know at the start.  (I chose m to represent “measure”.)

Now, the question is, can the agent achieve lower utility in expectation if they learn m(W) than if they don’t?

Assume the agent doesn’t learn m(W).  Then the expected utility by taking some action a equals \sum_{w \in \mathcal{W}}P(W=w)u(w, a).  The maximum achievable expected utility is therefore

\max_{a \in \mathcal{A}} \sum_{w \in \mathcal{W}} P(W=w) u(w, a).

On the other hand, suppose the agent learns m(W) = o.  Then the expected utility by taking action a equals \sum_{w \in \mathcal{W}} P(W=w|m(W)=o)u(w, a), and the maximum achievable expected utility is

max_{a \in \mathcal{A}} \sum_{w \in \mathcal{W}} P(W=w|m(W)=o)u(w, a).

Under uncertainty about m(W), the agent’s expected utility equals

\sum_{o \in \mathcal{O}}P(m(W)=o) \max_{a \in \mathcal{A}} \sum_{w \in \mathcal{W}}P(W=w|m(W)=o)u(w, a).

Due to convexity of the \max function (for each fixed action a, the inner maximum is at least the value at a, so the expectation of the maximum is at least the maximum of the expectations), this is greater than or equal to:

\max_{a \in \mathcal{A}} \sum_{o \in \mathcal{O}}P(m(W)=o) \sum_{w \in \mathcal{W}}P(W=w|m(W)=o)u(w, a).

Re-arranging the summation and applying the definition of conditional probability, this is equal to:

\max_{a \in \mathcal{A}} \sum_{o \in \mathcal{O}} \sum_{w \in \mathcal{W}} P(m(W)=o \wedge W = w)u(w, a).

Marginalizing over o, this is equal to:

\max_{a \in \mathcal{A}} \sum_{w \in \mathcal{W}} P(W=w) u(w, a).

But this is the same as the maximum expected utility achievable without learning m(W).  This is sufficient to show that the agent does not achieve a lower expected utility by learning m(W).

(Note that this argument is compatible with the agent getting lower utility u(W, A) in some possible worlds due to knowing m(W), which would be a case of true-but-misleading information; the argument deals in expected utility, implying that the cases of true-but-misleading information are countervailed by cases of true-and-useful information.)
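To make the inequality concrete, here is a minimal numerical sketch; the world states, observation model, and utilities below are made-up illustrations, not anything from Bostrom’s paper.

```python
# Numeric check of the value-of-information argument above, with invented numbers.
worlds = ["w0", "w1"]
actions = ["a0", "a1"]
obs = ["o0", "o1"]

P_W = {"w0": 0.6, "w1": 0.4}                      # prior P(W)
# P(m(W)=o | W=w): the observation is informative but imperfect.
P_O_given_W = {("o0", "w0"): 0.8, ("o1", "w0"): 0.2,
               ("o0", "w1"): 0.3, ("o1", "w1"): 0.7}
# u(w, a): action a0 pays off in w0, action a1 pays off in w1.
u = {("w0", "a0"): 1.0, ("w0", "a1"): 0.0,
     ("w1", "a0"): 0.0, ("w1", "a1"): 2.0}

# Without the observation: max_a sum_w P(w) u(w, a).
eu_without = max(sum(P_W[w] * u[(w, a)] for w in worlds) for a in actions)

# With the observation: sum_o P(o) max_a sum_w P(w|o) u(w, a).
eu_with = 0.0
for o in obs:
    P_o = sum(P_O_given_W[(o, w)] * P_W[w] for w in worlds)
    posterior = {w: P_O_given_W[(o, w)] * P_W[w] / P_o for w in worlds}
    eu_with += P_o * max(sum(posterior[w] * u[(w, a)] for w in worlds)
                         for a in actions)

print(eu_without, eu_with)   # 0.8 vs 1.04: learning m(W) doesn't hurt in expectation
assert eu_with >= eu_without - 1e-12
```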

Is it possible to construct a multi-agent problem where the agents have the same utility function and all of them are harmed by some of them learning something? Suppose Alice and Bob are deciding on a coffee shop to meet at without being able to communicate beforehand, by finding a Schelling point.  The only nearby coffee shop they know about is Carol’s.  Derek also owns a coffee shop which is nearby.  Derek has the option of telling Alice and Bob about his coffee shop (and how good it is); they can’t contact him or each other, but they can still receive his message (e.g. because he advertises it on a billboard).

If Alice and Bob don’t know about Derek’s coffee shop, they successfully meet at Carol’s coffee shop with high probability.  But, if they learn about Derek’s coffee shop, they may find it hard to decide which one to go to, and therefore fail to meet at the same one.  (I have made the point previously that about-equally-good options can raise problems in coordination games).
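A minimal simulation of this failure mode, under the simplifying (and hypothetical) assumption that when two shops are about equally salient, each person picks one independently at 50/50:

```python
# Sketch of the coffee shop coordination game; probabilities are invented for illustration.
import random

def meet_probability(num_salient_shops, trials=100_000):
    """Probability that Alice and Bob pick the same shop when each chooses
    uniformly and independently among the shops salient to them."""
    meetings = 0
    for _ in range(trials):
        alice = random.randrange(num_salient_shops)
        bob = random.randrange(num_salient_shops)
        meetings += (alice == bob)
    return meetings / trials

# Only Carol's shop is known: they meet (essentially) every time.
print(meet_probability(1))   # ~1.0
# Derek's billboard makes a second, about-equally-good shop salient:
print(meet_probability(2))   # ~0.5 -- learning about Derek's shop hurt both of them
```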

This result is interesting because it’s a case of agents with the same goal (meeting at a good coffee shop) accomplishing that goal worse by knowing something than by not knowing it.  There are some problems with this example, however.  For one, Derek’s coffee shop may be significantly better than Carol’s, in which case Derek informing both Alice and Bob leads to them both meeting at Derek’s coffee shop, which is better than Carol’s.  If Derek’s coffee shop is significantly worse, then Derek informing Alice and Bob does not impact their ability to meet at Carol’s coffee shop.  So Derek could only predictably make their utility worse if somehow he knew that his shop was about as good to them as Carol’s.  But then it could be argued that, by remaining silent, Derek is sending Alice and Bob a signal that his coffee shop is about as good, since he would not have remained silent in other cases.

So even when I try to come up with a case of infohazards among cooperative agents, the example has problems.  Perhaps other people are better than me at coming up with such examples.  (While Bostrom presents examples of information hazards among agents with aligned interests in the paper, these lack enough mathematical detail to formally analyze them with utility theory to the degree that the coffee shop example can be analyzed.)

It is also possible that utility theory is substantially false, that humans don’t really “have utility functions”, and that therefore there can be information hazards.  Bostrom’s paper presents multiple examples of non-VNM behavior in humans.  This would call for revision of utility theory in general, which is a project beyond the scope of this post.

It is, in contrast, trivial to come up with examples of information hazards in competitive games.  Suppose Alice and Bob are playing Starcraft.  Alice is creating lots of some unit (say, zerglings).  Alice could tell Bob about this.  If Bob knew this, he would be able to prepare for an attack by this unit.  This would be bad for Alice’s ability to win the game.

It is still the case that Bob gains higher expected utility by knowing about Alice’s zerglings, which makes it somewhat strange to call this an “information hazard”; it’s more natural to say that Alice is benefitting from an information asymmetry.  Since she’s playing a zero-sum game with Bob, anything that increases Bob’s (local) utility function, including having more information and options, decreases Alice’s (local) utility function.  It is, therefore, unsurprising that the original “value of information is non-negative” argument can be turned on its head to show that “your opponent having information is bad for you”.
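A toy version of this, with invented payoffs, makes the symmetry explicit: Bob’s value of information is exactly Alice’s loss.

```python
# Toy zero-sum illustration of "your opponent having information is bad for you".
# All payoffs are invented.  Alice secretly builds zerglings or mutalisks (50/50 from
# Bob's perspective); Bob prepares an anti-ground or anti-air defense.
# Payoffs listed are Bob's; Alice's payoff is the negation (zero-sum).
bob_payoff = {("zerglings", "anti_ground"): +1, ("zerglings", "anti_air"): -1,
              ("mutalisks", "anti_ground"): -1, ("mutalisks", "anti_air"): +1}
prior = {"zerglings": 0.5, "mutalisks": 0.5}
defenses = ("anti_ground", "anti_air")

# Uninformed Bob commits to one defense for both cases: expected payoff 0.
uninformed = max(sum(prior[unit] * bob_payoff[(unit, d)] for unit in prior)
                 for d in defenses)

# Informed Bob best-responds to what Alice actually built: expected payoff +1.
informed = sum(prior[unit] * max(bob_payoff[(unit, d)] for d in defenses)
               for unit in prior)

print(uninformed, informed)      # Bob:   0.0 -> 1.0 by learning Alice's build
print(-uninformed, -informed)    # Alice: 0.0 -> -1.0 -- Bob's gain is exactly her loss
```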

There are, of course, games other than common-payoff games and zero-sum games, which could also contain cases of some agent being harmed by another agent having information.

It is, here, useful to distinguish the broad sense of “infohazard” that Bostrom uses, which includes multi-agent situations, from a narrower sense of “self-infohazards”, in which a given individual gains a lower utility by knowing something.  The value-of-information argument presented at the start shows that there are no self-infohazards in an ideal game-theoretic case.  Cooperative situations, such as the coffee shop example, aren’t exactly cases of a self-infohazard (which would violate the original value-of-information theorem), although there is a similarity in that we could consider Alice and Bob as parts of a single agent given that they have the same local utility function.  The original value of information argument doesn’t quite apply to these (which allows the coffee shop example to be constructed), but almost does, which is why the example is such an edge case.

Some apparent cases of self-infohazards are actually cases where it is bad for some agent A to be believed by some agent B to know some fact X.  For example, the example Bostrom gives of political oppression of people knowing some fact is a case of the harm to the knower coming not from their own knowledge, but from others’ knowledge of their knowledge.

The Sequences contain quite a significant amount of advice to ignore the idea that information might be bad for you, to learn the truth anyway: the Litany of Tarski, the Litany of Gendlin, “that which can be destroyed by the truth should be”.  This seems like basically good advice even if there are some edge-case exceptions; until coming up with a better policy than “always be willing to learn true relevant information”, making exceptions risks ending up in a simulacrum with no way out.

A case of some agent A denying information to some agent B with the claim that it is to agent B’s benefit is, at the very least, suspicious.  As I’ve argued, self-infohazards are impossible in the ideal utility theoretic case.  To the extent that human behavior and values deviate from utility theory, such cases could be constructed.  Even if such cases exist, however, it is hard for agent B to distinguish this case from one where agent A’s interests and/or optimization are misaligned with B’s, so that the denial of information is about maintaining an information asymmetry that advantages A over B.

Sociologically, it is common in “cult” situations for the leader(s) to deny information to the followers, often with the idea that it is to the followers’ benefit, that they are not yet ready for this information.  Such esotericism allows the leaders to maintain an information asymmetry over the followers, increasing their degree of control.  The followers may trust the leaders to really be withholding only the information that would be harmful to them.  But this is a very high degree of trust.  It makes the leaders effectively unaccountable, since they are withholding the information that could be used to evaluate their claims, including the claim that withholding the information is good for the followers.  The leaders, correspondingly, take on quite a high degree of responsibility for the followers’ lives, like a zookeeper takes on responsibility for the zoo animals’ lives; given that the followers don’t have important information, they are unable to make good decisions when such decisions depend on this information.

It is common in a Christian context for priests to refer to their followers as a “flock”, a herd of people being managed and contained, partially through information asymmetry: use of very selective readings of the Bible, without disclaimers about the poor historical evidence for the stories’ truth (despite priests’ own knowledge of Biblical criticism), to moralize about ways of life.  It is, likewise, common for parents to lie to children partially to “maintain their innocence”, in a context where the parents have quite a lot of control over the children’s lives, as their guardians.  My point here isn’t that this is always bad for those denied information (although I think it is in the usual case), but that it requires a high degree of trust and requires the information-denier to take on responsibility for making decisions that the one denied information is effectively unable to make due to the information disadvantage.

The Garden of Eden is a mythological story of a self-infohazard: learning about good and evil makes Eve and Adam less able to be happy animals, more controlled by shame.  It is, to a significant degree, a rigged situation, since it is set up by Yahweh.  Eve’s evaluation, that learning information will be to her benefit, is, as argued, true in most cases; she would have to extend quite a lot of trust to her captor to believe that she should avoid information that would be needed to escape from the zoo.  In this case her captor is, by construction, Yahweh, so a sentimentally pro-Yahweh version of the story shows mostly negative consequences from this choice.  (There are also, of course, sentimentally anti-Yahweh interpretations of the story, in Satanism and Gnosticism, which consider Eve’s decision to learn about good and evil to be wise.)

Situations that are explicitly low-trust usually don’t use the concept of a self-infohazard.  In a corporate setting, it’s normal to think that it’s good for your own company to have more information, but potentially bad for other companies to have information such as trade secrets or information that could be used to make legal threats.  The goal of corporate espionage isn’t to spread information to opponents while avoiding learning information about them, it’s to learn about the opponents while preventing them from learning about your company, which may include actively misleading them by presenting them with false or misleading information.  The harms of receiving misleading information are mitigated, not by not gathering information in the first place, but by gathering enough information to cross-check and build a more complete picture.

The closest corporate case I know of to belief in self-infohazards is in a large tech company which has a policy of not allowing engineers to read the GDPR privacy law; instead, their policy is to have lawyers read the law, and give engineers guidelines for “complying with the law”.  The main reason for this is that following the law literally as stated would not be possible while still providing the desired service.  Engineers, who are more literal-minded than lawyers, are more likely to be hindered by knowing the literal content of the law than they are if they receive easier guidelines from lawyers.  This is still somewhat of an edge case, since information isn’t being denied to the engineers for their own sake so much as so the company can claim to not be knowingly violating the law; given the potential for employees to be called to the witness stand, denying information to employees can protect the company as a whole.  So it is still, indirectly, a case of denying information to potential adversaries (such as prosecutors).

In a legal setting, there are cases where information is denied to people, e.g. evidence is considered inadmissible due to police not following procedure in gaining that information.  This information is not denied to the jury primarily because it would be bad for the jury; rather, it’s denied to them because it would be unfair to one side in the case (such as the defendant), and because admitting such information would create bad incentives for information-gatherers such as police detectives, which is bad for information-gatherers who are following procedure; it would also increase executive power, likely at the expense of the common people.

So, invocation of the notion of a self-infohazard is Bayesian evidence, not just of a conflict situation, but of a concealed conflict situation, where outsiders are more likely than insiders to label the situation as a conflict, e.g. in a cult.

It is important to keep in mind that, for A to have information they claim to be denying to B for B’s benefit, A must have at some point decided to learn this information.  I have rarely, if ever, heard cases where A, upon learning the information, actively regrets it; rather, their choice to learn about it shows that they expected such learning to be good for them, and this expectation is usually agreed with later.  I infer that it is common for A to be applying a different standard to B than to A; to consider B weaker, more in need of protection, and less agentic than A.

To quote @jd_pressman on Twitter:

Empathy based ethics in a darwinian organism often boils down to “Positive utilitarianism for me, negative utilitarianism for thee.”

Different standards are often applied because the situation actually is more of a conflict situation than is being explicitly represented.  One applies to one’s self a standard that values positively one’s agency, information, capacity, and existence, and one applies to others a standard that values negatively their agency, information, capacity, and existence; such differential application increases one’s position in the conflict (e.g. evolutionary competition) relative to others.  This can, of course, be rhetorically justified in various ways by appealing to the idea that the other would “suffer” by having greater capacities, or would “not be able to handle it” and is “in need of protection”.  These rhetorical justifications aren’t always false, but they are suspicious in light of the considerations presented.

Nick Bostrom, for example, despite discussing “disappointment hazards”, spends quite a lot of his time thinking about very disappointing scenarios, such as AGI killing everyone, or nuclear war happening.  This shows a revealed preference for, not against, receiving disappointing information.

An important cultural property of the word “infohazard” is that it is used quite differently in a responsible/serious and a casual/playful context.  In a responsible/serious context, the concept is used to invoke the idea that terrible consequences, such as the entire world being destroyed, could result from people talking openly about certain topics, justifying centralization of information in a small inner ring.  In a casual/playful context, “infohazard” means something other people don’t want you to know, something exciting the way occult and/or Eldritch concepts are exciting, something you could use to gain an advantage over others, something delicious.

Here are a few Twitter examples:

  • “i subsist on a diet consisting mostly of infohazards” (link)
  • “maintain a steady infohazard diet like those animals that eat poisonous plants, so that your mind will poison those that try to eat it” (link)
  • “oooooh an information hazard… Googling, thanks” (link)
  • “are you mature/cool enough to handle the infohazard that a lot of conversations about infohazards are driven more by games around who is mature/cool enough than by actual reasoned concern about info & hazards?” (link)

The idea that you could come up with an idea that harms people in weird ways when they learn about it is, in a certain light, totally awesome, the way mind control powers are awesome, or the way being an advanced magical user (wizard/witch/warlock/etc) is awesome.  The idea is fun the way the SCP wiki is fun (especially the parts about antimemetics).

It is understandable that this sort of value inversion would come from an oppositional attitude to “responsible” misinforming of others, as a form of reverse psychology closely related to the Streisand effect.  Under a conflict theory, someone not wanting you to know something is evidence that it would be good for you to learn it!

This can all still be true even if there are some actual examples of self-infohazards, due to non-VNM values or behavior in humans.  However, given the argument I am making, the more important the “infohazard” concept is considered, the more evidence there is of a possibly-concealed conflict; continuing to apply a mistake theory to the situation becomes harder and harder, in a Bayesian sense, as this information (about people encouraging each other not to accumulate more information) accumulates.

As a fictional example, the movie They Live (1988) depicts a situation in which aliens have taken over and are ruling Earth.  The protagonist acquires sunglasses that show him the ways the aliens control him and others.  He attempts to put the sunglasses on his friend, to show him the political situation; however, his friend physically tries to fight off this attempt, treating the information revealed by the sunglasses as a self-infohazard.  This is in large part because, by seeing the concealed conflict, the friend could be uncomfortably forced to modify his statements and actions accordingly, such as by picking sides.

The movie Bird Box (2018) is a popular and evocative depiction of a self-infohazard (similar in many ways to Langford’s basilisk), in the form of a monster that, when viewed, causes the viewer to die with high probability and, with low probability, to become a “psycho” who tries to show the monster to everyone else forcefully.  The main characters use blindfolds and other tactics to avoid viewing the monster.  There was a critical discussion of this movie that argued that the monster represents racism.  The protagonists, who are mostly white (although there is a black man who is literally an uncle named “Tom”), avoid seeing inter-group conflict; such a strategy only works for people with a certain kind of “privilege”, who don’t need to directly see the conflict to navigate daily life.  Such an interpretation of the movie is in line with the invocation of “infohazards” being Bayesian evidence of concealed conflicts.

What is one to do if one feels like something might be an “infohazard” but is convinced by this argument that there is likely some concealed conflict?  An obvious step is to model the conflict, as I did in the case of the tech company “complying” with GDPR by denying engineers information.  Such a multi-agent model makes it clear why it may be in some agents’ interest for some agents (themselves or others) to be denied information.  It also makes it clearer that there are generally losers, not just winners, when information is hidden, and makes it clearer who those losers are.

There is a saying about adversarial situations such as poker games: “If you look around the table and you can’t tell who the sucker is, it’s you”.  If you’re in a conflict situation (which the “infohazard” concept is Bayesian evidence for), and you don’t know who is losing by information being concealed, that’s Bayesian evidence that you are someone who is harmed by this concealment; those who are tracking the conflict situation, by knowing who the losers are, are more likely to ensure that they end up ahead.

As a corollary of the above (reframing “loser” as “adversary”): if you’re worried about information spreading because someone might be motivated to use it to do something bad for you, knowing who that someone is and the properties of them and their situation allows you to better minimize the costs and maximize the benefits of spreading or concealing information, e.g. by writing the information in such a way that some audiences are more likely than others to read it and consider it important.

Maybe the “infohazard” situation you’re thinking about really isn’t a concealed conflict, and is instead an actual violation of VNM utility; in that case, the consideration to make clear is how and why VNM doesn’t apply to the situation.  Such a consideration would be a critique of Bayesian/utility-based models as applied to humans, of the sort studied in behavioral economics.  I expect that people will often be biased towards looking for exceptions to VNM rather than looking for concealed conflicts (as they are, by assumption, concealing the conflict); however, that doesn’t mean that such exceptions literally never occur.

Selfishness, preference falsification, and AI alignment

If aliens were to try to infer human values, there are a few information sources they could start looking at.  One would be individual humans, who would want things on an individual basis.  Another would be expressions of collective values, such as Internet protocols, legal codes of states, and religious laws.  A third would be values that are implied by the presence of functioning minds in the universe at all, such as a value for logical consistency.

It is my intuition that much less complexity of value would be lost by looking at the individuals than looking at protocols or general values of minds.

Let’s first consider collective values.  Inferring what humanity collectively wants from internet protocol documents would be quite difficult; the fact a SYN packet must be followed by a SYN-ACK packet is a decision made in order to allow communication to be possible rather than an expression of a deep value.  Collective values, in general, involve protocols that allow different individuals to cooperate with each other despite their differences; they need not contain the complexity of individual values, as individuals within the collective will pursue these anyway.

Distinctions between different animal brains form more natural categories than distinctions between institutional ideologies (e.g. in terms of density of communication, such as in neurons), so that determining values by looking at individuals leads to value-representations that are more reflective of the actual complexity of the present world in comparison to determining values by looking at institutional ideologies.

There are more degenerate attractors in the space of collective values than in the space of individual values.  For example, each person may try to optimize “the common good” in a way that leads them to say they want “the common good”; since “the common good” is roughly an average of individuals’ stated preferences, it then treats each person’s utility function as mostly identical with “the common good”, so that “the common good” becomes a mostly self-referential phrase, referring to something with little resemblance to what anyone wanted in the first place.  (This has a lot in common with Ayn Rand’s writing in favor of “selfishness”.)

There is reason to expect that spite strategies, which involve someone paying to harm others, are collective, rather than individual.  Imagine that there are 100 different individuals competing, and that they have the option of paying 1 unit of their own energy to deduct 10 units of another individual’s energy.  This is clearly not worth it in terms of increasing their own energy, and is also not worth it in terms of increasing the percentage of the total energy owned by them, since the 10-unit harm falls on just one of the 99 competitors: paying 1 unit of energy only deducts about 0.1 units from the average competitor, so the spender falls behind the field.  On the other hand, if there are 2 teams fighting each other, then a team that instructs its members to hurt the other team (at cost) gains in terms of the percentage of energy controlled by the team; this situation is important enough that we have a common term for it, “war”.  Therefore, collective values are more likely than individual values to encode conflicts in a way that makes them fundamentally irreconcilable.
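The arithmetic, made explicit (the initial energy level of 100 per individual is an arbitrary choice for illustration):

```python
# Spite strategies: individual vs. team, in terms of share of total energy.
def share(own, total):
    return own / total

N, E, COST, HARM = 100, 100.0, 1.0, 10.0   # 100 individuals, arbitrary starting energy

# Individual spite: one of 100 individuals pays 1 to deduct 10 from a single rival.
total_before = N * E
total_after = total_before - COST - HARM
print(share(E, total_before))           # 0.01     (1% of total energy before)
print(share(E - COST, total_after))     # ~0.00991 (spite lowers the spender's own share)

# Team spite: 2 teams of 50; every member of team A pays 1 to deduct 10
# from a member of team B.
team_size = N // 2
team_A = team_size * E - team_size * COST   # 4950
team_B = team_size * E - team_size * HARM   # 4500
print(share(team_size * E, total_before))   # 0.5    (50% of total energy before)
print(share(team_A, team_A + team_B))       # ~0.524 (team spite raises the team's share)
```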

Let us also consider values necessary for minds-in-general.  I talked with someone at a workshop recently who had the opinion that AGI should optimize an agent-neutral notion of “good”, coming from the teleology of the universe itself, rather than human values specifically, although it would optimize our values to the extent that our values already align with the teleology.  (This is similar to Eliezer Yudkowsky’s opinion in 1997.)

There are some values embedded in the very structure of thought itself, e.g. a value for logical consistency and the possibility of running computations.  However, none of these values are “human values” exactly; at the point where these are the main thing under consideration, it starts making more sense to talk about “the telos of the universe” or “objective morality” than “human values”.  Even a paperclip maximizer would pursue these values; they appear as convergent instrumental goals.

Even though these values are important, they can be assumed to be significantly satisfied by any sufficiently powerful AGI (though probably not optimally); the difference in the desirability between a friendly and unfriendly AGI, therefore, depends primarily on other factors.

There is a somewhat subtle point, made by Spinoza, which is that the telos of the universe includes our own values as a special case, at our location; we do “what the universe wants” by pursuing our values.  Even without understanding or agreeing with this point, however, we can look at the way pure pursuit of substrate-independent values seems subjectively wrong, and consider the implications of this subjective wrongness.

“I”, “you”, “here”, and “now” are indexicals: they refer to something different depending on when, where, and who speaks them. “My values” is indexical; it refers to different value-representations (e.g. utility functions) for different individuals.

“Human values” is also effectively indexical.  The “friendly AI (FAI) problem” is framed as aligning artificial intelligence with human values because of our time and place in history; in another timeline where octopuses became sapient and developed computers before humans, AI alignment researchers would be talking about “octopus values” instead of “human values”. Moreover, “human” is just a word; we interpret it by accessing actual humans, including ourselves and others, and that is always indexical, since which humans we find depends on our location in spacetime.

Eliezer’s metaethics sequence argues that our values are, importantly, something computed by our brains, evaluating different ways the future could go.  That doesn’t mean that “what score my brain computes on a possible future” is a valid definition of what is good, but rather, that the scoring is what leads to utterances about the good.

The fact that actions, including actions about what to say is “good”, are computed by the brain does mean that there is a strong selection effect in utterances about “good”.  To utter the sentence “restaurants are good”, the brain must decide to deliver energy towards this utterance.

The brain will optimize what it does to a significant degree (though not perfectly) for continuing to receive energy, e.g. handling digestion and causing feelings of hunger that lead to eating.  This is a kind of selfishness that is hard to avoid.  The brain’s perceptors and actuators are indexical (i.e. you see and interact with stuff near you), so at least some preferences will also be indexical in this way.  It would be silly for Alice’s brain to directly care about Bob’s digestion as much as it cares about Alice’s digestion; there is a separation of concerns, implemented by the presence of nerves running directly from Alice’s brain to Alice’s digestive system but not to Bob’s.

For an academic to write published papers about “the good”, they must additionally receive enough resources to survive (e.g. by being paid), provide a definition that others’ brains will approve of, and be part of a process that causes them to be there in the first place (e.g. which can raise children to be literate).  This obviously causes selection issues if the academics are being fed and educated by a system that continues asserting an ideology in a way not responsive to counter-evidence.  If the academics would lose their job if they defined “good” in a too-heretical way, one should expect to see few heretical papers on normative ethics.

(It is usual in analytic philosophy to assume that philosophers are working toward truths that are independent of their individual agendas and incentives, with bad academic incentives being a form of encroaching badness that could impede this, whereas in continental philosophy it is usual to assert that academic work is done by individuals who have agendas as part of a power structure, e.g. Foucault saying that schools are part of an imperial power structure.)

It’s possible to see a lot of bad ethics in other times and places as resulting from this sort of selection effect (e.g. people feeling pressure to agree with prevailing beliefs in their community even if they don’t make sense), although the effect is harder to see in our own time and place due to our own socialization.  It’s in some ways a similar sort of selection effect to the fact that utterances about “the good” must receive energy from a brain process, which means we refer to “human values” rather than “octopus values” since humans, not octopuses, are talking about AI alignment.  

In optimizing “human values” (something we have little choice in doing), we are accepting the results of evolutionary selection that happened in the past, in a “might makes right” way; human values are, to a significant extent, optimized so that humans having these values successfully survive and reproduce.  This is only a problem if we wanted to locate substrate-independent values (values applicable to minds in general); substrate-dependent values depend on the particular material history of the substrate, e.g. evolutionary history, and environmentally-influenced energy limitations are an inherent feature of this history.

In optimizing “the values of our society” (also something we have little choice in, although more than in the case of “human values”), we are additionally accepting the results of historical-social-cultural evolution, a process by which societies change over time and compete with each other.  As argued at the beginning, parsing values at the level of individuals leads to representing more of the complexity of the world’s already-existing agency, compared with parsing values at the level of collectives, although at least some important values are collective.

This leads to another framing on the relation between individual and collective values: preference falsification.  It’s well-known that people often report preferences they don’t act on, and that these reports are often affected by social factors.  To the extent that we are trying to get at “intrinsic values”, this is a huge problem; it means that with rare exceptions, we see reports of non-intrinsic values.

A few intuition pumps for the commonality of preference falsification:

1. The degree of difference in stated values across different historical time periods, exceeding the actual change in human genetics, often corresponding to over-simplified values such as “maximizing productivity”, or simple religious values.

2. Commonality of people expressing lack of preference (e.g. about which restaurant to eat at), despite the experiences resulting from the different choices being pretty different.

3. Large differences between human stated values and predictions of evolutionary psychology, e.g. the commonality of people asserting that sexual repression is good.

4. Large differences in expressed values between children and adults, with children expressing more culturally-neutral values and adults expressing more culturally-specific ones.

5. “Akrasia”, people saying they “want” something without actually having the “motivation” to achieve it.

6. Feelings of “meaninglessness”, nihilism, persistent depression.

7. Schooling practices that have the effect of causing the student’s language to be aimed at pleasing authority figures rather than self-advocating.

Michelle Reilly writes on preference falsification:

Preference falsification is a reversal of the sign, and not simply a change in the magnitude, regarding some of your signaled value judgments. Each preference falsification creates some internal demand for ambiguity and a tendency to reverse the signs on all of your other preferences. Presumptively, any claim to having values differing from that which you think would maximize your inclusive fitness in the ancestral environment is either a lie, an error (potentially regarding your beliefs about what maximizes fitness, for instance, due to having mistakenly absorbed pop darwinist ideology), or a pointer to the outcome of a preference falsification imposed by culture.

(The whole article is excellent and worth reading.)

In general, someone can respond to a threat by doing what the threatener is threatening them to do, which includes hiding the threat (sometimes from consciousness itself; Jennifer Freyd’s idea of betrayal trauma is related) and saying what one is being threatened into saying.  At the end of 1984, after being confined to a room and tortured, the protagonist says “I love Big Brother”, in the ultimate act of preference falsification.  Nothing following that statement can be taken as a credible statement of preferences; his expressions of preference have become ironic.

I recently had a conversation with Ben Hoffman where he zoomed in on how I wasn’t expressing coherent intentions.  More of the world around me came into the view of my consciousness, and I felt like I was representing the world more concretely, in a way that led me to express simple preferences, such as that I liked restaurants and looking at pretty interesting things, while also feeling fear at the same time; it seemed that what I had been doing previously was trying to be “at the ready” to answer arbitrary questions in a fear-based way.  The moment faded, which leads me to believe that it is uncommon for me to feel and express authentic preferences.  I do not think I am unusual in this regard; Michael Vassar, in a podcast with Spencer Greenberg (see also a summary by Eli Tyre), estimates that the majority of adults are “conflict theorists” who are radically falsifying their preferences, which is in line with Venkatesh Rao’s estimate that 80% of the population are “losers” who are acting from defensiveness and trying to make information relevant to comparisons between people illegible.  In the “postrationalist” memespace, it is common to talk as if illegibility were an important protection; revealing information about one’s self is revealing vulnerabilities to potential attackers, making “hiding” as a generic, anonymous, history-free, hard-to-single-out person harder.

Can people who deeply falsify their preferences successfully create an aligned AI?  I argue “probably not”.  Imagine an institution that made everyone in it optimize for some utility function U that was designed by committee. That U wouldn’t be the human utility function (unless the design-by-committee process reliably determines human values, which would be extremely difficult), so forcing everyone to optimize U means you aren’t optimizing the human utility function; it has the same issues as a paperclip maximizer.

What if you try setting U = “make FAI”? “FAI” is a symbolic token (Eliezer writes about “LISP tokens”); for it to have semantics it has to connect with human value somehow, i.e. someone actually wanting something and being assisted in getting it.

Maybe it’s possible to have a research organization where some people deeply preference-falsify and some don’t, but for this to work the organization would need a legible distinction between the two classes, so no one gets confused into thinking they’re optimizing the preference-falsifiers’ utility function by constraining them to act against their values.  (I used the term “slavery” in the comment thread, which is somewhat politically charged, although it’s pointing at something important: preference falsification causes someone to serve another’s values, or an imaginary other’s values, rather than their own.)

In other words: the motion that builds a FAI must chain from at least one person’s actual values, but people under preference falsification can’t do complex research in a way that chains from their actual values, so someone who actually is planning from their values must be involved in the project, especially the part of the project that is determining how human values are defined (at object and process levels).

Competent humans are both moral agents and moral patients.  A sign that someone is preference-falsifying is that they aren’t treating themselves, or others like them, as moral patients.  They might send costly signals that they aren’t optimizing for themselves, that they’re optimizing for the common good, against their own interests.  But at least some intrinsic preferences are selfish, due to both (a) indexicality of perceptors/actuators and (b) evolutionary psychology.  So purely-altruistic preferences will, in the usual case, come from subtracting selfish preferences from one’s values (or sublimating them into altruistic preferences).  Eliezer has written recently about the necessity of representing partly-selfish values rather than over-writing them with altruistic values, in line with much of what I am saying here.

How does one treat one’s self as a moral agent and patient simultaneously, in a way compatible with others doing so?  We must (a) pursue our values and (b) have such pursuit not conflict too much with others’ pursuit of their values.  In mechanism design, we simultaneously have preferences over the mechanism (incentive structure) and the goods mediated by the incentive structure (e.g. goods being auctioned).  Similarly, Kant’s Categorical Imperative is a criterion for object-level preferences to be consistent with law-level preferences, which are like preferences about what legal structure to occupy; the object-level preferences are pursued subject to obeying this legal structure.  (There are probably better solutions than these, but this is a start.)

What has been stated so far is, to a significant extent, an argument for deontological ethics over utilitarian ethics.  Utilitarian ethics risks constraining everyone into optimizing “the common good” in a way that hides original preferences, which contain some selfish ones; deontological ethics allows pursuit of somewhat-selfish values as long as these values are pursued subject to laws that are willed in the same motion as willing the objects of these values themselves.

Consciousness is related to moral patiency (in that e.g. animal consciousness is regarded as an argument in favor of treating animals as moral patients), and is notoriously difficult to discuss.  I hypothesize that a lot of what is going on here is that:

1. There are many beliefs/representations that are used in different contexts to make decisions or say things.

2. The scientific method has criteria for discarding beliefs/representations, e.g. in cases of unfalsifiability, falsification by evidence, or complexity that is too high.

3. A scientific worldview will, therefore, contain a subset of the set of all beliefs had by someone.

4. It is unclear how to find the rest of the beliefs in the scientific worldview, since many have been discarded.

5. There is, therefore, a desire to be able to refer to beliefs/representations that didn’t make it into the scientific worldview, but which are still used to make decisions or say things; “consciousness” is a way of referring to beliefs/representations in a way inclusive of non-scientific beliefs.

6. There are, additionally, attempts to make consciousness and science compatible by locating conscious beliefs/representations within a scientific model, e.g. in functionalist theory of mind.

A chemist will have the experience of drinking coffee (which involves their mind processing information from the environment in a hard-to-formalize way) even if this experience is not encoded in their chemistry papers.  Alchemy, as a set of beliefs/representations, is part of experience/consciousness, but is not part of science, since it is pre-scientific.  Similarly, beliefs about ethics (at least, the ones that aren’t necessary for the scientific method itself) aren’t part of the scientific worldview, but may be experienced as valence.

Given this view, we care about consciousness in part because the representations used to read and write text like this “care about themselves”, wanting not to erase themselves from their own product.

There is, then, the question of how (or if) to extend consciousness to other representations, but at the very least, the representations used here-and-now for interpreting text are an example of consciousness.  (Obviously, “the representations used here-and-now” is indexical, connecting with the earlier discussion on the necessity of energy being provided for uttering sentences about “the good”.)

The issue of extension of consciousness is, again, similar to the issue of how different agents with somewhat-selfish goals can avoid getting into intractable conflicts.  Conflicts would result from each observer-moment assigning itself extreme importance based on its own consciousness, and not extending this to other observer-moments, especially if these other observer-moments are expected to recognize the consciousness of the first.

I perceive an important problem with the idea of “friendly AI” leading to nihilism, by the following process:

1. People want things, and wants that are more long-term and common-good-oriented are emphasized.

2. This leads people to think about AI, as it is important for automation, increasing capabilities in the long term.

3. This leads people to think about AI alignment, as it is important for the long-term future, given that AI will be relevant.

4. They have little actual understanding of AI alignment, so their thoughts are based on others’ thoughts, their idea of what good research should look like.

In the process their research has become disconnected from their original, ordinary wanting, which becomes subordinated to it.  But an extension of the original wanting is what “friendly AI” is trying to point at.  Unless these were connected somehow, there would be no reason or motive to value “friendly AI”; the case for it is based on reasoning about how the mind evaluates possible paths forward (e.g. in the metaethics sequence).

It becomes a paradoxical problem when people don’t feel motivated to “optimize the human utility function”.  But their utility function is, by definition, what they’re motivated to pursue, so this is absurd, unless there is mental damage causing failure of motivations to cohere at all.  This could be imprecisely summarized as: “If you don’t want it, it’s not a friendly AI”.  The token “FAI” is meaningless unless it connects with a deep wanting.

This leads to a way that a friendly AI project could be more powerful than an unfriendly AI project: the people working on it would be more likely to actually want the result in a relatively-unconfused way, so they’d be more motivated to actually make the system work, rather than just pretending to try to make the system work.

Alignment researchers who were in touch with “wanting” would be treating themselves and others like them as moral patients.  This ties in to my discussion of my own experiences as an alignment researcher.  I said at the end:

Aside from whether things were “bad” or “not that bad” overall, understanding the specifics of what happened, including harms to specific people, is important for actually accomplishing the ambitious goals these projects are aiming at; there is no reason to expect extreme accomplishments to result without very high levels of epistemic honesty.

This is a pretty general statement, but now it’s possible to state the specifics better.  There is little reason to expect that alignment researchers that don’t treat themselves and others like them as moral patients are actually treating the rest of humanity as moral patients.  From a historical outside view, this is intergenerational trauma, “hurt people hurt people”, people who are used to being constrained/dominated in a certain way passing that along to others, which is generally part of an imperial structure that extends itself through colonization; colonizers often have narratives about how they’re acting in the interests of the colonized people, but these narratives can’t be evaluated neutrally if the colonized people in question cannot speak.  (The colonization of Liberia is a particularly striking example of colonial trauma). Treating someone as a moral patient requires accounting for costs and benefits to them, which requires either discourse with them or extreme, unprecedented advances in psychology.

I recall a conversation in 2017 where a CFAR employee told someone I knew (who was a trans woman) that there was a necessary decision between treating the trans woman in question “as a woman” or “as a man”, where “as a man” meant “as a moral agent” and “as a woman” meant “as a moral patient”, someone who’s having problems and needs help.  That same CFAR person later told me about how they are excited by the idea of “undoing gender”.  This turns out to align with the theory I am currently advocating, that it is necessary to consider one’s self as both a moral agent and a moral patient simultaneously, which is queer-coded in American 90s culture.

I can see now that, as long as I was doing “friendly AI research” from a frame of trying not to be bad or considered bad (implicitly, trying to appear to serve someone else’s goals), everything I was doing was a total confusion; I was pretending to try to solve the problem, which might have possibly worked for a much easier problem, but definitely not one as difficult as AI alignment.  After having left “the field” and gotten more of a life of my own, where there is relatively less requirement to please others by seeming abstractly good (or abstractly bad, in the case of vice signaling), I finally have an orientation that can begin to approach the real problem while seeing more of how hard it is.

The case of aligning AI with a single human is less complicated than the problem with aligning it with “all of humanity”, but this problem still contains most of the difficulty.  There is a potential failure mode where alignment researchers focus too much on their own utility function at the expense of considering others’, but (a) this is not the problem on the margin given that the problem of aligning AI with even a single human’s utility function contains most of the difficulty, and (b) this could potentially be solved with incentive alignment (inclusive of mechanism design and deontological ethics) rather than enforcing altruism, which is nearly certain to actually be enforcing preference-falsification given the difficulty of checking actual altruism.

“Credibility” for being unbelievable

The word “credible” is perversely ambiguous.  On the face of it, it means: being trustworthy, being believable (in a Bayesian sense), being likely to make true statements and pay one’s debts.  But there’s another way the word is used, which is to indicate authority and prestige: control over which propositions are considered “truthy” (and/or agreement with controlling processes), rather than prediction of which statements are actually true.

Control over narratives, however, is anticorrelated with, and opposed to, actual believability.  If you can control the narrative to say that some proposition is either X or ~X at will, arbitrarily, then you’re using a symmetric process for “convincing” others: it’s just as easy to use it to convince of falsehood as of truth.  This is as opposed to asymmetric processes which are easier to use to convince of truth than of falsehood, e.g. public experiments, logical debate.

(The word “authority” is interesting here: “authority”, “authoritarian”, and “author” come from the same root, indicating a relation between the “authoring” of arbitrary narratives, “authoritarian” use of force by some parties to control others, and “authority” assigned to statements and producers of statements.)

While oracular reality-trackers discern facts, authority creates facts, primarily social facts; if these are the “facts” used to determine credibility, then authority and those close to it can “win” credibility, while having no corresponding ability to discern truth.

Being in a position to control narratives means having power: having maneuvered into a position to exert arbitrary influence on others.  Since power is rivalrous (it can’t be the case that everyone has lots of arbitrary influence on everyone else), acquiring power requires winning zero-sum games.  Winning zero-sum games requires allocating attention to the game itself; unless the game is set up so as to correlate with truth (e.g. a formal debate judged according to pro-epistemology standards such as logical rigor and consistency with evidence), it will be won by actors who are barely paying attention to the truth, who are bullshitting (not simply lying!).

Beyond this, zero-sum game play is opposed to revelation of information; such revelation is interpreted as aggression, as it breaks the “nothing changes” power-maintaining equilibrium.

The “calling a deer a horse” story is illustrative, demonstrating more severity than simply not paying attention to the truth.  When Zhao Gao points to a deer and says it is a horse, he effectively controls the narrative: those who want to live will “agree” with him that it’s a horse.  He isn’t believable, but he’s authoritative; he’s “credible”, as are those who submit to the threat and “agree” (ironically) with him.  (Ironic agreement is a state of doublethink, of internally disbelieving while outwardly agreeing; such ironic states of mind are suited to environments of reversed credibility.)

This story is more severe than simple bullshit, in that it involves selectively promoting false statements.  Paying enough attention to the truth to invert it and thus gain an advantage over truth-based actors is, of course, compatible with zero-sum play.

If a government known to promote lots of false stories promotes a false story as part of mobilization of military/police threat (say, the story that Saddam Hussein sought to purchase yellowcake uranium), is this story “credible” or “non-credible”?  It will be printed in prestigious newspapers, and will become a default assumption in many discussions, but people tracking history will have a sense of the government’s track record and know that the claim is made by the sort of actor who gets there by bullshitting.

Fiat currency is an interestingly explicit case.  The US adopted a metallic standard in 1785; government-issued money notes (US dollars) were exchangeable for a particular amount of a precious metal, initially silver and then gold.  To value US dollars is to bet that the government will be willing to exchange them for silver/gold; the money is valuable insofar as this promise is credible.

However, around WWI (1914-1918), many governments (including the US) suspended convertibility.  If the value of the money were simply based on the belief that it could be exchanged for precious metal, then the value would plummet accordingly.  But by then the money unit was well-integrated into the economy: it was used to set prices, pay wages, pay taxes, denominate bank savings and loans, and so on.  Changing protocols everywhere to adopt a new currency would be slow and difficult, and (given taxation) would run into conflict with the government.  While the value of money did reduce substantially (e.g. prices doubled in the US), this was not the totalizing devaluation that would be naively expected from a collapse of convertibility.

During the Great Depression, through Executive Order 6102 of 1933, the US government confiscated the vast majority of gold, “exchanging” it for a fixed amount of US dollars.  By the time the government is confiscating almost all gold, it’s obvious that US dollars are not valued primarily due to the expectation that they could be exchanged for gold.

So, though the “credibility” (market value) of the US dollar originally came from the belief that it could be exchanged for gold, its credibility over time shifted to be backed primarily by the authority of the US government, which is opposed to the expectation that it will pay its debts.  Even if US dollars can’t be exchanged for precious metal, they are (since 1884) legal tender, valid for paying public debts (e.g. taxes) and private debts.  Since US dollars are valid for private debts (according to US courts), it’s impractical for private debts between Americans not to rely on the “credibility” of the US dollar.

US dollars are, at this point, a stage 3-4 simulacrum with respect to the original claim of value.  This paves the way for further manipulation of currency through Federal Reserve policy implementing Keynesian macroeconomics, a form of military mobilization (the relation between macroeconomics and mobilization is de-obfuscated by Modern Monetary Theory).  Direct manipulation of the currency is, of course, a form of authority, opposed to believability, in that it undermines use of the currency to denominate unironic debts.

Back to the more general problem.  If you asked an average college-educated American whether institutions such as the CDC or the WHO are credible, they would probably say “yes”.  However, these institutions repeatedly made hard-to-believe claims during COVID, such as the claim that masks were unhelpful, or the claim that the virus was not airborne.  Prestigious news outlets such as the New York Times did not call out these claims as false early on, which is correlated with such outlets’ “credibility”; they’re “credible” due to repeating claims made by authoritative narrative-controllers (thus, being part of the narrative-control apparatus), not due to tracking reality.

As Nick Land asks: “Assuming the WHO, CDC, and FDA wanted to kill you, how would their behavior differ?”  It wouldn’t be a coincidence for authoritative institutions to be trying to kill those they exert authority over: power is the ability to threaten others, and threats can control narratives.

I’ve seen a lot of discussions where people with some shared explicit agenda (e.g. Effective Altruists) talk about the need to “gain credibility”, and assume that the way to do so is to be closer to power; their central example of a “credible” person would be a high-level corporate/government strategic consultant or a journalist of a prestigious publication.  Such talk doesn’t distinguish between credibility-as-believability and credibility-as-authority: is being a strategic consultant helpful for convincing others because it is correlated with saying true propositions, or is it helpful because the authority of the institution (or upstream institutions) intimidates people into accepting claims made by its members despite their unbelievability?

In conclusion:

  • “Credibility” conflates believability (Bayesian evidence) with authority (ability to control narratives arbitrarily).
  • Authority is derived from zero-sum game play, which is opposed to revelation of new information, and which threatens those over whom authority is exerted.
  • Thus, the two properties being conflated are opposed to each other.

On commitments to anti-normativity

Normativity: morality, ethics, doing the right thing, treating others as one would want to be treated, respecting moral symmetries, telling the truth, keeping commitments, following rules that are there to restrict harmful behavior, behaving in a way that contributes to the benefit of one’s society.

The idea of commitment to normativity is familiar.  Someone can be committed to behaving ethically, to the point that they forego some narrowly self-interested benefit to avoid behaving unethically.

What about commitment to anti-normativity?  This is commitment to doing the wrong thing, treating others as one wouldn’t want to be treated, disregarding moral symmetries, lying, breaking commitments, preventing rules from being followed, and parasitizing one’s society.

It is, naively, unsurprising that some people behave non-normatively, because non-normative behavior can bring a selfish benefit.  It is rather more surprising that commitment to anti-normativity may be a thing; such a commitment would cause one to continue behaving anti-normatively, even when normative behavior would be selfishly optimal.

Let’s look at some examples of anti-normativity:

  • The phrase “snitches get stitches”, and the idea that whistleblower protections might be necessary, points at the commonality of criminal conspiracies, which punish members not for breaking the law, but for causing the law to be enforceable.  Turning in other members of a conspiracy one is part of is, in a sense, aggressing upon them: it’s causing them to face negative consequences they expected not to face.  Members of a conspiracy commit to hiding themselves and each other from the law.
  • Privacy-related social norms are optimized for obscuring behavior that could be punished if widely known.  A common justification for such norms is that behavior that would be punished if known about is common, hence actual punishment is unfair scapegoating based on unpredictable factors; under privacy norms, revelation is more rare.  Such norms are sometimes enshrined into law, e.g. the Right to be Forgotten, by which some people can force records of their own behavior to be deleted.  (Note, privacy norms are an example of a paradoxical norm that is opposed to enforcement of norms-in-general).
  • Traumatized people are forcefully made part of a conspiracy, and learn to side with the transgressor who is aggressing upon them.  Such learning generalizes to siding with transgressors in general, as described in The Body Keeps the Score; while watching a play about dating violence, the traumatized children yell things like “kill the bitch”, siding with the transgressor in the scene.  This is despite the on-stage transgressor not actually being powerful; in the outer setting in which the play is being put on, such behavior is frowned upon, so the traumatized kids are going against powerful social structures.  (It is easy for traumatized people to conflate transgressiveness with power, but these frequently come apart.)
  • It’s very common to want to exclude people who are too “moralistic” or “judgy” from social groups.  If this were just a matter of disagreeing with these people about morality, then moral argumentation would be the most natural response; what is being opposed is, rather, individuals making moral judgments in a way that implies that some normal behaviors are unacceptable.  Being committed to behaving normally, then, means being committed not to follow moral laws that would compel behaving abnormally.  (Relatedly, “vice signalling”, e.g. smoking, can make others less afraid of moral judgment, as the vice signaller has morally lowered themselves, having less optionality to claim the moral high ground.  Many Christian teachings, e.g. “judge not lest you be judged”, “Recognize always that evil is your own doing, and to impute it to yourself.”, recommend the social strategy of not claiming moral high ground.)
  • Some social groups separate themselves from the “commoners”, making it clear that they’re a different class, not subject to the rules that constrain the commoners, e.g. militaries, intelligence agency members, high-level corporate executives, some professional classes, some spiritual practitioners, aristocracies throughout history.  The Inner Ring describes a general dynamic of this form.  They may transgression-bond with each other to show that they are not subject to the normal rules.  Nazi legal theorist Carl Schmitt writes that “Sovereign is he who decides on the exception”, i.e. the truly autonomous leader can allow rules to be broken at will; David Graeber describes royal and ritualistic power as involving socially tolerated value inversion in the last chapter of On Kings.

Why would dynamics like these result in commitments to anti-normativity? In some cases, like criminal conspiracy, the answer is obvious: exiting the conspiracy is, by default, dangerous. In general, being part of a conspiracy for enough time will cause conspiratorial behavior to seem “normal”, such that going back to non-conspiratorial behavior requires resetting one’s sense of normal behavior, as in cult deconversion.

Anti-normativity is closely related to motive ambiguity; if there is ambiguity between the motives of normativity and of local expediency (or other local social motives), then behaving anti-normatively signals that local expediency is what is being optimized for, and shows that one is giving up the option of blaming others for behaving non-normatively.

A bubble of anti-normativity is one where members are constantly signalling that they are behaving non-normatively and are encouraging others to behave non-normatively as well.  Such a bubble (essentially, a conspiracy) can maintain itself as long as it can continue meeting its constraints, e.g. intaking enough resources and not being successfully opposed.

How is anti-normativity related to oppression?  In a society that runs on normativity, there can be something approaching equality of opportunity; people can gain for themselves by following the rules and providing value to others.  In a society that runs on anti-normativity, such strategies will fail.  Instead of following the rules being the way to get ahead, accommodating anti-normativity while still conforming to local cultural expectations is necessary to get ahead.  Kelsey Piper recently described dynamics in bureaucracies by which lower-class people get treated worse than upper-middle-class people, despite appealing to the same rules.  Simply depending on the bureaucracies to follow the rules fails, since they don’t follow the rules; instead, it’s necessary to have more subtle social skills, such as knowing when to appeal, talking to people in a polite yet demanding way, seeming like the kind of person whom society generally treats well, seeming to be expensive to mess with, and so on.

Our society has a term for people who follow rules consistently (Asperger’s syndrome); it is considered a mental disorder, one that sharply reduces people’s social skills.  While Asperger’s is adaptive in lawful societies, it is maladaptive in anti-lawful societies, such as Nazi Germany, where the term was coined.  Hans Asperger was a Nazi who euthanized some of his patients; he identified the flaw of Asperger’s patients as failure to be absorbed into the national super-organism, a flaw also attributed to Jews, who have a highly lawful religion and are disproportionately likely to be diagnosed with Asperger’s.

If bureaucracies followed rules consistently, then Asperger’s would not be a social disadvantage; it would imply a high ability to navigate society.  In a society where corporations and other bureaucracies are anti-normative moral mazes, Asperger’s is a disadvantage, because appealing to rules alone is not an effective way to cause bureaucracies to provide service.

(A common intuition is that bureaucracies are bad because they follow the rules consistently, lacking subtle human factors.  As a counter to this intuition, consider the case of MMORPG games; the game mechanics function as a rule-following bureaucracy, e.g. the mechanics of stores and banking in the game.  Such games are fun because of the consistency of the software rules; inconsistency in game mechanics decreases predictability of effects of action, thereby decreasing effective planning horizons and increasing perception of unfairness.)

One can appeal to institutions on the basis of rules, or one can appeal on the basis of privilege, being the sort of person who should be rewarded for no reason.  Social classes are a matter of privilege, of people being treated one way or another because of who they are, what category they fit in, based on largely aesthetic properties.

If treatment by institutions is a matter of illegible cultural factors, then a large part of what is important is to be “normal”: being near the center of some Gaussian-ish distribution over people, such as a social class.  When everyone is transgressing, non-transgression isn’t a defense, while not standing out from the crowd (hiding as a statistic) is, since it prevents being singled out for scapegoating. The behavior is much more Fristonian (avoiding surprise) than decision-theoretic (trying to accomplish something that isn’t already the case).

Culture is correlated with race, both because people of different ancestry have different histories, and because people treat each other differently depending on appearance.  If society’s institutions are disproportionately occupied by people of some cultural group, then their sense of “normal” will accord with what is normal for that cultural group, not what is normal for other cultural groups.

So, anti-normativity is racially/culturally biased by default, in a way that normativity isn’t, or at least is much less so.  While explicit rules can be followed by people of a variety of different cultures, implicit social expectations are naturally particular to a narrow set of cultures.  Anti-normativity will tend to force behavior to follow a Gaussian-like distribution, where more central behavior is, by default, more rewarded than extremal behavior (with the exception of savvy extremal behavior optimized for taking advantage of the anti-normative dynamic).

Therefore, explicit anti-racism is much more necessary for mitigating oppression if anti-normativity is dominant than if normativity is dominant; having institutions staffed by people of a variety of different cultures broadens the set of what is considered normal by people in the institution, causing it to be more natural for the institution to service people of a variety of races/cultures.  This is, obviously, nowhere near a good solution, since institutions are still not following the rules, and not all cultures can be represented in a given institution; it is, rather, a harm-reduction measure given an already-bad situation.

Many-worlds versus discrete knowledge

[epistemic status: I’m a mathematical and philosophical expert but not a QM expert; conclusions are very much tentative]

There is tension between the following two claims:

  • The fundamental nature of reality consists of the wave function whose evolution follows the Schrödinger equation.
  • Some discrete facts are known.

(What is discrete knowledge? It is knowledge that some nontrivial proposition X is definitely true. The sort of knowledge a Bayesian may update on, and the sort of knowledge that logic applies to.)

The issue here is that facts are facts about something. If quantum mechanics has any epistemic basis, then at least some things are known, e.g. the words in a book on quantum mechanics, or the outcomes of QM experiments. The question is what this knowledge is about.

If the fundamental nature of reality is the wave function, then these facts must be facts about the wave function. But, this runs into problems.

Suppose the fact in question is “A photon passed through the measurement apparatus”. How does this translate to a fact about the wave function?

The wave function consists of a mapping from the configuration space (some subset of R^n) to complex numbers. Some configurations (R^n points) have a photon at a given location and some don’t. So the fact of a photon passing through the apparatus or not is a fact about configurations (or configuration-histories), not about wave functions over configurations.

Yes, some wave functions assign more amplitude to configurations in which the photon passes through the apparatus than others. Still, this does not allow discrete knowledge of the wave function to follow from discrete knowledge of measurements.
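
To make the type mismatch concrete, here is a minimal Python sketch (my own illustration; configuration space is reduced to two invented configurations standing in for a subset of R^n). A wave function is a map from configurations to complex amplitudes, while “a photon passed through the apparatus” is a predicate on configurations; applying the predicate to the wave function yields only an amplitude-squared weight, not a definite truth value.

    from math import sqrt

    # Toy "wave function": a map from (discretized) configurations to amplitudes.
    psi = {
        "photon_passed": complex(1 / sqrt(2), 0),
        "photon_absorbed": complex(1 / sqrt(2), 0),
    }

    def photon_passed(config):
        # A discrete fact: a predicate on configurations, not on wave functions.
        return config == "photon_passed"

    def weight(psi, predicate):
        # The most the wave function itself offers: an amplitude-squared weight
        # for the predicate, not a definite yes/no.
        return sum(abs(a) ** 2 for c, a in psi.items() if predicate(c))

    print(photon_passed("photon_passed"))  # True: a fact about a configuration
    print(weight(psi, photon_passed))      # ~0.5: a graded fact about psi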

The Bohm interpretation, on the other hand, has an answer to this question. When we know a fact, we know a fact about the true configuration-history, which is an element of the theory.

In a sense, the Bohm interpretation states that indexical information about which world we are in is part of fundamental reality, unlike the many-worlds interpretation which states that fundamental reality contains no indexical information. (I have discussed the trouble of indexicals with respect to physicalism previously)

Including such indexical information as “part of reality” means that discrete knowledge is possible, as the discrete knowledge is knowledge of this indexical information.

For this reason, I significantly prefer the Bohm interpretation over the many-worlds interpretation, while acknowledging that there is a great deal of uncertainty here and that there may be a much better interpretation possible. Though my reservations about the many-worlds interpretation had led me to be ambivalent about the comparison between the many-worlds interpretation and the Copenhagen interpretation, I am not similarly ambivalent about Bohm versus many-worlds; I significantly prefer the Bohm interpretation to both many-worlds and to the Copenhagen interpretation.

Modeling naturalized decision problems in linear logic

The following is a model of a simple decision problem (namely, the 5 and 10 problem) in linear logic. Basic familiarity with linear logic is assumed (enough to know what it means to say linear logic is a resource logic), although knowing all the operators isn’t necessary.

The 5 and 10 problem is, simply, a choice between taking a 5 dollar bill and a 10 dollar bill, with the 10 dollar bill valued more highly.

While the problem itself is trivial, the main theoretical issue is in modeling counterfactuals. If you took the 10 dollar bill, what would have happened if you had taken the 5 dollar bill? If your source code is fixed, then there isn’t a logically coherent possible world where you took the 5 dollar bill.

I became interested in using linear logic to model decision problems due to noticing a structural similarity between linear logic and the real world, namely irreversibility. A vending machine may, in linear logic, be represented as a proposition “$1 → CandyBar”, encoding the fact that $1 may be exchanged for a candy bar, being consumed in the process. Since the $1 is consumed, the operation is irreversible. Additionally, there may be multiple options offered, e.g. “$1 → Gumball”, such that only one option may be taken. (Note that I am using “→” as notation for linear implication.)

This is a good fit for real-world decision problems, where e.g. taking the $10 bill precludes also taking the $5 bill. Modeling decision problems using linear logic may, then, yield insights regarding the sense in which counterfactuals do or don’t exist.

First try: just the decision problem

As a first try, let’s simply try to translate the logic of the 5 and 10 situation into linear logic. We assume logical atoms named “Start”, “End”, “$5”, and “$10”. Respectively, these represent: the state of being at the start of the problem, the state of being at the end of the problem, having $5, and having $10.

To represent that we have the option of taking either bill, we assume the following implications:

TakeFive : Start → End ⊗ $5

TakeTen : Start → End ⊗ $10

The “⊗” operator can be read as “and” in the sense of “I have a book and some cheese on the table”; it combines multiple resources into a single linear proposition.

So, the above implications state that it is possible, starting from the start state, to end up in the end state, yielding $5 if you took the five dollar bill, and $10 if you took the 10 dollar bill.

The agent’s goal is to prove “Start → End ⊗ $X”, for X as high as possible. Clearly, “TakeTen” is a solution for X = 10. Assuming the logic is consistent, no better proof is possible. By the Curry-Howard isomorphism, the proof represents a computational strategy for acting in the world, namely, taking the $10 bill.
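
As a sanity check on the resource reading, here is a minimal Python sketch of this first-try model (my own illustration, not a proof checker for linear logic): the linear context is a multiset of atoms, each action consumes its premise, and so TakeFive and TakeTen cannot both be executed from a single Start.

    from collections import Counter

    ctx = Counter({"Start": 1})   # the linear context: a multiset of resources

    def apply(ctx, consume, produce):
        # Execute a linear implication: premises are consumed, conclusions produced.
        for atom in consume:
            if ctx[atom] <= 0:
                raise ValueError("missing resource: " + atom)
            ctx[atom] -= 1
        for atom in produce:
            ctx[atom] += 1

    def take_five(ctx):   # TakeFive : Start → End ⊗ $5
        apply(ctx, ["Start"], ["End", "$5"])

    def take_ten(ctx):    # TakeTen : Start → End ⊗ $10
        apply(ctx, ["Start"], ["End", "$10"])

    take_ten(ctx)
    print(+ctx)           # Counter({'End': 1, '$10': 1})
    try:
        take_five(ctx)    # Start has already been consumed
    except ValueError as e:
        print("blocked:", e)

Irreversibility is built in: once Start is spent, the other branch is simply not executable, which is the property the linear reading is meant to capture.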

Second try: source code determining action

The above analysis is utterly trivial. What makes the 5 and 10 problem nontrivial is naturalizing it, to the point where the agent is a causal entity similar to the environment. One way to model the agent being a causal entity is to assume that it has source code.

Let “M” be a Turing machine specification. Let “Ret(M, x)” represent the proposition that M returns x. Note that, if M never halts, then Ret(M, x) is not true for any x.

How do we model the fact that the agent’s action is produced by a computer program? What we would like to be able to assume is that the agent’s action is equal to the output of some machine M. To do this, we need to augment the TakeFive/TakeTen actions to yield additional data:

TakeFive : Start → End ⊗ $5 ⊗ ITookFive

TakeTen : Start → End ⊗ $10 ⊗ ITookTen

The ITookFive / ITookTen propositions are a kind of token assuring that the agent (“I”) took five or ten. (Both of these are interpreted as classical propositions, so they may be duplicated or deleted freely).

How do we relate these propositions to the source code, M? We will say that M must agree with whatever action the agent took:

MachineFive : ITookFive → Ret(M, “Five”)

MachineTen : ITookTen → Ret(M, “Ten”)

These operations yield, from the fact that “I” have taken five or ten, that the source code “M” eventually returns a string identical with this action. Thus, these encode the assumption that “my source code is M”, in the sense that my action always agrees with M’s.

Operationally speaking, after the agent has taken 5 or 10, the agent can be assured of the mathematical fact that M returns the same action. (This is relevant in more complex decision problems, such as twin prisoner’s dilemma, where the agent’s utility depends on mathematical facts about what values different machines return)

Importantly, the agent can’t use MachineFive/MachineTen to know what action M takes before actually taking the action. Otherwise, the agent could take the opposite of the action they know they will take, causing a logical inconsistency. The above construction would not work if the machine were only run for a finite number of steps before being forced to return an answer; that would lead to the agent being able to know what action it will take, by running M for that finite number of steps.

This model naturally handles cases where M never halts; if the agent never executes either TakeFive or TakeTen, then it can never execute either MachineFive or MachineTen, and so cannot be assured of Ret(M, x) for any x; indeed, if the agent never takes any action, then Ret(M, x) isn’t true for any x, as that would imply that the agent eventually takes action x.
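
Extending the same toy sketch (again my own illustration, with invented names), the second-try setup adds the ITookFive/ITookTen tokens as classical facts, and MachineFive/MachineTen as steps that convert such a token into Ret(M, ·). The operational point is visible directly: nothing about Ret(M, ·) is available before the action is taken.

    from collections import Counter

    ctx = Counter({"Start": 1})   # linear resources
    facts = set()                 # classical propositions (duplicable, deletable)

    def take_ten(ctx, facts):     # TakeTen : Start → End ⊗ $10 ⊗ ITookTen
        assert ctx["Start"] > 0
        ctx["Start"] -= 1
        ctx["End"] += 1
        ctx["$10"] += 1
        facts.add("ITookTen")

    def machine_ten(facts):       # MachineTen : ITookTen → Ret(M, "Ten")
        assert "ITookTen" in facts
        facts.add('Ret(M, "Ten")')

    print('Ret(M, "Ten")' in facts)   # False: no conclusion about M before acting

    take_ten(ctx, facts)
    machine_ten(facts)
    print('Ret(M, "Ten")' in facts)   # True: available only after the action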

Interpreting the counterfactuals

At this point, it’s worth discussing the sense in which counterfactuals do or do not exist. Let’s first discuss the simpler case, where there is no assumption about source code.

First, from the perspective of the logic itself, only one of TakeFive or TakeTen may be evaluated. There cannot be both a fact of the matter about what happens if the agent takes five, and a fact of the matter about what happens if the agent takes ten. This is because even defining both facts at once requires re-using the Start proposition.

So, from the perspective of the logic, there aren’t counterfactuals; only one operation is actually run, and what “would have happened” if the other operation were run is undefinable.

On the other hand, there is an important sense in which the proof system contains counterfactuals. In constructing a linear logic proof, different choices may be made. Given “Start” as an assumption, I may prove “End ⊗ $5” by executing TakeFive, or “End ⊗ $10” by executing TakeTen, but not both.

Proof systems are, in general, systems of rules for constructing proofs, which leave quite a lot of freedom in which proofs are constructed. By the Curry-Howard isomorphism, the freedom in how the proofs are constructed corresponds to freedom in how the agent behaves in the real world; using TakeFive in a proof has the effect, if executed, of actually (irreversibly) taking the $5 bill.

So, we can say, by reasoning about the proof system, that if TakeFive is run, then $5 will be yielded, and if TakeTen is run, then $10 will be yielded, and only one of these may be run.

The logic itself says there can’t be a fact of the matter about both what happens if 5 is taken and if 10 is taken. On the other hand, the proof system says that both proofs that get $5 by taking 5, and proofs that get $10 by taking 10, are possible.

How to interpret this difference? One way is by asserting that the logic is about the territory, while the proof system is about the map; so, counterfactuals are represented in the map, even though the map itself asserts that there is only a singular territory.

And, importantly, the map doesn’t represent the entire territory; it’s a proof system for reasoning about the territory, not the territory itself. The map may, thus, be “looser” than the territory, allowing more possibilities than could possibly be actually realized.

What prevents the map from drawing out logical implications to the point where it becomes clear that only one action may possibly be taken? Given the second-try setup, the agent simply cannot use the fact of their source code being M, until actually taking the action; thus, no amount of drawing implications can conclude anything about the relationship between M and the agent’s action. In addition to this, reasoning about M itself becomes harder the longer M runs, i.e. the longer the agent is waiting to make the decision; so, simply reasoning about the map, without taking actions, need not conclude anything about which action will be taken, leaving both possibilities live until one is selected.

Conclusion

This approach aligns significantly with the less-formal descriptions given of subjective implication decision theory and counterfactual nonrealism. Counterfactuals aren’t real, in the sense that they are not definable once the relevant action has been taken; rather, an agent in a state of uncertainty about which action it will take may consider multiple possibilities as freely selectable, even if they are assured that their selection will be equal to the output of some computer program.

The linear logic formalization increases my confidence in this approach, by providing a very precise notion of the sense in which the counterfactuals do and don’t exist, which would be hard to make precise without similar formalism.

I am, at this point, less worried about the problems with counterfactual nonrealism (such as global accounting) than I was when I wrote the post, and more worried about the problems of policy-dependent source code (which requires the environment to be an ensemble of deterministic universes, rather than a single one), such that I have updated towards counterfactual nonrealism as a result of this analysis, although I am still not confident.

Overall, I find linear logic quite promising for modeling embedded decision problems from the perspective of an embedded agent, as it builds critical facts such as non-reversibility into the logic itself.

Appendix: spurious counterfactuals

The following describes the problem of spurious counterfactuals in relation to the model.

Assume the second-try setup. Suppose the agent becomes assured that Ret(M, “Five”); that is, that M returns the action “Five”. From this, it is provable that the agent may, given Start, attain the linear logic proposition 0, by taking action “Ten” and then running MachineTen to get Ret(M, “Ten”), which yields inconsistency with Ret(M, “Five”). From 0, anything follows, e.g. $1000000, by the principle of explosion.

If the agent is maximizing guaranteed utility, then they will take the $10 bill, to be assured of the highest utility possible. So, it cannot be the case that the agent can be correctly assured that they will take action five, as that would lead to them taking a different action.

If, on the other hand, the agent would have provably taken the $5 bill upon receiving the assurance (say, because they notice that taking the $10 bill could result in the worst possible utility), then there is a potential issue with this assurance being a self-fulfilling prophecy. But, if the agent is constructing proofs (plans for action) so as to maximize guaranteed utility, this will not occur.

This solution is essentially the same as the one given in the paper on UDT with a known search order.

Topological metaphysics: relating point-set topology and locale theory

The following is an informal exposition of some mathematical concepts from Topology via Logic, with special attention to philosophical implications. Those seeking more technical detail should simply read the book.

There are, roughly, two ways of doing topology:

  • Point-set topology: Start with a set of points. Consider a topology as a set of subsets of these points which are “open”, where open sets must satisfy some laws.
  • Locale theory: Start with a set of opens (similar to propositions), which are closed under some logical operators (especially “and” and “or”), and satisfy logical relations.

What laws are satisfied?

  • For point-set topology: The empty set and the full set must both be open; finite intersections and infinite unions of opens must be open.
  • For locale theory: “True” and “false” must be opens; the opens must be closed under finite “and” and infinite “or”; and some logical equivalences must be satisfied, such that “and” and “or” work as expected.

Roughly, open sets and opens both correspond to verifiable propositions. If X and Y are both verifiable, then both “X or Y” and “X and Y” are verifiable; and, indeed, even countably infinite disjunctions of verifiable statements are verifiable, by exhibiting the particular statement in the disjunction that is verified as true.
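
A sketch of this notion of verifiability, under the common reading of a verifiable proposition as a semi-decision procedure (this is my own toy encoding, not taken from the book): a proposition is verified when some finite run confirms it; conjunction runs both procedures to completion, and a countable disjunction is verified by dovetailing until some disjunct confirms.

    import itertools

    # A verifiable proposition is modeled as a semi-decision procedure: a generator
    # yielding False while still working and True once the proposition is verified
    # (and possibly running forever if it never is).

    def eventually(n):
        # Stand-in for "verified after n steps of observation".
        def gen():
            for _ in range(n):
                yield False
            yield True
        return gen

    def run_until_verified(p):
        # Yield False until p reports True, then stop.
        for step in p():
            if step:
                return
            yield False

    def conj(p, q):
        # "X and Y": verified once both components are verified.
        def gen():
            yield from run_until_verified(p)
            yield from run_until_verified(q)
            yield True
        return gen

    def countable_disj(props):
        # Countable "or": dovetail the components, halting as soon as one is
        # verified (thereby exhibiting the verified disjunct).  A genuinely
        # infinite family would also need disjuncts to be introduced lazily.
        def gen():
            running = [iter(p()) for p in props]
            while True:
                for r in running:
                    if next(r, False):
                        yield True
                        return
                yield False
        return gen

    def verify(p, budget=10_000):
        return any(itertools.islice(p(), budget))

    print(verify(conj(eventually(3), eventually(5))))              # True
    print(verify(countable_disj([eventually(7), eventually(2)])))  # True

Note that negation appears nowhere in the interface, matching the restriction of opens to finite “and” and infinite “or”.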

What’s the philosophical interpretation of the difference between point-set topology and locale theory, then?

  • Point-set topology corresponds to the theory of possible worlds. There is a “real state of affairs”, which can be partially known about. Open sets are “events” that are potentially observable (verifiable). Ontology comes before epistemology. Possible worlds are associated with classical logic and classical probability/utility theory.
  • Locale theory corresponds to the theory of situation semantics. There are facts that are true in a particular situation, which have logical relations with each other. The first three lines of Wittgenstein’s Tractatus Logico-Philosophicus are: “The world is everything that is the case. / The world is the totality of facts, not of things. / The world is determined by the facts, and by these being all the facts.” Epistemology comes before ontology. Situation semantics is associated with intuitionist logic and Jeffrey-Bolker utility theory (recently discussed by Abram Demski).

Thus, they correspond to fairly different metaphysics. Can these different metaphysics be converted to each other?

  • Converting from point-set topology to locale theory is easy. The opens are, simply, the open sets; their logical relations (and/or) are determined by set operations (intersection/union). They automatically satisfy the required laws.
  • To convert from locale theory to point-set topology, construct possible worlds as sets of opens (which must be logically coherent, e.g. the set of opens can’t include “A and B” without including “A”), which are interpreted as the set of opens that are true of that possible world. The open sets of the topology correspond with the opens, as sets of possible worlds which contain the open. (A toy version of this construction is sketched below.)
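
Here is a toy version of the locale-to-points direction (my own miniature example; the general construction uses completely prime filters, which this does not attempt). Points are exactly the coherent sets of opens, and each open is then reinterpreted as the set of points at which it holds.

    from itertools import chain, combinations

    # A miniature "locale": opens generated from two basic opens A and B.
    opens = ["true", "A", "B", "A and B", "A or B", "false"]

    def coherent(s):
        # A candidate point: the set of opens true "at" it, required to respect
        # the logical relations among the opens.
        s = set(s)
        return (
            "true" in s
            and "false" not in s
            and ("A and B" in s) == ("A" in s and "B" in s)
            and ("A or B" in s) == ("A" in s or "B" in s)
        )

    def powerset(xs):
        return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

    points = [frozenset(s) for s in powerset(opens) if coherent(s)]

    def extent(o):
        # The open set corresponding to an open: the points that contain it.
        return {p for p in points if o in p}

    for o in opens:
        print(o, "->", len(extent(o)), "of", len(points), "points")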

From assumptions about possible worlds and possible observations of it, it is possible to derive a logic of observations; from assumptions about the logical relations of different propositions, it is possible to consider a set of possible worlds and interpretations of the propositions as world-properties.

Metaphysically, we can consider point-set topology as ontology-first, and locale theory as epistemology-first. Point-set topology starts with possible worlds, corresponding to Kantian noumena; locale theory starts with verifiable propositions, corresponding to Kantian phenomena.

While the interpretation of a given point-set topology as a locale is trivial, the interpretation of a locale theory as a point-set topology is less so. What this construction yields is a way of getting from observations to possible worlds. From the set of things that can be known (and knowable logical relations between these knowables), it is possible to conjecture a consistent set of possible worlds and ways those knowables relate to the possible worlds.

Of course, the true possible worlds may be finer-grained than this constructed set; however, they cannot be coarser-grained, or else the same possible world would result in different observations.  No finer potentially-observable (verifiable or falsifiable) distinctions may be made between possible worlds than the ones yielded by this transformation; making finer distinctions risks positing unreferenceable entities in a self-defeating manner.

How much extra ontological reach does this transformation yield? If the locale has a countable basis, then the point-set topology may have an uncountable point-set (specifically, of the same cardinality as the reals). The continuous can, then, be constructed from the discrete, as the underlying continuous state of affairs that could generate any given possibly-infinite set of discrete observations.

In particular, the reals may be constructed from a locale based on open intervals whose beginning/end are rational numbers. That is: a real r may be represented as a set of (a, b) pairs where a and b are rational, and a < r < b. The locale whose basis is rational-delimited open intervals (whose elements are countable unions of such open intervals, and which specifies logical relationships between them, e.g. conjunction) yields the point-set topology of the reals. (Note that, although including all countable unions of basis elements would make the locale uncountable, it is possible to weaken the notion of locale to only require unions of recursively enumerable sets, which preserves countability)
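
A small sketch of this representation (my own illustration): sqrt(2) can be handled entirely through the rational-delimited intervals that contain it, since membership in such an interval is decidable by exact rational arithmetic, and ever-finer intervals correspond to ever-more observations.

    from fractions import Fraction as Q

    def contains_sqrt2(a, b):
        # Does the rational open interval (a, b) contain sqrt(2)?  Decidable
        # exactly (for 0 <= a < b), since a < sqrt(2) < b iff a*a < 2 < b*b.
        return a * a < 2 and 2 < b * b

    # Refine the interval: each step is one more "observation" of sqrt(2).
    lo, hi = Q(0), Q(2)
    for _ in range(20):
        mid = (lo + hi) / 2
        if contains_sqrt2(mid, hi):
            lo = mid
        else:
            hi = mid
        assert contains_sqrt2(lo, hi)

    print(float(lo), float(hi))   # a narrow rational interval around sqrt(2)

No finite set of such intervals singles out sqrt(2); the full countable set does, which is the sense in which a continuous point is recovered from discrete observations.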

If metaphysics may be defined as the general framework bridging between ontology and epistemology, then the conversions discussed provide a metaphysics: a way of relating that-which-could-be to that-which-can-be-known.

I think this relationship is quite interesting and clarifying. I find it useful in my own present philosophical project, in terms of relating subject-centered epistemology to possible centered worlds. Ontology can reach further than epistemology, and topology provides mathematical frameworks for modeling this.

That this construction yields continuous from discrete is an added bonus, which should be quite helpful in clarifying the relation between the mental and physical. Mental phenomena must be at least partially discrete for logical epistemology to be applicable; meanwhile, physical theories including Newtonian mechanics and standard quantum theory posit that physical reality is continuous, consisting of particle positions or a wave function. Thus, relating discrete epistemology to continuous ontology is directly relevant to philosophy of science and theory of mind.

Two Alternatives to Logical Counterfactuals

The following is a critique of the idea of logical counterfactuals. The idea of logical counterfactuals has appeared in previous agent foundations research (especially at MIRI): here, here. “Impossible possible worlds” have been considered elsewhere in the literature; see the SEP article for a summary.

I will start by motivating the problem, which also gives an account for what a logical counterfactual is meant to be.

Suppose you learn about physics and find that you are a robot. You learn that your source code is “A”. You also believe that you have free will; in particular, you may decide to take either action X or action Y. In fact, you take action X. Later, you simulate “A” and find, unsurprisingly, that when you give it the observations you saw up to deciding to take action X or Y, it outputs action X. However, you, at the time, had the sense that you could have taken action Y instead. You want to be consistent with your past self, so you want to, at this later time, believe that you could have taken action Y at the time. If you could have taken Y, then you do take Y in some possible world (which still satisfies the same laws of physics). In this possible world, it is the case that “A” returns Y upon being given those same observations. But, the output of “A” when given those observations is a fixed computation, so you now need to reason about a possible world that is logically incoherent, given your knowledge that “A” in fact returns X. This possible world is, then, a logical counterfactual: a “possible world” that is logically incoherent.

To summarize: a logical counterfactual is a notion of “what would have happened” had you taken a different action after seeing your source code, and in that “what would have happened”, the source code must output a different action than what you actually took; hence, this “what would have happened” world is logically incoherent.

It is easy to see that this idea of logical counterfactuals is unsatisfactory. For one, no good account of them has yet been given. For two, there is a sense in which no account could be given; reasoning about logically incoherent worlds can only be so extensive before running into logical contradiction.

To extensively refute the idea, it is necessary to provide an alternative account of the motivating problem(s) which dispenses with the idea. Even if logical counterfactuals are unsatisfactory, the motivating problem(s) remain.

I now present two alternative accounts: counterfactual nonrealism, and policy-dependent source code.

Counterfactual nonrealism

According to counterfactual nonrealism, there is no fact of the matter about what “would have happened” had a different action been taken. There is, simply, the sequence of actions you take, and the sequence of observations you get. At the time of taking an action, you are uncertain about what that action is; hence, from your perspective, there are multiple possibilities.

Given this uncertainty, you may consider material conditionals: if I take action X, will consequence Q necessarily follow? An action may be selected on the basis of these conditionals, such as by determining which action results in the highest guaranteed expected utility if that action is taken.
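
A toy version of this selection rule (my own simplification: proof search is replaced by quantification over an explicit set of worlds the agent cannot yet rule out, and the numbers are invented): an action’s guaranteed utility is its worst-case utility over those worlds, and the agent picks the action maximizing that guarantee.

    # Worlds the agent cannot yet rule out, each specifying the utility that
    # would follow from each action (hypothetical numbers).
    possible_worlds = [
        {"take_5": 5, "take_10": 10},
        {"take_5": 5, "take_10": 7},
    ]
    actions = ["take_5", "take_10"]

    def guaranteed_utility(action):
        # "If I take this action, utility >= u" holds for sure iff it holds in
        # every world not yet ruled out.
        return min(world[action] for world in possible_worlds)

    best = max(actions, key=guaranteed_utility)
    print(best, guaranteed_utility(best))   # take_10 7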

This is basically the approach taken in my post on subjective implication decision theory. It is also the approach taken by proof-based UDT.

The material conditionals are ephemeral, in that at a later time, the agent will know that they could only have taken a certain action (assuming they knew their source code before taking the action), due to having had longer to think by then; hence, all the original material conditionals will be vacuously true. The apparent nondeterminism is, then, only due to the epistemic limitation of the agent at the time of making the decision, a limitation not faced by a later version of the agent (or an outside agent) with more computation power.

This leads to a sort of relativism: what is undetermined from one perspective may be determined from another. This makes global accounting difficult: it’s hard for one agent to evaluate whether another agent’s action is any good, because the two agents have different epistemic states, resulting in different judgments on material conditionals.

A problem that comes up is that of “spurious counterfactuals” (analyzed in the linked paper on proof-based UDT). An agent may become sure of its own action before that action is taken. Upon being sure of that action, the agent will know the material implication that, if they take a different action, something terrible will happen (this material implication is vacuously true). Hence the agent may take the action they were sure they would take, making the original certainty self-fulfilling. (There are technical details with how the agent becomes certain having to do with Löb’s theorem).

The most natural decision theory resulting from this framework is timeless decision theory (rather than updateless decision theory). This is because the agent updates on what they know about the world so far, and considers the material implications of themselves taking a certain action; these implications include logical implications if the agent knows their source code. Note that timeless decision theory is dynamically inconsistent in the counterfactual mugging problem.

Policy-dependent source code

A second approach is to assert that one’s source code depends on one’s entire policy, rather than only one’s actions up to seeing one’s source code.

Formally, a policy is a function mapping an observation history to an action. It is distinct from source code, in that the source code specifies the implementation of the policy in some programming language, rather than itself being a policy function.

Logically, it is impossible for the same source code to generate two different policies. There is a fact of the matter about what action the source code outputs given an observation history (assuming the program halts). Hence there is no way for two different policies to be compatible with the same source code.
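
A small illustration of the policy/source-code distinction (the source strings are invented, and eval is used only for brevity): syntactically distinct source code can implement one and the same policy, but a single halting program cannot implement two policies that disagree.

    # A policy maps an observation history (here, a tuple of strings) to an action.

    # Two syntactically different pieces of source code...
    source_A = "lambda obs: 'X' if 'source=A' in obs else 'Y'"
    source_B = "lambda obs: ('Y', 'X')['source=A' in obs]"

    # ...can implement the same policy (identical input-output behavior):
    policy_from_A = eval(source_A)
    policy_from_B = eval(source_B)
    history = ("source=A",)
    print(policy_from_A(history), policy_from_B(history))   # X X

    # But a single halting program determines exactly one policy: re-running the
    # same source on the same history can never yield a different action.
    print(eval(source_A)(history) == eval(source_A)(history))   # True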

Let’s return to the robot thought experiment and re-analyze it in light of this. After the robot has seen that their source code is “A” and taken action X, the robot considers what would have happened if they had taken action Y instead. However, if they had taken action Y instead, then their policy would, trivially, have to be different from their actual policy, which takes action X. Hence, their source code would be different. Hence, they would not have seen that their source code is “A”.

Instead, if the agent were to take action Y upon seeing that their source code is “A”, their source code must be something else, perhaps “B”. Hence, which action the agent would have taken depends directly on their policy’s behavior upon seeing that the source code is “B”, and indirectly on the entire policy (as source code depends on policy).

We see, then, that the original thought experiment encodes a reasoning error. The later agent wants to ask what would have happened if they had taken a different action after knowing their source code; however, the agent neglects that such a policy change would have resulted in seeing different source code! Hence, there is no need to posit a logically incoherent possible world.

The reasoning error came about due to using a conventional, linear notion of interactive causality. Intuitively, what you see up to time t depends only on your actions before time t. However, policy-dependent source code breaks this condition. What source code you see that you have depends on your entire policy, not just what actions you took up to seeing your source code. Hence, reasoning under policy-dependent source code requires abandoning linear interactive causality.

The most natural decision theory resulting from this approach is updateless decision theory, rather than timeless decision theory, as it is the entire policy that the counterfactual is on.

Conclusion

Before very recently, my philosophical approach had been counterfactual nonrealism. However, I am now more compelled by policy-dependent source code, after having analyzed it. I believe this approach fixes the main problem of counterfactual nonrealism, namely relativism making global accounting difficult. It also fixes the inherent dynamic inconsistency problems that TDT has relative to UDT (which are related to the relativism).

I believe the re-analysis I have provided of the thought experiment motivating logical counterfactuals is sufficient to refute the original interpretation, and thus to de-motivate logical counterfactuals.

The main problem with policy-dependent source code is that, since it violates linear interactive causality, analysis is correspondingly more difficult. Hence, there is further work to be done in considering simplified environment classes where possible simplifying assumptions (including linear interactive causality) can be made. It is critical, though, that the linear interactive causality assumption not be used in analyzing cases of an agent learning their source code, as this results in logical incoherence.

What is metaphysical free will?

This is an attempt to explain metaphysical free will; doing so also serves to explain metaphysics in general.

First: on the distinction between subject-properties and object-properties. The subject-object relation holds between some subject and some object. For example, a person might be a subject looking at a table, which is an object. Objects are, roughly, entities that could potentially be beheld by some subject.

Metaphysical free will is a property of subjects rather than objects. This will make more sense if I first contrast it with object-properties.

Objects can be defined by some properties: location, color, temperature, and so on. These properties yield testable predictions. Objects that are hot will be painful to touch, for example.

Object properties are best-defined when they are closely connected with testable predictions. The logical positivist program, though ultimately unsuccessful, is quite effective when applied to defining object properties. Similarly, the falsificationist program is successful in clarifying the meaning of a variety of scientific hypotheses in terms of predictions.

Intuitively, free will has to do with the ability of someone to choose one of multiple options. This implies a kind of unpredictability, at least from the perspective of the one making the choice.

Hence, there is a tension in considering free will as an object-property, in that object properties are about predictable relations, whereas free will is about choice. (Probabilistic randomness would not much help either, as e.g. taking an action with 50% probability does not match the intuitive notion of choice)

The most promising attempts to define free will as an object-property are within the physicalist school that includes Gary Drescher and Daniel Dennett. These define choice in terms of optimization: selection of the best action from a list of options, based upon anticipated consequences. This remains an object-property, because it yields a testable prediction: that the chosen action will be the one that is predicted to lead to the best consequences (and if the agent is well-informed, one that actually will). Drescher calls this “mechanical choice”.

I will now contrast object-properties (including mechanical choice) with subject-properties.

The distinction between subjects and objects is, to a significant extent, grammatical. Subjects do things, objects have things done to them. “I repaired the table with some glue.”

It is easy to detect notions of choice in ordinary language. “I could have gone to the store but I chose not to”; “you don’t have to do all that work”; “this software has so many options and capabilities”.

Functional definitions of objects are often defined in terms of the capabilities the subject has in using the object. For example, an axe can (roughly) be defined as an object that can be swung to hit another object and create a rift.

The desiderata of products, including software, are about usability. The desire is for an object that can be used in a number of ways.

Moral language, too, refers to capabilities. What one should do depends on what one can do; see Ought implies Can.

We could say, then, that this sort of subjunctive language is tied with orienting towards reality in a certain way. The orientation is, specifically, about noticing the capabilities that one’s self (and perhaps others) have, and communicating about these capabilities. I find that replacing the word “metaphysics” with the word “orientation” is often illuminating.

When this orientation is coupled with language, the language describes itself as between observation and action. That is: we talk as if we may take action on the basis of our speech. Thus, our language refers to, among other things, our capabilities, which are decision-relevant. This is in contrast to thinking of language as a side effect, or as an action in itself.

This could be studied in AI terms. An AI may be programmed to assume it has control of “its action”, and may have a model of what the consequences of various actions are, which correspond to its capabilities. From the AI’s perspective, it has a choice among multiple actions, hence in a sense “believing in metaphysical free will”. To program an AI to take effective actions, it isn’t sufficient for it to develop a model of what is; it must also develop a model of what could be made to happen. (The AI may, like a human, generate verbal reports of its capabilities, and select actions on the basis of these verbal reports)
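
A minimal sketch of such an AI (all names and numbers are invented for this illustration): alongside a model of the current state, it keeps a transition model of what each available action would make happen, and those entries are what the text above calls capabilities.

    state = "at_desk"

    # "What could be made to happen": predicted results of each available action.
    transition_model = {
        ("at_desk", "stay"): "at_desk",
        ("at_desk", "go_to_store"): "at_store",
    }
    utility = {"at_desk": 0, "at_store": 1}

    def capabilities(state):
        # The actions the agent models itself as able to take from this state.
        return [a for (s, a) in transition_model if s == state]

    def choose(state):
        # Pick the action whose predicted consequence is valued most highly.
        return max(capabilities(state),
                   key=lambda a: utility[transition_model[(state, a)]])

    print(capabilities(state))   # ['stay', 'go_to_store'] -- a report of capabilities
    print(choose(state))         # 'go_to_store'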

Even relatively objective ways of orienting towards reality notice capabilities. I’ve already noted the phenomenon of functional definitions. If you look around, you will see many objects, and you will also likely notice affordances: ways these objects may be used. It may seem that these affordances inhere in the objects, although it would be more precise to say that affordances exist in the subject-object relationship rather than the object itself, as they depend on the subject.

Metaphysics isn’t directly an object of scientific study, but can be seen in the scientific process itself, in the way that one must comport one’s self towards reality to do science. This comportment includes tool usage, logic, testing, observation, recording, abstraction, theorizing, and so on. The language scientists use in the course of their scientific study, and their communication about the results, reveals this metaphysics.

(Yes, recordings of scientific practice may be subject to scientific study, but interpreting the raw data of the recordings as e.g. “testing” requires a theory bridging between the objective recorded data and whatever “testing” is, where “testing” is naively a type of intentional action)

Upon noticing choice in one’s metaphysics, one may choose to philosophize on it, to see if it holds up to consistency checks. If the metaphysics leads to inconsistencies, then it should be modified or discarded.

The most obvious possible source of inconsistency is in the relation between the metaphysical “I” and the physical body. If the “I” is identical with one’s own physical body, then metaphysical properties of the self, such as freedom of choice, must be physical properties, leading to the usual problems.

If, on the other hand, the “I” is not identical with one’s physical body, then it must be explained why the actions and observations of the “I” so much align with the actions of the body; the mind-body relation must be clarified.

Another issue is akrasia; sometimes it seems that the mind decides to take an action but the body does not move accordingly. Thus, free will may be quite partial, even if it exists.

I’ve written before about reconciliation between metaphysical free will and the predictions of physics. I believe this account is better than the others I have seen, although nowhere near complete.

It is worth contrasting the position of believing in metaphysical free will with its denial. For example, in the Bhagavad Gita, Krishna states that the wise do not identify with the doer:

All actions are performed by the gunas of prakriti. Deluded by identification with the ego, a person thinks, “I am the doer.” But the illumined man or woman understands the domain of the gunas and is not attached. Such people know that the gunas interact with each other; they do not claim to be the doer.

Bhagavad Gita, Easwaran translation, ch. 3, 27-28

In this case the textual “I” is dissociated from the “doer” which takes action. Instead, the “I” is more like a placeholder in a narrative created by natural mental processes (gunas), not an agent in itself. (The interpretation here is not entirely clear, as Krishna also gives commands to Arjuna)

This specific discussion of metaphysical free will generalizes to metaphysics in general. Metaphysics deals with the basic entities/concepts associated with reality, subjects, and objects. It is contrasted with physics, which deals with objects, generalizing from observable properties of them (and the space they exist in and so on) to lawful theories.

To summarize metaphysical free will:

  • We talk in ways that imply that we and others have capabilities and make choices.
  • This way of talking is possible and sufficiently-motivated because of the way we comport ourselves towards reality, noticing our capabilities.
  • Effective AIs should similarly be expected to model their own capabilities as distinct from the present state of the world.
  • It is difficult to coherently identify these capabilities, which we talk as if we have, with physical properties of our bodies.
  • Therefore, it may be a reasonable (at least provisional) assumption that the capabilities we have are not physical properties of our bodies, and are metaphysical.
  • The implications of this assumption can be philosophically investigated, to build out a more coherent account, or to find difficulties in doing so.
  • There are ways of critiquing metaphysical free will. The assumption may lead to contradictions with observations, well-supported scientific theories, and so on.

The absurdity of un-referenceable entities

Whereof one cannot speak, thereof one must be silent.

Ludwig Wittgenstein, Tractatus Logico-Philosophicus

Some criticism of my post on physicalism is that it discusses reference, not the world. To quote one comment: “I consider references to be about agents, not about the world.” To quote another: “Remember, you have only established that indexicality is needed for reference, ie. semantic, not that it applies to entities in themselves” and also “you need to show that standpoints are ontologically fundamental, not just epistemically or semantically.” A post containing answers says: “However, everyone already kind of knows the we can’t definitely show the existence of any objective reality behind our observations and that we can only posit it.” (Note: I don’t mean to pick on these commentators; they’re expressing a very common idea)

These criticisms could be rephrased in this way:

“You have shown limits on what can be referenced. However, that in no way shows limits on the world itself. After all, there may be parts of the world that cannot be referenced.”

This sounds compelling at first: wouldn’t it be strange to think that properties of the world can be deduced from properties of human reference?

But, a slight amount of further reflection betrays the absurdity involved in asserting the possible existence of un-referenceable entities. “Un-referenceable entities” is, after all, a reference.

A statement such as “there exist things that cannot be referenced” is comically absurd, in that it refers to things in the course of denying their referenceability.

We may say, then, that it is not the case that there exist things that cannot be referenced. The assumption that this is the case leads to contradiction.
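
To make the structure of this reductio explicit, here is one rough formalization (my own gloss, not anything the commenters said), where $R(x)$ abbreviates “$x$ can be referenced” and the second line is a premise about how descriptions refer:

$$\exists x\,\neg R(x) \qquad \text{(supposition: there is something that cannot be referenced)}$$

$$\forall x\,(\neg R(x) \rightarrow R(x)) \qquad \text{(the description “a thing that cannot be referenced” refers to any } x \text{ satisfying it)}$$

$$\therefore\ \forall x\, R(x), \text{ which contradicts the supposition.}$$

All of the weight rests on the second premise, which just restates the observation that “un-referenceable entities” is itself a reference; someone rejecting it owes an account of how that phrase can occur in their own assertion without referring.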

I believe this sort of absurdity is closely related to Kantian philosophy. Kant distinguished phenomena (appearances) from noumena (things-in-themselves), and asserted that through observation and understanding we can only know phenomena, not noumena. Quoting Kant:

Appearances, to the extent that as objects they are thought in accordance with the unity of the categories, are called phaenomena. If, however, I suppose there to be things that are merely objects of the understanding and that, nevertheless, can be given to an intuition, although not to sensible intuition, then such things would be called noumena.

Critique of Pure Reason, Chapter III

Kant at least grants that noumena are given to some “intuition”, though not a sensible intuition. This is rather less ridiculous than asserting un-referenceability.

It is ironic that the noumenon-like entity being hypothesized in the present case (the physical world) would, by Kant’s criterion, be considered a scientific entity, a phenomenon.

Part of the absurdity in saying that the physical world may be un-referenceable is that it is at odds with the claim that physics is known through observation and experimentation. After all, un-referenceable observations and experimental results are of no use in science; they couldn’t make their way into theories. So the shadow of the world that can be known (and known about) by science is limited to the referenceable. The un-referenceable may, at best, be inferred (although, of course, this statement is absurd in referring to the un-referenceable).

It’s easy to make fun of this idea of un-referenceable entities (infinitely more ghostly than ghosts), but it’s worth examining what is compelling about this (absurd) position, to see what, if anything, can be salvaged.

From a modern perspective, we can see things that a pre-modern perspective cannot conceptualize. For example, we know about gravitational lensing, quantum entanglement, Cesium, and so on. It seems that, from our perspective, these things-in-themselves did not appear in the pre-modern phenomenal world. While they had influence, they did not appear in a way clear enough for a concept to be developed.

We may believe it is, then, normative for the pre-moderns to accept, in humility, that there are things-in-themselves they lack the capacity to conceptualize. And we may, likewise, admit this of the modern perspective, in light of the likelihood of future scientific advances.

However, conceptualizability is not the same as referenceability. Things can be pointed to that don’t yet have clear concepts associated with them, such as the elusive phenomena seen in dreams.

In this case, pre-moderns may point to modern phenomena as “those things that will be phenomena in 500 years”. We can talk about those things that our best theories don’t conceptualize but that will be conceptualized later. And this is a kind of reference; it travels through space-time to access phenomena not immediately present.

This reference is vague, in that it doesn’t clearly define what things are modern phenomena, and also doesn’t allow one to know ahead of time what these phenomena are. But it’s finitely vague, in contrast to the infinite vagueness of “un-referenceable entities”. It’s at least possible to imagine accessing them, by e.g. becoming immortal and living until modern times.

A case that our current condition (e.g. modernity) cannot know about something can be translated into a reference: a reference to that which we cannot know on account of our conditions but could know under other imaginable conditions. This is, indeed, unsurprising, given that any account according to which something outside our understanding exists must refer to that very thing outside our understanding.

My critique of an un-referenceable physical world is quite similar to Nietzsche’s critique of Kant’s unknowable noumena. Nietzsche wrote:

The “thing-in-itself” nonsensical. If I remove all the relationships, all the “properties,” all the “activities” of a thing, the thing does not remain over; because thingness has only been invented by us owing to the requirements of logic, thus with the aim of defining, communication (to bind together the multiplicity of relationships, properties, activities).

Will to Power, sec. 558

I continue to be struck by the irony of the transition from physical phenomena to physical noumena. Kant’s positing of a realm of noumena was, perhaps, motivated by a kind of humility, a kind of respect for morality, an appeasement of theological elements in society, while still making a place for thinking-for-one’s-self, science, and so on, in a separate magisterium that can’t collide with the noumenal realm.

Any idea, whether it’s God, Physics, or Objectivity, can disconnect from the human cognitive faculty that relates ideas to the world of experience, and remain as a mere signifier, which persists as a form of unfalsifiable control. When Physics and Objectivity take on theological significance (as they do in modern times), a move analogous to Kant’s will place them in an un-falsifiable noumenal realm, with the phenomenal realm being the subjective and/or intersubjective. This is extremely ironic.