Here’s my current explicit theory of ontology and meta-epistemology. I haven’t looked into the philosophical literature that much, but this view has similarities to both conjectural realism and to minimum description length.
I use “entity” to mean some piece of data in the mind, similar to an object in an object-oriented programming language. They’re the basic objects perception and models are made of.
Humans start with primitive entities, which include low-level physical percepts, and perhaps other things, though I’m not sure.
We posit entities to explain other entities, using Occam/probability rules; some entities are rules about how entities predict/explain other entities. Occam says to posit few entities to explain many.
Probability says explanations may be stochastic (e.g. dogs are white with 30% probability). See minimum description length for more on how Occam and probability interact.
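To make the Occam/probability tradeoff concrete, here is a minimal sketch in the minimum-description-length style (my own illustration, not something the view above commits to); the hypotheses, bit costs, and probabilities are invented for the example.

```python
# A minimal MDL-style sketch (illustration only; hypotheses and bit counts are made up):
# the "cost" of a hypothesis is the bits needed to state it plus the bits needed
# to encode the data given it. Occam favors cheap hypotheses; probability lets a
# stochastic generality pay a small per-datum cost instead of a huge one-off surprise.
import math

def total_description_length(hypothesis_bits, data, prob_of_observation):
    """Two-part code: bits to state the hypothesis, plus -log2 P(datum | H) per datum."""
    data_bits = sum(-math.log2(prob_of_observation(datum)) for datum in data)
    return hypothesis_bits + data_bits

# Observations: the colors of 10 swans.
observations = ["white"] * 9 + ["black"]

# Hypothesis A: "all swans are white" -- cheap to state, but assigns nearly zero
# probability to the black swan (a tiny probability is used to avoid infinity).
h_a = total_description_length(
    hypothesis_bits=5,
    data=observations,
    prob_of_observation=lambda c: 0.999999 if c == "white" else 0.000001,
)

# Hypothesis B: "swans are white with 90% probability" -- costs more to state
# (it has a parameter), but handles the black swan gracefully.
h_b = total_description_length(
    hypothesis_bits=10,
    data=observations,
    prob_of_observation=lambda c: 0.9 if c == "white" else 0.1,
)

print(f"H_A total bits: {h_a:.1f}")  # ~25 bits, dominated by the surprising black swan
print(f"H_B total bits: {h_b:.1f}")  # ~14.7 bits: the stochastic generality wins
```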
High-level percepts get posited to explain low-level percepts, e.g. a color splotch gets posited to explain all the individual colored points that are close to each other. A line gets posited to explain a bunch of individual colored points that are in, well, a line.
Persistent objects are posited (object permanence) to explain regularities in high-level percepts over spacetime. Object-types get posited to explain similarities between different objects.
Generalities (e.g. “that swans are white”) get posited to explain regularities between different objects. Generalities may be stochastic (coins turn up heads half the time when flipped). It’s hard to disentangle generalities from types themselves (is being white a generality about swans, or a defining feature?). Logical universals (such as modus ponens) are generalities.
Some generalities are causal relations, e.g. that striking a match causes a flame. Causal relations explain “future” events from “past” events, in a directed acyclic graph structure.
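As a toy illustration of this DAG structure (my own sketch, with invented events), causal generalities can be written as edges from causes to effects; a prediction order over events exists precisely because the graph has no cycles.

```python
# A minimal sketch (illustration only) of causal relations as a directed acyclic
# graph: edges point from "past" causes to "future" effects, and predictions
# flow along a topological order of the graph.
from graphlib import TopologicalSorter

# Hypothetical causal generalities, e.g. striking a match causes a flame.
# Each event maps to the set of its causes.
causes = {
    "flame": {"strike_match", "oxygen_present"},
    "smoke": {"flame"},
    "strike_match": set(),
    "oxygen_present": set(),
}

# A valid ordering exists only because the graph is acyclic; this is the order
# in which events can be explained/predicted from their causes.
order = list(TopologicalSorter(causes).static_order())
print(order)  # e.g. ['strike_match', 'oxygen_present', 'flame', 'smoke']
```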
So far, the picture is egocentric, in that percepts are taken to be basic. If I adopt a percept-based ontology, I will believe that the world moves around me as I walk, rather than believing that I move through the world. Things are addressed in coordinates relative to my position, not relative to the ground. (This is easy to see if you pay attention to your visual field while walking around.)
Whence objectivity? As I walk, most of the things around me “don’t move” if I posit that the ground is stable, since they have the same velocity as the ground. So, by positing that the ground is still while I move, I posit fewer motions. While I could in theory continue using an egocentric reference frame and posit laws of motion to explain why the world moves around me, this ends up more complicated and epicyclical than simply positing that the ground is still while I move. Objectivity-in-general is a result of these shifts in reference frame, where things are addressed relative to some common ground rather than egocentrically.
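A rough back-of-the-envelope version of the “fewer motions” point (my own sketch, with made-up numbers):

```python
# A rough sketch (illustration only, made-up numbers) of why the ground-based
# frame is Occam-preferred: count how many independent motions each frame posits
# to describe the same walk through a static scene.
n_static_objects = 1000  # trees, rocks, buildings at rest on the ground

# Egocentric frame: every object appears to move (opposite my walking velocity),
# so the frame posits a motion for each object, or an extra epicyclical law
# explaining why all those motions happen to match mine exactly.
egocentric_posited_motions = n_static_objects

# Ground frame: the ground and everything resting on it are still; only I move.
ground_frame_posited_motions = 1

print(egocentric_posited_motions, ground_frame_posited_motions)  # 1000 vs 1
```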
Objectivity implies theory of mind, in that I take my mental phenomena to be “properties of me-the-person” rather than “the mental phenomena that are apparent”, as an egocentric reference frame would take them to be. I posit other minds like my own, which is a natural result of the generalization that human bodies are inhabited by minds. Empathy is the connection I effectively posit between my own mental phenomena and others’ through this generalization.
An ontology shift happens when we start positing different types of entities than we did previously. We may go from thinking in terms of color splotches to thinking in terms of objects, or from thinking in terms of chemical essences to thinking in terms of molecules. Each step is justified by the Occam/probability rules; the new ontology must make the overall structure simpler.
Language consists of words, which are themselves entities that explain lower-level percepts (phonemes, dots of ink on paper, etc). Children learning language find that these entities are correlated with the reality they have already posited. (This is clear in the naive case, where teachers simply use language to honestly describe reality, but correlation is still present even when language use is dishonest.) The combination of objectivity and language has the result of standardizing a subset of ontology between different speakers, though nonverbal ontology continues to exist.
Mathematical entities (e.g. numbers) are posited to explain regularities in entities, such as the regularity between “two things over here” and “two things over there”, and between linguistic entities such as the word “two” and the actual “two things over here”. Mathematical generalizations are posited to explain mathematical entities.
Fictional worlds are posited to explain fictional media. We, in some sense, assume that a fiction book is an actual description of some world. Unlike with nonfiction media, we don’t expect this world to be the same as the one we move through in everyday life; it isn’t the actual world. Reality is distinguished from fantasy by their differing correlational structures.
If everything but primitive entities is posited, in what sense are these things “ultimately real”? There is no notion of “ultimately real” outside the positing structure. We may distinguish reality from fantasy within the structure, as the previous paragraph indicates. We may also distinguish illusion from substance, as we expect substance but not illusion to generate concordant observations upon being viewed differently. We may distinguish persistent ontology (which stays the same as we get more data) from non-persistent ontology (which changes as we get more data). And we may distinguish primitive entities from posited ones. But, there doesn’t seem to be a notion of ultimate reality beyond these particular distinctions and ones like them. I think this is a feature, not a bug. However, it’s at least plausible that when I learn more, my ontology will stabilize to the point where I have a natural sense of ultimate reality.
What does it mean for propositions to be true or false? A proposition is some sentence (an entity) corresponding to a predicate on worlds; it is true if and only if the predicate is true of the world. For example, “snow is white” is true if and only if snow is white. This is basically a correspondence theory, where we may speak of correspondences between the (already-ontologized) territory and ontological representations of it.
But, what about ontological uncertainty? It’s hard to say whether an ontology, such as the ontology of objects, is “true” or “false”. We may speak of it “fitting the territory well” or “fitting the territory badly”, which is not the same thing as “true” or “false” in a propositional sense. If we expect our ontologies to shift in the future (and I expect mine to shift), then, from the perspective of our new ontology, our current ontology will be false, the way Newtonian mechanics is false. However, we don’t have access to this hypothetical future ontology yet, so we can’t use it to judge our current ontology as false; the judgment that the original ontology is false comes along with a new worldview, which we don’t have yet. What we can say is whether or not we expect our reasoning processes to produce ontology shifts when exposed to future data.
May non-falsifiable entities be posited? Yes, if they explain more than they posit. Absent the ability to gain more historical data, many historical events are non-falsifiable. Still, positing such an event explains the data (e.g. writings and artifacts supposedly left at the site of the event) better than alternatives (e.g. positing that the writings were produced by people who happened to share the same delusion). So, entities need not be falsifiable in general, although ones that are completely unrelated to any observational consequences will never be posited in the first place.
Is reality out there, or is it all in our heads? External objects are out there; they aren’t in your brain, or they would be damaging your brain tissue. Yet, our representations of such objects are in our heads. Objects differ from our representations of them; they’re in different places, are different sizes, and are shaped differently. When I speak of posited structures, I speak of representations, not the objects themselves, although our posited structures constitute our sense of all that is.
Reductionism and physicalism
But isn’t reality made of atoms (barring quantum mechanics), not objects? We posit atoms to explain objects and their features. Superficially, positing so many atoms violates Occamian principles, but this is not an issue in probabilistic epistemologies, where we may (implicitly) sum over many possible atomic configurations. The brain doesn’t actually do such a sum; in practice we rarely posit particular atoms, and instead posit generalities about atoms and their relation to other entities (such as chemical types). Objects still exist in our ontologies, and are explained by atoms. Atoms explain, but do not explain away, objects.
But couldn’t you get all the observations you’re using objects to explain using atoms? Perhaps an AI can do this, but a human can’t. Humans continue to posit objects upon learning about atoms. The ontology shift to believing in only-atoms would be computationally intractable.
But doesn’t that mean the ultimate reality is atoms, not objects? “Ultimate reality” is hard to define, as explained previously. Plausibly, I would believe in atoms and not believe in objects if I thought much faster than I actually do. This would make objects a non-persistent ontology, as opposed to the more-persistent atomic ontology. However, this counterfactual is strange, as it assumes my brain is larger than the rest of the universe. Even then, I would be unable to model my brain as atomic. So it seems that, as an epistemic fact, atoms aren’t all there are; I would never shift to an atom-only ontology, no matter how big my brain was.
But isn’t this confusing the territory and the best map of the territory? As explained previously, our representations are not the territory. Our sense of the territory itself (not just of our map of it) contains objects, or, to drop the quotation, the territory itself contains objects. (Why drop the quotation? I’m describing my sense of the territory to you; there is nothing else I could proximately describe, other than my sense of the territory; in reaching for the territory itself, I proximately find my sense of it.)
This discussion is going towards the idea of supervenience, which is that high-level phenomena (such as objects) are entirely determined by low-level phenomena (such as atoms). Supervenience is a generality that relates high-level phenomena to low-level ones. Importantly, supervenience is non-referential (and thus vacuous) if there are no high-level phenomena.
If everything supervenes on atoms, then there are high-level phenomena (such as objects), not just atoms. Positing supervenience yields all the effective predictions that physicalism could yield (in our actual brains, not in theoretical super-AIs). Supervenience may imply physicalism, depending on the definition of physicalism, but it doesn’t imply that atoms are the only entities.
Supervenience leaves open a degree of freedom, namely, the function mapping low-level phenomena to high-level phenomena. In the case of consciousness as the high-level phenomenon, this function will, among other things, resolve indexical/anthropic uncertainty (which person are the experiences I see happening to?) and uncertainty about the hard problem of consciousness (which physical structures are conscious, and of what?).
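A minimal sketch of supervenience as a function (my own illustration; the low-level states and the mapping are invented): identical low-level states must receive identical high-level descriptions, and which mapping does this is exactly the degree of freedom just mentioned.

```python
# A minimal sketch (illustration only) of supervenience as a function from
# low-level states to high-level states: if two low-level states are identical,
# their high-level descriptions must be identical too. The states and the
# particular mapping here are made up.
from typing import FrozenSet, Tuple

LowLevelState = FrozenSet[Tuple[str, Tuple[float, float]]]  # (particle id, position)

def high_level(state: LowLevelState) -> str:
    """One of many possible supervenience mappings. The choice of this function,
    rather than some other one, is the 'degree of freedom' mentioned above."""
    xs = [pos[0] for _, pos in state]
    return "clustered" if max(xs) - min(xs) < 1.0 else "scattered"

s1: LowLevelState = frozenset({("a", (0.0, 0.0)), ("b", (0.5, 0.1))})
s2: LowLevelState = frozenset({("a", (0.0, 0.0)), ("b", (5.0, 0.1))})
print(high_level(s1), high_level(s2))  # clustered scattered
```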
Doesn’t this imply that p-zombies are conceivable? We may distinguish “broad” notions of conceivability, under which just about any posited structure is conceivable (and under which p-zombies are conceivable), and “narrow” notions, where the structure must satisfy certain generalities, such as logic and symmetry. Adding p-zombies to the posited structure might break important general relations we expect will hold, such as logic, symmetry of function from physical structure to mental structure, or realization-independence. I’m not going to resolve the zombie argument in this particular post, but will conclude that it is at least not clear that zombies are conceivable in the narrow sense.
Conclusion
This is my current best simple, coherent view of ontology and meta-epistemology. If I were to give it a name, it would be “Occamian conjecturalism”, but it’s possible it has already been named. I’m interested in criticism of this view, or other thoughts on it.