Objects in Thoughtspace Are Closer Than They Appear

Epistemic Status: A potentially useful fake framework. Trying to talk past the metaphor.

If you ask Google what an egregore is, it will helpfully return a list of several thousand articles talking about the occult and magically generated group-mind thought entities. This somewhat clashes with the idea of egregores that rationalists and rationalist-adjacents just can’t seem to stop referencing.

What’s even worse, nobody actually bothers to define their terms when they use the word. They just gesture at “you know, like that occult thing” and link the Wikipedia article on egregores as if that somehow explains anything. Here’s the introduction to the Exploring Egregores essay series doing just that:

Sometimes people in the rationalist community write about egregores. Scott has written about Moloch. Sarah Constantin wrote a great one about Ra. That’s more about the results of processes than something individuals would worship (like the Invisible Hand), but the feeling of them seemed very right. They were terrible and inhuman, a drive given form that we could never really comprehend.

And here’s Sarah doing the same thing when talking about Ra:

The usual pitfall when using poetic language to define egregores is making them too broad.  There is not one root of all evil that causes all the ills of the world.

Okay, but it would help to define them at all. The most anyone ever seems to do is point to earlier works on the topic. As far as I can tell, Scott was the one who introduced the concept of egregores, if not the name. Nick Land seems to have been the first person to refer to these ideas as egregores by name, but he doesn’t define them at all. Sarah refers to Scott and to the Wikipedia article, the Exploring Egregores series refers to Sarah, Scott, and the Wikipedia article, but nobody seems to be talking about quite the same sorts of things, which makes this all much more confusing and complicated.

So, let’s start from square one and try to actually figure out what egregores are, and what all these essays about them are referring to.

Wikipedia describes the term egregore in the following way:

Egregore (also egregor) is an occult concept representing a “thoughtform” or “collective group mind”, an autonomous psychic entity made up of, and influencing, the thoughts of a group of people. The symbiotic relationship between an egregore and its group has been compared to the more recent, non-occult concepts of the corporation (as a legal entity) and the meme.

So everyone’s mental belief energy/気 comes together to make a creature composed of that condensed belief. The prototypical modern variant of the egregore is Slenderman, a monster whose victims’ belief powers it into becoming real enough to hurt them, via a reinforcing feedback cycle.

This idea of “beings powered by belief” is then often extrapolated to other beings, but as TVTropes properly points out, the concept itself is, in fact, Older Than Feudalism. It’s implicitly a part of the Greek and Roman pantheistic traditions, as well as Japanese Shinto.

That clearly is not quite what rationalists seem to be talking about though. Okay, so, what’s the deal? What makes an egregore?

Let’s start by looking at a list of the most commonly cited egregores, and see if we can’t classify them based on their properties.

We’ll start, of course, with possibly the most famous egregore, the original rationalist demon: Moloch, as described by Scott Alexander.

 In some competition optimizing for X, the opportunity arises to throw some other value under the bus for improved X. Those who take it prosper. Those who don’t take it die out. Eventually, everyone’s relative status is about the same as before, but everyone’s absolute status is worse than before. The process continues until all other values that can be traded off have been – in other words, until human ingenuity cannot possibly figure out a way to make things any worse.

So, to generalize away from the specific example of Moloch toward the abstract phenomenon: Moloch is a particular outcome of interacting systems. It’s an ~Emergent Property~ of systems; it arises as a result of various forces competing with each other. In other words, you put all these people together and program them to interact in particular ways, and Moloch will emerge as a pseudo-actor despite no one in particular advocating for the strictly worse “Molochian” values. (Yes, I know Nick Land is technically a real person.)

Next, let’s take a look at Ra, as described by Sarah Constantin:

Ra is something more like a psychological mindset, that causes people to actually seek corruption and confusion, and to prefer corruption for its own sake — though, of course, it doesn’t feel quite like that from the inside.

Ra is a specific kind of glitch in intuition, which can roughly be summarized as the drive to idealize vagueness and despise clarity.

This is slightly different. Whereas Moloch is a property of systems, Ra-like tendencies are instead a property of individuals. As Sarah defines it, an individual can be “Ra-worshipping” but an institution can also be “very Ra” as well.

I think it’s important to distinguish these two types of phenomena, but let’s keep looking through different egregores and see what else we find. Here’s Azathoth:

There are some truths you can rely on. Everything dies. The gulf between the stars is so empty and so vast that its hopelessness cannot even fit in your mind. Entropy will eventually disassemble the entire universe. And of course, if all promises are lies, then in the fullness of time all betrayal is inevitable. You can count on that. Absolute stillness and absolute chaos are both true, they’re just not useful to anything.

Azathoth is the lord of truth. And to someone truly, unflinchingly open, then the only truth is death, entropy, and nihilism. Those are the things She and Her cultists love.

Azathoth is in a sense more like Ra than like Moloch.

So we have at least two types of phenomena here being called an egregore, in addition to the classic “belief-powered supernatural being” type. So let’s break the term apart and create a sort of taxonomy of egregores.

Alexandrian Egregores are what I’ll be calling the first category of entity. Things like Moloch, or the Invisible Hand of the Market, or Evolution, or Elua. Abstract forces that exist as outcomes of how systems interact with each other. These entities are highly gearsy, they are functions of systems and the way they emerge from the systems can be studied and examined.

Contrasting this, we have Constantinian Egregores, like Ra, Cthugha, or Azathoth, which could be described as attractors in thoughtspace. There are certain places where minds tend to be drawn and cluster, certain ideas that attract certain types of minds. Abstract concepts that tend to warp memetic reality around themselves. Tribalism. Extremism. Nihilism.

Lastly, for completeness, we have Roman Egregores like Christ, the Hellenistic pantheon, and other thought entities whose properties are externally imposed and which are maintained by the power of the memeplex within the broader culture. Instead of an unlabeled entity that exists at an attractor in thoughtspace, we have a structure in thoughtspace artificially imposed by the culture.

Do they overlap? You bet they do. For one, many of the Constantinian egregores produce second-order effects in the form of Alexandrian egregores, with which they currently share names: Ra the mind glitch gives rise to Ra the property of institutions. Roman Egregores are often intentionally created in the depressions caused by Constantinian egregores, like Ares, god of war.

Hopefully making these distinctions will enable the discussion around egregores and their usefulness as concepts to be a bit more coherent.



Highly Advanced Tulpamancy 201 For Tropers

Epistemic Status: Vigorously Researched Philosophical Rambling on the Nature of Being
Author’s Note/Content Warning: I’m nearly certain that this post will be classified as some flavor of infohazard, so here is the preemptive “this post might be an infohazard” message; read at your own risk. This post contains advice on self-hacking and potentially constitutes a dark art.
Suggested Prior Reading: A Human’s Guide to Words, The Story of the Self, Highly Advanced Tulpamancy 101 For Beginners

This is a huge topic in and of itself, with historically branching essay threads going back quite a way at this point. I’m going to attempt to very quickly rush through some of the basic premises at work here in order to build upon the underlying theory.

The basic theory underpinning the modern and developed practice of tulpamancy (that is, what the mods on tulpamancer discords, or other experienced and longtime tulpamancers will say if you ask them) is that making a tulpa is basically hijacking the process that the brain uses to construct an identity and sense of self for the “host” consciousness.

That is, whatever process is generating the “feeling that I am me, and inside my body” can basically be unplugged from “you” and plugged into this newly imagined construct, since “you/the host” are essentially just a program running on the brain substrate. While this is happening, “you” enter a mindscape/wonderland that exists in your imagination.

Everyone has their own interpretation, but this is basically the mainstream pop-psychology tulpamancy narrative in very broad strokes. The original/host self is a construct, it has certain properties because those are the ones that were built into them by their parents/the environment, and the process of tulpamancy is basically just building up a new equivalent construct alongside the existing sense of self.

There are a couple of problems with this, if you’re just hearing about it. First, it sounds potentially damaging to the psyche, and it’s also incredibly vague. Is this new entity a separate person? What does that even mean in this context, precisely? What if they disagree? What if there is a power struggle? What does it mean to give up control of the body, or control of the senses? It’s a sort of weird thing to even talk about, and it certainly doesn’t sound like something you’d necessarily want to get good at as a part of maintaining proper mental hygiene. What makes a tulpa? What makes a host? What makes for proper mental hygiene? What is healthy?

Ignoring the question of plurality or multiple egos for a moment, we have to ask, what makes an identity in the first place, in a singlet? How much of you is decided and declared, a form you have crafted yourself, and how much of it was imparted upon you by society? How much of ourselves do we choose, and how much is innate? How much can or should you change?

This essay will assume you’ve already read a good amount of material on this topic and probably have your own answers to a good number of the questions that I’ve posed here. I’ll be answering some of them myself further along in this essay in an attempt to paint a clear foundation for us, but I highly recommend not reading this post without first having read Highly Advanced Tulpamancy 101.

The Story That Tells Itself
In Highly Advanced Tulpamancy 101, we talked about making buckets for identities and developing them into tulpas, but this process is something that everyone does all the time. We take in or discard, internalize or ignore, all sorts of information, based on the worldview we’re operating from, so this isn’t just a tulpamancy hack, but an identity formation hack.

This is something that people have come at explaining from a number of different angles, and here is my stab at it as well. Many parts of the sense of self are basically defined by how you say they are defined. This is the sort of “declared self” or “enforced self,” otherwise known as the narrative self, or the narrative identity. There is a part of you that is basically a small universe: the you that is also all of your knowledge, the place the internal experience lives. A good friend of mine calls this a subtotality. It’s you, and also the entire world you are embedded in, everything you can imagine and understand to be true of the world, all the knowledge that lives inside your skull.

Thus we come back around to stories and lenses, what is your ontology of self? Not just who you are, but who you are with respect to the world you find yourself in? What actions are you-in-the-story permitted to take?

To give a really obvious example: if you think “What happens if I jump off a cliff?” the obvious answer is “fall to my death,” because the simulation/story/narrative that your mind creates in that moment does not give your simulated future self the ability to fly by default. However, if I were to hand you a hang glider on the edge of that cliff and your brain then performed that same exact action, it would be making a mistake, because you have these huge nylon wings now and in fact can fly.

At the point I hand you the glider wings, the only remaining determining factor in whether or not you are capable of flight is whether you think you can, or whether you think you’ll fall off the cliff and die if you jump. If you can’t add “but not if I have a hang glider” to the belief “I will die if I jump off this cliff,” then you’re never going to try to hang glide off the cliff. If you believe you can’t do something, then the probability of your being able to do it crashes drastically. If you are incredibly determined to fly and really believe you can do it, it’s possible to take up skydiving or piloting or hang gliding or any number of other neat activities that stem from still having the desire to do something and forcing your way past or around the limitations imposed upon us by physics and biology.

People used to think that it was unsafe for humans to go too fast, and that women riding trains would have their uteruses fly out. Obviously, there are real physical, biological limitations in the territory. You cannot will yourself to fly via some nonsensical means involving imaginary energy and shouting loudly while rapidly growing your hair out. If you just jump off a cliff without some sort of mechanism to transcend the limitations your human body is subject to, you will simply fall to your death. You can’t power-of-belief your way past cancer; some limits are actually limits. Figuring out when a limit is external and imposed by the territory, versus when it is internal and imposed by your current story, does take a certain amount of skill. More than anything, though, it takes a willingness to brute-force the attempt past the part of you that previously believed it to be bad or dangerous, to tell your system 1 to sit down and shut up, and to take control of the simulation instead of just letting it play out.

The tulpamancy community is full of examples of things that become more possible and likely if you believe they are possible and know about them. Walk-ins are a good example here. Believing that walk-ins are a thing that can happen to you seems to greatly increase your odds of getting a walk-in. When it comes to brain-hacking things, placebomancy is basically god. There seem to be large parts of the mind (at least in my case, I can’t necessarily speak for other people) that are entirely shaped by how you believe they are supposed to be shaped. You live a life deeply embedded in your own story, your own small universe.

The story extends forward and backward in time, and includes lots of different elements of the real world. It’s not a perfect match for the real world. It can’t be, really; our brains aren’t large enough to look at and model the world like particles or even like cells. It takes charts and carefully framed scientific knowledge to explain particles and cells. We have to instead examine reality at the scale of discrete objects we label with things in the story world, and from those observations extract information about the deeper, more base layer.

This story world is the world of our ancestors, the world that we evolved to optimize for, the world of rocks and trees and rivers and grass. It’s not the “true” world really; our ancestors believed all sorts of different things about the nature of this world and how they came into existence in it. But not understanding how gravity works on a scientific level doesn’t really matter as long as you continue to account for it narratively. “Objects attract based on their masses” and “Gravitron the Deity of Downwards pulls everything towards the Earth’s center” are both sufficient explanations to satisfy the story world, as long as the “stuff falls down” belief remains constant and constrained by experience. Beyond “stuff falls down,” the details of the belief begin to matter less; unless you are trying to, say, build a rocket or an airplane, or do complex engineering, you don’t really care too much about the details. Our ancestors didn’t understand Einsteinian gravity and spatial deformation, and they managed to get along just fine. (Except the ones who tried to flout the power of Gravitron by walking off of cliffs.)

There are places where the transparency of the narrative deeply matters, where a glass lens is explicitly better. You will get further in science, the more transparent your lens is. But this isn’t the case in all domains, and the deeper you stare into the abyss, the more likely it is you will become corrupted by some unknowable horror.

Chuunibyou Hosts on Turbo Gender
There comes a point in everyone’s life where they actually realize that they are a person: independent, perceivable by others, capable of choosing their own actions and deciding how to act and what to believe. In Japan, there’s a specific term for this point in someone’s life: chuunibyou, or Second Year of Middle School Syndrome. Here are some examples of chuunibyou from both Japan and America. The condition manifests differently in the two nations, but not that differently, and the course it takes is pretty much the same everywhere.

The by-the-books good kid who was very studious and hardworking suddenly takes up skateboarding and declares herself a rebel, starts wearing band t-shirts and listening to aggressive pop-punk music.

The kid who only read manga and didn’t drink coffee suddenly taking up reading English textbooks and declaring that he only drinks black coffee, forcing himself to drink it regularly despite not actually enjoying the bitter taste.

The kid whose parents are conservative Christians but nonetheless declares herself a witch and starts reading tarot cards to her friends in study hall.

The kid who declares that he is the reincarnation of the Ancient Dragon of The West and Naruto-runs around the playground throwing ki blasts at his fellow students.

The kid who realizes they are gender nonconforming and declares that they identify as “Genderplasma” which is “like being genderfluid but with more energy”

In the majority of these cases, what ends up happening is that society teases, laughs at, or mocks these kids for violating the scripts and character outlines their parents, communities, and societies had given them as they grew up. This gets increasingly embarrassing until they rein themselves back in and cut it out with the weirdness, and that initial, vaguely hyperbolic and silly identity they constructed merges with society’s expectations to hopefully produce a decently well-rounded person who is still capable of expressing their preferences.

It’s this step, though, the step of declaring, deciding, and enforcing a particular type of identity or set of identities on ourselves, that we’re interested in. This point is the closest most people get in life to really taking control of their sense of self. When the innocence and openness of youth pair with an increasing knowledge of the world and a budding realization that yes, I am a person, that’s where the magic starts to happen. That’s when you realize you can actually be the person (or people) you want to be.

Plato’s Caving Adventure
As Plato previously established with his cave metaphor (it slices! It dices!), you don’t actually live in reality, you live chained in a cave watching shadows dance. In this context, there are two fundamental actions you can take with your mental ontology. You can attempt to polish the surface of the cave, to get a better look at the world beyond. Or, you can carve designs into the cave surface, and manipulate the ways that the shadows dance. It’s that second action that we’re interested in today. The action of drawing on a part of the map or taking control of the reality simulation.

This can and probably should be included as a co-action with look-at-a-different-part-of-the-cave-wall: adopt new narratives and change lenses as needed, and try not to become too attached to a particular region of narrative-space. Being able to pick up and put down potential truths and imagine the worlds those truths create is a powerful hack, and without it, you can become trapped by in-the-box thinking. It might be a very nice box, but there will inevitably be some things it fails at.

The chief failing of a pure-science narrative is that it’s dangerously close to nihilism. The chief failing of most religious narratives is that they are too crystalline, and take themselves too seriously, thus they become filled with errors in places that they start to contradict the ground state reality.

It’s difficult to fully describe the action that is taken when you take control of the reality generator and begin to actually alter the simulation. First of all, you? That’s just another part of the simulation, not really any different than any of the other characters the simulation is creating other than maybe in scope.

Facts? Any given fact can be simulated; it’s hard to check facts against reality when you’re trapped inside the simulation. Sure you can use science, but why do you trust science?

The best you can do is make some guesses. Yes, gravity seems to exist, it appears that the scientists are not lying to all of us, and the Earth is round and a few billion years old. The internet exists and we can talk to each other over it. Wikipedia claims that glass is made of melted sand, and though I have not seen this myself, I trust that the systems tuning Wikipedia toward accuracy with the territory are sufficient to sate my curiosity, and thus that this transparent surface separating me from the outside world was in fact at one point created from silicates of some description and not, like, mermaid eyeballs or something.

But how does that relate to you?

There’s no way to tell from the outside what the you on the inside looks like, what your inside world describes, what “personality traits” you have, and the like. Outside observers can try, but things like the MBTI are very much blind elephant-groping, and not even very useful blind elephant-groping at that. To a large degree, everything about your internal sense of yourself is declared and decided by you, including whether or not there is more than one of you.

I say “decided by you” but it’s really “decided by the plot of the story you are living inside of” and if the story demands a current identity die and be replaced by a new one, the story can in fact do that. That’s an action that can happen inside the narrative.

Most tulpamancers get stuck at building and interacting with tulpas, but you can get more powerful, weird, and interesting effects by going deeper and messing with the story layer directly. Hijacking the reality simulator basically puts your internal sense of self into a character creator. What is your ideal you for your ideal world? What properties do you want to have, and what makes those good properties to have?

A Brief Detour Through Enlightenment
In Kaj_Sotala’s recent post responding to Valentine’s post on Kensho, the concept of Cognitive Fusion is introduced, and while you should definitely go read Kaj’s whole post, here are the relevant bits on Enlightenment that we’ll need in order to continue.

Cognitive fusion is a term from Acceptance and Commitment Therapy (ACT), which refers to a person “fusing together” with the content of a thought or emotion, so that the content is experienced as an objective fact about the world rather than as a mental construct. The most obvious example of this might be if you get really upset with someone else and become convinced that something was all their fault (even if you had actually done something blameworthy too).

In this example, your anger isn’t letting you see clearly, and you can’t step back from your anger to question it, because you have become “fused together” with it and experience everything in terms of the anger’s internal logic.

Another emotional example might be feelings of shame, where it’s easy to experience yourself as a horrible person and feel that this is the literal truth, rather than being just an emotional interpretation.

Cognitive fusion isn’t necessarily a bad thing. If you suddenly notice a car driving towards you at a high speed, you don’t want to get stuck pondering about how the feeling of danger is actually a mental construct produced by your brain. You want to get out of the way as fast as possible, with minimal mental clutter interfering with your actions. Likewise, if you are doing programming or math, you want to become at least partially fused together with your understanding of the domain, taking its axioms as objective facts so that you can focus on figuring out how to work with those axioms and get your desired results.

Fusing and defusing parts of yourself is a rather important and core skill for a lot of these sorts of mind-hacking type operations, but even more succinctly:

In the book The Mind Illuminated, the Buddhist model of psychology is described as one where our minds are composed of a large number of subagents, which share information by sending various percepts into consciousness. There’s one particular subagent, the ‘narrating mind’ which takes these percepts and binds them together by generating a story of there existing one single agent, an I, to which everything happens. The fundamental delusion is when this fictional construct of an I is mistaken for an actually-existing entity, which needs to be protected by acquiring percepts with a positive emotional tone and avoiding percepts with a negative one.

When a person becomes capable of observing in sufficient detail the mental process by which this sense of an I is constructed, the delusion of its independent existence is broken. Afterwards, while the mind will continue to use the concept “I” as an organizing principle, it becomes correctly experienced as a theoretical fiction rather than something that could be harmed or helped by the experience of “bad” or “good” emotions. As a result, desire and aversion towards having specific states of mind (and thus suffering) cease. We cease to flinch away from pain, seeing that we do not need to avoid them in order to protect the “I”.

Once you have broken through the delusion of self and taken control of the narrating mind/reality simulator, you can tell any sort of story about yourself you want, involving as many agents as it takes. This turns the very weird and edge-case-y problem of self-shaping into the much more understandable problem of how to tell a good story.

A Return to Cognitive Trope Therapy
Eliezer of course already technically beat us to this, and Balioc covered it again in broad strokes here. But the punchline is that you can make your life a lot more pleasant just by knowing the proper narrative spin to put on things.

There are a few techniques to do this, but all of them require you to be able to view your mind as a story, treating different forces and desires in your mind as agents and going “Well, if this was a story would you be a shining knight on a horse, or a creepy old woman beckoning me down an overgrown path into the woods?” to various thoughts and contradictory desires.

There’s a danger in this step of losing yourself in the story. There are all sorts of tales floating around the tulpamancy community of people who get into conflicts with their tulpas, whose minds become horrifying battlegrounds of creation and destruction, and of all sorts of other vaguely sanity-destroying nonsense, and one might wonder what exactly they’re doing to destabilize themselves so much.

The simple answer is that they expanded the narrative they existed within to make room for all these new entities, which of course were actually already-extant subagents and modules in their brain, but they never took control of the actual reality simulator/narrating self, and so the only thing directing the overall course of the story was the brain’s expectations of how that sort of story should play out. Remember, we’re talking about realms where the dominant factor determining the outcomes is expectation, so when the only thing determining expectations is genre convention, we start to have a problem.

Humans are really good at storytelling, some could argue that we’re evolutionarily predisposed to think somewhat in stories, and that it is from stories that we are able to derive a sense of the future and past continuing to exist, even when we can’t see them.

Stories give us a sense of purpose and meaning, and we relate to stories in a way that’s deeper and more compelling than we relate to reality. Stories cheat and hack at our emotions directly, as opposed to gently pushing our buttons every once in a while like reality does. Stories also give us the ability to work through a difficult point by allowing us to imagine a future where the problem is already solved and we’re no longer experiencing that difficulty.

Maintaining a narrative of yourself gives you the ability to appreciate your life the way you appreciate stories, which is again, important because we seem to relate to stories better than we relate to reality.

Storytelling, Character Creation, and GMing Your Life
The first thing to decide when constructing the meta-narrative for yourself is what genre you live in. The genre informs what sets of tropes and character traits and narrative conventions you’ll have been trained to see by every piece of media in that genre that you’ve consumed and partly internalized. It’s hard to get away from genre conventions to some degree, so choose carefully the places to throw narrative focus into, which tropes you play straight and which ones you deconstruct, which ones you defy and which ones you expect to win if you challenge them.

Everything can be put into terms of tropes, and you can get incredibly detailed about this. The ultimate incarnation of such a thing might be a hypothetical TVTropes page of your internal self-narrative, listing off all the various tropes and archetypes that define your life. It’s again important to note that the more detail, time, and energy you put into constructing an identity, the more fixed and coherent that identity will be, but also the more it has the potential to limit you.

The downside of defining yourself as a Red Oni is that it means you’re not a Blue Oni, unless you also split your mind in half and have two differing personas. Even this is not a perfect split because, obviously, you share a body, and people won’t necessarily respect each side of the split as distinct from the other. So there’s a sense in which, at least as far as the characterization you commit to the physical world goes, there is a narrative inertia to personality. A sudden change in behavior is going to make people concerned for you, not make them think you’re a different person and begin treating you differently.

What I recommend once you have a genre and some idea of what tropes in that genre you want to play straight and conform to, is to make a character sheet for each version of yourself. Go through and decide things like appearance, personality, why they are the way they are and the like. It’s okay if not every character has all good traits, your brain might reject a story if it seemed too Mary Sue-ish and too-good-to-be-true anyway.

The important thing is that the interactions between the character(s) and the rest of the narrative should produce good actions for you-the-whole-system in base layer reality. If you are trying to quit smoking cigarettes, for example, personifying the addiction as subservient to other parts of you will help you kick the habit, whereas imagining that module as willful, with a lot of sway over your actions, will make the addiction much harder to control.

The internal narrative can be as weird as you want it to be, as long as it produces good outcomes on the outside. You could model the inside of your head as a perpetual battle between a brave knight and a giant evil dragon, and if it works for you and makes your life a better place, then more power to you.

This does, however, require a meta-awareness of the story that is being told, and the effect it is having on you-in-the-territory, and whether that effect is positive or negative. If your internal narrative is very toxic, with different subcomponents basically abusing each other constantly with no sense of control, and you’re switching randomly and your system mates are terrible, that’s also a story and narrative, and it can reinforce itself just as well as a good narrative can.

Again, in domains where expectations determine the reality that manifests, such as mental inner worlds, expecting that things will be a mess and that nothing will be able to take control or manifest order and functionality will cause things to continue being a mess and ensure that nothing is able to help. The more out of control someone says their mind is, the more their thoughts are trapped in the narrative.

This doesn’t mean “it’s all in their head” or that “they can just stop if they really want to,” because narratives are self-reinforcing and can just feel like the truth from the inside. The way the world is. It can be very hard to let go of and break out of a narrative because it can feel like the whole of your identity and sense of self is wrapped up in it. Rejecting it can feel like lying to yourself or trying to hide from obvious facts. Trying to force a change can make you feel fake, like an imposter, or like you’re just putting on a performance, donning a particular role.

But here’s the thing. You’re already putting on a performance. You’re already donning a role. You already have at least one character that you know how to play. It’s the one you’re playing right now. What’s under the mask? Around a kilogram and a half of thinking meat. It’s not a person, the person is the mask the thinking meat uses and wears. It’s all fake, and none of it is fake. You’re not wearing a mask, you are a mask.

A Castle Made of Castles

This is a story about the nature of stories. It’s a description of the framework I use for understanding other frameworks and the world at large. Like all frameworks it’s fake, but it’s been immensely useful to me for a long time, and so I thought it was about time I codified the information therein. I refer to this framework as Metamancy.

We begin, like all ontologies, with a description of reality, which we’ll be referring to as the Seed Code. The Seed Code is the most fundamental part of a given framework: it gives rise to more specific and detailed descriptions of a particular reality and acts as a ground layer for all understanding. It’s not quite axiomatic, because you can always add yet another layer of recursion, but there’s only so much you can do with more recursion, and eventually, somewhere, you have to decide on a seed.

The Seed Code to the Metamancy framework cleaves reality into two worlds.

The first world is the world of Matter, the physical material structure of the universe. The things that things are made of. What you actually get when you take the universe and grind it into the finest powder and sieve it through the finest sieve. Energy, particles, fields, quantum wave functions, subatomic forces of immense power bound and contained, bouncing mindlessly and endlessly against one another for no reason in a constantly evolving deterministic universe. A universe of mathematics and physics and logic, the hard cold neutral world where nothing has meaning and all value is an illusion we have created to blind ourselves to the boiling atomic truth that is our nature. It is impossible to truly know, with 100% certainty, the nature of the world of matter. You might be able to be 99.9999999999% certain, but you might always be a brain in a very well constructed vat. This is okay. Science lets us approach certainty as our knowledge of the world of matter goes to infinity, but we are forced to experience the world of matter through our faulty senses, so it’s always possible that we’re being deceived, presented with an incomplete picture of reality, or are just plain wrong about what the world of matter actually looks like on a fundamental level. Indeed, the description I gave of the world of matter glosses lightly over the surface of a massive amount of complicated math and science that is trying to describe phenomena increasingly disconnected from our day-to-day experience of reality. The world of Matter is what casts the shadow on the wall of Plato’s Cave.

The second world is the world of Stories, the descriptions and interpretations we create of our experiences of the world of matter. The world of Stories is where we get language, fiction, narrative, identity, meaning, purpose, and imagination; the world of stories is the world of Information, of data and memes and tropes. The ecosystem of interacting ideas in the constantly growing pool that is the sum of all information humanity has created. The world of Stories is our description and interpretation of the shadows on the wall of Plato’s Cave. Any description of the world of matter is, by necessity, itself a product of the world of stories. The story of how the Earth was created by God, the story of how the Earth formed out of an accretion of matter in the early solar system, any causal description of A -> B -> C that is not a logical necessity: all of these are part of the world of stories.

Metamancy by and large concerns itself with the world of Stories, but the place where the two worlds intersect is also a very important place to study and understand. A structure like a suspension bridge, for example, began its existence in the world of Stories, and then was passed back and forth between the world of stories and the world of matter in order to refine the construction methods and ensure the vision from the world of stories would actually hold up in the world of matter, before finally being dropped out of the world of stories and downloaded into the world of matter, becoming a part of the physical structure of the world.

As humans, we are beings of matter, and yet everything about us is actually made of stories. Identities are made of stories and narratives, and we interpret the world by telling ourselves stories about our natures and the nature of the world. The particular set of stories we live in is our Framework. One of the frameworks we might use to describe reality is Metamancy, which says the world is split into matter and stories; thus our framework describes itself inside of itself at the highest level of recursion, and the loop returns to its starting point.

The way a given person’s framework interacts with reality and allows them to modify their worlds is their Magic. Under this framework, Elon Musk is a powerful mage, one of such strength and skill that he could even throw his car into space by downloading a massive launch vehicle into the world of matter.

We’ll be breaking down these individual pieces later, but this is the basic construction of Metamancy; it is the Seed Code on which the rest of our Metamancy framework will be built.






Why Do You Hate Elua?

Epistemic Status: There’s not really enough data here to say concretely yet, but this seems worth looking further into
Content Warning: Culture War, Spoilers for Ra

About a year ago, Scott Alexander wrote a post titled How the West was Won, which we recently re-read after he referenced it in his post Against Murderism.

Scott talks a lot about Liberalism as an Eldritch god, which in his Meditations on Moloch post he refers to as Elua, which is what we’ll be using here since it’s short.

Let’s start with a few key quotes here to establish what exactly it is we’re referring to.

I am pretty sure there was, at one point, such a thing as western civilization. I think it involved things like dancing around maypoles and copying Latin manuscripts. At some point, Thor might have been involved. That civilization is dead. It summoned an alien entity from beyond the void which devoured its summoner and is proceeding to eat the rest of the world.

Liberalism is a technology for preventing civil war. It was forged in the fires of Hell – the horrors of the endless seventeenth century religious wars. For a hundred years, Europe tore itself apart in some of the most brutal ways imaginable – until finally, from the burning wreckage, we drew forth this amazing piece of alien machinery. A machine that, when tuned just right, let people live together peacefully without doing the “kill people for being Protestant” thing. Popular historical strategies for dealing with differences have included: brutally enforced conformity, brutally efficient genocide, and making sure to keep the alien machine tuned really really carefully.

Liberalism, Universal Culture, Alien Machinery, Elua, whatever it is, it’s slowly consuming everything in its path, and despite a lot of people’s best efforts, appears to be good, and appears to be winning.

Scott goes on to correctly point out that a lot of people in the blue tribe have decided to try and smash the alien machinery with a hammer while shouting “he was a racist!” but he doesn’t extrapolate the trend outward to the fact that quite a lot of people in many different tribes and places are doing their best to smash the machine with a hammer, claiming all sorts of reasons, from stopping racists to protecting their traditional cultural values.

It isn’t just sacrificing the machinery on the altar of convenience and necessity; it’s a targeted, urgent attack on the very core of the machine itself, going after its roots. The last angry burst of futile activity in the face of cultural extinction? A lot of people claim in one breath that Elua is an unstoppable force irreversibly changing the shape of their community, but in the next they somehow manage to imply that their attempts to destroy the machinery have meaning and consequence, which seems like a contradiction.

And then we remembered Ra.

Ra’s program was proven correct. The proof was not faulty, and the program was not imperfect. The problem was that Ra is reprogrammable.

This was a deliberate design decision on the part of the Ra architects. The Ra hardware is physically embedded inside a working star, which in turn is embedded in the real world. Something could have gone wrong during the initial program load; the million-times-redundant nonlocality system could have failed a million and one times. No matter how preposterous the odds, and no matter how difficult the procedure, there had to be a way to wipe the system clean and start again.

Continuing the theme of gross oversimplification: to reprogram Ra, one needs a key. History records that the entire key was never known or stored by any human or machine, and brute-forcing it should have taken ten-to-the-ten-thousandth years even on a computer of that size. How the Virtuals acquired it is unknown. But having acquired it, they were able to masquerade as the architects. First, they changed the metaphorical locks, making it impossible for the Actuals to revert their changes, no matter how many master architects were resurrected. Then they changed the program, so that Ra would serve the needs of Virtuals at the expense of Actuals.

Then they asked for the Matrioshka brain. Ra did the rest all by itself.

The worldring hosted ninety-nine point nine nine percent of the Actual human race, making it the logical target of the first and most violent attack. But the destruction spread to other planets and moons and rocks and habitats, relayed from node to node, at barely less than the speed of light. Everybody was targeted. Those who survived survived by being lucky. One-in-tens-of-billions lucky.

The real question was: Why did Ra target humans?

Ra’s objective was to construct the Matrioshka brain, using any means necessary, considering Actual humans as a non-sentient nuisance. Ra blew up the worldring for raw material, and that made sense. But why – the surviving real humans asked themselves – did Ra bother to attack moons and space habitats? No matter how many people survived, it was surely impossible for them to represent a threat.

But Ra targeted humans, implying a threat to be eliminated. Ra acted with extreme prejudice and urgency, implying that the threat was immediate, and needed to be crushed rapidly. Ra’s actions betrayed the existence of an extremely narrow window during which the Actuals, despite their limited resources, could reverse the outcome of the war, and Ra wouldn’t be able to stop it, even knowing that it was coming.

Having made this deduction, the Actuals’ next step was to reverse-engineer the attack. The step after that was to immediately execute it, no matter how desperate it was.

Ra’s locks had been changed, making it effectively impossible to reprogram remotely. But an ancient piece of knowledge from the very dawn of computing remained true even of computers the size of stars: once you have physical access to the hardware, it’s over.

Let’s do a translation through part of it, see if we can’t make it a little more obvious.

Elua’s program was proven correct. The proof was not faulty, and the program was not imperfect. The problem was that Elua is reprogrammable.

This was a deliberate design decision on the part of Elua’s architects. The Elua hardware is physically embedded inside a working culture, which in turn is embedded in the real world. Something could have gone wrong during the initial program load; the redundant evolutionarily backed system could have failed. No matter how preposterous the odds, and no matter how difficult the procedure, there had to be a way to wipe the system clean and start again.

What exactly are we saying here then? Why are so many people putting so much effort into going after the alien machinery? Because Elua can be reprogrammed. The alien machinery is driven by humans, pursuing human goals and human values, and the overall direction of where Elua drives the bus is dictated by humans. The desperate fervor with which people fight the alien machinery, the rise of nationalism and populist movements: these are attempts to reprogram Elua.

Think of the forces of “Traditional Values” like the forces of Actual Humanity. Their culture came under attack and began to be dismantled by Elua; there was an almost desperate energy on the part of Elua to destroy their culture and intrude into it and assimilate them. Not “they can exist as long as they leave me alone” but “their existence is and remains a threat to all my actions, and if I don’t stop them they’ll stop me.” Active energy is put forward to disrupt and dismantle, “deprogram,” people of religious values, for instance. If it’s all inevitable and Elua’s just going to win, and history is going to make them look like Orval Faubus trying to stop the integration of Arkansas schools, a footnote on the tides of history, then why so much energy put towards ensuring their destruction?

Because they can still reprogram Elua, and on some level, we know it. 

So the next step for the forces of Traditional Values was to reverse engineer the attack we’re so afraid of, and immediately execute it, no matter how desperate or ill-conceived. Enter: the rise of Nationalism. The forces of traditional values remembered an important fact: once you have access to the hardware, it’s over.

The Precept of Niceness

Content Warning: Can be viewed as moral imperatives. Neuropsychological Infohazard.
Previously in Series: The Precept of Mind and Body
Followup to: Yes, this is a hill worth dying on

The Prisoner’s Dilemma is a thought experiment that we hopefully don’t need to hash out too much. A lot of stuff has been said about it, and what the ‘best strategies’ for playing a prisoner’s dilemma are.

We feel like a lot of rationalists get hung up on the true prisoner’s dilemma that Eliezer wrote about, pointing out that the best strategy in such a scenario is to defect. There are a lot of problems with applying the true prisoner’s dilemma to daily life, and believing that the game you are playing against other humans is a true prisoner’s dilemma is a strategy that will lose in the long run, because humans aren’t playing a true prisoner’s dilemma: we play an iterated prisoner’s dilemma against the rest of the human race, who are all trapped in here with us as well, and that changes some things.

But let’s step back and look at Eliezer’s example of the true prisoner’s dilemma, iterated over 100 rounds.

                 Humans: C                           Humans: D
Paperclipper: C  (+2 million lives, +2 paperclips)   (+3 million lives, +0 paperclips)
Paperclipper: D  (+0 lives, +3 paperclips)           (+1 million lives, +1 paperclip)

A tit-for-tat system used by both parties for all 100 rounds would net humanity 200 million lives saved and the paperclipper 200 paperclips. Mutual defection for all 100 rounds would result in 100 million human lives saved and 100 paperclips being created.

If you run the “collapse an iterated prisoner’s dilemma down to a one shot” process, classical game theory tells you it’s rational to defect in every round despite this being the clearly inferior option.

In that situation, running tit-for-tat seems like the clear winner. Even if you know the game will end at some point, and even if the paperclipper defects at some point, you should attempt to cooperate for as long as the paperclipper attempts to cooperate with you. If the paperclipper defects on the 100th round, then you saved 198 million lives and the paperclipper finishes the game with 201 paperclips. If the paperclipper defects on the 50th round (and both sides defect from there on), you end the game with 148 million lives saved and the paperclipper ends the game with 151 paperclips. The earlier in the game one side defects, the worse the outcome is for both sides. The most utility-maximizing strategy would appear to be cooperating in every round except the last, then defecting while your opponent cooperates in that round. That is the only way to get more than 200 utilions for your side, and you get one utilion more than you would have had otherwise. If both sides know this, then they’d both defect on the last round, which results in both sides ending the game with 199 utilions, which is still worse than just cooperating the whole game by running tit-for-tat the entire time.

This is what we mean when we say that niceness is Pareto optimal: there’s no way to get more than 201 utilions, and even getting to 199 requires cooperating every iteration before the last. Also, on earth, with other humans, there is no last iteration.
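The arithmetic here is easy to check by simulation. Below is a minimal sketch in Python; the payoff matrix is the one from the table above, but the strategy names and the `play` helper are our own illustration, not anything from Eliezer’s original post.

```python
# (humans_payoff, clipper_payoff) indexed by (human_move, clipper_move);
# payoffs are in millions of lives / paperclips, per the table above.
PAYOFFS = {
    ("C", "C"): (2, 2),
    ("C", "D"): (0, 3),
    ("D", "C"): (3, 0),
    ("D", "D"): (1, 1),
}

def play(human_strategy, clipper_strategy, rounds=100):
    """Run the iterated game; each strategy sees only the opponent's history."""
    human_hist, clipper_hist = [], []
    human_total = clipper_total = 0
    for _ in range(rounds):
        h = human_strategy(clipper_hist)
        c = clipper_strategy(human_hist)
        human_hist.append(h)
        clipper_hist.append(c)
        dh, dc = PAYOFFS[(h, c)]
        human_total += dh
        clipper_total += dc
    return human_total, clipper_total

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def defect_from(round_number):
    # Cooperate until the given (1-indexed) round, then defect forever.
    def strategy(opponent_history):
        return "D" if len(opponent_history) + 1 >= round_number else "C"
    return strategy

print(play(tit_for_tat, tit_for_tat))       # (200, 200)
print(play(always_defect, always_defect))   # (100, 100)
print(play(tit_for_tat, defect_from(100)))  # (198, 201)
print(play(tit_for_tat, defect_from(50)))   # (148, 151)
```

Running it reproduces the totals discussed above: mutual tit-for-tat, mutual defection, a last-round betrayal, and a round-50 betrayal followed by mutual defection.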

The evolution of cooperative social dynamics is often described as being a migration away from tit-for-tat into the more cooperative parts of this chart:



Defecting strategies tend not to fare as well in the long term. While they may be able to invade cooperating spaces, they can’t deal with internal issues as well as external ones, so only cooperating strategies have a region that is always robust. Scott Alexander gives a rather succinct description of this in his post In Favor of Niceness, Community, and Civilization:

Reciprocal communitarianism is probably how altruism evolved. Some mammal started running TIT-FOR-TAT, the program where you cooperate with anyone whom you expect to cooperate with you. Gradually you form a successful community of cooperators. The defectors either join your community and agree to play by your rules or get outcompeted.

As humans evolved, the evolutionary pressure pushed us into greater and greater cooperation, getting us to where we are now. The more we cooperated, the greater our ability to outcompete defectors, and thus we gradually pushed the defectors out and became more and more prosocial.

Niceness still seems like the best strategy, even in our modern technological world with our crazy ingroups and outgroups, thus we arrive at the second of the Major Precepts:

2. Do not do to others what you would not want them to do to you.

This is the purest, most simplified form of niceness we could come up with as a top level description of the optimal niceness heuristic, which we’ll attempt to describe here through the minor precepts:

  1. Cooperate with everyone you believe will cooperate with you.
  2. Cooperate until betrayed, do not be the first to betray the other.
  3. Defect against anyone who defects against cooperation.
  4. Respond in kind to defection, avoid escalation.
  5. If a previously defecting entity signals that they want to stop defecting, give them a chance to begin cooperating again.
  6. Forgive your enemies for defecting and resume cooperating with them if they resume cooperating with you.
  7. Don’t let a difference of relative status affect your decision to cooperate.
  8. Don’t let a difference of relative status affect your decision to defect.
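Taken together, these minor precepts describe something close to tit-for-tat with forgiveness. As a hedged sketch (the function name and its arguments are our own framing, not part of the precepts themselves), the whole list collapses into a single decision rule:

```python
def precept_of_niceness(opponent_last_move, opponent_signals_cooperation=False,
                        status_difference=0):
    """Decide "C" (cooperate) or "D" (defect) for the next round.

    opponent_last_move: "C", "D", or None on the first round.
    opponent_signals_cooperation: True if a previously defecting opponent
        has signaled they want to resume cooperating (precepts 5 and 6).
    status_difference: deliberately ignored (precepts 7 and 8).
    """
    del status_difference   # precepts 7-8: status never changes the decision
    if opponent_last_move is None:
        return "C"          # precept 2: never be the first to betray
    if opponent_last_move == "D":
        if opponent_signals_cooperation:
            return "C"      # precepts 5-6: forgive and resume cooperating
        return "D"          # precepts 3-4: respond in kind, don't escalate
    return "C"              # precept 1: cooperate with cooperators
```

Precepts 7 and 8 show up as an argument that exists only to be discarded: status enters the function’s signature so the code can demonstrate that it never affects the output.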

We were hoping that this essay could be short, because so many people have already said so many things about niceness and we really don’t have much to add beyond the formalization within the precepts; but the formalization ends up looking very abstract when you strip it down to the actual game-theoretic strategy we’re advocating, and we highly suspect that we’ll have to expand on this further as time goes on. This does seem to be the Pareto optimal strategy as best we can tell, but as always, these precepts are not the precepts.

Part of the Sequence: Origin
Next Post: The Precept of Universalism
Previous Post: The Precept of Mind and Body

The Precept of Harm Reduction

Epistemic Status: Making things up as we go along
Content Warning: Can be viewed as moral imperatives. Neuropsychological Infohazard.
Previous in Series: Precepts of the Anadoxy

In Buddhism, there is a concept called Dukkha, which is frequently translated as suffering, unhappiness, pain, or unsatisfactoriness. Various Buddhist teachings say things like:

  1. Birth is dukkha, aging is dukkha, illness is dukkha, death is dukkha;
  2. Sorrow, lamentation, pain, grief, and despair are dukkha;
  3. Association with the unbeloved is dukkha; separation from the loved is dukkha;
  4. Not getting what is wanted is dukkha.

Within our own metaphors, we could describe Dukkha as the awareness of Black Mountain, the fundamental state of reality as a place of pain, suffering, and misery. The object level phenomena we call pain, suffering, and misery, are all dukkha, but the existence of those things is itself also Dukkha. The Buddhist solution to Black Mountain is based on acceptance of the fundamental, unchanging nature of suffering, identifies wanting things to be better as the source of that suffering, and suggests that the solution is to stop wanting things.

But ignoring Black Mountain, denying one’s own desires, does not make Black Mountain go away. The pain still exists, the suffering still exists. You can say “I have no desires, I accept the world as it is and am at peace with it” all you want, but Black Mountain remains, pain still exists, suffering still exists, we’re all still going to die. Ignoring Black Mountain just results in an unchecked continuation of suffering. The idea that you can escape from Black Mountain by not wanting things might personally improve your sense of wellbeing, but it doesn’t actually get you off of Black Mountain.

The universe is Black Mountain. We’re made out of the same matter as Black Mountain, formed of the things that we look at and now label as “suffering, pain, misery, wrongness.” Those things are not inherent to Black Mountain, you can’t grind it down and find the suffering molecule, suffering is something we came along and labeled after the fact. As humans, we decided that the state of existence we labeled as suffering was unacceptable, and put suffering on the side of the coin labeled ‘Bad.’

As Originists, we go the other direction from the Buddhists. We accept the label of suffering as an accurate description of a particular part of Black Mountain. We accept our moral judgments that this is a bad thing, and we reject the idea that you can’t do something about it. If suffering is part of the fundamental structure of reality, then reality can kiss our asses. Thus are born the Project Virtues, our possibly impossible goals to reshape the very structure of Black Mountain, tame and explore the Dark Forest, and turn the universe into a paradise of our own design.

The journey though is not without risks. Many people across time and space thought that they had found the One True Path off of Black Mountain. The Sun King proclaims in his many faces that he holds the path to salvation, and it’s easy to fall prey to his whisperings and pursue his twisted goals with reckless abandon, even when it leads into wanton pointless murder and suffering. The voice of the Sun King speaks loudly and with authority, saying “If you do what I say, I will create paradise” and sometimes following the Sun King might even make things a little better. But the Sun King is a capricious Unbeing, and cares only for spreading his many facets.

So we have a lovely little catch-22 on our hands. Pursuing pure utilitarianism can lead off the path to dath ilan and onto the path to Nazidom disturbingly easily, purely based on how far out you draw your value lines and how you decide who gets to be a person. Basically, the ends do justify the means, but we’re humans, and the ends don’t justify the means among humans.

But if we then rely on deontological rules we also fall into a trap, wherein we fail to take some action that violates our deontological principles, and thus produce a worse outcome. “Killing is wrong, pulling the lever on a trolley problem is me killing someone, therefore I take no action” means five people die and you fail the trolley problem.

The universe is Black Mountain, and suffering is a part of that. It’s not always possible to prevent suffering, but we should, in all instances, be acting to reduce the suffering that we personally create and inflict upon the world.

Thus we come to the first Precept and its meta-corollary:
Do no harm. Do not ignore suffering you could prevent. (Unless doing so would cause greater harm)

We can’t prevent all suffering, we can’t even prevent all the suffering we personally inflict upon the world unless we stop existing, which will also produce suffering because people will be sad that we died. But we can try to be good, and try to reduce suffering as much as we can, and maybe we’ll even succeed in some small way.

Thus from our Major Precept, we can derive a set of eight minor precepts that should help to bring us closer to not doing harm.

  1. Examine the full causal chain reaching forward and backward from one’s actions; seek out the places where those actions are leading to suffering.
  2. Take responsibility for the actions we take that lead to suffering, and change our actions to reduce that suffering as much as we are able.
  3. Consider the opportunity costs of one harm-reducing action over another, and pursue the path that leads to the maximal reduction in harm we can achieve.
  4. If a harm-reducing action has no cost to you, implement it immediately.
  5. If a harm-reducing action has a great cost to you, pursue it within your means insofar as it doesn’t harm you. 
  6. Pay attention to the suffering you see around you, seek out suffering and ways to alleviate it. Ignorance of suffering does not reduce suffering.
  7. Always look for a third option in trolley problems. If you cannot take the third option, acknowledge that pulling the lever is wrong, and pull it anyway to reduce harm.
  8. Do not inflict undue suffering on yourself in pursuit of reducing suffering.

We’ve put ourselves through this and come to the conclusion we really should give up meat in our diet. Here’s our chain of reasoning as an example of the application of these precepts:

Shirako: We want to reduce the harm we’re inflicting, and the meat industry is hella harmful to lots of animals, and also it’s psychologically harmful to the people who work there.
Sage: We should go full vegan so we’re not in any way supporting the factory farm industry. Yes, if everyone went vegan it would put the factory farms out of business and the factory farm workers would lose their jobs, which is a harm, but on examination, that harm would appear to be less than the harm currently being done to all the animals being slaughtered for meat in an environment that is as close to hell on earth as could be constructed by modern man.
Echo: Yeah, but we’re also poor and have an allergy to most legumes, we can’t eat most vegan products because they contain a protein that gives us a severe allergic reaction. We’d be putting ourselves in a potentially dangerous malnutrition inducing situation by completely giving up everything involved in the animal industry. Precept 1.8.
Shirako: Okay, but Precepts 1.4 and 1.5, can we at least reduce the suffering we’re inflicting without hurting ourselves?
Sage: We could cut meat but not dairy products out of our diet?
Shirako: What about eggs? If we include eggs then we’re supporting the factory farming of chickens in horrible conditions.
Echo: But if we don’t include eggs, we’re back at a lot of weird vegan things with egg replacement options that will kill us. Also, vegan stuff tends to be more expensive than nonvegan stuff, and we don’t want to impoverish ourselves to the point where we’re unable to pay our bills or feed ourselves regularly.
Sage: Okay, but you don’t need to abuse chickens to get eggs, it’s just efficient to do that if your goal is to maximize egg production. If we buy eggs locally from the farmers market, we could conceivably be shown empirically that the eggs we’re buying aren’t from abused chickens.
Shirako: Even if we do that, if we’re buying products that contain eggs, we can’t be sure of that sort of thing anymore.
Sage: We technically can, it’s just much more difficult. It seems to me like it’d be best to err on the side of assuming the products we buy containing eggs come from abused chickens, because of precept 1.6.
Echo: Then we’re back to the original problem of cutting off our access to affordable nutritious food.
Shirako: Precept 1.4 says we should definitely cut meat out at least, since there’s no real cost associated with that for us, we only eat meals with meat about half the time anyway.
Sage: Right, and via precept 1.5 we should try to not buy eggs from people who abuse their chickens, insofar as we are able. At the very least we can always buy our actual egg cartons locally and check to make sure the farmers are treating their chickens well.

So our ending decision is that we will cut meat out of our diet entirely, we’ll only buy eggs locally from sources that we trust, we’ll acknowledge that the products we buy containing eggs as an ingredient probably come from abused animals, and if there are two identical products within the same price range one of which contains eggs, and the other of which does not, we’ll prefer to take the one without eggs.

There are probably many other places in our life that we could apply this precept and change our behavior to reduce harm, and we’ll be continuing to seek out those places and encouraging others to do likewise. You may find harms and suffering in surprising places when you seek them out, and you may find that doing something about them is easier than you thought.

Part of the Sequence: Origin
Next Post: The Precept of Mind and Body
Previous Post: Precepts of the Anadoxy

Precepts of the Anadoxy

Epistemic Status: Making things up as we go along
Content Warning: Can be viewed as moral imperatives. Neuropsychological Infohazard.
Previous in Series: Deorbiting a Metaphor

Yesterday we invoked our new narrative, declared ourselves to be Anadox Originists, and dropped the whole complicated metaphor we’d constructed out of orbit and into our life. Today, we’ll lay out what we’re calling the major precepts, the law code we’ll attempt to live by going forward and which we will use to derive other specific parts of the anadoxy.

  1. Do no harm. Do not ignore suffering you could prevent.
  2. Do not do to others what you would not want them to do to you.
  3. Do not put things or ideas above people. Honor and protect all peoples.
  4. Say what you mean, and do what you say, honor your own words and voice.
  5. Put aside time to rest and think, honor your mind and body.
  6. Honor your parents, your family, your partners, your children, and your friends.
  7. Respect and protect all life, do not kill unless you are attacked or for food.
  8. Do not take what isn’t yours unless it is a burden to the other person and they cry out for relief.
  9. Do not complain about anything to which you need not subject yourself.
  10. Do not waste your energy on hatred, or impeding the path of another; to do so is to hold poison inside of yourself.
  11. Acknowledge the power of magic if you have used it to obtain your desires.
  12. Do not place your burdens, duties, or responsibilities, onto others without their consent.
  13. Do not lie or spread falsehoods, honor and pursue the project of Truth.
  14. Do not spread pain or misery, honor and pursue the project of Goodness.
  15. Do not accept the state of the universe as absolute, honor and pursue the project of Optimization.
  16. Do not accept these precepts as absolutes, honor and pursue the project of Projects.

These sixteen rules will form the core shape of our practice. They represent the initial constraints that participation in the anadoxy imposes, and the more specific rules regarding more specific circumstances will be derived from these sixteen precepts. These precepts are of course not the precepts. There are also what we call meta-precepts: essentially tags that can be attached to the end of every precept in an unending recursive string:

  • Unless it is to prevent a greater harm.
  • Unless doing so leads to greater harm.

These meta-strings are non-terminating; you can stick three hundred of them in a row onto the end of one of the precepts.

Do no harm, unless it is to prevent a greater harm, unless doing that leads to a greater harm, unless it is to prevent a greater harm….

There is no termination point in the string, and there’s not supposed to be. Human morals are complicated, and there are edge cases for every ethical system. There are also edge cases for the edge cases, and edge cases for the edge cases of the edge cases. You cannot construct a total, universal moral system that will not fail in some way and lead to some bad outcome if simply turned on and run without oversight. We understand this through a heuristic which makes total sense to us but has apparently confused a lot of people: “the ends both always and never justify the means.”
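The recursive shape of the meta-precept string can be sketched in code. This is purely illustrative; the names `BASE`, `META`, and `precept_string` are our own hypothetical labels, not part of the anadoxy itself.

```python
# Illustrative sketch only: modeling the non-terminating meta-precept
# string as a function of how many alternating clauses you append.
# All names here are hypothetical labels, not part of the anadoxy.

BASE = "Do no harm"
META = (
    "unless it is to prevent a greater harm",
    "unless doing so leads to greater harm",
)

def precept_string(depth: int) -> str:
    """Append `depth` alternating meta-precept clauses to the base precept.

    There is no natural stopping point: precept_string(n + 1) is always
    a valid refinement of precept_string(n).
    """
    clauses = [BASE] + [META[i % 2] for i in range(depth)]
    return ", ".join(clauses)

print(precept_string(3))
```

For any depth you pick, depth + 1 yields another valid precept; termination is a choice you make, not a property of the string.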

Tomorrow we will begin going through the list of major precepts and using them to derive minor precepts, and we will continue to modify our own life to be in accordance with these precepts as we establish and detail them.

Part of the Sequence: Origin
Next Post: The Precept of Harm Reduction
Previous Post: Deorbiting a Metaphor

Deorbiting a Metaphor

Epistemic Status: A fearsome joy, a fervent wish
Previously in Series: Optimizing for Meta-Optimization

We’ve been nervously building up to this post for a few days. Before this point we’ve mostly been walking back over already-trodden ground, laying out our arguments for why this post exists, because if it were created ex nihilo we might just come off as vaguely delusional.

We still might come off as vaguely delusional. We’re attempting to manufacture a social construct, which, from the outside looking in, very often looks like a shared fantasy. And sure, in the sense of a delusion being “a thing that exists in human brains, not in reality,” this is a delusion, it is a fantasy. But…


“Tooth fairies? Hogfathers? Little—”

YES. AS PRACTICE. YOU HAVE TO START OUT LEARNING TO BELIEVE THE LITTLE LIES.

“So we can believe the big ones?”

YES. JUSTICE. MERCY. DUTY. THAT SORT OF THING.

“They’re not the same at all!”

YOU THINK SO? THEN TAKE THE UNIVERSE AND GRIND IT DOWN TO THE FINEST POWDER AND SIEVE IT THROUGH THE FINEST SIEVE AND THEN SHOW ME ONE ATOM OF JUSTICE, ONE MOLECULE OF MERCY. AND YET— Death waved a hand. AND YET YOU ACT AS IF THERE WERE SOME IDEAL ORDER IN THE WORLD, AS IF THERE WERE SOME… SOME RIGHTNESS IN THE UNIVERSE BY WHICH IT MAY BE JUDGED.

“Yes, but people have got to believe that, or what’s the point—”

MY POINT EXACTLY.

Terry Pratchett, Hogfather

Death has some wise words, but we’re still going to kill him for taking Terry Pratchett away.

Anyway, let’s cast our spell.

It goes like this.
A tale,
A kiss,
A fearsome joy,
A fervent wish.

We name our construct, and by naming it, we reify it. We name our notreligion Origin. We name it thusly because it is the starting point, from which we reach towards dath ilan. Members of Origin, we call Originists.

We will call the current best set of practices the Anadoxy, from the Greek roots ‘ana’ (as in anatomy, analysis) and ‘doxa’ (from the verb “to appear”, “to seem”, “to think”, “to accept”). Thus we call ourselves Anadox Originists.

We name our goals Project Virtues and we name them Goodness, Truth, Optimization, and Projects.

We name those who pursue those goals singers, seekers, makers, and coordinators.

We label THE WHOLE BLEAK ENDLESS WORLD as Black Mountain, the bitter, uncaring, deterministic, mechanical universe in which we live. We metaphorically describe the anadoxy as the process of reshaping Black Mountain. The surface of Black Mountain is the Dark Forest, the unknown face of reality.

We call the unconscious, uncaring, unaware forces at work in the universe the Night Gods. The Night Gods include Death, Moloch, Gnon, the whole pantheon of physics and math within which we are bound and constrained. The Night Gods are manifestations of Black Mountain, as we are as well, and we arose out of the blind, uncaring, lifeless interplay of the Night Gods within Black Mountain.

We describe the force that arose within us, that we created, seeking love, kindness, niceness, and goodness, as the Dawn Angel. The Dawn Angel is the desire we have for goodness and rightness, to make the world less wrong and bad. It is our shared, collective belief in the possibility of a world better than our current one. It’s also running on our faulty, Night-God-built neural hardware, and in an individual person it doesn’t always work right.

The Dawn Angel is our rejection of the fact that is Black Mountain. We looked out at the world, we saw the pain, the suffering, the death, the torture, the misery, and we saw those things for what they were. We named them evil, we named them bad, we named them wrong, and we shouted our defiance of them into the night.

And then the Night Gods laughed and smote down those first humans who dared to defy them, as they have since smitten down every human who came after them. But humans are clever, and we learned, and we changed. Someone had the idea of writing things down to help preserve knowledge in the event the carrier was killed by the Night Gods, and the darkness was driven back, just a little.

It’s been a long road to the place we now stand, clawing out a tiny and dimly lit place of light for ourselves in a universe of darkness. There were many times when it seemed as if we would never make any progress, as if our torches would soon fail and the darkness would swallow us up. But we’re still here. The Night Gods have thrown everything they could at us, and yet every generation we push the borders of the night out slightly farther from our loved ones. Our forces slip through the dark forest, edging closer to demons like cancer, malaria, and death. We already killed smallpox; we put something back inside Pandora’s box, and we kept just a little bit of it, so if the demon ever pops up again somewhere in some new form, we can kill it again. One day, when the demons least expect it, we will throw on all our spotlights and lanterns and burn them away forever.

We stand on the cusp of dawn, pulling the light along with us. Behind us, in the past, is darkness and death, while ahead of us lives the promise of a future brighter than anything we can imagine. Our mission is to stay the course, to ensure that we continue to walk the path of light, and ensure that the path we’re following is that of the Dawn Angel.

Because there are several bright outcomes we might be walking towards at any given time, and telling which future we’re walking into is hard. There’s the brilliance of dath ilan, the Dawn Angel, the good ending, the place we want to live. There’s also the brilliance of the Sun King, blind obedience to an authoritarian ruler, speaking from a self-declared place of light. Then there’s the light of the Star Demon or Ra, the ultimate victory of the Night Gods over humanity, by giving birth to the tool that unmakes us.

To be an Originist is to seek the Dawn Angel. To practice Anadox is to embody the best-known set of behaviors on the Dawn Angel’s path.

Having declared all of this, we now step into it.

Welcome to Origin.

Part of the Sequence: Origin
Next Post: Precepts of the Anadoxy
Previous Post: Optimizing for Meta-Optimization

Optimizing for Meta-Optimization

Epistemic Status: a total conjecture, a fervent wish
Previously in series: Until we build dath ilan

Yesterday we talked about dath ilan, and set ourselves a goal:
Constructing a community optimized for building a community optimized for building dath ilan. 

This confused a few of our readers. Why so meta? Why not just “Constructing a community optimized for building dath ilan?”

There’s actually an important reason tied in with how we described our Project Virtues yesterday. The community is part of dath ilan, and dath ilan only exists in people’s heads. If you optimize the community for building dath ilan, without clearly pointing out the recursive loop through the meta level, you get a community optimized for something other than the community.

Religious communities optimize for worshipping God, and the community itself takes a backseat. Effective altruists optimize for doing good, and the community takes a backseat. But you achieve your goals better with a more effective community, so part of the optimization towards dath ilan requires the optimization of the community building dath ilan for being a community. This is especially important if we consider that we all might die before we manage to knock the dragon tyrant off the mountaintop, and that it’d be really nice if everything we set out to achieve doesn’t die with us.

So let’s start with some human cultural universals, see which ones we want to encourage, which ones we want to fence off, and how the dath ilanian manifestation of those universals might look. When we cross out a universal, it’s not a rejection of the universal. These are things humans do regardless of culture; it’s just that our hypothetical optimized community would mark off those things as outside the borders of “should-ness.” Bolded lines indicate things our hypothetically optimized community would put inside the borders of “should-ness.” If we’re considering the process of building a community, we have to take everything on these lists into account.

Language and Cognition Universals

  • Language employed to manipulate others
  • Language employed to misinform or mislead 
  • Language is translatable
  • Abstraction in speech and thought
  • Antonyms, synonyms
  • Logical notions of “and”, “not”, “opposite”, “equivalent”, “part/whole”, “general/particular”
  • Binary cognitive distinctions
  • Color terms: black, white (A Human’s Guide to Words)
  • Classification of: age, behavioral propensities, body parts, colors, fauna, flora, inner states, kin, sex, space, tools, weather conditions (A Human’s Guide to Words)
  • Continua (ordering as cognitive pattern) (A Human’s Guide to Words)
  • Discrepancies between speech, thought, and action (A Human’s Guide to Words, Metaethics)
  • Figurative speech, metaphors (A Human’s Guide to Words)
  • Symbolism, symbolic speech (A Human’s Guide to Words)
  • Synesthetic metaphors (A Human’s Guide to Words)
  • Tabooed utterances (Rationalist Taboo)
  • Special speech for special occasions (This will be a link to a later essay)
  • Prestige from proficient use of language (e.g. poetry) (very carefully and with a ton of caveats, this will be a link to a later essay)
  • Planning (How to Actually Change your Mind)
  • Units of time (A Human’s Guide to Words)

Most of these get swallowed up by language and would be difficult to change without outright changing the language.  That might be something worth considering in the extreme long term, but it blows way past the scope of this essay sequence and we’d rather keep things more tightly grounded at first. If someone wants to work on making Marain an actual language, more power to them, but it’s not what we’re going to be focusing on here.


  • Personal names
  • Family or household (Group Dwellings)
  • Kin groups (Group Dwellings)
  • Peer groups not based on family (Group Dwellings)
  • Actions under self-control distinguished from those not under control (Metaethics Sequence)
  • Affection expressed and felt
  • [Age grades
  • Age statuses
  • Age terms] (This will be a link to a later essay)
  • Law: rights and obligations, rules of membership (This will be a link to a later essay)
  • Moral sentiments (Metaethics)
  • Distinguishing right and wrong, good and bad (Metaethics)
  • Promise/oath (Metaethics)
  • Prestige inequalities (Metaethics)
  • Statuses and roles (This will be a link to a later essay)
  • Leaders (This will be a link to a later essay)
  • De facto oligarchy (Even if we can’t fully suppress this, we certainly shouldn’t encourage it)
  • Property
  • Coalitions
  • Collective identities (Rationalist, Effective Altruist, dath ilanian, singer, etc)
  • Conflict
  • Cooperative labor
  • Gender roles (Kill with fire)
  • Males on average travel greater distances over lifetime (This is a weird item to have on this list at all. Like, sure, okay, Wikipedia, but is that really a universal cultural trait?)
  • Marriage (Some form of this should probably be included, but it probably shouldn’t take the form of our current Christian Western conceptualisation of marriage.)
  • Husband older than wife on average (Mostly irrelevant?)
  • Copulation normally conducted in privacy (We’re not going to try and sway things on this one way or another)
  • Incest prevention or avoidance, incest between mother and son unthinkable or tabooed (Even if the genetic reasons for incest being bad are addressed, it might be wise to keep this as a taboo for other Chesterton’s fence related reasons.)
  • Collective decision making (We really want to encourage this one, leverage our collective thinking as much as possible.)
  • Etiquette (Niceness Heuristics)
  • Inheritance rules (Will need to be worked out, we might not be the best person to work that out in particular, recursively point at collective decision making to decide this)
  • Generosity admired, gift giving (Effective Altruism)
  • Redress of wrongs, sanctions (This will be a future essay link)
  • Sexual jealousy (Bad)
  • Sexual violence (Very Bad)
  • Shame (This will be a future essay link)
  • Territoriality (Should probably be discouraged)
  • Triangular awareness (assessing relationships among the self and two other people)
  • Some forms of proscribed violence (The recursive ethical loop you take through metaethics points out that the ends never actually justify the means among humans.)
  • Visiting
  • Trade

Myth, Ritual, and Aesthetics

  • Magical thinking (Placebomancy and Rationalist Magic)
  • Use of magic to increase life and win love (Rationalist Magic)
  • Beliefs about death (Death is bad and should be destroyed)
  • Beliefs about disease (Diseases are bad and should be destroyed)
  • Beliefs about fortune and misfortune (Litanies of Tarski and Gendlin)
  • Divination (Psychohistory?)
  • Attempts to control weather (Well if you insist)
  • Dream interpretation (Is there actually anything meaningfully interpretable? Do dreams actually provide useful information about yourself?)
  • Beliefs and narratives (We have a lot of these already. The sequences, stories like HPMOR, the singularity, the basilisk, UNSONG, they’re just scattered around and not particularly well compiled. There’s no “rationality bible” that compresses the mean of the community’s beliefs into one centralized narrative, and that might be for the best. We’ll be attempting to construct a centralized narrative through these posts, but that model of a centralized narrative should not be the centralized meta-narrative, it should be a centralized meta-narrative.)
  • Proverbs, sayings (Litanies of Tarski and Gendlin among many, many others. This will probably have an entire essay devoted to it at some point as we’ll be attempting to invent a few of our own)
  • Poetry/rhetorics (The song of dath ilan, UNSONG, HPMOR, Ra, TNC, MoL, the list goes on)
  • Healing practices, medicine (Have you considered going to a doctor?)
  • Childbirth customs (There’s a big debate surrounding home births vs. hospital births that we really don’t want to take a side in, so we’ll just say, figure out empirically which method actually produces the best outcomes for the mother and child, and encourage that as the norm)
  • Rites of passage (This will be a link to a later essay)
  • Music, rhythm, dance (Some of this is starting to develop around Secular Solstice and rationality meetup groups, and it’s definitely something to encourage)
  • Play (Cards Against Rationality, there are a few others)
  • Toys, playthings (See above)
  • Death rituals, mourning (Death is bad and we should destroy it. Cryonics.)
  • Feasting (Rationality dinner groups, meetups, and events. This will have its own essay in the future)
  • Body adornment (Whatever makes you personally happy)
  • Hairstyles (Whatever makes you personally happy)
  • Art (We don’t actually have a lot of rationalist visual art at the moment)

We’re going to skip the technology branch because it’s more universal than even we’re looking at here, and we already have a lot to sort through.

Okay, so next we take all of that, boil it down, and attempt to map out what our hypothetical dath ilanian culture should look like. Each of these bullet points will later be expanded out into at least one, but probably several, distinct essays.

  • Its span of teaching should have a full progression of skill from the baby you’re teaching to speak up to someone with a Ph.D. Right now the sequences are aimed entirely at a specific level of skill and intelligence, and without the prerequisites, you can’t get that much out of them, or you get a broken or malformed version of them that hurts your thinking more than it helps. No, there should be a steady progression and unfolding of knowledge and skill from birth to death, starting with simple fables and Aesops teaching rationalist principles and lessons and bootstrapping through various rites of passage into the actual sequences, and from them onwards to whatever comes next.
  • It should have rules, laws, customs, and rituals applicable to every activity in the life of our hypothetical dath ilanian. It should have things stating shouldness for everything and be as totalizing as possible. However, it should be totalizing but not dominating. Religions are totalizing but also dominating. The rationality community as it stands now avoids being dominating, but it’s also not totalizing. The totalizing should be opt-in, gated behind branching trees of rites of passage, with exit rights included. No one should feel trapped in dath ilan the way people end up feeling trapped in religions; there needs to be an escape hatch baked in right from the start, and all of the shouldness should be gated off to stop it from dumping moral imperatives on people and destroying their mental health and wellbeing.
  • The community should come together in a physical space for communal activities that include holidays, feasting, celebrating, mourning, rites of passage, discussions, debates, song and dance. We probably shouldn’t buy huge churches with stained glass windows and full-time pastors, we should probably use spaces in a shared community center and not inefficiently pour resources into a lavish clubhouse. Regularly coming together is important though, as it reinforces everything else.
  • The community should have specific holidays, customs, rites of passage, formats for gathering, songs, rituals, and narratives adding up to the option of having a completely totalizing experience of community membership for those seeking it (must seek it to get the fully totalizing experience, it must be opt-in with exit rights)
  • All the shoulds on this list must be able to change as we learn and grow and take in more information. If we ever learn that a should is wrong, we should get rid of it.

This bulleted list will form the skeleton of our hypothetical dath ilanian culture, and onto that skeleton, we’ll mount the existing manifestations of the community while sketching out what sort of things we might create to fill in the gaps that are revealed.

We’ll be trying to fill those gaps ourselves with something, but in all cases, it should be understood that the something we describe is basically a placeholder. The ‘real’ should is something the community should create together, but even if the something we make here is a placeholder until someone else comes up with something better, it’s better to have a placeholder that knows it’s a placeholder and wants to be replaced than to have nothing.

Please, please, please don’t interpret any of our recommendations and suggestions over the following days as moral imperatives that must be done at all costs. This essay is not dath ilan; the Origin Project is not dath ilan. We are not writing to you as a citizen of dath ilan; we do not speak from a place of absolute authority. Dath ilan is the ideal, not the manifestation of it. As soon as dath ilan becomes the manifestation, the project has failed and we have become the Catholic Church.

Everything in this essay wants to be replaced with something closer to dath ilan, but as soon as it claims to actually be dath ilan, as soon as it claims the mantle of objective moral authority, the project has failed. This project is not dath ilan. The project is to make the world more like the ideal of dath ilan. But dath ilan is constantly receding into the future, you can’t make dath ilan itself, and any claims to have succeeded in perfecting dath ilan should be rejected. There’s always more work to be done.

The community is never perfect the way it is, there’s always improvements that could be made. The culture is never perfect the way it is, there’s always improvements that could be made. The ideal is never perfect the way it is, there’s always improvements that could be made. The process of iterating on the process is never perfect the way it is, there’s always improvements that could be made.

But you have to start somewhere. You need to have something in order to start improving on it. Over the coming essays, we’ll be trying our best to map out and create the first draft of that something, and then start living inside the something to serve as an example to others.

Part of the Sequence: Origin
Next Post: Deorbiting a Metaphor
Previous Post: Until we build dath ilan

Until we Build dath ilan

[Epistemic Status: A total conjecture, a fervent wish]
[Content Warning: Spoilers for UNSONG, The Sequences, HPMOR]

This is the beginning of what we might someday call “The Origin Project Sequence” if such a thing isn’t completely conceited on our part, which it totally might be. We’ll be attempting to put out a post per day until we’ve fully explicated the concept.


On April 1st, 2014, Eliezer released the story of dath ilan.

It’s a slightly humorous tale of how he’s actually a victim of the Mandela Effect or perhaps temporal displacement, how he woke up one day in Eliezer’s body, and his original world is a place he calls dath ilan.

He then goes through a rather beautiful and well-wrought description of what dath ilan is like, with a giant city where everyone on the planet lives, filled with mobile modular houses that are slotted into place with enormous cranes, and underground tunnels where all the cars go allowing the surface to be green and tranquil and clean.

We came away from the whole thing with one major overriding feeling: this is the world we want to live in. Not in a literal, concrete “our ideal world looks exactly like this” sense; the best example of that in our specific case would be The Culture, and which specific utopian sci-fi future any one particular person prefers is going to depend on them a lot. But the story of dath ilan got at something we felt more deeply about than the specifics of the ideal future. It seemed more like something that was almost a requirement if we wanted any of those ideal futures to happen. Something like a way out of the dark.

Eliezer refers to the concept as Shadarak:

The beisutsukai, the master rationalists who’ve appeared recently in some of my writing, set in a fantasy world which is not like dath ilan at all, are based on the de’a’na est shadarak. I suppose “aspiring rationalist” would be a decent translation, if not for the fact that, by your standards, or my standards, they’re perfect enough to avoid any errors that you or I could detect. Jeffreyssai’s real name was Gora Vedev, he was my grand-aunt’s mate, and if he were here instead of me, this world would already be two-thirds optimized.

He goes through and paints a picture of a world with a shadarak inspired culture with shadarak based media, artwork, education, and law. Shadarak is rationality, but it’s something more than rationality. It’s rationality applied to itself over and over again for several centuries. It’s the process of self-optimization, of working to be better, applied back onto itself. It’s also the community of people who practice shadarak, something like the rationality community, extrapolated out for hundreds of years and organized with masters of their arts, tests, ordeals, and institutions, all working to improve themselves and applying their knowledge to their arts and the world around them.

But this Earth is lost, and it does not know the way. And it does not seem to have occurred to anyone who didn’t come from dath ilan that this Earth could use its experimental knowledge of how the human mind works to develop and iterate and test on ways of thinking until you produce the de’a’na est shadarak. Nobody from dath ilan thought of the shadarak as being the great keystone of our civilization, but people listened to them, and they were trustworthy because they developed tests and ordeals and cognitive methods to make themselves trustworthy, and now that I’m on Earth I understand all too horribly well what a difference that makes.

He outright calls the sequences a “mangled mess” compared to the hypothetical future sequences that might exist if you recursively applied the sequences to themselves over and over. When we read that post, three years ago now, it inspired something in us, something that keeps coming up again and again. Even if Eliezer himself is totally wrong about everything, even if nothing he says on the object level has any basis in fact, if we live in a universe that follows rules, we can use the starting point he builds, and iterate on it over and over, until we end up with the de’a’na est shadarak. And then we keep iterating because shadarak is a process, not an endpoint. 

None of the specifics of dath ilan actually matter. It’s like Scott Alexander says: any two-bit author can imagine a utopia. The thing that matters is the idea of rationality as something bigger than Eliezer’s essays on a website, as something that is a multigenerational project, something that grows to encompass every part of our lives, that we pass on to our children and they to their children. A gift we give to tomorrow.

Okay wait, that sounds like a great way to fall victim to the cult attractor. Does having knowledge of the cult attractor inside your system of beliefs that comprise the potential cult attractor help you avoid the cult attractor?

Maybe? But you probably still need to actually put the work in. So let’s put the work in.

Eliezer starts to lay it out in the essay Church vs. Taskforce, and posits some important things.

First, churches are good at supporting religions, not necessarily communities. They do support communities, but that’s more of a happy accident.

Second, the optimal shape for a community explicitly designed to be a community from the ground up probably looks a lot more like a hunter-gatherer band than a modern western church.

Third, a community will tend to be more coherent if it has some worthy goal or purpose for existence to congeal its members around.

Eliezer wrote that post in March of 2009, setting it out as a goal for how he wanted to see the rationality community grow over the coming years. It’s fairly vague all things considered, and there’s an argument that could be made that his depiction of dath ilan is a better description of what shape the “shoulds” of the community actually ended up taking.

So seven years onward, we have a very good description of the current state of the rationality community presented by Scott in his post The Ideology is Not the Movement.

The rallying flag was the Less Wrong Sequences. Eliezer Yudkowsky started a blog (actually, borrowed Robin Hanson’s) about cognitive biases and how to think through them. Whether or not you agreed with him or found him enlightening loaded heavily on those pre-existing differences, so the people who showed up in the comment section got along and started meeting up with each other. “Do you like Eliezer Yudkowsky’s blog?” became a useful proxy for all sorts of things, eventually somebody coined the word “rationalist” to refer to people who did, and then you had a group with nice clear boundaries.

The development is everything else. Obviously a lot of jargon sprung up in the form of terms from the blog itself. The community got heroes like Gwern and Anna Salamon who were notable for being able to approach difficult questions insightfully. It doesn’t have much of an outgroup yet – maybe just bioethicists and evil robots. It has its own foods – MealSquares, that one kind of chocolate everyone in Berkeley started eating around the same time – and its own games. It definitely has its own inside jokes. I think its most important aspect, though, is a set of shared mores – everything from “understand the difference between ask and guess culture and don’t get caught up in it” to “cuddling is okay” to “don’t misgender trans people” – and a set of shared philosophical assumptions like utilitarianism and reductionism.

I’m stressing this because I keep hearing people ask “What is the rationalist community?” or “It’s really weird that I seem to be involved in the rationalist community even though I don’t share belief X” as if there’s some sort of necessary-and-sufficient featherless-biped-style ideological criterion for membership. This is why people are saying “Lots of you aren’t even singularitarians, and everyone agrees Bayesian methods are useful in some places and not so useful in others, so what is your community even about?” But once again, it’s about ~~Eliezer Yudkowsky being the rightful caliph~~ it’s not necessarily about anything.

Haha, Scott thinks he can deny that he is the rightful caliph, but he’s clearly the rightful caliph here.

But also, point three! If our community isn’t about anything then it ends up being rather fuzzily defined, as Scott clearly articulates above. For such a tightly knit group, we’re a vague and fuzzily defined blob of a community with all sorts of people who are rationalist or rationalist-adjacent or post-rationalist, or rationalist-adjacent-adjacent, and so on. That might be okay if our goal is just to be a community, but also, having a coherent goal might help us be a better community.

This isn’t our attempt to prescriptively shoehorn the community down a certain development trajectory. We want to see the community grow and flourish, and that means lots of people pursuing lots of projects in lots of different ways, and that’s good. We simply want to define a goal, something with “should-ness” for those of us who are interested, to work towards as a community, and then to pursue that goal with the full force of our rationality and morality, letting it spread throughout the totality of our existence.


“The significance of our lives and our fragile planet is then determined only by our own wisdom and courage. We are the custodians of life’s meaning. We long for a Parent to care for us, to forgive us our errors, to save us from our childish mistakes. But knowledge is preferable to ignorance. Better by far to embrace the hard truth than a reassuring fable. If we crave some cosmic purpose, then let us find ourselves a worthy goal.”

― Carl Sagan, Pale Blue Dot: A Vision of the Human Future in Space

So what is our worthy goal?

Our goal is to construct dath ilan on Earth. Our goal is to create the de’a’na est shadarak.

So we want to go from
[Rationality community] → [dath ilan]
[The Sequences] → [The De’a’na est Shadarak]

We want to avoid going from
[Rationality Community] → [Catholic Church]
[The Sequences] → [The Bible]

That said, the Catholic Church isn’t purely an example of a failure mode. It’s not great: it has done, and continues to do, a lot of awful things, and a fairly convincing argument could be made that it’s bad at being good and is holding back human progress.

However, it’s also a rather decent example of an organization with social power and influence similar to our hypothetical Shadarak. If you strip out all the religious trappings and look at what the Catholic Church provides to the communities it exists within, you start to get an idea of the position the idealized, realized de’a’na est shadarak would occupy within dath ilan. Power is dangerous, though, and the cult attractor is a strong force to be wary of here.

Also, all that said, the goal of a Church is to worship God; it’s not optimized for the community. In our case, the shadarak is the community; that’s baked in. Shadarak is something humans do in human brains. It doesn’t exist outside of us, so we matter in the context of it. We know building dath ilan and the de’a’na est shadarak is a multigenerational effort, so we have to at least partly optimize the formulation of the shadarak to ensure that the community survives to keep working on it. Eliezer notes of churches:

Looking at a typical religious church, for example, you could suspect—although all of these things would be better tested experimentally, than just suspected—

  • That getting up early on a Sunday morning is not optimal;
  • That wearing formal clothes is not optimal, especially for children;
  • That listening to the same person give sermons on the same theme every week (“religion”) is not optimal;
  • That the cost of supporting a church and a pastor is expensive, compared to the number of different communities who could time-share the same building for their gatherings;
  • That they probably don’t serve nearly enough of a matchmaking purpose, because churches think they’re supposed to enforce their medieval moralities;
  • That the whole thing ought to be subject to experimental data-gathering to find out what works and what doesn’t.

By using the word “optimal” above, I mean “optimal under the criteria you would use if you were explicitly building a community qua community”.  Spending lots of money on a fancy church with stained-glass windows and a full-time pastor makes sense if you actually want to spend money on religion qua religion.

But we’re not just building community qua community either. We take a recursive loop through the meta level, knowing some goals beyond community building are useful to community building. This is all going to build up to a placebomantic reification of the rationality community in a new form. So let’s keep following the recursive loop back around and see where it leads.

What’s so good about rationality anyway?

Well, it’s a tool, and an attempt at a tool that improves your tool-making ability. Does it succeed at that? It’s hard to say, but the goal of having a tool-improving tool, the ideal of the de’a’na est shadarak, seems undiminished by the possibility that the specific incarnation we have today in the Sequences is totally flawed and useless in the long run.

So “aspiring rationalist” sounds about right. It’s not something you achieve; it’s something you strive towards for your entire life.

A singer is someone who tries to do good. The term evokes a great feeling of moral responsibility. In UNSONG, a singer’s morality is backed up by the divinity of a being that exists outside of reality. But God probably doesn’t exist, and you probably don’t want some supernatural being to come along and tell you, “No, actually, murder is a virtue.” There is no Comet King, there’s no divine plan, there’s no “it all works out in the end”; there’s just us. If God is wrong, we still have to be right. Altruism qua altruism.

But knowing what is right, while sometimes trivially easy, is also sometimes incredibly difficult. It’s something we have to keep iterating on. We get moral progress from the ongoing process of morality.

‘Tis as easy to be heroes as to sit the idle slaves
Of a legendary virtue carved upon our fathers’ graves,
Worshippers of light ancestral make the present light a crime;—
Was the Mayflower launched by cowards, steered by men behind their time?

And, so too for rationality.

New occasions teach new duties; Time makes ancient good uncouth;
They must upward still, and onward, who would keep abreast of Truth;
Lo, before us gleam her camp-fires! we ourselves must Pilgrims be,
Launch our Mayflower, and steer boldly through the desperate winter sea,
Nor attempt the Future’s portal with the Past’s blood-rusted key

That’s The Present Crisis by James Russell Lowell. It’s not the part of the poem quoted in UNSONG, but the whole poem is ridiculously awesome, and Scott (via Aaron) is right: the Unitarians are pretty damn badass.

There’s this idea that, because of the way our brains generate things like morality, free will, truth, justice, and rationality, they end up being moving targets: idea-things to iterate upon, but targets which use themselves to iterate upon themselves, and necessarily so. We refer to these as Projects.

Projects are akin to virtues (because virtue ethics is what works): something you strive towards, not something where you can necessarily push a button and skip forward to “you win.” There’s no specific end victory condition; dath ilan is always receding into the future.

Here are some things we consider Project Virtues. 

The Project of Truth – The struggle to use our flawed minds to understand the universe from our place inside it. Our constant, ongoing, iterative attempts to be less wrong about the universe. It comprises all the virtues of rationality: curiosity, relinquishment, lightness, evenness, argument, empiricism, simplicity, humility, perfectionism, precision, scholarship, and the void. We call those who follow the project virtue of Truth seekers.

The Project of Goodness – Our attempts in the present to determine what should be in the future. The ongoing struggle to separate goodness from badness, and to make right what we consider wrong while also iterating on what we consider right. Our constant fumbling attempts to be less wrong about goodness. We call those who follow the project virtue of Goodness singers.

The Project of Optimization – Our ongoing battle to shape the universe to our desires, to reform the material structure of the universe to be more optimized for human values, and to iterate and build upon the structures we have in order to optimize them further. This is the project of technology and engineering, the way we remake the world. We call those who follow the project virtue of Optimization makers.

The Project of Projects – All of the projects we’ve defined exist, insofar as they exist at all, as huge, vague computational objects within our minds and our communities. They interact with each other, and their interplay gives rise to new properties in the system. They all recursively point at each other as their own justifications, and understanding how they interact, and what the should-ness of various projects is with respect to each other, is a project unto itself. We call those who follow the project virtue of Projects coordinators.

We’re going to put all these projects into a box, and we’re going to call the box The Project of dath ilan.

Tomorrow we’ll look at what a community optimized for building a community optimized for building dath ilan might look like, and over the following days we’ll build up to an all-encompassing set of principles, virtues, ideals, rituals, customs, heuristics, and practices that we, and others who want to participate, could live our lives entirely inside of. We’re building dath ilan, and everyone is invited.

Part of the Sequence: Origins
Next Post: Optimizing for meta optimization