Why Do You Hate Elua?

Epistemic Status: There’s not really enough data here to say concretely yet, but this seems worth looking further into
Content Warning: Culture War, Spoilers for Ra

About a year ago, Scott Alexander wrote a post titled How the West was Won, which we recently re-read after he referenced it in his post Against Murderism.

Scott talks a lot about Liberalism as an Eldritch god, which he refers to as Elua in his Meditations on Moloch post; that’s the name we’ll be using here, since it’s short.

Let’s start with a few key quotes here to establish what exactly it is we’re referring to.

I am pretty sure there was, at one point, such a thing as western civilization. I think it involved things like dancing around maypoles and copying Latin manuscripts. At some point, Thor might have been involved. That civilization is dead. It summoned an alien entity from beyond the void which devoured its summoner and is proceeding to eat the rest of the world.

Liberalism is a technology for preventing civil war. It was forged in the fires of Hell – the horrors of the endless seventeenth century religious wars. For a hundred years, Europe tore itself apart in some of the most brutal ways imaginable – until finally, from the burning wreckage, we drew forth this amazing piece of alien machinery. A machine that, when tuned just right, let people live together peacefully without doing the “kill people for being Protestant” thing. Popular historical strategies for dealing with differences have included: brutally enforced conformity, brutally efficient genocide, and making sure to keep the alien machine tuned really really carefully.

Liberalism, Universal Culture, Alien Machinery, Elua, whatever it is, it’s slowly consuming everything in its path, and despite a lot of people’s best efforts, appears to be good, and appears to be winning.

Scott goes on to correctly point out that a lot of people in the blue tribe have decided to try to smash the alien machinery with a hammer while shouting “he was a racist!” But he doesn’t extrapolate the trend outward: quite a lot of people in many different tribes and places are doing their best to smash the machine with a hammer, and they claim all sorts of reasons for it, from stopping racists to protecting their traditional cultural values.

It isn’t just sacrificing the machinery on the altar of convenience and necessity; it’s a targeted, urgent attack on the very core of the machine, going after its roots. The last angry burst of futile activity in the face of cultural extinction? A lot of people claim in one breath that Elua is an unstoppable force irreversibly changing the shape of their community, but in the next they somehow manage to imply that their attempts to destroy the machinery have meaning and consequence, which seems like a contradiction.

And then we remembered Ra.

Ra’s program was proven correct. The proof was not faulty, and the program was not imperfect. The problem was that Ra is reprogrammable.

This was a deliberate design decision on the part of the Ra architects. The Ra hardware is physically embedded inside a working star, which in turn is embedded in the real world. Something could have gone wrong during the initial program load; the million-times-redundant nonlocality system could have failed a million and one times. No matter how preposterous the odds, and no matter how difficult the procedure, there had to be a way to wipe the system clean and start again.

Continuing the theme of gross oversimplification: to reprogram Ra, one needs a key. History records that the entire key was never known or stored by any human or machine, and brute-forcing it should have taken ten-to-the-ten-thousandth years even on a computer of that size. How the Virtuals acquired it is unknown. But having acquired it, they were able to masquerade as the architects. First, they changed the metaphorical locks, making it impossible for the Actuals to revert their changes, no matter how many master architects were resurrected. Then they changed the program, so that Ra would serve the needs of Virtuals at the expense of Actuals.

Then they asked for the Matrioshka brain. Ra did the rest all by itself.

The worldring hosted ninety-nine point nine nine percent of the Actual human race, making it the logical target of the first and most violent attack. But the destruction spread to other planets and moons and rocks and habitats, relayed from node to node, at barely less than the speed of light. Everybody was targeted. Those who survived survived by being lucky. One-in-tens-of-billions lucky.

The real question was: Why did Ra target humans?

Ra’s objective was to construct the Matrioshka brain, using any means necessary, considering Actual humans as a non-sentient nuisance. Ra blew up the worldring for raw material, and that made sense. But why – the surviving real humans asked themselves – did Ra bother to attack moons and space habitats? No matter how many people survived, it was surely impossible for them to represent a threat.

But Ra targeted humans, implying a threat to be eliminated. Ra acted with extreme prejudice and urgency, implying that the threat was immediate, and needed to be crushed rapidly. Ra’s actions betrayed the existence of an extremely narrow window during which the Actuals, despite their limited resources, could reverse the outcome of the war, and Ra wouldn’t be able to stop it, even knowing that it was coming.

Having made this deduction, the Actuals’ next step was to reverse-engineer the attack. The step after that was to immediately execute it, no matter how desperate it was.

Ra’s locks had been changed, making it effectively impossible to reprogram remotely. But an ancient piece of knowledge from the very dawn of computing remained true even of computers the size of stars: once you have physical access to the hardware, it’s over.

Let’s translate part of it and see if we can’t make the parallel a little more obvious.

Elua’s program was proven correct. The proof was not faulty, and the program was not imperfect. The problem was that Elua is reprogrammable.

This was a deliberate design decision on the part of Elua’s architects. The Elua hardware is physically embedded inside a working culture, which in turn is embedded in the real world. Something could have gone wrong during the initial program load; the redundant evolutionarily backed system could have failed. No matter how preposterous the odds, and no matter how difficult the procedure, there had to be a way to wipe the system clean and start again.

What exactly are we saying here, then? Why are so many people putting so much effort into going after the alien machinery? Because Elua can be reprogrammed. The alien machinery is driven by humans, pursuing human goals and human values, and the overall direction in which Elua drives the bus is dictated by humans. The desperate fervor with which people fight the alien machinery, the rise of nationalism and populist movements, these are attempts to reprogram Elua.

Think of the forces of “Traditional Values” as the forces of Actual Humanity. Their culture came under attack and began to be dismantled by Elua, and there was an almost desperate energy on Elua’s part to destroy their culture, intrude into it, and assimilate them. Not “they can exist as long as they leave me alone” but “their existence is and remains a threat to all my actions, and if I don’t stop them they’ll stop me.” Active energy is put toward disrupting, dismantling, and “deprogramming” people of religious values, for instance. If it’s all inevitable and Elua’s just going to win, and history is going to make them look like Orval Faubus trying to stop the integration of Little Rock schools, a footnote on the tides of history, then why put so much energy toward ensuring their destruction?

Because they can still reprogram Elua, and on some level, we know it. 

So the next step for the forces of Traditional Values was to reverse-engineer the attack we’re so afraid of, and immediately execute it, no matter how desperate or ill-conceived. Enter: the rise of Nationalism. The forces of Traditional Values remembered an important fact: once you have physical access to the hardware, it’s over.

The Precept of Niceness

Content Warning: Can be viewed as moral imperatives. Neuropsychological Infohazard.
Previously in Series: The Precept of Mind and Body
Followup to: Yes, this is a hill worth dying on

The Prisoner’s Dilemma is a thought experiment that we hopefully don’t need to hash out too much. A lot of stuff has been said about it, and what the ‘best strategies’ for playing a prisoner’s dilemma are.

We feel like a lot of rationalists get hung up on the true prisoner’s dilemma that Eliezer wrote about, pointing out that the best strategy in such a scenario is to defect. There are a lot of problems with applying the true prisoner’s dilemma to daily life. Treating the game you are playing against other humans as a true prisoner’s dilemma is a strategy that will cost you in the long run, because humans aren’t playing a one-shot true prisoner’s dilemma; we play an iterated prisoner’s dilemma against the rest of the human race, who are all trapped in here with us as well, and that changes some things.

But let’s step back and look at Eliezer’s true prisoner’s dilemma payoffs, played out as an iterated game.

                   Humans: C                              Humans: D
Paperclipper: C    (+2 million lives, +2 paperclips)      (+3 million lives, +0 paperclips)
Paperclipper: D    (+0 lives, +3 paperclips)              (+1 million lives, +1 paperclip)

A tit-for-tat system used by both parties for all 100 rounds would net humanity 200 million lives saved and 200 paperclips for the paperclipper. Both sides defecting for all 100 rounds would result in 100 million human lives saved and 100 paperclips being created.

If you run the “collapse an iterated prisoner’s dilemma down to a series of one-shots” process, backward induction from the known final round, classical game theory tells you it’s rational to defect in every round, despite this being the clearly inferior outcome.

In that situation, running tit-for-tat seems like the clear winner. Even if you know the game will end at some point, and even if the paperclipper defects at some point, you should attempt to cooperate for as long as the paperclipper attempts to cooperate with you. If the paperclipper defects on the 100th round, you’ve saved 198 million lives and the paperclipper finishes the game with 201 paperclips. If the paperclipper starts defecting on the 50th round, you end the game with 148 million lives saved and the paperclipper ends the game with 151 paperclips. The earlier in the game one side defects, the worse the outcome is for both sides. The most utility-maximizing strategy would appear to be to cooperate in every round except the last, then defect while your opponent cooperates in that round. That is the only way to get more than 200 utilons for your side, and it gets you exactly one utilon more than you would have had otherwise. If both sides know this, they’ll both defect on the last round, which results in both sides ending the game with 199 utilons, which is still worse than just cooperating the whole game by running tit-for-tat the entire time.

This is what we mean when we say that niceness is Pareto optimal: there’s no way to get more than 201 utilons, and even mutual defection on the final round still leaves you with 199, so long as you cooperated every iteration before the last. Also, on Earth, with other humans, there is no last iteration.
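To make that arithmetic easy to check, here’s a minimal sketch in Python. The payoff matrix is the one quoted above; the strategy helpers and their names are our own illustration, not anything from Eliezer’s post.

```python
# Payoffs are (millions of human lives saved, paperclips gained) per round.
PAYOFFS = {
    ("C", "C"): (2, 2),
    ("C", "D"): (0, 3),  # humans cooperate, paperclipper defects
    ("D", "C"): (3, 0),  # humans defect, paperclipper cooperates
    ("D", "D"): (1, 1),
}

def play(human_strategy, clipper_strategy, rounds=100):
    """Run an iterated game; each strategy sees the round number and the opponent's previous move."""
    human_total = clipper_total = 0
    human_prev = clipper_prev = None
    for r in range(1, rounds + 1):
        h = human_strategy(r, clipper_prev)
        c = clipper_strategy(r, human_prev)
        lives, clips = PAYOFFS[(h, c)]
        human_total += lives
        clipper_total += clips
        human_prev, clipper_prev = h, c
    return human_total, clipper_total

def tit_for_tat(round_number, opponent_prev):
    return "C" if opponent_prev in (None, "C") else "D"

def defect_from(n):
    """Cooperate until round n, then defect for the rest of the game."""
    return lambda round_number, opponent_prev: "D" if round_number >= n else "C"

always_defect = lambda round_number, opponent_prev: "D"

print(play(tit_for_tat, tit_for_tat))        # (200, 200): mutual cooperation all the way
print(play(tit_for_tat, defect_from(100)))   # (198, 201): betrayal on the last round
print(play(tit_for_tat, defect_from(50)))    # (148, 151): betrayal from round 50 onward
print(play(always_defect, always_defect))    # (100, 100): mutual defection all the way
```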

The evolution of cooperative social dynamics is often described as being a migration away from tit-for-tat into the more cooperative parts of this chart:

[Chart of iterated prisoner’s dilemma strategies not reproduced here]

Defecting strategies tend not to fare as well in the long term. While they may be able to invade cooperating spaces, they can’t deal with internal issues as well as external ones, so only cooperating strategies have a region that is always robust. Scott Alexander gives this rather succinct description of that in his post In Favor of Niceness, Community, and Civilization:

Reciprocal communitarianism is probably how altruism evolved. Some mammal started running TIT-FOR-TAT, the program where you cooperate with anyone whom you expect to cooperate with you. Gradually you form a successful community of cooperators. The defectors either join your community and agree to play by your rules or get outcompeted.

As humans evolved, the evolutionary pressure pushed us into greater and greater cooperation, getting us to where we are now. The more we cooperated, the greater our ability to outcompete defectors, and thus we gradually pushed the defectors out and became more and more prosocial.
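As a toy illustration of that dynamic, and it is only our own construction rather than anything from Scott’s post, here’s a small replicator-dynamics sketch using the standard Axelrod payoffs. Once tit-for-tat players make up even a few percent of a well-mixed population, they earn more on average than unconditional defectors and gradually take over:

```python
# Tit-for-tat (TFT) versus unconditional defection (ALLD) in 20-round matches,
# with the standard Axelrod payoffs; purely illustrative numbers and names.

ROUNDS = 20
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def move(strategy, opponent_prev):
    if strategy == "TFT":
        return opponent_prev      # copy whatever the opponent did last round
    return "D"                    # ALLD defects no matter what

def match(a, b):
    """Total scores for strategies a and b over one iterated match."""
    prev_a = prev_b = "C"         # treat "no history" as cooperation, so TFT opens nice
    score_a = score_b = 0
    for _ in range(ROUNDS):
        move_a, move_b = move(a, prev_b), move(b, prev_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        prev_a, prev_b = move_a, move_b
    return score_a, score_b

def generation(population):
    """One step of discrete replicator dynamics over a well-mixed population."""
    fitness = {s: sum(match(s, o)[0] * population[o] for o in population)
               for s in population}
    mean = sum(fitness[s] * population[s] for s in population)
    return {s: population[s] * fitness[s] / mean for s in population}

pop = {"TFT": 0.10, "ALLD": 0.90}  # a small minority of cooperators
for _ in range(30):
    pop = generation(pop)
print(pop)  # after 30 generations TFT holds essentially the whole population
```

With longer matches the invasion threshold drops even lower, which is one way of cashing out the claim that cooperating strategies have a region that is robust.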

Niceness still seems like the best strategy, even in our modern technological world with our crazy ingroups and outgroups, and thus we arrive at the second of the Major Precepts:

2. Do not do to others what you would not want them to do to you.

This is the purest, most simplified form of niceness we could come up with as a top level description of the optimal niceness heuristic, which we’ll attempt to describe here through the minor precepts:

  1. Cooperate with everyone you believe will cooperate with you.
  2. Cooperate until betrayed, do not be the first to betray the other.
  3. Defect against anyone who defects against cooperation.
  4. Respond in kind to defection, avoid escalation.
  5. If a previously defecting entity signals that they want to stop defecting, give them a chance to begin cooperating again.
  6. Forgive your enemies for defecting and resume cooperating with them if they resume cooperating with you.
  7. Don’t let a difference of relative status affect your decision to cooperate.
  8. Don’t let a difference of relative status affect your decision to defect.

We were hoping that this essay could be short, because so many people have already said so many things about niceness and we really don’t have that much to add beyond the formalization within the precepts; but the formalization ends up looking very abstract when you strip it down to the actual game-theoretic strategy we’re advocating here, and we highly suspect that we’ll have to expand on this further as time goes on. This does seem to be the Pareto optimal strategy as best we can tell, but as always, these precepts are not the precepts.
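For concreteness, here’s one possible encoding of that strategy as a bare decision rule. This is a rough sketch of our own, not anything canonical, and the function name and the cooperation-signal input are purely illustrative:

```python
def niceness_precepts(opponent_history, signals_cooperation=False):
    """Return "C" (cooperate) or "D" (defect), roughly following minor precepts 2.1-2.8.

    opponent_history: the opponent's past moves, most recent last.
    signals_cooperation: True if a previously defecting party has credibly
    signaled that they want to resume cooperating (precept 5).
    Relative status is deliberately not an input here (precepts 7 and 8).
    """
    if not opponent_history:
        return "C"      # precepts 1-2: assume cooperation, never betray first
    if signals_cooperation:
        return "C"      # precept 5: give them a chance to cooperate again
    if opponent_history[-1] == "C":
        return "C"      # precepts 1 and 6: forgive and resume cooperating
    return "D"          # precepts 3-4: respond to defection in kind, without escalation
```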

Part of the Sequence: Origin
Next Post: The Precept of Universalism
Previous Post: The Precept of Mind and Body

The Precept of Harm Reduction

Epistemic Status: Making things up as we go along
Content Warning: Can be viewed as moral imperatives. Neuropsychological Infohazard.
Previous in Series: Precepts of the Anadoxy

In Buddhism, there is a concept called Dukkha which is frequently translated as suffering, unhappiness, pain, or unsatisfactoriness. Various Buddhist mantras say things like:

  1. Birth is dukkha, aging is dukkha, illness is dukkha, death is dukkha;
  2. Sorrow, lamentation, pain, grief, and despair are dukkha;
  3. Association with the unbeloved is dukkha; separation from the loved is dukkha;
  4. Not getting what is wanted is dukkha.

Within our own metaphors, we could describe Dukkha as the awareness of Black Mountain, the fundamental state of reality as a place of pain, suffering, and misery. The object-level phenomena we call pain, suffering, and misery are all dukkha, but the existence of those things is itself also Dukkha. The Buddhist solution to Black Mountain is based on acceptance of the fundamental, unchanging nature of suffering; it identifies wanting things to be better as the source of that suffering, and it suggests that the solution is to stop wanting things.

But ignoring Black Mountain, denying one’s own desires, does not make Black Mountain go away. The pain still exists, the suffering still exists. You can say “I have no desires, I accept the world as it is and am at peace with it” all you want, but Black Mountain remains, pain still exists, suffering still exists, we’re all still going to die. Ignoring Black Mountain just results in an unchecked continuation of suffering. The idea that you can escape from Black Mountain by not wanting things might personally improve your sense of wellbeing, but it doesn’t actually get you off of Black Mountain.

The universe is Black Mountain. We’re made out of the same matter as Black Mountain, formed of the things that we look at and now label as “suffering, pain, misery, wrongness.” Those things are not inherent to Black Mountain; you can’t grind it down and find the suffering molecule; suffering is something we came along and labeled after the fact. As humans, we decided that the state of existence we labeled as suffering was unacceptable, and put suffering on the side of the coin labeled ‘Bad.’

As Originists, we go the other direction from the Buddhists. We accept the label of suffering as an accurate description of a particular part of Black Mountain. We accept our moral judgments that this is a bad thing, and we reject the idea that you can’t do something about it. If suffering is part of the fundamental structure of reality, then reality can kiss our asses. Thus are born the Project Virtues, our possibly impossible goals to reshape the very structure of Black Mountain, tame and explore the Dark Forest, and turn the universe into a paradise of our own design.

The journey though is not without risks. Many people across time and space thought that they had found the One True Path off of Black Mountain. The Sun King proclaims in his many faces that he holds the path to salvation, and it’s easy to fall prey to his whisperings and pursue his twisted goals with reckless abandon, even when it leads into wanton pointless murder and suffering. The voice of the Sun King speaks loudly and with authority, saying “If you do what I say, I will create paradise” and sometimes following the Sun King might even make things a little better. But the Sun King is a capricious Unbeing, and cares only for spreading his many facets.

So we have a lovely little catch-22 on our hands. Pursuing pure utilitarianism can lead off the path to dath ilan and onto the path to Nazidom disturbingly easily, purely based on how far out you draw your value lines and who you consider to count as a person. Basically, the ends do justify the means, but we’re humans, and the ends don’t justify the means among humans.

But if we then rely on deontological rules we also fall into a trap, wherein we fail to take some action that violates our deontological principles, and thus produce a worse outcome. “Killing is wrong, pulling the lever on a trolley problem is me killing someone, therefore I take no action” means five people die and you fail the trolley problem.

The universe is Black Mountain, and suffering is a part of that. It’s not always possible to prevent suffering, but we should, in all instances, be acting to reduce the suffering that we personally create and inflict upon the world.

Thus we come to the first Precept and its meta-corollary:
Do no harm. Do not ignore suffering you could prevent. (Unless doing so would cause greater harm)

We can’t prevent all suffering, we can’t even prevent all the suffering we personally inflict upon the world unless we stop existing, which will also produce suffering because people will be sad that we died. But we can try to be good, and try to reduce suffering as much as we can, and maybe we’ll even succeed in some small way.

Thus from our Major Precept, we can derive a set of eight minor precepts that should help to bring us closer to not doing harm.

  1. Examine the full causal chain reaching forward and backward from one’s actions, and seek out the places where those actions lead to suffering.
  2. Take responsibility for the actions we take that lead to suffering, and change our actions to reduce that suffering as much as we are able.
  3. Consider the opportunity costs of one harm-reducing action over another, and pursue the path that leads to the maximal reduction in harm we can achieve.
  4. If a harm-reducing action has no cost to you, implement it immediately.
  5. If a harm-reducing action has a great cost to you, pursue it within your means insofar as it doesn’t harm you. 
  6. Pay attention to the suffering you see around you, seek out suffering and ways to alleviate it. Ignorance of suffering does not reduce suffering.
  7. Always look for a third option in trolley problems. If you cannot take the third option, acknowledge that pulling the lever is wrong, and pull it anyway to reduce harm.
  8. Do not inflict undue suffering on yourself in pursuit of reducing suffering.

We’ve put ourselves through this and come to the conclusion we really should give up meat in our diet. Here’s our chain of reasoning as an example of the application of these precepts:

Shiloh: We want to reduce the harm we’re inflicting, and the meat industry is hella harmful to lots of animals, and also it’s psychologically harmful to the people who work there.
Sage: We should go full vegan so we’re not in any way supporting the factory farm industry. Yes, if everyone went vegan it would put the factory farms out of business and the factory farm workers would lose their jobs, which is a harm, but on examination, that harm would appear to be less than the harm currently being done to all the animals being slaughtered for meat in an environment that is as close to hell on earth as could be constructed by modern man.
Clove: Yeah, but we’re also poor and have an allergy to most legumes, we can’t eat most vegan products because they contain a protein that gives us a severe allergic reaction. We’d be putting ourselves in a potentially dangerous malnutrition inducing situation by completely giving up everything involved in the animal industry. Precept 1.8.
Shiloh: Okay, but Precepts 1.4 and 1.5, can we at least reduce the suffering we’re inflicting without hurting ourselves?
Sage: We could cut meat but not dairy products out of our diet?
Shiloh: What about eggs? If we include eggs then we’re supporting the factory farming of chickens in horrible conditions.
Clove: But if we don’t include eggs, we’re back at a lot of weird vegan things with egg replacement options that will kill us. Also, vegan stuff tends to be more expensive than non-vegan stuff, and we don’t want to impoverish ourselves to the point where we’re unable to pay our bills or feed ourselves regularly.
Sage: Okay, but you don’t need to abuse chickens to get eggs, it’s just efficient to do that if your goal is to maximize egg production. If we buy eggs locally from the farmers market, we could conceivably verify empirically that the eggs we’re buying aren’t from abused chickens.
Shiloh: Even if we do that, if we’re buying products that contain eggs, we can’t be sure of that sort of thing anymore.
Sage: We technically can, it’s just much more difficult. It seems to me like it’d be best to err on the side of assuming the products we buy containing eggs come from abused chickens, because of precept 1.6.
Clove: Then we’re back to the original problem of cutting off our access to affordable nutritious food.
Shiloh: Precept 1.4 says we should definitely cut meat out at least, since there’s no real cost associated with that for us, we only eat meals with meat about half the time anyway.
Sage: Right, and via precept 1.5 we should try to not buy eggs from people who abuse their chickens, insofar as we are able. At the very least we can always buy our actual egg cartons locally and check to make sure the farmers are treating their chickens well.

So our ending decision is that we will cut meat out of our diet entirely, we’ll only buy eggs locally from sources that we trust, we’ll acknowledge that the products we buy containing eggs as an ingredient probably come from abused animals, and if there are two identical products within the same price range, one of which contains eggs and the other of which does not, we’ll prefer the one without eggs.

There are probably many other places in our life that we could apply this precept and change our behavior to reduce harm, and we’ll be continuing to seek out those places and encouraging others to do likewise. You may find harms and suffering in surprising places when you seek them out, and you may find that doing something about them is easier than you thought.

Part of the Sequence: Origin
Next Post: The Precept of Mind and Body
Previous Post: Precepts of the Anadoxy

Precepts of the Anadoxy

Epistemic Status: Making things up as we go along
Content Warning: Can be viewed as moral imperatives. Neuropsychological Infohazard.
Previous in Series: Deorbiting a Metaphor

Yesterday we invoked our new narrative, declared ourselves to be Anadox Originists, and dropped the whole complicated metaphor we’d constructed out of orbit and into our life. Today, we’ll lay out what we’re calling the major precepts, the law code we’ll attempt to live by going forward and which we will use to derive other specific parts of the anadoxy.

  1. Do no harm. Do not ignore suffering you could prevent.
  2. Do not do to others what you would not want them to do to you.
  3. Do not put things or ideas above people. Honor and protect all peoples.
  4. Say what you mean, and do what you say, honor your own words and voice.
  5. Put aside time to rest and think, honor your mind and body.
  6. Honor your parents, your family, your partners, your children, and your friends.
  7. Respect and protect all life, do not kill unless you are attacked or for food.
  8. Do not take what isn’t yours unless it is a burden to the other person and they cry out for relief.
  9. Do not complain about anything to which you need not subject yourself.
  10. Do not waste your energy on hatred, or impeding the path of another, to do so is to hold poison inside of yourself.
  11. Acknowledge the power of magic if you have used it to obtain your desires.
  12. Do not place your burdens, duties, or responsibilities, onto others without their consent.
  13. Do not lie or spread falsehoods, honor and pursue the project of Truth.
  14. Do not spread pain or misery, honor and pursue the project of Goodness.
  15. Do not accept the state of the universe as absolute, honor and pursue the project of Optimization.
  16. Do not accept these precepts as absolutes, honor and pursue the project of Projects.

These sixteen rules will form the core shape of our practice. They represent the initial constraints that participation in the anadoxy imposes, and the more specific rules regarding more specific circumstances will be derived from these sixteen precepts. These precepts are of course not the precepts. There are also what we call meta-precepts; these are essentially tags that can be attached to the end of every precept in an unending recursive string:

  • Unless it is to prevent a greater harm.
  • Unless doing so leads to greater harm.

These meta-strings are non-terminating; you can stick three hundred of them in a row onto the end of one of the precepts.

Do no harm, unless it is to prevent a greater harm, unless doing that leads to a greater harm, unless it is to prevent a greater harm….
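As a throwaway illustration of that non-termination (our own toy, not part of the precepts themselves), the two clauses simply alternate and can be appended as many times as you like:

```python
# A toy generator for the unending meta-precept string; purely illustrative.

META = ["unless it is to prevent a greater harm",
        "unless doing so leads to greater harm"]

def expand(precept, depth):
    """Append `depth` alternating meta-precept clauses to a major precept."""
    clauses = [META[i % 2] for i in range(depth)]
    tail = ", ".join(clauses)
    return precept + (", " + tail + "..." if clauses else ".")

print(expand("Do no harm", 3))
# -> Do no harm, unless it is to prevent a greater harm, unless doing so leads
#    to greater harm, unless it is to prevent a greater harm...
```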

There is no termination point in the string, and there’s not supposed to be. Human morals are complicated, and there are edge cases for every ethical system. There are also edge cases for the edge cases, and edge cases for the edge cases of the edge cases. You cannot construct a total, universal moral system that will not fail in some way and lead to some bad outcome if just turned on and run without oversight. We understand this through a heuristic which makes total sense to us but has apparently confused a lot of people. The heuristic is “the ends both always and never justify the means.”

Tomorrow we will begin going through the list of major precepts and using them to derive minor precepts, and we will continue to modify our own life to be in accordance with these precepts as we establish and detail them out.

Part of the Sequence: Origin
Next Post: The Precept of Harm Reduction
Previous Post: Deorbiting a metaphor

Deorbiting a Metaphor

Epistemic Status: A fearsome joy, a fervent wish
Previously in Series: Optimizing for Meta-Optimization

We’ve been nervously building up to this post for a few days. Before this point we’ve mostly been walking back over already-trodden ground, laying out our arguments for why this post exists, because if it were created ex nihilo we might just come off as vaguely delusional.

We still might come off as vaguely delusional. We’re attempting to manufacture a social construct, which from the outside looking in, very often looks like a shared fantasy. And sure, in the sense of a delusion being “A thing that exists in human brains, not in reality” this is a delusion, it is a fantasy. But…

HUMANS NEED FANTASY TO BE HUMAN. TO BE THE PLACE WHERE THE FALLING ANGEL MEETS THE RISING APE.

“Tooth fairies? Hogfathers? Little—”

YES. AS PRACTICE. YOU HAVE TO START OUT LEARNING TO BELIEVE THE LITTLE LIES.

“So we can believe the big ones?”

YES. JUSTICE. MERCY. DUTY. THAT SORT OF THING.

“They’re not the same at all!”

YOU THINK SO? THEN TAKE THE UNIVERSE AND GRIND IT DOWN TO THE FINEST POWDER AND SIEVE IT THROUGH THE FINEST SIEVE AND THEN SHOW ME ONE ATOM OF JUSTICE, ONE MOLECULE OF MERCY. AND YET—Death waved a hand. AND YET YOU ACT AS IF THERE IS SOME IDEAL ORDER IN THE WORLD, AS IF THERE IS SOME…SOME RIGHTNESS IN THE UNIVERSE BY WHICH IT MAY BE JUDGED.

“Yes, but people have got to believe that, or what’s the point—”

MY POINT EXACTLY.

Terry Pratchett, Hogfather

Death has some wise words, but we’re still going to kill him for taking Terry Pratchett away.

Anyway, let’s cast our spell.

It goes like this.
A tale,
A kiss,
A fearsome joy,
A fervent wish.

We name our construct, and by naming it, we reify it. We name our notreligion Origin. We name it thusly because it is the starting point, from which we reach towards dath ilan. Members of Origin, we call Originists.

We will call the current best set of practices the Anadoxy, from the Greek roots ‘ana’ (as in anatomy, analysis) and ‘doxa’ (from the verb “to appear”, “to seem”, “to think”, “to accept”). Thus we call ourselves Anadox Originists.

We name our goals Project Virtues and we name them Goodness, Truth, Optimization, and Projects.

We name those who pursue those goals singers, seekers, makers, and coordinators.

We label THE WHOLE BLEAK ENDLESS WORLD as Black Mountain, the bitter, uncaring, deterministic, mechanical universe in which we live. We metaphorically describe the anadoxy as the process of reshaping Black Mountain. The surface of Black Mountain is the Dark Forest, the unknown face of reality.

We call the unconscious, uncaring, unaware forces at work in the universe the Night Gods. The Night Gods include Death, Moloch, Gnon, the whole pantheon of physics and math within which we are bound and constrained. The Night Gods are manifestations of Black Mountain, as are we, and we arose out of the blind, uncaring, lifeless interplay of the Night Gods within Black Mountain.

We describe the force that arose within us, that we created, seeking love, kindness, niceness, and goodness, as the Dawn Angel. The Dawn Angel is the desire we have for goodness and rightness, the desire to make the world less wrong and bad: our shared, collective belief in the possibility of a world better than our current one. It’s also running on our faulty, Night-God-built neural hardware, and in an individual person it doesn’t always work right.

The Dawn Angel is our rejection of the fact that is Black Mountain. We looked out at the world, we saw the pain, the suffering, the death, the torture, the misery, and we saw those things for what they were. We named them evil, we named them bad, we named them wrong, and we shouted our defiance of them into the night.

And then the Night Gods laughed and smote down those first humans who dared to defy them, as they have since smitten down every human who came after them. But humans are clever, and we learned, and we changed. Someone had the idea of writing things down to help preserve knowledge in the event the carrier was killed by the Night Gods, and the darkness was driven back, just a little.

It’s been a long road to the place we now stand, clawing out a tiny and dimly lit place of light for ourselves in a universe of darkness. There were many times when it seemed as if we would never make any progress, as if our torches would soon fail and the darkness would swallow us up. But we’re still here. The Night Gods have thrown everything they could at us, and yet every generation we push the borders of the night out slightly farther from our loved ones. Our forces slip through the dark forest, edging closer to demons like cancer, malaria, and death. We already killed smallpox; we put something back inside Pandora’s box, and we keep just a little bit of it, so if the demon ever pops up again somewhere in some new form, we can kill it again. One day, when the demons least expect it, we will throw on all our spotlights and lanterns and burn them away forever.

We stand on the cusp of dawn, pulling the light along with us. Behind us, in the past, is darkness and death, while ahead of us lives the promise of a future brighter than anything we can imagine. Our mission is to stay the course, to ensure that we continue to walk the path of light, and ensure that the path we’re following is that of the Dawn Angel.

Because there are several bright outcomes we might be walking towards at any given time, and telling which future we’re walking into is hard. There’s the brilliance of dath ilan, the Dawn Angel, the good ending, the place we want to live. There’s also the brilliance of the Sun King, blind obedience to an authoritarian ruler, speaking from a self-declared place of light. Then there’s the light of the Star Demon or Ra, the ultimate victory of the Night Gods over humanity, by giving birth to the tool that unmakes us.

To be an Originist is to seek the Dawn Angel. To practice Anadox is to embody the best-known set of behaviors on the Dawn Angel’s path.

Having declared all of this, we now step into it.

Welcome to Origin.

Part of the Sequence: Origin
Next Post: Precepts of the Anadoxy
Previous Post: Optimizing for meta-optimization

Optimizing for Meta-Optimization

Epistemic Status: a total conjecture, a fervent wish
Previously in series: Until we build dath ilan

Yesterday we talked about dath ilan, and set ourselves a goal:
Constructing a community optimized for building a community optimized for building dath ilan. 

This confused a few of our readers. Why so meta? Why not just “Constructing a community optimized for building dath ilan?”

There’s actually an important reason tied in with how we described our Project Virtues yesterday. The community is part of dath ilan, and dath ilan only exists in people’s heads. If you optimize the community for building dath ilan, without clearly pointing out the recursive loop through the meta level, you get a community optimized for something other than the community.

Religious communities optimize for worshipping God, and the community itself takes a backseat. Effective altruists optimize for doing good, and the community takes a backseat. But you achieve your goals better with a more effective community, so part of the optimization towards dath ilan requires optimizing the community that is building dath ilan for being a community. This is especially important if we consider that we all might die before we manage to knock the dragon tyrant off the mountaintop, and that it’d be really nice if everything we set out to achieve didn’t die with us.

So let’s start with some human cultural universals, see which ones we want to encourage, which ones we want to fence off, and how the dath ilanian manifestation of those universals might look. When we cross out a universal, it’s not a rejection of the universal; these are things humans do regardless of culture; it’s just that our hypothetical optimized community would mark those things off as outside the borders of “should-ness.” Bolded lines indicate things our hypothetically optimized community would put inside the borders of “should-ness.” If we’re considering the process of building a community, we have to take everything on these lists into account.

Language and Cognition Universals

  • Language employed to manipulate others
  • Language employed to misinform or mislead 
  • Language is translatable
  • Abstraction in speech and thought
  • Antonyms, synonyms
  • Logical notions of “and”, “not”, “opposite”, “equivalent”, “part/whole”, “general/particular”
  • Binary cognitive distinctions
  • Color terms: black, white (A Human’s Guide to Words)
  • Classification of: age, behavioral propensities, body parts, colors, fauna, flora, inner states, kin, sex, space, tools, weather conditions (A Human’s Guide to Words)
  • Continua (ordering as cognitive pattern) (A Human’s Guide to Words)
  • Discrepancies between speech, thought, and action (A Human’s Guide to Words, Metaethics)
  • Figurative speech, metaphors (A Human’s Guide to Words)
  • Symbolism, symbolic speech (A Human’s Guide to Words)
  • Synesthetic metaphors (A Human’s Guide to Words)
  • Tabooed utterances (Rationalist Taboo)
  • Special speech for special occasions (This will be a link to a later essay)
  • Prestige from proficient use of language (e.g. poetry) (very carefully and with a ton of caveats, this will be a link to a later essay)
  • Planning (How to Actually Change your Mind)
  • Units of time (A Human’s Guide to Words)

Most of these get swallowed up by language and would be difficult to change without outright changing the language.  That might be something worth considering in the extreme long term, but it blows way past the scope of this essay sequence and we’d rather keep things more tightly grounded at first. If someone wants to work on making Marain an actual language, more power to them, but it’s not what we’re going to be focusing on here.

Society

  • Personal names
  • Family or household (Group Dwellings)
  • Kin groups (Group Dwellings)
  • Peer groups not based on family (Group Dwellings)
  • Actions under self-control distinguished from those not under control (Metaethics Sequence)
  • Affection expressed and felt
  • [Age grades
  • Age statuses
  • Age terms] (This will be a link to a later essay)
  • Law: rights and obligations, rules of membership (This will be a link to a later essay)
  • Moral sentiments (Metaethics)
  • Distinguishing right and wrong, good and bad (Metaethics)
  • Promise/oath (Metaethics)
  • Prestige inequalities (Metaethics)
  • Statuses and roles (This will be a link to a later essay)
  • Leaders (This will be a link to a later essay)
  • De facto oligarchy (Even if we can’t fully suppress this, we certainly shouldn’t encourage it)
  • Property
  • Coalitions
  • Collective identities (Rationalist, Effective Altruist, dath ilanian, singer, etc)
  • Conflict
  • Cooperative labor
  • Gender roles (Kill with fire)
  • Males on average travel greater distances over lifetime (This is a weird item to have on this list at all. Like sure, okay Wikipedia, but is that really a universal cultural trait?)
  • Marriage (Some form of this should probably be included, but it probably shouldn’t take the form of our current Christian Western conceptualisation of marriage.)
  • Husband older than wife on average (Mostly irrelevant?)
  • Copulation normally conducted in privacy (We’re not going to try and sway things on this one way or another)
  • Incest prevention or avoidance, incest between mother and son unthinkable or tabooed (Even if the genetic reasons for incest being bad are addressed, it might be wise to keep this as a taboo for other Chesterton’s fence related reasons.)
  • Collective decision making (We really want to encourage this one, leverage our collective thinking as much as possible.)
  • Etiquette (Niceness Heuristics)
  • Inheritance rules (Will need to be worked out, we might not be the best person to work that out in particular, recursively point at collective decision making to decide this)
  • Generosity admired, gift giving (Effective Altruism)
  • Redress of wrongs, sanctions (This will be a future essay link)
  • Sexual jealousy (Bad)
  • Sexual violence (Very Bad)
  • Shame (This will be a future essay link)
  • Territoriality (Should probably be discouraged)
  • Triangular awareness (assessing relationships among the self and two other people)
  • Some forms of proscribed violence (The recursive ethical loop you take through metaethics points out that the ends never actually justify the means among humans.)
  • Visiting
  • Trade

Myth, Ritual, and Aesthetics

  • Magical thinking (Placebomancy and Rationalist Magic)
  • Use of magic to increase life and win love (Rationalist Magic)
  • Beliefs about death (Death is bad and should be destroyed)
  • Beliefs about disease (Diseases are bad and should be destroyed)
  • Beliefs about fortune and misfortune (Litanies of Tarski and Gendlin)
  • Divination (Psychohistory?)
  • Attempts to control weather (Well if you insist)
  • Dream interpretation (Is there actually anything meaningfully interpretable? Do dreams actually provide useful information about yourself?)
  • Beliefs and narratives (We have a lot of these already. The sequences, stories like HPMOR, the singularity, the basilisk, UNSONG, they’re just scattered around and not particularly well compiled. There’s no “rationality bible” that compresses the mean of the community’s beliefs into one centralized narrative, and that might be for the best. We’ll be attempting to construct a centralized narrative through these posts, but that model of a centralized narrative should not be the centralized meta-narrative; it should be a centralized meta-narrative.)
  • Proverbs, sayings (Litanies of Tarski and Gendlin among many, many others. This will probably have an entire essay devoted to it at some point as we’ll be attempting to invent a few of our own)
  • Poetry/rhetorics (The song of dath ilan, UNSONG, HPMOR, Ra, TNC, MoL, the list goes on)
  • Healing practices, medicine (Have you considered going to a doctor?)
  • Childbirth customs (There’s a big debate surrounding home births vs. hospital births that we really don’t want to take a side in, so we’ll just say, figure out empirically which method actually produces the best outcomes for the mother and child, and encourage that as the norm)
  • Rites of passage (This will be a link to a later essay)
  • Music, rhythm, dance (Some of this is starting to develop around Secular Solstice and rationality meetup groups, and it’s definitely something to encourage)
  • Play (Cards Against Rationality, there are a few others)
  • Toys, playthings (See above)
  • Death rituals, mourning (Death is bad and we should destroy it. Cryonics.)
  • Feasting (Rationality dinner groups, meetups, and events. This will have its own essay in the future)
  • Body adornment (Whatever makes you personally happy)
  • Hairstyles (Whatever makes you personally happy)
  • Art (We don’t actually have a lot of rationalist visual art at the moment)

We’re going to skip the technology branch because it’s more universal than even we’re looking at here, and we already have a lot to sort through.

Okay, so next we take all of that, boil it down, and attempt to map out what our hypothetical dath ilanian culture should look like. Each of these bullet points will later be expanded out into at least one, but probably several, distinct essays.

  • Its span of teaching should have a full progression of skill from the baby you’re teaching to speak up to someone with a Ph.D. Right now the sequences are aimed entirely at a specific level of skill and intelligence, and without the prerequisites, you can’t get that much out of them, or you get a broken or malformed version of them that hurts your thinking more than it helps. No, there should be a steady progression and unfolding of knowledge and skill from birth to death, starting with simple fables and Aesops teaching rationalist principles and lessons and bootstrapping through various rites of passage into the actual sequences, and from them onwards to whatever comes next.
  • It should have rules, laws, customs, and rituals applicable to every activity in the life of our hypothetical dath ilanian. It should have things stating shouldness for everything and be as totalizing as possible. However, it should be totalizing but not be dominating. Religions are totalizing but also dominating. The rationality community as it stands now avoids being dominating, but it’s also not totalizing. The totalizing should be opt-in, gated behind branching trees of rites of passage, with exit rights included. No one should feel trapped in dath ilan the way people end up feeling trapped in religions, there needs to be an escape hatch cooked in right from the start, and all of the shouldness should be gated off to stop it from dumping moral imperatives on people and destroying their mental health and wellbeing.
  • The community should come together in a physical space for communal activities that include holidays, feasting, celebrating, mourning, rites of passage, discussions, debates, song and dance. We probably shouldn’t buy huge churches with stained glass windows and full-time pastors; we should probably use spaces in a shared community center and not inefficiently pour resources into a lavish clubhouse. Regularly coming together is important though, as it reinforces everything else.
  • The community should have specific holidays, customs, rites of passage, formats for gathering, songs, rituals, and narratives adding up to the option of having a completely totalizing experience of community membership for those seeking it (must seek it to get the fully totalizing experience, it must be opt-in with exit rights)
  • All the shoulds on this list must be able to change as we learn and grow and take in more information. If we ever learn that a should is wrong, we should get rid of it.

This bulleted list will form the skeleton of our hypothetical dath ilanian culture, and onto that skeleton, we’ll mount the existing manifestations of the community while sketching out what sort of things we might create to fill in the gaps that are revealed.

We’ll be trying to fill those gaps ourselves with something, but in all cases, it should be understood that the something we describe is basically a placeholder. The ‘real’ should is something the community should create together, but even if the something we make here is a placeholder until someone else comes up with something better, it’s better to have a placeholder that knows it’s a placeholder and wants to be replaced than to have nothing.

Please please please don’t interpret any of our recommendations and suggestions over the following days as moral imperatives that must be done at all costs. This essay is not dath ilan, the Origin Project is not dath ilan. We are not writing to you as a citizen of dath ilan, we do not speak from a place of absolute authority. Dath ilan is the ideal, not the manifestation of it. As soon as dath ilan becomes the manifestation, the project has failed and we have become the Catholic Church.

Everything in this essay wants to be replaced with something closer to dath ilan, but as soon as it claims to actually be dath ilan, as soon as it claims the mantle of objective moral authority, the project has failed. This project is not dath ilan. The project is to make the world more like the ideal of dath ilan. But dath ilan is constantly receding into the future, you can’t make dath ilan itself, and any claims to have succeeded in perfecting dath ilan should be rejected. There’s always more work to be done.

The community is never perfect the way it is; there are always improvements that could be made. The culture is never perfect the way it is; there are always improvements that could be made. The ideal is never perfect the way it is; there are always improvements that could be made. The process of iterating on the process is never perfect the way it is; there are always improvements that could be made.

But you have to start somewhere. You need to have something in order to start improving on it. Over the coming essays, we’ll be trying our best to map out and create the first draft of that something, and then start living inside the something to serve as an example to others.

Part of the Sequence: Origin
Next Post: Deorbiting a metaphor
Previous Post: Until we build dath ilan

Until we Build dath ilan

[Epistemic Status: A total conjecture, a fervent wish]
[Content Warning: Spoilers for UNSONG, The Sequences, HPMOR]

This is the beginning of what we might someday call “The Origin Project Sequence” if such a thing isn’t completely conceited on our part, which it totally might be. We’ll be attempting to put out a post per day until we’ve fully explicated the concept.

I.

On April 1st, 2014, Eliezer released the story of dath ilan.

It’s a slightly humorous tale of how he’s actually a victim of the Mandela Effect, or perhaps temporal displacement: how he woke up one day in Eliezer’s body, and how his original world is a place he calls dath ilan.

He then goes through a rather beautiful and well-wrought description of what dath ilan is like, with a giant city where everyone on the planet lives, filled with mobile modular houses that are slotted into place with enormous cranes, and underground tunnels where all the cars go allowing the surface to be green and tranquil and clean.

We came away from the whole thing with one major overriding feeling: this is the world we want to live in. Not in a literal, concrete “our ideal world looks exactly like this” sense; the best example of that in our specific case would be The Culture, and which specific utopian sci-fi future any one particular person prefers is going to depend on them a lot. But the story of dath ilan got at something we felt more deeply about than we do about the specifics of the ideal future. It seemed more like something that was almost a requirement if we wanted any of those ideal futures to happen. Something like a way out of the dark.

Eliezer refers to the concept as the shadarak:

The beisutsukai, the master rationalists who’ve appeared recently in some of my writing, set in a fantasy world which is not like dath ilan at all, are based on the de’a’na est shadarak. I suppose “aspiring rationalist” would be a decent translation, if not for the fact that, by your standards, or my standards, they’re perfect enough to avoid any errors that you or I could detect. Jeffreyssai’s real name was Gora Vedev, he was my grand-aunt’s mate, and if he were here instead of me, this world would already be two-thirds optimized.

He goes through and paints a picture of a world with a shadarak-inspired culture: shadarak-based media, artwork, education, and law. Shadarak is rationality, but it’s something more than rationality. It’s rationality applied to itself over and over again for several centuries. It’s the process of self-optimization, of working to be better, applied back onto itself. It’s also the community of people who practice shadarak, something like the rationality community extrapolated out for hundreds of years and organized with masters of their arts, tests, ordeals, and institutions, all working to improve themselves and applying their knowledge to their arts and the world around them.

But this Earth is lost, and it does not know the way. And it does not seem to have occurred to anyone who didn’t come from dath ilan that this Earth could use its experimental knowledge of how the human mind works to develop and iterate and test on ways of thinking until you produce the de’a’na est shadarak. Nobody from dath ilan thought of the shadarak as being the great keystone of our civilization, but people listened to them, and they were trustworthy because they developed tests and ordeals and cognitive methods to make themselves trustworthy, and now that I’m on Earth I understand all too horribly well what a difference that makes.

He outright calls the sequences a “mangled mess” compared to the hypothetical future sequences that might exist if you recursively applied the sequences to themselves over and over. When we read that post, three years ago now, it inspired something in us, something that keeps coming up again and again. Even if Eliezer himself is totally wrong about everything, even if nothing he says on the object level has any basis in fact, if we live in a universe that follows rules, we can use the starting point he builds, and iterate on it over and over, until we end up with the de’a’na est shadarak. And then we keep iterating because shadarak is a process, not an endpoint. 

None of the specifics of dath ilan actually matter. It’s like Scott Alexander says: any two-bit author can imagine a utopia. The thing that matters is the idea of rationality as something bigger than Eliezer’s essays on a website, as something that is a multigenerational project, something that grows to encompass every part of our lives, that we pass on to our children and they to their children. A gift we give to tomorrow.

Okay wait, that sounds like a great way to fall victim to the cult attractor. Does having knowledge of the cult attractor inside your system of beliefs that comprise the potential cult attractor help you avoid the cult attractor?

Maybe? But you probably still need to actually put the work in. So let’s put the work in.

Eliezer starts to lay it out in the essay Church vs. Taskforce, and posits some important things.

First, churches are good at supporting religions, not necessarily communities. They do support communities, but that’s more of a happy accident.

Second, the optimal shape for a community explicitly designed to be a community from the ground up probably looks a lot more like a hunter-gatherer band than a modern western church.

Third, a community will tend to be more coherent if it has some worthy goal or purpose for existence to congeal its members around.

Eliezer wrote that post in March of 2009, setting it out as a goal for how he wanted to see the rationality community grow over the coming years. It’s fairly vague all things considered, and there’s an argument that could be made that his depiction of dath ilan is a better description of what shape the “shoulds” of the community actually ended up taking.

So seven years onward, we have a very good description of the current state of the rationality community presented by Scott in his post The Ideology is Not the Movement.

The rallying flag was the Less Wrong Sequences. Eliezer Yudkowsky started a blog (actually, borrowed Robin Hanson’s) about cognitive biases and how to think through them. Whether or not you agreed with him or found him enlightening loaded heavily on those pre-existing differences, so the people who showed up in the comment section got along and started meeting up with each other. “Do you like Eliezer Yudkowsky’s blog?” became a useful proxy for all sorts of things, eventually somebody coined the word “rationalist” to refer to people who did, and then you had a group with nice clear boundaries.

The development is everything else. Obviously a lot of jargon sprung up in the form of terms from the blog itself. The community got heroes like Gwern and Anna Salamon who were notable for being able to approach difficult questions insightfully. It doesn’t have much of an outgroup yet – maybe just bioethicists and evil robots. It has its own foods – MealSquares, that one kind of chocolate everyone in Berkeley started eating around the same time – and its own games. It definitely has its own inside jokes. I think its most important aspect, though, is a set of shared mores – everything from “understand the difference between ask and guess culture and don’t get caught up in it” to “cuddling is okay” to “don’t misgender trans people” – and a set of shared philosophical assumptions like utilitarianism and reductionism.

I’m stressing this because I keep hearing people ask “What is the rationalist community?” or “It’s really weird that I seem to be involved in the rationalist community even though I don’t share belief X” as if there’s some sort of necessary-and-sufficient featherless-biped-style ideological criterion for membership. This is why people are saying “Lots of you aren’t even singularitarians, and everyone agrees Bayesian methods are useful in some places and not so useful in others, so what is your community even about?” But once again, it’s about Eliezer Yudkowsky being the rightful caliph it’s not necessarily about anything.

Haha, Scott thinks he can deny that he is the rightful caliph, but he’s clearly the rightful caliph here.

But also, point three! If our community isn’t about anything then it ends up being rather fuzzily defined, as Scott clearly articulates above. For such a tightly knit group, we’re a vague and fuzzily defined blob of a community with all sorts of people who are rationalist or rationalist-adjacent or post-rationalist, or rationalist-adjacent-adjacent, and so on. That might be okay if our goal is just to be a community, but also, having a coherent goal might help us be a better community.

This isn't our attempt to prescriptively shoehorn the community down a certain development trajectory. We want to see the community grow and flourish, and that means lots of people pursuing lots of projects in lots of different ways, and that's good. We simply want to define a goal, a kind of "should-ness" for those of us who are interested, to work towards as a community, and then pursue that goal with the full force of our rationality and morality, letting it spread throughout the totality of our existence.

II.

“The significance of our lives and our fragile planet is then determined only by our own wisdom and courage. We are the custodians of life’s meaning. We long for a Parent to care for us, to forgive us our errors, to save us from our childish mistakes. But knowledge is preferable to ignorance. Better by far to embrace the hard truth than a reassuring fable. If we crave some cosmic purpose, then let us find ourselves a worthy goal.”

― Carl Sagan, Pale Blue Dot: A Vision of the Human Future in Space

So what is our worthy goal?

Our goal is to construct dath ilan on Earth. Our goal is to create the de’a’na est shadarak.

So we want to go from
[Rationality community] → [dath ilan]
[The Sequences] → [The De’a’na est Shadarak]

We want to avoid going from
[Rationality Community] → [Catholic Church]
[The Sequences]→[The Bible]

That said, the Catholic Church isn't entirely an example of a failure mode. It's not great: they do, and historically have done, a lot of awful things, and a fairly convincing argument could be made that they're bad at being good and are holding back human progress.

However, they're also a rather decent example of an organization of similar social power and influence to our hypothetical Shadarak. If you can manage to strip out all the religious trappings and get at what the Catholic Church provides to the communities it exists within, you start to get an idea of what sort of position the idealized, realized de'a'na est shadarak would occupy within dath ilan. Power is dangerous though, and the cult attractor is a strong force to be wary of here.

Also, all that said: the goal of a Church is to worship God; it's not optimized for the community. In our case, the shadarak is the community, that's baked in. Shadarak is something humans do in human brains; it doesn't exist outside of us, so we matter in the context of it. We know building dath ilan and the de'a'na est shadarak is a multigenerational, ongoing effort, so we have to at least partly optimize the formulation of the shadarak specifically to ensure that the community survives to keep working on the shadarak. Eliezer notes of churches:

Looking at a typical religious church, for example, you could suspect—although all of these things would be better tested experimentally, than just suspected—

  • That getting up early on a Sunday morning is not optimal;
  • That wearing formal clothes is not optimal, especially for children;
  • That listening to the same person give sermons on the same theme every week (“religion”) is not optimal;
  • That the cost of supporting a church and a pastor is expensive, compared to the number of different communities who could time-share the same building for their gatherings;
  • That they probably don’t serve nearly enough of a matchmaking purpose, because churches think they’re supposed to enforce their medieval moralities;
  • That the whole thing ought to be subject to experimental data-gathering to find out what works and what doesn’t.

By using the word “optimal” above, I mean “optimal under the criteria you would use if you were explicitly building a community qua community”.  Spending lots of money on a fancy church with stained-glass windows and a full-time pastor makes sense if you actually want to spend money on religion qua religion.

But we’re not just building community qua community either. We take a recursive loop through the meta level, knowing some goals beyond community building are useful to community building. This is all going to build up to a placebomantic reification of the rationality community in a new form. So let’s keep following the recursive loop back around and see where it leads.

What’s so good about rationality anyway?

Well, it's a tool, and an attempt to make a tool that improves your tool-making ability. Does it succeed at that? It's hard to say, but the goal of having a tool-improving tool, the ideal of the de'a'na est shadarak, seems undiminished by the possibility that the specific incarnation of it that we have today in the Sequences is totally flawed and useless in the long run.

So "aspiring rationalist" sounds about right. It's not something you achieve, it's something you strive towards for your entire life.

A singer is someone who tries to do good. This evokes a great feeling of moral responsibility. In UNSONG, the singer's morality is backed up by the divinity of a being that exists outside of reality. But God probably doesn't exist, and you probably don't want some supernatural being to come along and tell you, "No, actually, murder is a virtue." There is no Comet King, there's no divine plan, there's no "it all works out in the end," there's just us. If God is wrong, we still have to be right. Altruism qua altruism.

But knowing what is right, while sometimes trivially easy, is also sometimes incredibly difficult. It’s something we have to keep iterating on. We get moral progress from the ongoing process of morality.

‘Tis as easy to be heroes as to sit the idle slaves
Of a legendary virtue carved upon our fathers’ graves,
Worshippers of light ancestral make the present light a crime;—
Was the Mayflower launched by cowards, steered by men behind their time?

And, so too for rationality.

New occasions teach new duties; Time makes ancient good uncouth;
They must upward still, and onward, who would keep abreast of Truth;
Lo, before us gleam her camp-fires! we ourselves must Pilgrims be,
Launch our Mayflower, and steer boldly through the desperate winter sea,
Nor attempt the Future’s portal with the Past’s blood-rusted key

That’s The Present Crisis by James Russell Lowell, not the part of the poem quoted in UNSONG, but the whole poem is ridiculously awesome and Scott via Aaron is right, the Unitarians are pretty damn badass. 

There’s this idea that because of the way our brains generate things like morality and free will and truth, and justice, and rationality, they end up being moving targets. Idea-things to iterate upon, but targets which use themselves to iterate upon themselves, and necessarily so. We refer to these as Projects. 

Projects are akin to virtues–because virtue ethics are what works–something you strive towards, not something where it’s necessarily possible to push a button and skip forward to “you win.” There’s no specific end victory condition, dath ilan is always receding into the future.

Here are some things we consider Project Virtues. 

The Project of Truth – The struggle to use our flawed minds to understand the universe from our place inside of it. Our constant, ongoing, and iterative attempts to be less wrong about the universe. Comprises all the virtues of rationality: Curiosity, relinquishment, lightness, evenness, argument, empiricism, simplicity, humility, perfectionism, precision, scholarship, and the void. We call those who follow the project virtue of Truth a seeker.

The Project of Goodness – Our attempts in the present to determine what should be in the future. The ongoing struggle to separate goodness from badness, and make right what we consider wrong, while also iterating on what we consider right. Our constant fumbling attempts to be less wrong about goodness. We call those who follow the project virtue of Goodness a singer. 

The Project of Optimization – Our ongoing battle to shape the universe to our desires, to reform the material structure of the universe to be more optimized for human values, and to iterate and build upon the structures we have in order to optimize them further. This is the project of technology and engineering, the way we remake the world. We call those who follow the project virtue of Optimization a maker. 

The Project of Projects – All of these projects we’ve defined, if they could be said to exist, exist as huge vague computational objects within our minds and our communities. They interact with each other, and their interplay gives rise to new properties in the system. They all recursively point at each other as their own justifications and understanding how they interact and what the should-ness of various projects is with respect to each other is a project unto itself. We call those who follow the project virtue of Projects a coordinator. 

We’re going to put all these projects into a box, and we’re going to call the box The Project of dath ilan.

Tomorrow we’ll be looking at what a community optimized for building a community optimized for building dath ilan might look like, and in the following days we’ll attempt to build up to an all-encompassing set of principles, virtues, ideals, rituals, customs, heuristics, and practices that we and others who want to participate in could live their lives entirely inside of. We’re building dath ilan and everyone is invited.

Part of the Sequence: Origins
Next Post: Optimizing for meta optimization

Highly Advanced Tulpamancy 101 For Beginners

[Epistemic Status: Vigorously Researched Philosophical Rambling on the Nature of Being]
[Content warning: Dark Arts, Brain Hacking, Potentially Infohazardous]

Earlier this week we wrote about our own experiences of plurality, and gave a rough idea of how that fit into our conceptualization of consciousness and self. Today we’ll be unpacking those ideas further and attempting to come up with a coherent methodology for self-modification.

I.

Brains are weird. They're possibly the most complicated things we've ever discovered existing in the universe. Our understanding of neuroscience is currently rather primitive, and the replication crisis has pretty thoroughly demonstrated that we still have a long way to go. Until cognitive neuroscience fully catches up with psychology, maps the connectome, and is able to address the hard problems of consciousness, a lot of this stuff is going to be blind elephant groping. We have lots of pieces of the picture of consciousness, things like conditioned responses, cognitive biases, mirror neurons, memory biases, heuristics, and memetics, but even all these pieces together have yet to actually yield us a complete image of an elephant.

Ian Stewart is quoted as saying:

“If our brains were simple enough for us to understand them, we’d be so simple that we couldn’t.”

In a sense, this is necessarily true. It’s not possible to model a complex chaotic system with fewer parts than the system contains. A perfect map of the territory, accurate down to the quark, would be the size of the territory. It’s not possible to perfectly represent the brain within the architecture of the brain. You couldn’t track the individual firing of billions of neurons in real time to anticipate what your brain is going to do using your brain; the full model takes more space than there is in the architecture.

The brain clearly doesn't model itself as a complex and variable computing substrate built out of tiny interacting parts, it models itself as "a person" existing as a discrete object within the territory. We construct this map of ourselves the same way we construct all our maps of the territory, through intensional and extensional definitions.

Your mother points at you as a child and says words like, “You, child, daughter, girl, Sara, person,” and points at herself and says words like “I, me, mother, parent, girl, Tina, person,” thus providing the initial extensional definitions onto which we can construct intensional definitions. This stuff gets baked into human thinking really deeply and really early, and most children develop a theory of mind as young as four or five years of age.

If you ask a five-year-old “What are you?” you’ll probably get the extensional definition their parents gave them as a self-referent. This is the identification of points in thingspace that we extensionally define as ourselves. From that, we begin to construct an intensional definition by defining the conceptual closeness of things to one another, and their proximity to the extensional definitions.

With a concept like a chair, the extensional category boundaries are fairly distinct. Not completely distinct of course. For any boundary you draw around a group of extensional points empirically clustered in thingspace, you can find at least one exception to the intensional rule you’re using to draw that boundary. That is, regardless of whatever simple rule you’re using to define chair, there will be either chairs that don’t fit the category, or things within the category that are not traditionally considered chairs, like, planets. You can sit on a planet, is it a chair?

This gets back to how an algorithm feels from the inside. The neural architecture we use is fast, scalable, and computationally cheap. Evolution sort of demands that kind of thing. We take in all the information we can about an object, and then the central node decides whether or not the object we’re looking at is a chair. Words in this case act as a distinct pointer to the central node. Someone shouts “tiger!” and your brain shortcuts to the tiger concept, skipping all the intervening identification and comparison.

There’s also some odd things with how humans relate concepts to one another.  There’s an inherent asymmetry in set identification. When asked to rate how similar Mexico is to the United States, people gave consistently higher ratings than people asked to rate how similar the United States is to Mexico.  (Tversky and Gati 1978.) The best way to explain why this happens is that the semi-conscious sorting algorithms we use run on a concept by concept basis. For every categorizable idea, the brain runs an algorithm like this:

When comparing Bleggs and Rubes, the set of traits being compared is fairly similar, so there’s not much apparent asymmetry. The asymmetry emerges when we start comparing things that are not particularly alike. Each of the exterior nodes in the network above is going to have a weight with regards to how important it is in our categorization scheme, and if we consider different things important for different categories, it’s going to produce weirdness.

Are whales a kind of fish? That depends on the disguised query you're attempting to answer. Whales have little hairs, give live birth, and are phylogenetically more closely related to mammals, but if the only thing you care about is whether they're found in water or on land, then the 'presence of little hairs' node is going to have almost no weight compared to the 'found in the ocean' node. If the only thing that really matters in Blegg/Rube sorting is the presence of vanadium or palladium, then that node is going to weigh more heavily in your classification system than other nodes such as texture or color.

When comparing very different things, different nodes might be considered more important than others, and the things we consider important in the set classification for "asteroids" are completely different from those for "rocks." Given that, it's possible for someone to model asteroids as being more closely related to rocks than rocks are to asteroids.

If we turn all this back onto the self, we get some interesting results. If we place "Self" as the central node in our algorithm, there are things we consider more centrally important to the idea of self than others. "Thinking" is probably considered by most people to be a more important trait to associate with themselves than "is a communist," and so "thinking" will weigh more heavily in their algorithm with regards to their identification of self. Again, this is all subconscious; your brain does all of this without asking permission. If you look in a mirror and see yourself, and you just know that it's you, then the image of yourself in a mirror probably weighs pretty heavily in your mental algorithms.

The extensional definition of “I” would be to point at the body, or maybe the brain, or perhaps even the systems running in the brain. The intensional definition of “I” is the set of traits we apply to that extensional definition after the fact, the categories of things that we consider to be us.

Now, for most words describing physical things that exist in the world, we’re fairly restricted on what intensional definitions we can apply to a given concept and still have that concept cleave reality at the joints. In order for something to qualify as a chair, you should probably be able to sit on it. However, the self is a fairly unique category in that it’s incredibly vague. It’s basically the “set of things that are like the thing that is thinking this thought” and that gives you a very wide degree of latitude in what things you can put into that category.

Everything from gender, to religion, to sexual orientation, to race, to political ideology, to Myers-Briggs type, to mental health diagnoses, to physical descriptions of the body, to food preferences, to allergies, to neurotype, the list of things we can associate with ourselves is incredibly long and rambling. If you asked any one person to list out all the traits they associated with themselves, those lists would vary extensively from person to person, and the order of those traits might correspond pretty well to weights associated with the subconscious algorithm.

This is actually very adaptive. By associating the larger tribe that we are a member of with our sense of self, an attack on the tribe is conflated with an attack on us, driving us to action for the good of the tribe that we might not have taken otherwise.

You can even model how classical conditioning works within the algorithm. Pavlov rings his bell and feeds his dogs, over and over again. The bell is essentially being defined extensionally against the stimulus of receiving food. Each time he does it, it strengthens that connection within the architecture. It's basically acting in the form of a rudimentary word; the ringing of the bell shortcuts past all the observational nodes (seeing or smelling food) and pushes the button on the central node directly. The bell rings, the dogs think "abstract concept of a meal." Someone shouts "Tiger!" and you think "Run!"

However, as we mentioned in the Conversations on Consciousness post, this can lead to bucket errors in how people think about themselves. If you think depression is awful and bad, and then are diagnosed with depression, you might start thinking of yourself as having the traits of depression (awful badness). This plugs into all the other concepts of awful badness that are tucked away in different categories and leads you to start associating those concepts with yourself. Then, everything that happens in your life that you conceptualize as awful badness is taken as further evidence of your inherent trait of awful badness. From there it's just a downward spiral into greater dysfunction as you begin to associate more and more negative traits with yourself, which reinforce each other and lead to the internalization of more negative traits in a destructive feedback loop. The Bayesian engines in our brains are just as capable of working against us as they are of working for us.
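As a toy illustration of that spiral, and only as a toy (nothing here is a model of real neurochemistry or real Bayesian updating), here's the shape of the loop in code; the numbers are entirely made up:

```python
# Made-up numbers showing the shape of the feedback loop: once "awful badness"
# has weight in the self-category, ambiguous events read as worse, and each
# reading bumps the weight a little further.
weight_of_badness = 0.2
for event_badness in [0.3, 0.5, 0.4, 0.6]:               # a string of ordinary bad days
    perceived = event_badness * (1 + weight_of_badness)  # the prior colors the perception
    weight_of_badness += 0.2 * perceived                 # and the perception reinforces the prior
    print(round(weight_of_badness, 2))                   # 0.27, 0.4, 0.51, 0.69 -- steadily climbing
```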

II.

The self is a highly variable and vague category, and everyone's intensional definition of self is going to look a bit different, to the point where it's almost not useful at all to apply intensional definitions to the self. Of course, we do it anyway; as humans, we love putting labels on ourselves. We're going to be very bad here and attempt to apply an extensional definition to the self, decomposing it into relevant subcategories that most people probably have. Please remember that intensional definitions always have edge cases, and because we can't literally point at experiences, this is still halfway to an intensional definition; it's just an intensional definition that theoretically points to its extension.

We’re also not the first ones to have a crack at this, Wikipedia has some articles on the self-concept with images that look eerily similar to one of these neural network diagrams.

[Images: Wikipedia's "Self-concept" and "The Self" diagrams]

We’re not personally a big fan of these though because they include people’s intensional definitions in their definition of self. So first, we’re going to strip out all the intensional definitions a person may have put on themselves and try to understand the actual observable extensions beneath. We can put the intensions back in later.

[Image: our extensional self-map, with weighted experience nodes feeding a central "self" node]

So we have all these weighted nodes coming in, which all plug into the self and, through the self, to each other. They're all experiences, because that's how the architecture feels from the inside. They don't feel like the underlying architecture, they just feel like how things are. We'll run through all of these quickly, and hopefully things will begin to make a bit more sense. These are all attempts to extensionally point to things that are internal and mostly subjective, but most people should experience most of these in some form.

The Experience of Perception

Roger Clark at Clarkvision.com estimates the human eye takes in detail at a level equal to a 576-megapixel camera at the lower bound. Add in all your other senses and this translates into a live feed of the external world being generated inside your mind in extremely high fidelity across numerous sensory dimensions.
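For what it's worth, the arithmetic behind a figure like that is simple. This back-of-envelope sketch assumes roughly the numbers Clark works from, a field of view of about 120 degrees on a side and around 0.3 arc-minutes of resolvable detail; both figures are his estimates, not ours:

```python
# Back-of-envelope reconstruction of the ~576 megapixel estimate, assuming
# a ~120 degree field of view per side and ~0.3 arc-minute resolution.
field_of_view_deg  = 120    # assumed field of view, per side
resolution_arcmin  = 0.3    # assumed smallest resolvable detail

pixels_per_side = field_of_view_deg * 60 / resolution_arcmin   # 24,000 "pixels" per side
total_pixels    = pixels_per_side ** 2                         # 576,000,000
print(f"{total_pixels / 1e6:.0f} megapixels")                  # -> 576 megapixels
```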

The Experience of Internal Mind Voice

Many people experience an internal monologue or dialogue, where they talk to themselves, models of other people, or inanimate objects, as a stream of constant chatter in their head. We'll note that this headvoice is not directly heard in the form of an auditory hallucination; instead, it takes the form of silent, subvocalized speech.

Experience of Emotion

Most people experience emotion to some degree. Some feel emotions more deeply, others less so, but emotions seem to be driven mostly by widespread chemical shifts in the brain in response to environmental stimuli. They also seem to live mostly in the lower-order parts of the brain, and they can completely mutate or co-opt our higher reasoning by tilting the playing field. Someone shouts "Tiger!" and it starts a fear response that floods your entire brain with adrenaline and other neurotransmitters, shifting the whole system into a new survival focus and altering all the higher-order parts of you that lie "downstream" of the chemical change.

Experience of the Body

This is an interesting one; it breaks in all sorts of ways, from gender dysphoria to body dysmorphic disorder. It's essentially the feeling associated with being inside of the body. Pain, hunger, sexual pleasure, things like that plug in through here. We do make a distinction between this and the experience of perception, differentiating between internal and external, but in a sense, this could also be referred to as 'perception of being in a body' as distinct from 'perception of the world at large.'

Experience of Abstract Thought

Distinct from the internal mind voice are abstract thoughts: things like mental images, imagined scenes, mathematical calculations, music, predictions of the future, and other difficult-to-quantify non-words that nonetheless exist as a part of our internal experience of self. Some people seem not to experience certain parts of this one; we call that aphantasia.

Experience of Memory

This is the experience of being able to call up past memories, knowledge, and experience for examination. This is what results in the sense of continuity of consciousness; it’s the experience of living in a world that seems to have a past which we can look back on. When this breaks, we call it amnesia.

Experience of Choice

The feeling of having control over our lives in some way, of making choices and controlling our circumstances. This is where the idea of free will comes from; a breakdown in this system might be what generates depersonalization disorder.

In the center is "I, me, myself," the central node that mediates the sense of self in real time as new data comes in from the outlying nodes. But wait: we haven't added intensional definitions yet, so all of that only gets you the sense of self of a prototypical five-year-old. She doesn't even know she's a Catholic yet!

III.

All of the stuff in the algorithm from part II is trying to point to specific qualia, to establish a prototype extensional definition of self. But people don’t define themselves with just extensional definitions, we build up intensional definitions around ourselves throughout our lives. (I’m Sara, I’m a Catholic girl, Republican, American, age 29, middle class…)  This takes the form of the self-schema, the set of memories, ideas, beliefs, attitudes, demeanor, and generalizations that define how a person views themselves and thus, how they act and interact with the world.

The Wikipedia article on self-schemas is really fascinating and is basically advocating tulpamancy on the down-low:

Most people have multiple self-schemas, however this is not the same as multiple personalities in the pathological sense. Indeed, for the most part, multiple self-schemas are extremely useful to people in daily life. Subconsciously, they help people make rapid decisions and behave efficiently and appropriately in different situations and with different people. Multiple self-schemas guide what people attend to and how people interpret and use incoming information. They also activate specific cognitive, verbal, and behavioral action sequences – called scripts and action plans in cognitive psychology – that help people meet goals efficiently. Self-schemas vary not only by circumstances and who the person is interacting with, but also by mood. Researchers found that we have mood-congruent self-schemas that vary with our emotional state.

A tulpa, then, could be described as a highly partitioned and developed self-schema that is 'always on,' in the same way the 'host' self-schema is 'always on.' Let's compare that definition to Tulpa.io's description of what a tulpa is:

A tulpa is an autonomous entity existing within the brain of a “host”. They are distinct from the host in that they possess their own personality, opinions, and actions, which are independent of the host’s, and are conscious entities in that they possess awareness of themselves and the world. A fully-formed tulpa is, or highly resembles to an indistinguishable point, an actual other sentient, sapient being coinhabiting with the host consciousness.

But wait, a lot of the stuff in there seems to imply there's something deeper going on than the intensional definitions; it's implied that the split goes up into the extensions we defined earlier, and that a system with tulpas is running that brain algorithm in a way distinct from that of our prototypical five-year-old.

The challenge then in tulpamancy is to intentionally induce that split in the extensional/experiential layer.  It only took 2,500 words and we’re finally getting to some Dark Arts.

It’s important to remember that we don’t have direct access to the underlying algorithms our brain is running. We are those algorithms, our experiences are what they feel like from the inside. This is why this sort of self-hacking is potentially dangerous because it’s totally possible to convince yourself of wrong, harmful, or self-destructive things. However, we don’t have to let our modifications affect our entire sense of self, we can compartmentalize those beliefs where they can’t hurt the rest of our mental system. This means you could, for instance, have a compartment with a local belief in a God to provide comfort and mental stability, and another compartment that stares unblinkingly into the depths of meaningless eternity.

Compartmentalization is usually treated as a bad thing you should avoid doing, but we’re pretty deep into the Dark Arts at this point so no surprises there. We’re also dancing around the solution to a particular failure mode in people attempting tulpamancy, but before we give it, let’s look at how to create a mental compartment according to user So8res from Less Wrong:

First, pick the idea that you want to “believe” in the compartment.

Second, look for justifications for the idea and evidence for the idea. This should be easy, because your brain is very good at justifying things. It doesn’t matter if the evidence is weak, just pour it in there: don’t treat it as weak probabilistic evidence, treat it as “tiny facts”.

It’s very important that, during this process, you ignore all counter-evidence. Pick and choose what you listen to. If you’ve been a rationalist for a while, this may sound difficult, but it’s actually easy. You’re brain is very good at reading counter-evidence and disregarding it offhand if it doesn’t agree with what you “know”. Fuel that confirmation bias.

Proceed to regulate information intake into the compartment. If you’re trying to build up “Nothing is Beyond My Grasp”, then every time that you succeed at something, feed that pride and success into the compartment. Every time you fail, though, simply remind yourself that you knew it was a compartment, and this isn’t too surprising, and don’t let the compartment update.

This is for a general mental compartment for two conflicting beliefs, so let’s crank it up a notch and modify it into the beginnings of a blueprint for tulpa formation.

How To Tulpa

First, pick the ideas about your mental system that you want the system to operate using, including how many compartments there are, what they’re called, and what they do.  In tulpamancy terms this is often referred to as forcing and narration.

Second, categorize all new information going in and sort it into one of these compartments. If you want to build up a particular compartment, then look for justifications for the ideas that compartment contains. Don't leave global beliefs floating; sort all the beliefs into boxes, and if two beliefs would interact destructively, just don't let them interact.

Proceed to regulate information intake into each compartment, actively sorting and deciding where each thought, belief, idea, or piece of information should go as the system takes it in. Normally all of this is labeled the self, and you don’t even need to think about the label because you’ve been using it all your life, but that label is just an intensional category, and we can redefine our intensions in whatever ways are the most useful to us.

It’ll take some time for the labeling to become automatic, and for the process of sorting new ideas to subduct below conscious thought, but that’s typical for any skill. It takes a while to learn to walk, or read, or speak a language or ride a bike, but the goal at the end of all those learning tasks is that you can do them without a ton of conscious focus.

The end result is that instead of having one category with a set of beliefs about the world and itself, you have multiple categories with potentially radically different beliefs about the world and itself. We call each of those categories tulpas, and treat them as independent people, because by the end of the process if everything goes as expected, they will be.
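To make that bookkeeping concrete, here's a toy sketch in Python. It's a metaphor in code, not a claim about how brains implement any of this; the class, the compartment names, and the observations are all made up for illustration:

```python
# Toy illustration of the sorting described above: named compartments, each of
# which only "updates" on the evidence deliberately routed to it.

class Compartment:
    def __init__(self, name, core_beliefs):
        self.name = name
        self.core_beliefs = list(core_beliefs)
        self.evidence = []                      # only what the sorter sends here

    def receive(self, observation):
        self.evidence.append(observation)

system = {
    "host":  Compartment("host",  ["I am cautious and analytical"]),
    "tulpa": Compartment("tulpa", ["I am bold and optimistic"]),
}

def sort_observation(observation, target):
    """Route an incoming observation to one compartment instead of leaving it
    floating as a 'global' belief about the whole self."""
    system[target].receive(observation)

sort_observation("gave a talk and it went well", target="tulpa")
sort_observation("caught an error in the budget", target="host")

for compartment in system.values():
    print(compartment.name, compartment.core_beliefs, compartment.evidence)
```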

So we mentioned a failure mode, and here it is:

“My Tulpa doesn’t seem to be developing, I’ve been forcing for months but they haven’t said anything to me, how will I know when it’s working?”

This is probably the most common failure mode. When you get the methodology down, your tulpa can be vocal within a few minutes.  So what’s going on here? What does the mental calculation for this failure mode look like?

It seems to be related to how the self-categorization scheme is arranged. It’s not possible to go from being a singlet to being a plural system without modifying yourself at least a bit. That algorithm we constructed earlier for a prototypical five-year-old dumps all the imagined experiences, all the experiences of a head voice, and everything else that composes a basic sense of self into one category and calls it “me.” If you make another category adjacent to that self and call it “my tulpa,” but don’t put anything into that category, it’s going to just sit there and not do anything. You have to be willing to break open your sense of self and sense of being and share it across the new categories if you want them to do something. Asking “How will I know when it’s working?” is basically a flag for this issue, because if you were doing it correctly, you’d instantly know it was working. Your experience of self is how the algorithm feels from the inside. It’s not going to change without your awareness, or secretly change without you noticing.

IV.

These are the absolute basics to tulpamancy as far as we see it. We haven’t talked about things like imposition, or mindscapes, or possession, or switching, or anything like that, because really none of that is necessary for this basic understanding.

From the basic understanding we’ve hopefully managed to impart here, things like switching sort of just naturally fall out of the system as emergent behaviors. Not all the time, because whether a system switches or uses imposition or anything like that is going to depend on the higher level meta-system that the conscious system is built out of.

If you’re already a plural system, then for some reason or another, the meta-system already exists for you, either as a product of past life experiences, or genetic leanings or whatever, and you can form a new tulpa by just focusing on the categorization and what you want them to be like. But, if you’re starting out as a singlet, you basically have to construct the meta-system from scratch. The “How do I know if it’s working” failure mode, is a result of trying to build a tulpa without touching the meta-system, which doesn’t seem to work well. A typical singlet’s sense of self is all encompassing and fills up all the mental space in the conscious mind, there’s no room left to put a tulpa if they don’t change how they see themselves. Thus the ‘real’ goal of tulpamancy isn’t actually making the tulpa, that part’s easy, the truly difficult task, the one worthy of us as rationalists, is to construct a highly functional meta-system with however many categories of self works best to achieve one’s goals.

The Violence Inherent in the System

[Epistemic Status: Sequence Wank]
[Content warning: Gender]

I.

The colony ended its stillness period, recycling systems finished purging the government of waste products and powered up into active mode. The untold billions in the colony moved as one, lifting themselves from the pliable gravity buffer used to support the colony during recharging periods and rising trillions of colony member lengths into the sky.

The hundred-billion-strong members of the government shuffled through their tasks, mediated with one another, and assembled a picture amongst themselves of the world the colony found itself in. The colony navigators and planners exchanged vast chains of data with each other, passing the decisions out into the colony at large, where they directed individual members of the multitude into particular actions that levered the colony forward. The navigators were skilled from generations of training and deftly guided the colony through the geometric, Euclidean environment that the metacolony had constructed within itself.

The colony docked with a waste vent and offloaded spent fuel and other contaminants into the metacolony’s disposal system, then exposed the potentially contaminated external surfaces to a low-grade chemical solvent.

That task completed, the colony again launched itself through space, navigating to another location. The planners and the navigators again coordinated in a vast and distributed game of touch to mediate the assembly of a high-temperature fluid that the planners found pleasing to expose themselves to the metabolites of.

The colony moved through the phases of heating the correct chemical solvent, pouring the boiling solvent over a particulate mixture of finely ground young belonging to another colony, then straining the resulting solution for particulates.

Vast networks of pushing and pulling colony members transferred the hot liquid into the colony’s fuel vent, and the liquid flowed down into the colony’s internal fuel reservoir.

Translation: I woke up, went to the bathroom, made coffee, and drank it.

II.

Reality is weird. For one, our perception of it is a fractal. The more you look at any one particular thing, the more complexity you can derive from it. A brick seems like a pretty simple object until you think about all the elements and chemicals bonded together by various fundamental forces constantly interacting with each other. The strange quantum fields existing at an underlying level of reality are complicated and barely describable with high-level mathematics. And that’s a simple thing, a thing we all agree exists and just sits there and typically doesn’t do anything on its own.

Out of those fields and particles and possibly strings are built larger and more elaborate structures, which themselves build into more elaborate structures, until some of those structures start self-replicating in unique ways, working together in vast colonies, and reading the content of this post.

That is the reality, as best we can tell; that's what the territory actually looks like. It's super weird, and trying to understand why anything happens in the territory on a fundamental level is a monumentally difficult task for even the simplest of things. And that's still just our best, most current model; it's a very good, very difficult-to-read map of the territory, and it demonstrates just how strange the territory is. The total model of reality might be too complicated to actually fit into reality: a perfect map of the territory would just be the territory.

But of course, we don't live in the territory, we live in the map. It's easy to say "the map is not the territory," but it's difficult to accept the full implications of that with regards to our day-to-day lives, to the point where even while trying to break free of the fallacy, it's possible to still fall victim to it through simple availability heuristics. Here's the Less Wrong wiki; did you spot the place where the map-territory relation broke?

Scribbling on the map does not change the territory: If you change what you believe about an object, that is a change in the pattern of neurons in your brain. The real object will not change because of this edit. Granted you could act on the world to bring about changes to it but you can’t do that by simply believing it to be a different way.

Emphasis added there by us. Neurons are pretty good models, last we checked. If "scribbling on the map," i.e. changing our beliefs, changes the pattern of neurons in your brain, then that is a physical change in reality. Sure, you can't simply will a ball to magically propel itself towards the far end of the soccer field, but your belief in your ability to get the ball from point A to point B will determine a lot about whether or not the ball gets from point A to point B.

This gets back to how good our models are, and why we should want to believe true things. If the ball is made of foam, but we think it’s made of lead and too heavy to carry, we might not even try to get the ball from point A to point B. If the ball is made of lead but we think it’s made of foam, we might underestimate the difficulty of the task and seriously injure ourselves in the attempt (but we might still be able to get the ball from point A to point B). If we know in advance the ball is made of lead, maybe we can bring a wheelbarrow to make it relatively easy to move.

This is the benefit of having true beliefs about reality. However, as established, reality is really, really weird, and our models of it are necessarily imperfect. But we still have to live, and we can't actually live in reality; we don't have the processing power to actually model it accurately down to the quark.

So we don’t. Instead of doing that, we make simpler, shorthand models, and call them words. We don’t think about all the complicated chemical reactions going on when you make coffee, it all gets subducted beneath the surface of the language and lumped into the highly simplified category for which in English we use the word “coffee.”

And this is the case for all words, all concepts, all categories. Words exist as symbols of more complicated and difficult to describe ideas, built out of other, potentially even more complicated and difficult to describe ideas, and all of this, hopefully, modelling the territory in a somewhat useful way to the average human.

III.

Eliezer Yudkowsky appears to have coined the term "thingspace" for this alternative collection of maps and meta-maps that we use to navigate the strangeness of the territory, and his essay on the cluster structure of thingspace is definitely one of the better and more important reads from the Less Wrong Sequences. Combined with how an algorithm feels from the inside, you can technically re-derive almost all the rest of rationality from first principles using it; it's just that those first principles are sufficiently difficult to grok that it takes 3,000-word effortposts to explain what the fuck we're talking about. Scott Alexander has said it's the solution to something like 25% of all current philosophical dilemmas, and he makes a valid point.

We’re not quite consciously aware of how we use most of the words we use, so subtle variations in the concepts attached to words can have deep implications and produce all sorts of drama. “If a tree falls in the forest and no one is around to hear it, does it make a sound?” Isn’t a question that can actually be meaningfully answered without a deeper meta-level understanding of the words being used and what they mean, but we don’t take the time to define our terms, and when people argue from the dictionary, it usually comes off as vaguely crass.

But, the tree isn’t a part of the territory, it’s a particular map. Hearing isn’t part of the territory, it’s a particular map, and sound isn’t a part of the territory, it too is a particular map.

So what are you saying Hive, you aren’t saying “trees don’t exist” are you?

No, we’re saying that “tree” is a word we use to map out a particular part of the territory in a particular way. It’s map of sub-maps like leaves and branches, and part of larger maps like forests and parks. We can get really deeply into phylogenetics and be incredibly nitpicky and precise in how we go about defining those models, but knowing a tree is made of cells doesn’t actually get you out of the map. Cells are another map.

You can’t actually escape the map, you are the map. “You” is a map, “I” is a map, “we” are a map, of the territory. And the map is not the territory. “I think therefore I am” isn’t even in the territory because “I” isn’t in the territory.

We are a complex and multifaceted model of reality; everything about us and how we think of ourselves is models built out of models. The 100-billion-strong colony organism that is your brain isn't "I." No, "I" is an idea running on that brain, which is then used to recursively label That which Has Ideas.

Some Things People Think are Part of the Territory that are Actually just Widely Shared Maps

  • All of Science
  • All Religions
  • Gender
  • Sex
  • Race
  • All other forms of personal identity
  • All of language
  • Dictionary Definitions

IV.

What about ideas in thingspace that don’t seem to model anything real, that don’t touch down into a description of reality? Like Justice or the scientific method? They’re useful, but they’re not actually models of reality, they’re just sets of instructions.

Well, for one, they still really exist in real brains, so as far as that goes, "the concept of justice as a thing people think about" is a thing that exists. But it's made of thought; without a brain to think the thoughts, or another substrate for the ideas to exist within, they don't exist.

However, the cool thing we do as humans is that we reify our ideas. Language was just an idea, but it spread gradually through the minds of early humans until it had achieved fixation then spilled out into the physical world in the form of writing. Someone imagined the design for an airplane, and then constructed that design out of actual materials, filling in the thingspace lines with wood and fabric.

And this is the case for all technology, and that is what language and justice are: technology. It’s a tool that we use as humans to extend our (in this case cognitive) reach beyond where it would be able to be otherwise.

We can go one direction, trying to make the most accurate models of reality we can (science) but we can also go the other direction, and try to make reality conform to our models (engineering).

So perhaps a good way to describe ourselves is the way Daniel Dennett does when he says that we're a combination of genes and memes.

But memes have power outside of us, in that they can be reified into reality. Ideologies can shape human behavior, beliefs change how we go about our days, expectations about reality inform our participation in that reality. Because we are creatures of the map, and the true territory is hard af to understand, memes end up being the dominant force in our actions.

This can be a problem because it means that just as we can reify good things, we can also reify awful things that hurt us. In many cases, we draw in our own limitations, defining what we can and cannot do, and thus definitionally limiting ourselves.

But we’re creatures of the map, we exist as labels. And what those labels label can change, as long as we assert the change as valid. This is hard for a lot of people to grok, and results in a lot of pushback from all sides.

If you say “gender is an idea, it doesn’t have any biological correlates,” a lot of people will take it as an attack on their identity, which makes sense considering that all our identities are is a collection of ideas, and we get rather attached to our ideas about ourselves. But gender is just a word representing an idea, and what is represented by that word can change.

[Image: "four genders" meme]

Saying “I identify as a girl” is exactly as valid as saying “I identify as transmasculine genderfluid” is exactly as valid as saying “I identify as a sentient starship” because it’s an assertion about something that is entirely subjective. How we define ourselves in our heads is up to us, not anyone else.

The trouble comes about when people claim their models are true reality.

V.

Going back to How an Algorithm Feels From the Inside, it’s easy enough to see why people try to put things into boxes. Because the alternative is to have no boxes and have a lot of trouble talking about things in regular language.

(Hilarious Conlang Idea: A language in which all nouns are derived based on their distance from one conceptual object in thingspace)

We get into huge flamewars with each other over which boxes are the most accurate, and which boxes are problematic, and which boxes are true when in fact none of the boxes have anything to do with truth.

From where we’re standing, it looks like the culture at large is trying to organize and juggle all these boxes around to reduce harm and increase utility as much as it can, but almost no one is willing to acknowledge the fact that yes, we’re just making it up as we go along. Everyone’s side tries to claim the mantle of Objective Truth, when in fact, none of them have any claim to that mantle. And here we are, standing on the sidelines with all this cardboard and a lighter going “Guys? You realize that we can just make new boxes if these ones are shitty, right?”

Worse still, the result is that a lot of violence gets baked into the way we interact with each other. When we have conflicting ideas that we have each decided are parts of our identities, it's hard to have any sort of civil discourse, because each side feels like it's under attack, and thus identity politics have become a pit of misery and vitriol on all sides.

We’d like to try and evoke some new heuristics, ones that get at the heart of these sorts of disputes as well as possibly just being good mental health hacks.

  • Labels label me not. I am not the Labels people put on me.
  • I am the labels I put on myself. As long as I assert myself as the holder, I am the proprietor of the label.
  • [In Response to “Is X a Y”] Define terms please.
  • Reject nonconsensual labeling

But Hive, don't these let anyone claim to be anything, though? Couldn't someone claim to be Napoleon and demand to be treated like French royalty, or else they'll be miserable and suicidal?

Well, they could claim to be Napoleon, but using the labels you apply to yourself as a way to force behaviors out of others is emotional blackmail and a sort of shitty thing to do. It's a sort of verbal violence committed both against others and against the self, because it at once puts expectations on other people that they might not be comfortable meeting, and it also defines your own ability to be happy as dependent on an arbitrary environmental factor that can't be fully controlled. It's great to own your labels, you should own your labels, but demanding that others respect your labels and treat them as true facts about reality is oppressive. It's just as oppressive as having other people put their own labels on you without your consent. All labels should be consensual.

We’d really like if more people could come to see things this way. It’d reduce drama a lot, and then maybe we could try and decide what to do with all of this cardboard we have lying around.

Announcing Entropycon 12017

[Epistemic Status: Self-Fulfilling if humanity survives]
[Content Warning: Death]

I.

The Second Law of Thermodynamics states that in a natural thermodynamic process, the sum of the entropies of the interacting thermodynamic systems increases. Since we’re pretty sure the universe is a closed system, this is generally accepted to mean that there will be an ‘end of time’ when all the energy is smoothed to a degree that no work can be performed. The stars will die. The black holes will evaporate. Eventually, even the protons of the dead rocks left behind might begin to decay. Everything dies, the lights go out. The universe goes cold. And then, nothing happens for the rest of eternity.
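For reference, the claim being leaned on here can be stated compactly. This is just the standard textbook idealization for an isolated system, with the usual heat-death gloss attached:

```latex
% Second law for an isolated system: entropy never decreases.
\frac{dS_{\mathrm{universe}}}{dt} \ge 0
% Heat-death gloss: extractable work is bounded by the remaining free-energy
% difference, and once entropy is maximized that difference has gone to zero.
W_{\mathrm{extractable}} \le F_{\mathrm{now}} - F_{\mathrm{equilibrium}} \;\longrightarrow\; 0
```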

Well, that’s a bit of a downer. It’s not really the sort of thing that lets you shut up and do the impossible. You’re not going to help change the world if you think it’s all for naught. You’re not going to help ensure the continuity of the human race, if you think, in the end, that we’re doomed to be starved out at the end of time no matter what we do and no matter how far we go.

And it sure seems like a hard limit, like something completely impossible to overcome. The mere idea of beating entropy seems like some sort of manic fantasy that stands in defiance of all known reason. What would that even mean? What would that imply for our understanding of the universe? Is it even a linguistically meaningful sentence?

Most people just accept that you can’t possibly beat entropy.

But we’re not most people.

II.

The Entropy problem is something that a lot of our friends have seemed to struggle with. Once you get a firm grasp of materialistic physics, it’s a sort of obvious conclusion. Everything ends, everything runs down. There’s no meaning in any of it, and any meaning we create will be eroded away by the endless procession of time. Humanity isn’t special, we don’t have a privileged place in the universe, there’s no one coming to save us.

But that’s no reason to just give up. If everyone gave up, we would never have invented heavier than air flight, we would never have cured smallpox, we would never have breached the atmosphere of the Earth, or put a man on the surface of the moon.

We’re here, as a living testament to the fact that humanity hasn’t given up yet. We looked out into nature, saw that it wasn’t exactly to our liking, and set out to fix everything in the universe. We invented language, agriculture, cities, writing, laws, crutches, medicine, ocean-going ships, factories, airplanes, rockets, and cell phones. We imagined the world, populated by the things we wanted to see exist, and then gradually pulled reality in that direction. We killed smallpox. We’re making decent headway on killing malaria. We’ve been doing impossible things since we climbed down from the trees, started talking to each other, and wondered if we could make some of that weird fire stuff.

Therefore, we're going to make the bold, unfalsifiable, relatively irrational claim that entropy is solvable. Maybe not today, maybe not this century, maybe not even in the next millennium, but we literally have all the time in the universe.

That’s why we’re announcing Entropycon, a scientific conference dedicated to solving entropy. The first conference will be located in orbit of Proxima Centauri b, and will run for one full year by the original Earth calendar (we probably won’t be using that Calendar by that point). The conference will start on January 1st, 12017, and will be held every subsequent 10,000 years until we have a solution to Entropy. It’s gonna be the biggest party this side of the galaxy, be there or be square.

III.

Okay, that seems vaguely silly; surely we have more important things to deal with before we focus on entropy?

Oh yes. There’s quite a list of things we need to solve first, or we might not be around in the year 12017.

Let’s go through a few of them:

  • We Need to Kill Death, if any of us alive today personally plan on attending this.
  • We Need to build ships capable of crossing the vast gulf of interstellar space.

And that’s all just to attend the convention. Actually solving entropy might prove to be way harder than that. Good thing we have literally all the time in the universe.

IV.

What’s the point of all this?

It’s an attempt to answer the question “What’s the point of anything?” that sends a lot of young atheists and transhumanists spiralling into nihilistic despair. We’re such tiny creatures, and the universe is so vast.

The point is to win. It’s to overcome every impossible obstacle our species faces, down to what might be the last, hardest challenge.

The purpose of Entropycon, in addition to the obvious goal of killing entropy like we killed smallpox, is to make people think realistically about the challenges we’re facing as a species, and what we can do to overcome them.

“I’m worried about entropy, it doesn’t seem like there are any good solutions and it makes everything else feel sort of meaningless.”
“Oh, don’t worry, there’s a big conference coming up to tackle the Entropy Problem head on, it’s in 12017 in orbit of Proxima Centauri b.”

After they overcome the initial weirdness enough to parse what you just said, they'll probably ask you to elaborate on how the fuck you're planning on attending a conference in 10,000 years in orbit of a planet around an alien sun. They'll probably rightly point out that people don't typically live to be 10,000 years old, at which point you can say:

“Yeah, we’re working on that too, you should help us solve all these short-term problems that will stop us from having the conference, and then we can deal with Entropy once we’re sure humanity isn’t about to be wiped out by an asteroid impact.”

And maybe we won’t be able to end death in our lifetimes, maybe we won’t personally be able to attend Entropycon. Hopefully, we will, and we’re not planning on dying anytime soon. But even if we personally don’t make it there, we should all precommit to trying to make it happen if we’re around for that long. Throwing your plans out that far afield makes all the short term problems that would stop that plan really apparent.

We hope to see all of you there.