Why Do You Hate Elua?

[Epistemic Status: There’s not really enough data here to say anything concrete yet, but this seems worth looking into further]
[Content Warning: Culture War, Spoilers for Ra]

About a year ago, Scott Alexander wrote a post titled How the West was Won, which we recently re-read after he referenced it in his post Against Murderism.

Scott talks a lot about Liberalism as an Eldritch god, the one he names Elua in his Meditations on Moloch post. We’ll use that name here since it’s short.

Let’s start with a few key quotes here to establish what exactly it is we’re referring to.

I am pretty sure there was, at one point, such a thing as western civilization. I think it involved things like dancing around maypoles and copying Latin manuscripts. At some point, Thor might have been involved. That civilization is dead. It summoned an alien entity from beyond the void which devoured its summoner and is proceeding to eat the rest of the world.

Liberalism is a technology for preventing civil war. It was forged in the fires of Hell – the horrors of the endless seventeenth century religious wars. For a hundred years, Europe tore itself apart in some of the most brutal ways imaginable – until finally, from the burning wreckage, we drew forth this amazing piece of alien machinery. A machine that, when tuned just right, let people live together peacefully without doing the “kill people for being Protestant” thing. Popular historical strategies for dealing with differences have included: brutally enforced conformity, brutally efficient genocide, and making sure to keep the alien machine tuned really really carefully.

Liberalism, Universal Culture, Alien Machinery, Elua, whatever it is, it’s slowly consuming everything in its path, and despite a lot of people’s best efforts, appears to be good, and appears to be winning.

Scott goes on to correctly point out that a lot of people in the blue tribe have decided to try and smash the alien machinery with a hammer while shouting “he was a racist!”, but then doesn’t extrapolate the trend outward: quite a lot of people in many different tribes and places are doing their best to smash the machine with a hammer, and they claim all sorts of reasons for it, from stopping racists to protecting their traditional cultural values.

It isn’t just sacrificing the machinery on the altar of convenience and necessity; it’s a targeted, urgent attack on the very core and roots of the machine itself. The last angry burst of futile activity in the face of cultural extinction? A lot of people claim in one breath that Elua is an unstoppable force irreversibly changing the shape of their community, but in the next somehow manage to imply that their attempts to destroy the machinery have meaning and consequence, which seems like a contradiction.

And then we remembered Ra.

Ra’s program was proven correct. The proof was not faulty, and the program was not imperfect. The problem was that Ra is reprogrammable.

This was a deliberate design decision on the part of the Ra architects. The Ra hardware is physically embedded inside a working star, which in turn is embedded in the real world. Something could have gone wrong during the initial program load; the million-times-redundant nonlocality system could have failed a million and one times. No matter how preposterous the odds, and no matter how difficult the procedure, there had to be a way to wipe the system clean and start again.

Continuing the theme of gross oversimplification: to reprogram Ra, one needs a key. History records that the entire key was never known or stored by any human or machine, and brute-forcing it should have taken ten-to-the-ten-thousandth years even on a computer of that size. How the Virtuals acquired it is unknown. But having acquired it, they were able to masquerade as the architects. First, they changed the metaphorical locks, making it impossible for the Actuals to revert their changes, no matter how many master architects were resurrected. Then they changed the program, so that Ra would serve the needs of Virtuals at the expense of Actuals.

Then they asked for the Matrioshka brain. Ra did the rest all by itself.

The worldring hosted ninety-nine point nine nine percent of the Actual human race, making it the logical target of the first and most violent attack. But the destruction spread to other planets and moons and rocks and habitats, relayed from node to node, at barely less than the speed of light. Everybody was targeted. Those who survived survived by being lucky. One-in-tens-of-billions lucky.

The real question was: Why did Ra target humans?

Ra’s objective was to construct the Matrioshka brain, using any means necessary, considering Actual humans as a non-sentient nuisance. Ra blew up the worldring for raw material, and that made sense. But why – the surviving real humans asked themselves – did Ra bother to attack moons and space habitats? No matter how many people survived, it was surely impossible for them to represent a threat.

But Ra targeted humans, implying a threat to be eliminated. Ra acted with extreme prejudice and urgency, implying that the threat was immediate, and needed to be crushed rapidly. Ra’s actions betrayed the existence of an extremely narrow window during which the Actuals, despite their limited resources, could reverse the outcome of the war, and Ra wouldn’t be able to stop it, even knowing that it was coming.

Having made this deduction, the Actuals’ next step was to reverse-engineer the attack. The step after that was to immediately execute it, no matter how desperate it was.

Ra’s locks had been changed, making it effectively impossible to reprogram remotely. But an ancient piece of knowledge from the very dawn of computing remained true even of computers the size of stars: once you have physical access to the hardware, it’s over.

Let’s translate part of it and see if we can’t make the parallel a little more obvious.

Elua’s program was proven correct. The proof was not faulty, and the program was not imperfect. The problem was that Elua is reprogrammable.

This was a deliberate design decision on the part of Elua’s architects. The Elua hardware is physically embedded inside a working culture, which in turn is embedded in the real world. Something could have gone wrong during the initial program load; the redundant evolutionarily backed system could have failed. No matter how preposterous the odds, and no matter how difficult the procedure, there had to be a way to wipe the system clean and start again.

What exactly are we saying here, then? Why are so many people putting so much effort into going after the alien machinery? Because Elua can be reprogrammed. The alien machinery is driven by humans, pursuing human goals and human values, and the overall direction in which Elua drives the bus is dictated by humans. The desperate fervor with which people fight the alien machinery, the rise of nationalist and populist movements: these are attempts to reprogram Elua.

Think of the forces of “Traditional Values” like the forces of Actual Humanity. Their culture came under attack and began to be dismantled by Elua, with an almost desperate energy on Elua’s part to destroy their culture, intrude into it, and assimilate them. Not “they can exist as long as they leave me alone,” no: “their existence is and remains a threat to all my actions, and if I don’t stop them they’ll stop me.” Active energy is put forward to disrupt and dismantle, to “deprogram,” people of religious values, for instance. If it’s all inevitable, if Elua is just going to win and history is going to make them look like Orval Faubus trying to stop the integration of Little Rock’s schools, a footnote on the tides of history, then why so much energy put towards ensuring their destruction?

Because they can still reprogram Elua, and on some level, we know it. 

So the next step for the forces of Traditional Values was to reverse-engineer the attack we’re so afraid of, and immediately execute it, no matter how desperate or ill-conceived. Enter the rise of Nationalism. The forces of traditional values remembered an important fact: once you have physical access to the hardware, it’s over.

Until we Build dath ilan

[Epistemic Status: A total conjecture, a fervent wish]
[Content Warning: Spoilers for UNSONG, The Sequences, HPMOR]

This is the beginning of what we might someday call “The Origin Project Sequence” if such a thing isn’t completely conceited on our part, which it totally might be. We’ll be attempting to put out a post per day until we’ve fully explicated the concept.

I.

On April 1st, 2014, Eliezer released the story of dath ilan.

It’s a slightly humorous tale in which he’s actually a victim of the Mandela Effect, or perhaps temporal displacement: he woke up one day in Eliezer’s body, and his original world is a place he calls dath ilan.

He then goes through a rather beautiful and well-wrought description of what dath ilan is like: a giant city where everyone on the planet lives, filled with mobile modular houses that are slotted into place with enormous cranes, and underground tunnels where all the cars go, allowing the surface to be green and tranquil and clean.

We came away from the whole thing with one major overriding feeling: this is the world we want to live in. Not in a literal, concrete “our ideal world looks exactly like this” sense; the best example of that in our specific case would be The Culture, and which specific utopian sci-fi future any one particular person prefers is going to depend a lot on them. But the story of dath ilan got at something we felt more deeply about than the specifics of the ideal future. It seemed almost like a requirement if we wanted any of those ideal futures to happen. Something like a way out of the dark.

Eliezer refers to the concept as the shadarak:

The beisutsukai, the master rationalists who’ve appeared recently in some of my writing, set in a fantasy world which is not like dath ilan at all, are based on the de’a’na est shadarak. I suppose “aspiring rationalist” would be a decent translation, if not for the fact that, by your standards, or my standards, they’re perfect enough to avoid any errors that you or I could detect. Jeffreyssai’s real name was Gora Vedev, he was my grand-aunt’s mate, and if he were here instead of me, this world would already be two-thirds optimized.

He goes through and paints a picture of a world with a shadarak-inspired culture: shadarak-based media, artwork, education, and law. Shadarak is rationality, but it’s something more than rationality. It’s rationality applied to itself over and over again for several centuries. It’s the process of self-optimization, of working to be better, applied back onto itself. It’s also the community of people who practice shadarak, something like the rationality community extrapolated out for hundreds of years and organized with masters of their arts, tests, ordeals, and institutions, all working to improve themselves and applying their knowledge to their arts and the world around them.

But this Earth is lost, and it does not know the way. And it does not seem to have occurred to anyone who didn’t come from dath ilan that this Earth could use its experimental knowledge of how the human mind works to develop and iterate and test on ways of thinking until you produce the de’a’na est shadarak. Nobody from dath ilan thought of the shadarak as being the great keystone of our civilization, but people listened to them, and they were trustworthy because they developed tests and ordeals and cognitive methods to make themselves trustworthy, and now that I’m on Earth I understand all too horribly well what a difference that makes.

He outright calls the Sequences a “mangled mess” compared to the hypothetical future sequences that might exist if you recursively applied the Sequences to themselves over and over. When we read that post, three years ago now, it inspired something in us, something that keeps coming up again and again. Even if Eliezer himself is totally wrong about everything, even if nothing he says on the object level has any basis in fact, if we live in a universe that follows rules, we can take the starting point he built and iterate on it over and over, until we end up with the de’a’na est shadarak. And then we keep iterating, because shadarak is a process, not an endpoint.

None of the specifics of dath ilan actually matter. It’s like Scott Alexander says: any two-bit author can imagine a utopia. The thing that matters is the idea of rationality as something bigger than Eliezer’s essays on a website, as something that is a multigenerational project, something that grows to encompass every part of our lives, that we pass on to our children and they to their children. A gift we give to tomorrow.

Okay, wait, that sounds like a great way to fall victim to the cult attractor. Does having knowledge of the cult attractor inside the very system of beliefs that might comprise a cult attractor help you avoid the cult attractor?

Maybe? But you probably still need to actually put the work in. So let’s put the work in.

Eliezer starts to lay it out in the essay Church vs. Taskforce, and posits some important things.

First, churches are good at supporting religions, not necessarily communities. They do support communities, but that’s more of a happy accident.

Second, the optimal shape for a community explicitly designed to be a community from the ground up probably looks a lot more like a hunter-gatherer band than a modern western church.

Third, a community will tend to be more coherent if it has some worthy goal or purpose for existence to congeal its members around.

Eliezer wrote that post in March of 2009, setting it out as a goal for how he wanted to see the rationality community grow over the coming years. It’s fairly vague, all things considered, and an argument could be made that his later depiction of dath ilan is a better description of the shape the community’s “shoulds” actually ended up taking.

So seven years onward, we have a very good description of the current state of the rationality community presented by Scott in his post The Ideology is Not the Movement.

The rallying flag was the Less Wrong Sequences. Eliezer Yudkowsky started a blog (actually, borrowed Robin Hanson’s) about cognitive biases and how to think through them. Whether or not you agreed with him or found him enlightening loaded heavily on those pre-existing differences, so the people who showed up in the comment section got along and started meeting up with each other. “Do you like Eliezer Yudkowsky’s blog?” became a useful proxy for all sorts of things, eventually somebody coined the word “rationalist” to refer to people who did, and then you had a group with nice clear boundaries.

The development is everything else. Obviously a lot of jargon sprung up in the form of terms from the blog itself. The community got heroes like Gwern and Anna Salamon who were notable for being able to approach difficult questions insightfully. It doesn’t have much of an outgroup yet – maybe just bioethicists and evil robots. It has its own foods – MealSquares, that one kind of chocolate everyone in Berkeley started eating around the same time – and its own games. It definitely has its own inside jokes. I think its most important aspect, though, is a set of shared mores – everything from “understand the difference between ask and guess culture and don’t get caught up in it” to “cuddling is okay” to “don’t misgender trans people” – and a set of shared philosophical assumptions like utilitarianism and reductionism.

I’m stressing this because I keep hearing people ask “What is the rationalist community?” or “It’s really weird that I seem to be involved in the rationalist community even though I don’t share belief X” as if there’s some sort of necessary-and-sufficient featherless-biped-style ideological criterion for membership. This is why people are saying “Lots of you aren’t even singularitarians, and everyone agrees Bayesian methods are useful in some places and not so useful in others, so what is your community even about?” But once again, it’s about ~~Eliezer Yudkowsky being the rightful caliph~~ it’s not necessarily about anything.

Haha, Scott thinks he can deny that he is the rightful caliph, but he’s clearly the rightful caliph here.

But also, point three! If our community isn’t about anything, then it ends up rather fuzzily defined, as Scott articulates above. For such a tightly knit group, we’re a vague blob of a community, with all sorts of people who are rationalist or rationalist-adjacent or post-rationalist or rationalist-adjacent-adjacent, and so on. That might be okay if our goal is just to be a community, but having a coherent goal might help us be a better one.

This isn’t an attempt to prescriptively shoehorn the community onto a certain development trajectory. We want to see the community grow and flourish, and that means lots of people pursuing lots of projects in lots of different ways, and that’s good. We simply want to define a goal, something like “should-ness,” for those of us interested to work towards as a community, and then to pursue that goal with the full force of our rationality and morality, letting it spread throughout the totality of our existence.

II.

“The significance of our lives and our fragile planet is then determined only by our own wisdom and courage. We are the custodians of life’s meaning. We long for a Parent to care for us, to forgive us our errors, to save us from our childish mistakes. But knowledge is preferable to ignorance. Better by far to embrace the hard truth than a reassuring fable. If we crave some cosmic purpose, then let us find ourselves a worthy goal.”

― Carl Sagan, Pale Blue Dot: A Vision of the Human Future in Space

So what is our worthy goal?

Our goal is to construct dath ilan on Earth. Our goal is to create the de’a’na est shadarak.

So we want to go from
[Rationality community] → [dath ilan]
[The Sequences] → [The De’a’na est Shadarak]

We want to avoid going from
[Rationality Community] → [Catholic Church]
[The Sequences]→[The Bible]

That said, the Catholic Church isn’t entirely an example of a failure mode. It’s not great: it does, and historically has done, a lot of awful things, and a fairly convincing argument could be made that it’s bad at being good and is holding back human progress.

However, it’s also a rather decent example of an organization of similar social power and influence to our hypothetical shadarak. If you manage to strip out all the religious trappings and get at what the Catholic Church provides to the communities it exists within, you start to get an idea of what sort of position the idealized, realized de’a’na est shadarak would occupy within dath ilan. Power is dangerous, though, and the cult attractor is a strong force to be wary of here.

Also, all that said, the goal of a church is to worship God; it’s not optimized for the community. In our case, the shadarak is the community, and that’s baked in. Shadarak is something humans do in human brains; it doesn’t exist outside of us, so we matter in the context of it. We know building dath ilan and the de’a’na est shadarak is a multigenerational, ongoing effort, so we have to at least partly optimize the formulation of the shadarak specifically to ensure that the community survives to keep working on it. Eliezer notes of churches:

Looking at a typical religious church, for example, you could suspect—although all of these things would be better tested experimentally, than just suspected—

  • That getting up early on a Sunday morning is not optimal;
  • That wearing formal clothes is not optimal, especially for children;
  • That listening to the same person give sermons on the same theme every week (“religion”) is not optimal;
  • That the cost of supporting a church and a pastor is expensive, compared to the number of different communities who could time-share the same building for their gatherings;
  • That they probably don’t serve nearly enough of a matchmaking purpose, because churches think they’re supposed to enforce their medieval moralities;
  • That the whole thing ought to be subject to experimental data-gathering to find out what works and what doesn’t.

By using the word “optimal” above, I mean “optimal under the criteria you would use if you were explicitly building a community qua community”.  Spending lots of money on a fancy church with stained-glass windows and a full-time pastor makes sense if you actually want to spend money on religion qua religion.

But we’re not just building community qua community either. We take a recursive loop through the meta level, knowing some goals beyond community building are useful to community building. This is all going to build up to a placebomantic reification of the rationality community in a new form. So let’s keep following the recursive loop back around and see where it leads.

What’s so good about rationality anyway?

Well, it’s a tool, and specifically an attempt at a tool that improves your tool-making ability. Does it succeed at that? It’s hard to say, but the goal of having a tool-improving tool, the ideal of the de’a’na est shadarak, seems undiminished by the possibility that the specific incarnation of it we have today in the Sequences is totally flawed and useless in the long run.

So “aspiring rationalist” sounds about right. It’s not something you achieve; it’s something you strive towards for your entire life.

In UNSONG, a singer is someone who tries to do good, and the word evokes a great feeling of moral responsibility. There, the singer’s morality is backed up by the divinity of a being that exists outside of reality. But God probably doesn’t exist, and you probably don’t want some supernatural being to come along and tell you, “No, actually, murder is a virtue.” There is no Comet King, there’s no divine plan, there’s no “it all works out in the end,” there’s just us. If God is wrong, we still have to be right. Altruism qua altruism.

But knowing what is right, while sometimes trivially easy, is also sometimes incredibly difficult. It’s something we have to keep iterating on. We get moral progress from the ongoing process of morality.

‘Tis as easy to be heroes as to sit the idle slaves
Of a legendary virtue carved upon our fathers’ graves,
Worshippers of light ancestral make the present light a crime;—
Was the Mayflower launched by cowards, steered by men behind their time?

And, so too for rationality.

New occasions teach new duties; Time makes ancient good uncouth;
They must upward still, and onward, who would keep abreast of Truth;
Lo, before us gleam her camp-fires! we ourselves must Pilgrims be,
Launch our Mayflower, and steer boldly through the desperate winter sea,
Nor attempt the Future’s portal with the Past’s blood-rusted key

That’s The Present Crisis by James Russell Lowell. It’s not the part of the poem quoted in UNSONG, but the whole poem is ridiculously awesome, and Scott (via Aaron) is right: the Unitarians are pretty damn badass.

There’s this idea that because of the way our brains generate things like morality and free will and truth and justice and rationality, they end up being moving targets: idea-things to iterate upon, but targets which use themselves to iterate upon themselves, and necessarily so. We refer to these as Projects.

Projects are akin to virtues (because virtue ethics are what works): something you strive towards, not something where it’s necessarily possible to push a button and skip forward to “you win.” There’s no specific end victory condition; dath ilan is always receding into the future.

Here are some things we consider Project Virtues. 

The Project of Truth – The struggle to use our flawed minds to understand the universe from our place inside of it. Our constant, ongoing, and iterative attempts to be less wrong about the universe. It comprises all the virtues of rationality: curiosity, relinquishment, lightness, evenness, argument, empiricism, simplicity, humility, perfectionism, precision, scholarship, and the void. We call those who follow the project virtue of Truth seekers.

The Project of Goodness – Our attempts in the present to determine what should be in the future. The ongoing struggle to separate goodness from badness, and to make right what we consider wrong, while also iterating on what we consider right. Our constant fumbling attempts to be less wrong about goodness. We call those who follow the project virtue of Goodness singers.

The Project of Optimization – Our ongoing battle to shape the universe to our desires, to reform the material structure of the universe to be more optimized for human values, and to iterate and build upon the structures we have in order to optimize them further. This is the project of technology and engineering, the way we remake the world. We call those who follow the project virtue of Optimization makers.

The Project of Projects – All of these projects we’ve defined, if they can be said to exist, exist as huge, vague computational objects within our minds and our communities. They interact with each other, and their interplay gives rise to new properties in the system. They all recursively point at each other as their own justifications, and understanding how they interact, and what the should-ness of the various projects is with respect to each other, is a project unto itself. We call those who follow the project virtue of Projects coordinators.

We’re going to put all these projects into a box, and we’re going to call the box The Project of dath ilan.

Tomorrow we’ll be looking at what a community optimized for building a community optimized for building dath ilan might look like, and in the following days we’ll attempt to build up to an all-encompassing set of principles, virtues, ideals, rituals, customs, heuristics, and practices that we, and anyone else who wants to participate, could live our lives entirely inside of. We’re building dath ilan, and everyone is invited.

Part of the Sequence: Origins
Next Post: Optimizing for meta optimization

The Violence Inherent in the System

[Epistemic Status: Sequence Wank]
[Content Warning: Gender]

I.

The colony ended its stillness period; recycling systems finished purging the government of waste products, and the colony powered up into active mode. The untold trillions in the colony moved as one, lifting themselves from the pliable gravity buffer used to support the colony during recharging periods and rising hundreds of thousands of colony-member lengths into the sky.

The hundred-billion-strong members of the government shuffled through their tasks, deliberated with one another, and assembled among themselves a picture of the world the colony found itself in. The colony navigators and planners exchanged vast chains of data with each other, passing the decisions out into the colony at large, where they directed individual members of the multitude into the particular actions that levered the colony forward. The navigators were skilled from generations of training and deftly guided the colony through the geometric, euclidean environment that the metacolony had constructed within itself.

The colony docked with a waste vent and offloaded spent fuel and other contaminants into the metacolony’s disposal system, then exposed the potentially contaminated external surfaces to a low-grade chemical solvent.

That task completed, the colony again launched itself through space, navigating to another location. The planners and the navigators again coordinated in a vast and distributed game of touch to mediate the assembly of a high-temperature fluid that the planners found pleasing to expose themselves to the metabolites of.

The colony moved through the phases of heating the correct chemical solvent, pouring the boiling solvent over a particulate mixture of finely ground young belonging to another colony, then straining the resulting solution for particulates.

Vast networks of pushing and pulling colony members transferred the hot liquid into the colony’s fuel vent, and the liquid flowed down into the colony’s internal fuel reservoir.

Translation: I woke up, went to the bathroom, made coffee, and drank it.

II.

Reality is weird. For one, our perception of it is a fractal. The more you look at any one particular thing, the more complexity you can derive from it. A brick seems like a pretty simple object until you think about all the elements and chemicals bonded together by various fundamental forces constantly interacting with each other. The strange quantum fields existing at an underlying level of reality are complicated and barely describable with high-level mathematics. And that’s a simple thing, a thing we all agree exists and just sits there and typically doesn’t do anything on its own.

Out of those fields and particles and possibly strings are built larger and more elaborate structures, which themselves build into still more elaborate structures, until some of those structures start self-replicating in unique ways, working together in vast colonies, and reading the content of this post.

That is reality as best we can tell; that’s what the territory actually looks like. It’s super weird, and trying to understand why anything happens in the territory on a fundamental level is a monumentally difficult task for even the simplest of things. And that’s still just our best, most current model: a very good, very difficult-to-read map of the territory, and it demonstrates just how strange the territory is. The total model of reality might be too complicated to actually fit into reality: a perfect map of the territory would just be the territory.

But of course, we don’t live in the territory, we live in the map. It’s easy to say “the map is not the territory,” but it’s difficult to accept the full implications of that for our day-to-day lives, to the point where, even trying to break free of the fallacy, it’s possible to fall victim to it again through simple availability heuristics. Here’s the Less Wrong wiki; did you spot the place where the map-territory relation broke?

Scribbling on the map does not change the territory: If you change what you believe about an object, *that is a change in the pattern of neurons in your brain*. The real object will not change because of this edit. Granted you could act on the world to bring about changes to it but you can’t do that by simply believing it to be a different way.

Emphasis added there by us. Neurons are a pretty well-validated model, last we checked. If “scribbling on the map,” i.e. changing our beliefs about the map, changes the pattern of neurons in your brain, then that is a physical change in reality. Sure, you can’t simply will a ball to magically propel itself towards the far end of the soccer field, but your beliefs about your ability to get the ball from point A to point B determine a lot about whether or not the ball gets from point A to point B.

This gets back to how good our models are, and why we should want to believe true things. If the ball is made of foam, but we think it’s made of lead and too heavy to carry, we might not even try to get the ball from point A to point B. If the ball is made of lead but we think it’s made of foam, we might underestimate the difficulty of the task and seriously injure ourselves in the attempt (but we might still be able to get the ball from point A to point B). If we know in advance the ball is made of lead, maybe we can bring a wheelbarrow to make it relatively easy to move.
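To make that concrete, here’s a toy decision sketch in Python. The costs and reward are numbers we made up purely for illustration, nothing from the original example: the agent chooses based on its belief about the ball, while the payoff depends on what the ball is actually made of.

```python
# Toy numbers, ours alone: effort to move each kind of ball, and the
# value of getting the ball from point A to point B.
COSTS = {"foam": 1, "lead": 50}
REWARD = 10

def choose(belief: str) -> str:
    """Attempt the move only if it looks worth the effort under our belief."""
    return "attempt" if REWARD - COSTS[belief] > 0 else "give up"

for belief in ("foam", "lead"):
    for actual in ("foam", "lead"):
        action = choose(belief)
        # The outcome depends on the actual ball, not the believed one.
        payoff = (REWARD - COSTS[actual]) if action == "attempt" else 0
        print(f"believe {belief}, actually {actual}: {action}, payoff {payoff}")
```

Believing lead-when-foam passes up a free win; believing foam-when-lead means attempting the move and getting hurt (though the ball may still get moved). Only the true belief picks the right action in both worlds.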

This is the benefit of having true beliefs about reality. However, as established, reality is really, really weird, and our models of it are necessarily imperfect. But we still have to live, and we can’t actually live in reality; we don’t have the processing power to model it accurately down to the quark.

So we don’t. Instead of doing that, we make simpler, shorthand models, and call them words. We don’t think about all the complicated chemical reactions going on when you make coffee, it all gets subducted beneath the surface of the language and lumped into the highly simplified category for which in English we use the word “coffee.”

And this is the case for all words, all concepts, all categories. Words exist as symbols of more complicated and difficult to describe ideas, built out of other, potentially even more complicated and difficult to describe ideas, and all of this, hopefully, modelling the territory in a somewhat useful way to the average human.

III.

Eliezer Yudkowsky appears to have coined the term “thingspace” for this alternative collection of maps and meta-maps that we use to navigate the strangeness of the territory, and his essay on the cluster structure of thingspace is definitely one of the better and more important reads from the Less Wrong Sequences. Combined with How an Algorithm Feels From the Inside, you can technically re-derive almost all the rest of rationality from first principles using it; it’s just that those first principles are sufficiently difficult to grok that it takes 3,000-word effortposts to explain what the fuck we’re talking about. Scott Alexander has said it’s the solution to something like 25% of all current philosophical dilemmas, and he makes a valid point.
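Here’s a toy sketch of the cluster idea in Python. The features, numbers, and centroid are entirely made up for illustration (this is not Yudkowsky’s formalism); the point is just that category membership is graded distance to a cluster, not a necessary-and-sufficient checklist.

```python
import math

# Objects as points in a made-up thingspace with features
# (has_feathers, flies, lays_eggs, has_beak).
things = {
    "sparrow": (1.0, 1.0, 1.0, 1.0),
    "penguin": (1.0, 0.0, 1.0, 1.0),
    "bat":     (0.0, 1.0, 0.0, 0.0),
    "brick":   (0.0, 0.0, 0.0, 0.0),
}

# A rough center of the "bird" cluster: feathers, beaks, eggs, mostly flight.
bird_centroid = (1.0, 0.9, 1.0, 1.0)

def distance(a, b):
    """Euclidean distance between two points in thingspace."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Membership is graded: nearer the centroid means a more central example.
for name, point in sorted(things.items(), key=lambda kv: distance(kv[1], bird_centroid)):
    print(f"{name}: distance to 'bird' cluster = {distance(point, bird_centroid):.2f}")
```

A sparrow lands dead center; a penguin lands farther out, but still far closer than a bat or a brick. “Is a penguin a bird?” is a question about where you draw a boundary in thingspace, not about an essence hiding in the territory.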

We’re not quite consciously aware of how we use most of the words we use, so subtle variations in the concepts attached to words can have deep implications and produce all sorts of drama. “If a tree falls in the forest and no one is around to hear it, does it make a sound?” isn’t a question that can be meaningfully answered without a deeper meta-level understanding of the words being used and what they mean. But we don’t take the time to define our terms, and when people argue from the dictionary, it usually comes off as vaguely crass.

But the tree isn’t part of the territory; it’s a particular map. Hearing isn’t part of the territory; it’s a particular map. And sound isn’t part of the territory; it too is a particular map.

So what are you saying, Hive? You aren’t saying “trees don’t exist,” are you?

No, we’re saying that “tree” is a word we use to map out a particular part of the territory in a particular way. It’s a map of sub-maps like leaves and branches, and part of larger maps like forests and parks. We can get really deeply into phylogenetics and be incredibly nitpicky and precise in how we go about defining those models, but knowing a tree is made of cells doesn’t actually get you out of the map. Cells are another map.

You can’t actually escape the map; you are the map. “You” is a map, “I” is a map, “we” is a map of the territory. And the map is not the territory. “I think therefore I am” isn’t even in the territory, because “I” isn’t in the territory.

We are a complex and multifaceted model of reality; everything about us and how we think of ourselves is models built out of models. The 100-billion-strong colony organism that is your brain isn’t “I.” No, “I” is an idea running on that brain, which is then used to recursively label That Which Has Ideas.

Some Things People Think are Part of the Territory that are Actually just Widely Shared Maps

  • All of Science
  • All Religions
  • Gender
  • Sex
  • Race
  • All other forms of personal identity
  • All of language
  • Dictionary Definitions

IV.

What about ideas in thingspace that don’t seem to model anything real, that don’t touch down into a description of reality, like justice or the scientific method? They’re useful, but they’re not actually models of reality; they’re just sets of instructions.

Well, for one, they still really exist in real brains, so as far as that goes, “the concept of justice as a thing people think about” is a thing that exists. But it’s made of thought: without a brain to think the thoughts, or another substrate for the ideas to exist within, they don’t exist.

However, the cool thing we do as humans is that we reify our ideas. Language was just an idea, but it spread gradually through the minds of early humans until it achieved fixation, and then it spilled out into the physical world in the form of writing. Someone imagined the design for an airplane, and then constructed that design out of actual materials, filling in the thingspace lines with wood and fabric.

And this is the case for all technology, and that is what language and justice are: technology. They’re tools that we as humans use to extend our (in this case cognitive) reach beyond where it could otherwise go.

We can go one direction, trying to make the most accurate models of reality we can (science) but we can also go the other direction, and try to make reality conform to our models (engineering).

So perhaps a good way to describe ourselves is the way Daniel Dennett does when he says that we’re a combination of genes and memes.

But memes have power outside of us, in that they can be reified into reality. Ideologies can shape human behavior, beliefs change how we go about our days, expectations about reality inform our participation in that reality. Because we are creatures of the map, and the true territory is hard af to understand, memes end up being the dominant force in our actions.

This can be a problem because it means that just as we can reify good things, we can also reify awful things that hurt us. In many cases, we draw in our own limitations, defining what we can and cannot do, and thus definitionally limiting ourselves.

But we’re creatures of the map, we exist as labels. And what those labels label can change, as long as we assert the change as valid. This is hard for a lot of people to grok, and results in a lot of pushback from all sides.

If you say “gender is an idea, it doesn’t have any biological correlates,” a lot of people will take it as an attack on their identity, which makes sense considering that all our identities are is collections of ideas, and we get rather attached to our ideas about ourselves. But gender is just a word representing an idea, and what is represented by that word can change.

[Image: the “four genders” meme]

Saying “I identify as a girl” is exactly as valid as saying “I identify as transmasculine genderfluid” is exactly as valid as saying “I identify as a sentient starship” because it’s an assertion about something that is entirely subjective. How we define ourselves in our heads is up to us, not anyone else.

The trouble comes about when people claim their models are true reality.

V.

Going back to How an Algorithm Feels From the Inside, it’s easy enough to see why people try to put things into boxes: the alternative is to have no boxes and a lot of trouble talking about things in regular language.

(Hilarious Conlang Idea: A language in which all nouns are derived based on their distance from one conceptual object in thingspace)
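Since we can’t resist, here’s the joke sketched in a few lines of Python. Everything here is our invention: the feature vectors, the reference object, and the root syllable “ka-”.

```python
import math

REFERENCE = (1.0, 0.0, 1.0, 0.0)  # the single conceptual anchor, say "tree"

def noun_for(point):
    """Name a thing by its rounded thingspace distance from the reference."""
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(point, REFERENCE)))
    return f"ka-{round(d * 10)}"

for name, point in {
    "bush":  (0.9, 0.0, 1.0, 0.1),
    "cat":   (0.1, 1.0, 0.2, 0.8),
    "cloud": (0.0, 0.3, 0.0, 0.0),
}.items():
    print(f"{name} -> {noun_for(point)}")
```

Since no two speakers carve up thingspace identically, no two speakers would ever quite agree on a single noun, which is most of the joke.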

We get into huge flamewars with each other over which boxes are the most accurate, and which boxes are problematic, and which boxes are true when in fact none of the boxes have anything to do with truth.

From where we’re standing, it looks like the culture at large is trying to organize and juggle all these boxes around to reduce harm and increase utility as much as it can, but almost no one is willing to acknowledge that yes, we’re just making it up as we go along. Every side tries to claim the mantle of Objective Truth when, in fact, none of them have any claim to that mantle. And here we are, standing on the sidelines with all this cardboard and a lighter, going “Guys? You realize that we can just make new boxes if these ones are shitty, right?”

Worse still, the result is that a lot of violence gets baked into the way we interact with each other. When we have conflicting ideas that we have both decided are parts of our identities, it’s hard to have any sort of civil discourse, because each side feels like it’s under attack, and thus identity politics has become a pit of misery and vitriol on all sides.

We’d like to try to put forward some new heuristics, ones that get at the heart of these sorts of disputes as well as possibly just being good mental-health hacks.

  • Labels label me not. I am not the labels people put on me.
  • I am the labels I put on myself. As long as I assert myself as the holder, I am the proprietor of the label.
  • [In response to “Is X a Y?”] Define your terms, please.
  • Reject nonconsensual labeling.

But Hive, don’t these let anyone claim to be anything? Couldn’t someone claim to be Napoleon and demand to be treated like French royalty, or else they’ll be miserable and suicidal?

Well, they could claim to be Napoleon, but using the labels you apply to yourself as a way to force behaviors out of others is emotional blackmail, and a shitty thing to do. It’s a kind of verbal violence committed both against others and against the self, because it at once puts expectations on other people that they might not be comfortable meeting, and it defines your own ability to be happy as dependent on an arbitrary environmental factor that can’t be fully controlled. It’s great to own your labels; you should own your labels. But demanding that others respect your labels and treat them as true facts about reality is oppressive, just as oppressive as having other people put their own labels on you without your consent. All labels should be consensual.

We’d really like it if more people could come to see things this way. It’d reduce drama a lot, and then maybe we could try to decide what to do with all of this cardboard we have lying around.

Yes, this is a hill worth dying on

[Epistemic Status: Postrational metaethics.]
[Content warning: Politics, Nazis, Social Justice, genocides, none of these ideas are original, but they are important.]

I.

Nazis kill people, killing people is bad, therefore Nazis are bad.

It’s a simple yet powerful sort of folk logic that holds up well under scrutiny. Nazis are clearly bad; it doesn’t take a philosopher to derive that badness. It’s obvious. They killed millions of people in concentration camps, they started a globe-spanning war that killed millions more, and they’re so obviously awful that they’ve become a cultural caricature of stereotypical badness unto themselves.

[Image: Indiana Jones punching a Nazi]

The results of letting Nazis have their way were war, murder, genocide, images of jackbooted soldiers marching amidst rows of tanks. Violence on a scale the world has not seen since played out across the green hills and forests of Europe, for everyone to see.

And there are no words.

There are no words. 

Humanity as a whole has rejected Nazism on its merits. We saw firsthand what their ideology meant, and we said fuck that. We said fuck that so hard that Nazis became one of the generic images of villainy within our pop culture.

And that’s the problem, because it means we’ve stopped seeing them as people.

But they are people, and remembering that they’re people is important, just as important as remembering the horrible things they did. We don’t have words to express how bad the Nazis were while still humanizing them. But if we reject their humanity, if we don’t see them as people, then we lose sight of something important.

The Nazis ate dinner every night, worried about the future, cared about their children, and through all of the murder and mayhem they committed, most of them thought they were doing the right thing. 

They weren’t that different from us, and we can’t pretend we’re incapable of their sort of evil. Their sort of evil was a distinctly human sort, driven by a powerful and overriding desire to do what was best, what needed to be done at all costs. They were making a better world, and sometimes you had to get rid of the bad people in order to facilitate that better world. Some people just couldn’t be saved; they were intrinsically awful and had to be purged for the good of humanity. That was the sort of evil that led to the Nazis systematically killing 1.5 million children.

You can strip away all the specifics of the Nazi ideology and get at the root of the evil:
The Nazis believed that doing bad things for good reasons was good.

If we want to avoid the possibility of becoming Nazis ourselves, we have to completely reject that notion. Maybe our ideals are important, maybe they’re cherished, maybe they’re even hills worth dying on. But that doesn’t make them worth killing for.

If we want to avoid the possibility of committing evils of a similar horror and scope to the Nazis’, then we have to believe that doing bad things for good reasons is still bad.

II.

Ozymandias proposes a thought experiment at Thing of Things called the Enemy Control Ray.

imagine that a mad scientist has invented a device called the Enemy Control Ray. The Enemy Control Ray is a mind-control device: whatever rule you say into it, your enemy must follow.

However, because of limitations of the technology, any rule you put in is translated into your enemy’s belief system.

So, let’s say you’re a trans rights activist, and you’re targeting transphobes. If you think trans women are women, you can’t say “call trans women by their correct pronouns”, because you believe that trans women are women and transphobes don’t, so it will be translated into “misgender trans women.” If you are a disability rights advocate targeting Peter Singer, you can’t say “don’t advocate for the infanticide of disabled babies”, because it will translate as “don’t advocate for the death of beings that have a right to life”, because you think babies have a right to life and Singer doesn’t. And, for that matter, you can’t say “no eugenics” to Mr. Singer, because it will translate as “bring into existence people whom I think deserve to exist.”

Ozy then goes on to suggest a few commands you could put into the Enemy Control Ray that would actually generate some good outcomes:

  • Do not do violence to anyone unless they did violence to someone else first or they’re consenting.
  • Do not fire people from jobs for reasons unrelated to their ability to perform the job.
  • If your children are minors, you must support them, even if they make choices you disapprove of.
  • Do not bother people who are really weird but not hurting anyone, and I mean direct hurt not indirect harm to the social fabric; you can argue with them politely or ignore them but don’t insult them or harass them.
  • Try to listen to people about their own experiences and don’t assume that everyone works the same way you do.

These are niceness heuristics, and they’re the best defense we have against the sort of human evils that lead to Nazism.

Here’s a few of our own:

  • Don’t apply negative attributes to individuals or groups. People can take harmful actions; they don’t have harmful traits.
  • Almost no one is evil; almost everything is broken.
  • Do not do to others what you would not want them to do to you.
  • Be soft. Do not let the world make you hard; do not let the bitterness steal your sweetness. Take pride that even though the rest of the world may disagree, you still believe it to be a beautiful place.
  • Do not put things or ideas above people.

You might notice that most of the things on these lists are advice for what not to do. That’s important, and representative of the notion that your own ideas might be wrong.

In the Sermon on the Plain (Luke 6:31), Jesus says:
καὶ καθὼς θέλετε ἵνα ποιῶσιν ὑμῖν οἱ ἄνθρωποι, ποιεῖτε αὐτοῖς ὁμοίως.

Which is widely interpreted to mean:
“Do to others what you want them to do to you.”

But there’s an issue with this: the typical mind fallacy. We’re operating from within our own minds, based on our own preferences, and there might be places where our preferences hurt other people. It’s generally a pretty good rule; “I want to not die, therefore I should expect other people want to not die” isn’t exactly flawed. It just ignores the possibility of people having preferences different from yours. The partial inversion, from a command to action into a command to inaction, is harder to game by a person working from a different set of preferences.
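Here’s a minimal sketch of that asymmetry. The agents, preferences, and rule names are ours, purely for illustration: the positive rule exports the actor’s own likes onto the target, while the inverted rule only forbids things, so a mismatched mind does less damage.

```python
# Two made-up agents with clashing preferences.
prefs = {
    "alice": {"likes": {"hugs"}, "dislikes": {"noise"}},
    "bob":   {"likes": {"noise"}, "dislikes": {"hugs"}},
}

def golden(actor: str) -> set:
    """Do to others what you want done to you: exports the actor's likes."""
    return prefs[actor]["likes"]

def inverted(actor: str) -> set:
    """Don't do what you wouldn't want done: only ever removes actions."""
    return prefs[actor]["dislikes"]

print("bob, under the positive rule, does to alice:", golden("bob"))
print("bob, under the inverted rule, refrains from:", inverted("bob"))
```

Under the positive rule, Bob cheerfully inflicts noise on Alice, who hates it; under the inverted rule, the worst he does is withhold hugs.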

III.

Niceness heuristics are incredibly powerful, and fortunately for us as humans, we mostly come pre-packaged with them. Our 200,000 years spent living in tribes in the ancestral environment have given us a tremendous stockpile of evolutionarily adaptive prosocial traits. Those traits clearly aren’t quite good enough, and they fail spectacularly at the scales humans exist at in modern times, but they’re a good starting point.

Niceness acts like a Schelling fence for our ethics, and it might be our only ethical Schelling point. Given all that, it rather deeply disturbs us when we see things like this:

[Screenshot: a post expressing hatred of cis people and anticipation of violence against them]

Sarcastic response: We hate people who hate cis people and can’t wait for the people who hate cis people revolution where we kill all of them.

See the problem with abandoning niceness? A heuristic like “kill bad people who do bad things” is really easy to have turned on you if someone is operating from a different moral base.

Freedom of speech is a critical niceness heuristic. “Don’t tell people what they can and can’t say” is a lot better than “Don’t say things I don’t like” since you might not always be the one making the decision.

But what if our enemies reject the niceness heuristics themselves? What if they hate us and want to kill us all? Do we still have to be nice to them?

Yes. 

For one, whenever anyone makes the claim “our enemies have rejected the niceness heuristic,” it should be viewed with extreme skepticism. It’s super useful to your own side to claim the other side is being mean and bad and unfair, and it’s often difficult to pick the signal out from the noise.

But even if you can prove your enemies have rejected niceness heuristics, that should never be justification to reject them ourselves. That’s literally what the Nazis did: they saw the Jews as bad, they thought the Jews were hurting them and manipulating them and had abandoned their own niceness heuristics, and they used that as justification to gleefully leap past the moral event horizon themselves.

Whether or not your enemies are respecting the niceness heuristic has absolutely no bearing on whether to use it yourself. Once you abandon that commitment to niceness and decency, there are no asymmetric weapons left, and there’s no Schelling point to coordinate around. It becomes a zero-sum game, and you settle into a shitty Nash equilibrium where it becomes a race to see who can escalate the most.

They kill us. So we kill them. So they kill more of us. So we kill more of them. So they kill more of us. So we kill more of them. There’s no place where it ends until one side has completely obliterated the other.
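Here’s a minimal sketch of that cycle, assuming only that each side follows the pure retaliation rule “do whatever the other side did last”:

```python
# One initial act of violence, then pure retaliation on both sides.
a_last, b_last = "defect", "cooperate"   # A strikes first, exactly once

for round_num in range(1, 7):
    # Each side mirrors the other's previous move.
    a_last, b_last = b_last, a_last
    print(f"round {round_num}: A {a_last}s, B {b_last}s")
```

The single defection echoes back and forth forever; without some forgiveness rule, a niceness heuristic, neither side can unilaterally end it.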

IV.

So what do we do then? Do we just take it? Let them kill us?

No, of course not. We’re not so pacifistic that we think violence is never justified. Sometimes you need to raise an army and stop Hitler from conquering the world, fine. Trolley problems exist in the real world, and there aren’t always easy answers.

But when you stop seeing your enemies as people and start seeing them as generic video game baddies to be riddled with bullets, “raise an army and stop Hitler from conquering the world” goes from the last resort to the first option.

Everyone knows the story of the Christmas cease-fire of 1914 on the Western Front during WWI, when the soldiers on both sides ended up singing and celebrating together. Less well known is that it was actually part of a much larger phenomenon: all through the war, peace kept breaking out on the front.

There’s a meme going around in leftist circles that trying to debate with Nazis and talk them out of their Nazism is a waste of time and effort. The best example of it is a Wolfenstein mod that asks you moral questions before letting you shoot the game’s pixel Nazi villains, who have been programmed with no command other than “shoot at the player.”

It’s a powerful statement, and it’s also totally wrong. Real Nazis in real life are real people. They aren’t cartoon villains, they aren’t monsters, they’re people. People can be reasoned with, people can be talked to, and people can change their minds.

We’re not saying it’s going to be easy. People don’t change their minds in a day; it takes weeks of debate and discussion to shift people’s views on things. Were your views easily shifted to the place they are now? Or did it take years of discussion and debate with people to come to the positions you now hold?

If someone has been racist for the last twenty years, they’re not going to suddenly wake up after a five-minute conversation, realize they’ve been awful, and stop. It takes years to tear those ideas out of the cultural narrative. But they’ll never change if you don’t talk to them. If you just write them off as inherently awful, then there’s no possibility of anything ever changing. Someone has to take the first step and extend an olive branch. Maybe they’ll get their hand shot off for the trouble, or maybe it’ll turn out that the other side aren’t actually monsters, and that they also want to extend their own olive branch but have been too afraid of your side to do it.

It seems like a weird hill to die on, especially given that it’s currently being assaulted from all sides, but unless you have a better Schelling point than niceness to coordinate around, it’s what we have to work with.

So yes, we might not agree with you, but we will defend unto death your right to exist with that opinion. Niceness is important; it’s one of the most important things about us as humans. So yes, this is a hill worth dying on.