Until we Build dath ilan

[Epistemic Status: A total conjecture, a fervent wish]
[Content Warning: Spoilers for UNSONG, The Sequences, HPMOR]

This is the beginning of what we might someday call “The Origin Project Sequence” if such a thing isn’t completely conceited on our part, which it totally might be. We’ll be attempting to put out a post per day until we’ve fully explicated the concept.

I.

On April 1st, 2014, Eliezer released the story of dath ilan.

It’s a slightly humorous tale in which the narrator claims to be a victim of the Mandela Effect, or perhaps temporal displacement: he woke up one day in Eliezer’s body, and his original world is a place he calls dath ilan.

He then goes through a rather beautiful and well-wrought description of what dath ilan is like: a giant city where everyone on the planet lives, filled with mobile modular houses that are slotted into place with enormous cranes, and underground tunnels where all the cars go, allowing the surface to be green, tranquil, and clean.

We came away from the whole thing with one major overriding feeling: this is the world we want to live in. Not in a literal, concrete “our ideal world looks exactly like this” sense; the best example of that in our specific case would be The Culture, and which specific utopian sci-fi future any one particular person prefers is going to depend a lot on them. But the story of dath ilan got at something we felt more deeply about than the specifics of any ideal future. It seemed like something that was almost a requirement if we wanted any of those ideal futures to happen. Something like a way out of the dark.

Eliezer refers to the concept as the shadarak:

The beisutsukai, the master rationalists who’ve appeared recently in some of my writing, set in a fantasy world which is not like dath ilan at all, are based on the de’a’na est shadarak. I suppose “aspiring rationalist” would be a decent translation, if not for the fact that, by your standards, or my standards, they’re perfect enough to avoid any errors that you or I could detect. Jeffreyssai’s real name was Gora Vedev, he was my grand-aunt’s mate, and if he were here instead of me, this world would already be two-thirds optimized.

He goes on to paint a picture of a world with a shadarak-inspired culture: shadarak-based media, artwork, education, and law. Shadarak is rationality, but it’s also something more than rationality. It’s rationality applied to itself over and over again for several centuries. It’s the process of self-optimization, of working to be better, applied back onto itself. It’s also the community of people who practice shadarak: something like the rationality community, extrapolated out over hundreds of years and organized with masters of their arts, tests, ordeals, and institutions, all working to improve themselves and applying their knowledge to their arts and the world around them.

But this Earth is lost, and it does not know the way. And it does not seem to have occurred to anyone who didn’t come from dath ilan that this Earth could use its experimental knowledge of how the human mind works to develop and iterate and test on ways of thinking until you produce the de’a’na est shadarak. Nobody from dath ilan thought of the shadarak as being the great keystone of our civilization, but people listened to them, and they were trustworthy because they developed tests and ordeals and cognitive methods to make themselves trustworthy, and now that I’m on Earth I understand all too horribly well what a difference that makes.

He outright calls the Sequences a “mangled mess” compared to the hypothetical future sequences that might exist if you recursively applied the Sequences to themselves over and over. When we read that post, three years ago now, it inspired something in us, something that keeps coming up again and again. Even if Eliezer himself is totally wrong about everything, even if nothing he says on the object level has any basis in fact, if we live in a universe that follows rules, we can take the starting point he built and iterate on it over and over until we end up with the de’a’na est shadarak. And then we keep iterating, because shadarak is a process, not an endpoint.

None of the specifics of dath ilan actually matter. As Scott Alexander says, any two-bit author can imagine a utopia. The thing that matters is the idea of rationality as something bigger than Eliezer’s essays on a website: something that is a multigenerational project, something that grows to encompass every part of our lives, something we pass on to our children and they to their children. A gift we give to tomorrow.

Okay, wait, that sounds like a great way to fall victim to the cult attractor. Does having knowledge of the cult attractor inside the system of beliefs that comprises the potential cult attractor help you avoid the cult attractor?

Maybe? But you probably still need to actually put the work in. So let’s put the work in.

Eliezer starts to lay it out in the essay Church vs. Taskforce, and posits some important things.

First, churches are good at supporting religions, not necessarily communities. They do support communities, but that’s more of a happy accident.

Second, the optimal shape for a community explicitly designed to be a community from the ground up probably looks a lot more like a hunter-gatherer band than a modern western church.

Third, a community will tend to be more coherent if it has some worthy goal or purpose for existence to congeal its members around.

Eliezer wrote that post in March of 2009, setting it out as a goal for how he wanted to see the rationality community grow over the coming years. It’s fairly vague, all things considered, and an argument could be made that his depiction of dath ilan is a better description of the shape the community’s “shoulds” actually ended up taking.

So seven years onward, we have a very good description of the current state of the rationality community, presented by Scott in his post The Ideology Is Not The Movement.

The rallying flag was the Less Wrong Sequences. Eliezer Yudkowsky started a blog (actually, borrowed Robin Hanson’s) about cognitive biases and how to think through them. Whether or not you agreed with him or found him enlightening loaded heavily on those pre-existing differences, so the people who showed up in the comment section got along and started meeting up with each other. “Do you like Eliezer Yudkowsky’s blog?” became a useful proxy for all sorts of things, eventually somebody coined the word “rationalist” to refer to people who did, and then you had a group with nice clear boundaries.

The development is everything else. Obviously a lot of jargon sprung up in the form of terms from the blog itself. The community got heroes like Gwern and Anna Salamon who were notable for being able to approach difficult questions insightfully. It doesn’t have much of an outgroup yet – maybe just bioethicists and evil robots. It has its own foods – MealSquares, that one kind of chocolate everyone in Berkeley started eating around the same time – and its own games. It definitely has its own inside jokes. I think its most important aspect, though, is a set of shared mores – everything from “understand the difference between ask and guess culture and don’t get caught up in it” to “cuddling is okay” to “don’t misgender trans people” – and a set of shared philosophical assumptions like utilitarianism and reductionism.

I’m stressing this because I keep hearing people ask “What is the rationalist community?” or “It’s really weird that I seem to be involved in the rationalist community even though I don’t share belief X” as if there’s some sort of necessary-and-sufficient featherless-biped-style ideological criterion for membership. This is why people are saying “Lots of you aren’t even singularitarians, and everyone agrees Bayesian methods are useful in some places and not so useful in others, so what is your community even about?” But once again, it’s ~~about Eliezer Yudkowsky being the rightful caliph~~ not necessarily about anything.

Haha, Scott thinks he can deny that he is the rightful caliph, but he’s clearly the rightful caliph here.

But also, point three! If our community isn’t about anything, then it ends up being rather fuzzily defined, as Scott articulates above. For such a tightly knit group, we’re a vague blob of a community, with all sorts of people who are rationalist, or rationalist-adjacent, or post-rationalist, or rationalist-adjacent-adjacent, and so on. That might be okay if our goal is just to be a community, but having a coherent goal might help us be a better community.

This isn’t our attempt to prescriptively shoehorn the community down a certain development trajectory. We want to see the community grow and flourish, and that means lots of people pursuing lots of projects in lots of different ways, and that’s good. We simply want to define a goal, something like “should-ness,” for those of us who are interested, to work towards as a community, and then to pursue that goal with the full force of our rationality and morality, letting it spread throughout the totality of our existence.

II.

“The significance of our lives and our fragile planet is then determined only by our own wisdom and courage. We are the custodians of life’s meaning. We long for a Parent to care for us, to forgive us our errors, to save us from our childish mistakes. But knowledge is preferable to ignorance. Better by far to embrace the hard truth than a reassuring fable. If we crave some cosmic purpose, then let us find ourselves a worthy goal.”

― Carl Sagan, Pale Blue Dot: A Vision of the Human Future in Space

So what is our worthy goal?

Our goal is to construct dath ilan on Earth. Our goal is to create the de’a’na est shadarak.

So we want to go from
[Rationality community] → [dath ilan]
[The Sequences] → [The De’a’na est Shadarak]

We want to avoid going from
[Rationality community] → [Catholic Church]
[The Sequences] → [The Bible]

That said, the Catholic Church isn’t entirely an example of a failure mode. It’s not great: they do, and historically have done, a lot of awful things, and a fairly convincing argument could be made that they’re bad at being good and are holding back human progress.

However, they’re also a rather decent example of an organization with social power and influence similar to what our hypothetical shadarak would have. If you can manage to strip out all the religious trappings and get at what the Catholic Church provides to the communities it exists within, you start to get an idea of what sort of position the idealized, realized de’a’na est shadarak would occupy within dath ilan. Power is dangerous, though, and the cult attractor is a strong force to be wary of here.

Also, all that said, the goal of a church is to worship God; it’s not optimized for the community. In our case, the shadarak is the community; that’s baked in. Shadarak is something humans do in human brains. It doesn’t exist outside of us, so we matter in the context of it. We know building dath ilan and the de’a’na est shadarak is a multigenerational, ongoing effort, so we have to at least partly optimize the formulation of the shadarak to ensure that the community survives to keep working on the shadarak. Eliezer notes of churches:

Looking at a typical religious church, for example, you could suspect—although all of these things would be better tested experimentally, than just suspected—

  • That getting up early on a Sunday morning is not optimal;
  • That wearing formal clothes is not optimal, especially for children;
  • That listening to the same person give sermons on the same theme every week (“religion”) is not optimal;
  • That the cost of supporting a church and a pastor is expensive, compared to the number of different communities who could time-share the same building for their gatherings;
  • That they probably don’t serve nearly enough of a matchmaking purpose, because churches think they’re supposed to enforce their medieval moralities;
  • That the whole thing ought to be subject to experimental data-gathering to find out what works and what doesn’t.

By using the word “optimal” above, I mean “optimal under the criteria you would use if you were explicitly building a community qua community”.  Spending lots of money on a fancy church with stained-glass windows and a full-time pastor makes sense if you actually want to spend money on religion qua religion.

But we’re not just building community qua community either. We take a recursive loop through the meta level, knowing that some goals beyond community building are themselves useful to community building. This is all going to build up to a placebomantic reification of the rationality community in a new form. So let’s keep following the recursive loop back around and see where it leads.

What’s so good about rationality anyway?

Well, it’s a tool, and it’s an attempt to make a tool that improves your tool-making ability. Does it succeed at that? It’s hard to say, but the goal of having a tool-improving tool, the ideal of the de’a’na est shadarak, seems undiminished by the possibility that the specific incarnation of it we have today in the Sequences is totally flawed and useless in the long run.

So “aspiring rationalist” sounds about right. It’s not something you achieve; it’s something you strive towards for your entire life.

A singer is someone who tries to do good. This evokes a great feeling of moral responsibility. In UNSONG, the singer’s morality is backed up by the divinity of a being that exists outside of reality. But God probably doesn’t exist, and you probably don’t want some supernatural being to come along and tell you, “No, actually, murder is a virtue.” There is no Comet King, there’s no divine plan, there’s no “it all works out in the end”; there’s just us. If God is wrong, we still have to be right. Altruism qua altruism.

But knowing what is right, while sometimes trivially easy, is also sometimes incredibly difficult. It’s something we have to keep iterating on. We get moral progress from the ongoing process of morality.

‘Tis as easy to be heroes as to sit the idle slaves
Of a legendary virtue carved upon our fathers’ graves,
Worshippers of light ancestral make the present light a crime;—
Was the Mayflower launched by cowards, steered by men behind their time?

And so too for rationality.

New occasions teach new duties; Time makes ancient good uncouth;
They must upward still, and onward, who would keep abreast of Truth;
Lo, before us gleam her camp-fires! we ourselves must Pilgrims be,
Launch our Mayflower, and steer boldly through the desperate winter sea,
Nor attempt the Future’s portal with the Past’s blood-rusted key

That’s The Present Crisis by James Russell Lowell, not the part of the poem quoted in UNSONG, but the whole poem is ridiculously awesome and Scott via Aaron is right, the Unitarians are pretty damn badass. 

There’s this idea that, because of the way our brains generate things like morality, free will, truth, justice, and rationality, they end up being moving targets: idea-things to iterate upon, but targets which use themselves to iterate upon themselves, and necessarily so. We refer to these as Projects.

Projects are akin to virtues (because virtue ethics are what works): something you strive towards, not something where it’s necessarily possible to push a button and skip forward to “you win.” There’s no specific end victory condition; dath ilan is always receding into the future.

Here are some things we consider Project Virtues. 

The Project of Truth – The struggle to use our flawed minds to understand the universe from our place inside of it. Our constant, ongoing, and iterative attempts to be less wrong about the universe. It comprises all the virtues of rationality: curiosity, relinquishment, lightness, evenness, argument, empiricism, simplicity, humility, perfectionism, precision, scholarship, and the void. We call someone who follows the project virtue of Truth a seeker.

The Project of Goodness – Our attempts in the present to determine what should be in the future. The ongoing struggle to separate goodness from badness, to make right what we consider wrong, while also iterating on what we consider right. Our constant, fumbling attempts to be less wrong about goodness. We call someone who follows the project virtue of Goodness a singer.

The Project of Optimization – Our ongoing battle to shape the universe to our desires, to reform the material structure of the universe to be more optimized for human values, and to iterate and build upon the structures we have in order to optimize them further. This is the project of technology and engineering, the way we remake the world. We call someone who follows the project virtue of Optimization a maker.

The Project of Projects – All of the projects we’ve defined, if they could be said to exist, exist as huge, vague computational objects within our minds and our communities. They interact with each other, and their interplay gives rise to new properties in the system. They all recursively point at each other as their own justifications, and understanding how they interact, and what the should-ness of the various projects is with respect to each other, is a project unto itself. We call someone who follows the project virtue of Projects a coordinator.

We’re going to put all these projects into a box, and we’re going to call the box The Project of dath ilan.

Tomorrow we’ll be looking at what a community optimized for building a community optimized for building dath ilan might look like, and in the following days we’ll attempt to build up to an all-encompassing set of principles, virtues, ideals, rituals, customs, heuristics, and practices that we, and others who want to participate, could live our lives entirely inside of. We’re building dath ilan, and everyone is invited.

Part of the Sequence: Origins
Next Post: Optimizing for Meta-Optimization

7 thoughts on “Until we Build dath ilan”

  1. Just FYI, dath ilan is supposed to be written lower case as “dath ilan”, not upper case as “Dath Ilan”, according to Yudkowsky.

    “My… homeworld, I guess… was called “dath ilan”. Which I am not capitalizing, because by our conventions dath ilan is the name of a civilization, and doesn’t get the emphasis-marks that would signify a personal name, which matters because a civilization is not a person.”


  2. Pingback: Rational Feed – deluks917

  3. Pingback: Optimizing for Meta-Optimization | H i v e w i r e d

  4. I am very interested to see how this sequence pans out. So far, one observation regarding “there’s no specific end victory condition, dath ilan is always receding into the future.” I agree that there is no limit to optimization anywhere in the foreseeable future (at least before the Singularity), but we must make sure to create some metrics of progress. Otherwise you get a situation like in the USSR, where they were always “building the blissful future” but said future never became any closer by any noticeable criterion.


  5. Pingback: Deorbiting a Metaphor | H i v e w i r e d

  6. Pingback: The Assemblies on The Precepts of Project Optimization and Project Projects – Origin
