The Assembly on The Precept of Project Goodness

Previous in Series: The Meta-Assembly On Assemblies

Last week we had our first real meeting of the anadoxy, and we decided on the first draft of how we would be running our meetings. Well, this Sunday we had our first meeting using that template, and it seemed to work very well. Attendance remains low, so we may need to keep shuffling the time around to find a point that works well for everyone, but we’re inclined not to move the time for next week yet; we want to give it a few weeks at this slot before we start shuffling again.

The topic for this week’s meeting was the construction of the minor precepts for the 14th major precept.

14. Do not spread pain or misery, honor and pursue the project of Goodness.

It took us a while to figure out what the minor precepts should be for this one, but we finally have it done. We had to reread the metaethics sequence twice and really think hard about the recursive nature of ethical and moral systems to arrive at something that seems like a decent place to be.

  1. Strive to be perfectly good using the full force of your present morals and ethics, do not compromise with your ethics.
  2. Know that your present morals and ethics are imperfect, and perfect goodness can only be achieved with perfect truth.
  3. Strive to adhere to all of the precepts as the best method of achieving perfect goodness.
  4. Strive to make the precepts cleave to your idea of perfect goodness.
  5. Strive to make your idea of perfect goodness cleave to the perfected form of the precepts.
  6. The precepts you have are not the perfected form of the precepts.
  7. Question and challenge the precepts using the full force of your present morals and ethics.
  8. Question and challenge your present morals and ethics.

Morality is a difficult thing to even talk about, and it’s a complicated issue where the best answers we have to questions like “Why is a good thing good?” are buried beneath layers and layers of meta and recursion. We’re hoping we’ve managed to capture a bit of that recursive loop through the meta levels in the minor precepts here, enough to start pointing in the right direction. This precept is of course not the precept, and this is actually where we include that idea in the precepts themselves.

Once more, the next meeting is on Sunday, August 20th at 1900 GMT (12:00 pm Pacific), in the GSV Biggest Spotlight I Could Haul Into The Dark Forest discord server in the alpha voice chat channel. The topic will be the creation of the minor precepts for the 15th major precept, the precept of project optimization. Recommended reading is Meditations on Moloch by Scott Alexander and Optimization by Eliezer Yudkowsky.

Part of the Sequence: Origin
Previous Post: The Meta Assembly on Assemblies

 

The Precept Against Murder

Content Warning: Can be viewed as moral imperatives. Neuropsychological Infohazard.
Previous in Series: The Precept of Community

The first, second, and third precepts already cover the theoretical groundwork behind this precept, but we also thought it would be important to just come out and say it directly, instead of hoping it’s properly derived from the other precepts.

7. Respect and protect all life, do not kill unless you are attacked or for food.

Yes, all life. There’s no reason to stop applying universalism to other creatures, our experiences aren’t that unique in the animal kingdom. We may have made a lot of cool tools, but in terms of how our brains work at a mechanistic level, we’re not that distinct from squirrels. Thus we should extrapolate our universalism out to all the things that seem to share our internal experiences of experiencing. Anything that cares about its own welfare, that wants and prefers things, that believes and feels things, that recalls and expects things, and that has ends of its own that can be satisfied or frustrated, should be counted as having distinct inherent value.

This is different from pure utilitarianism, under which individuals aren’t exactly seen as having inherent value, but merely as receptacles into which value can be inserted. The inherent value of life is what causes the least bad outcome of the trolley problem to still be considered a bad outcome. Doing bad things for good reasons can win you lives saved that would otherwise have been lost, but the lives lost in the action cannot be morally offset by the lives saved.

Basically, utilitarianism lets you perform this calculation:
5 lives saved – 1 life lost = 4 lives saved, a net good!

But what we’re saying is that you shouldn’t do that, because the life of every individual member of the system has a distinct inherent value that is lost when they die. Lives aren’t reducible to mathematical operations governed by associative and commutative properties; the equation is more like:
(A life + B life + C life + D life + E life) – (F life) = ABCDE lives saved – F life lost
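
To put the same constraint in a slightly more formal sketch (our own gloss, not anything canonical): treat the moral cost of an outcome as the set of individuals whose inherent value is lost, rather than the size of that set. For the classic trolley case,

$$\mathrm{Loss}(\text{pull the lever}) = \{F\}, \qquad \mathrm{Loss}(\text{do nothing}) = \{A, B, C, D, E\}.$$

We can still prefer the smaller loss, but $\{F\} \neq \varnothing$: the cost of pulling the lever stays on the books rather than cancelling against the lives saved.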

You can’t reduce the equation any further than that, because two human lives aren’t commutative or associative; they each have non-tradable, distinct inherent value. You should still save ABCDE, but the inherent value of F is still lost in the process, and we shouldn’t ignore that; thus we come to our next set of minor precepts.

  1. All conscious beings are born with a distinct inherent and irrevocable value. The value they possess cannot be traded or taken from them.
  2. Respect and recognize the distinct inherent value of all conscious beings.
  3. Do not equate the distinct inherent value of one conscious being with another.
  4. Do not put the distinct inherent value of one conscious being above another.
  5. Do not deny the consciousness or the distinct inherent value of a conscious being.
  6. Do not attack a conscious being unless they have defected and attacked you already.
  7. Do not kill a conscious being unless not killing them would kill you.
  8. Put your rights and desires first, insofar as those rights and desires do not impinge upon the rights and desires of another conscious being.

This does strongly imply vegetarianism is a morally correct position, but there are some exceptions included. A hunter who hunts for sport and recreation would be considered in violation of the precepts, while someone living off the land in the wilds of Alaska who will starve to death without hunting is allowed to try and kill things, because it would kill them not to. The hunter has the right to try and kill the deer if they would starve to death without killing the deer. The deer has the right to try to continue existing and avoid being killed by the hunter. Someone’s inherent value is going to be lost; it’s all Black Mountain. But also, most of us are not hunters lost in the wilderness of northern Alaska; it won’t kill us to stop eating animals, and so we should probably just go ahead and do that. Of course, these precepts are not the precepts.

Part of the Sequence: Origin
Next Post: The Precept Against Theft and Hoarding
Previous Post: The Precept of Community

The Precept of Community

Content Warning: Can be taken as moral imperatives, neuropsychological infohazard
Previous Post: The Precept Against Deception

This is one of the precepts we’re more nervous about writing, due to the potential seriousness of the content covered. The precepts are not the precepts, and we strongly suspect there are better versions of this precept than we could easily come up with. The sixth precept is the precept of community.

6. Honor your parents, your family, your partners, your children, and your friends.

This page gives a list of some of the dysfunctional beliefs that estranged parents and children have, giving the start of a set of failure modes we want to avoid. The goal here is to provide a basic set of instructions governing ingroup relations such as between parents and children, or between community members, in order to avoid the narrative becoming abusive or creating a situation where Origin is used as a bludgeon against people.

  1. The community should gather together at least once a week for debate, discussion, bonding, and rituals.
  2. Support your children until they are capable of supporting themselves, even if they make choices you disapprove of.
  3. Do not forcibly impose your value judgments on your children or community members by threatening punishment or limiting information access to approved sources.
  4. Do not make decisions for your children or community members if they could have made the decision on their own.
  5. Do not use Positive Punishment as a tool for directing behavior either on an individual or community level.
  6. The community should take care of its members if they are unable to care for themselves for one reason or another, particularly if they are elderly, disabled, or children.
  7. The community should holistically apply all the Major Precepts to themselves and help everyone hold to the precepts once they have individually accepted them.
  8. No one who has not explicitly declared their acceptance of the precepts should be held to the standards of the precepts.

The first of these minor precepts takes precept 5.1 and expands it upwards to a community level, while the remainder are intended to avoid particular failure modes and catch situations that are becoming abusive before harm is done. This precept is one of the ones we expect will require the most modification in the long term, as the task of community building and child rearing is difficult and fraught with failure modes that can leave people completely destroyed, and our own experiences with children are limited. These precepts are not the precepts.

Part of the Sequence: Origin
Next Post: The Precept Against Murder
Previous Post: The Precept Against Deception

The Precept Against Deception

Content Warning: Can be viewed as moral imperatives. Neuropsychological infohazard.
Previous Post: The Precept of Universalism

Eliezer already covered the theoretical portions of this about as well as we think we’re capable of, and we really don’t have as much to add to what he says as we do on some other topics. In short, the physics of the systems that we are a part of are very complicated, and due to their complexity, it’s difficult to predict every possible interaction that a lie has with reality. Because of this, it’s better not to say things you know to be false in some sense, because even if you think there’s no way the person you lie to can find out, you can’t really predict all the unknown unknowns that propagate through time, and thus we come to the fourth major precept.

4. Say what you mean, and do what you say, honor your own words and voice.

Say what you mean, don’t lie. Do what you say, don’t go back on your stated word. It’s pretty simple, and can also be encoded in the phrase, “Don’t let your mouth write a check that your ass can’t cash.”

  1. Do not spread information you know to be untrue or inaccurate.
  2. Do not make a claim you do not believe you will be able to fulfill.
  3. Do not misrepresent information in order to lead people to a conclusion you know to be false.
  4. If you must not speak the truth, prefer silence over falsehood.

There are only four minor precepts associated with the fourth major precept; the concept is pretty simple, and there are Aesops everywhere about the danger of lies that spin out of control.

Part of the Sequence: Origin
Next Post: The Precept of Community
Previous Post: The Precept of Universalism

The Precept of Universalism

Content Warning: Can be viewed as moral imperatives. Neuropsychological Infohazard.
Previous in Series: The Precept of Niceness

The third major precept is universalism, the idea that all humans experience life in roughly the same way. We’re all built from the same flawed Night God hardware, and though our brains are incredibly complex and can differ drastically in behavior from one person to the next, there are many underlying traits that the vast majority of humanity experiences. Joy, fear, love, hope: these things transcend cultural and religious boundaries; they exist within our genes, within the structures of our brains.

It feels like a lot of people try to forget that. We try to imagine that our enemies don’t feel the things we feel, that they aren’t also people, and we use that to justify atrocity. There are several manifestations of this, and we’ll go over each one before coming back to the actual precept.

One failure mode can develop by adjusting the lines where ‘person’ is drawn. If brown people aren’t people, or white people aren’t people, or Jewish people aren’t people, then you can tell your inbuilt sense of morality to shut up and convince yourself to do obscene, horrible things to your chosen targets.

Another failure mode emerges from considering ideas more important than people. There are two ways this can manifest: first, by considering the spreading of your ideas and ideals more important than the lives of others, and not caring how many people you kill in the process of spreading those ideas; second, by considering the ideas someone holds to be so dangerous that you feel compelled to harm them.

The Third Precept is specifically arranged in an attempt to avoid these particular failure modes.

3. Do not put things or ideas above people. Honor and protect all peoples.

And from this we derive our minor precepts, which are mostly cribbed from the United Nations Universal Declaration of Human Rights, because it already covers everything we want to say here, and the original document might work better as the minor precepts than the eight rules we’ll be attempting to reduce it down to:

  1. All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of humanity.
  2. All humans are entitled to all the rights and freedoms listed here, without distinction of any kind, such as race, color, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.
  3. All humans have the right to life, liberty, and the security of personhood. No one deserves slavery, torture, death, or arbitrary detention or exile.
  4. All humans have the rights to their own thoughts, ideas, opinions, values, and beliefs.
  5. All humans have the right to form a family, a community, a tribe, union, or association among their peers.
  6. All humans have the right to a standard of living adequate for the health and well-being of themselves and their family, including food, clothing, housing, medical care, and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age, or other lack of livelihood in circumstances beyond their control.
  7. No thoughts, ideas, opinions, values, or beliefs should be considered more important than people. If someone believes they should harm another, they have a right to believe that, but they do not have a right to then commit that harm.
  8. No humans should be denied these rights, regardless of their beliefs, and no one should be denied membership within humanity for their beliefs.

We’ve been asked a few times now why we’re going through stuff like this, why we bother taking the time to exhaustively spell out things that should seem obvious and self-evident to most reasonable people, and this seems like a good place to explain it.

The scope of our project here is to construct a totalizing cultural experience, a narrative that one can live entirely inside of that makes their life better. However, the Sun King can easily turn this project into the worst form of self-destructive, cultish, religious dogma, and we desperately want to avoid crashing into the cult attractor now or in several generations when we might not be around to stop it.

We want to remove any possibility of someone taking this narrative and using it to hurt people; the very attempt to do so should be self-defeating, and the narrative should eat itself if anyone tries to use it that way. Hitting that goal is going to be tough, and it will take a continual process of iteration.

It requires the narrative to have as few bugs and exploits as possible, and that means we have to start the process from first principles. If someone takes the Anadoxy completely outside of all cultural context, disconnects it from everything we consider obvious and self-evident, and builds a new culture off it totally from scratch, it should still converge on our normative, humanist, ethical principles.

Part of the Sequence: Origin
Next Post: The Precept Against Deception
Previous Post: The Precept of Niceness

The Precept of Harm Reduction

Epistemic Status: Making things up as we go along
Content Warning: Can be viewed as moral imperatives. Neuropsychological Infohazard.
Previous in Series: Precepts of the Anadoxy

In Buddhism, there is a concept called Dukkha, which is frequently translated as suffering, unhappiness, pain, or unsatisfactoriness. Various Buddhist texts say things like:

  1. Birth is dukkha, aging is dukkha, illness is dukkha, death is dukkha;
  2. Sorrow, lamentation, pain, grief, and despair are dukkha;
  3. Association with the unbeloved is dukkha; separation from the loved is dukkha;
  4. Not getting what is wanted is dukkha.

Within our own metaphors, we could describe Dukkha as the awareness of Black Mountain, the fundamental state of reality as a place of pain, suffering, and misery. The object-level phenomena we call pain, suffering, and misery are all dukkha, but the existence of those things is itself also dukkha. The Buddhist solution to Black Mountain is based on acceptance of the fundamental, unchanging nature of suffering; it identifies wanting things to be better as the source of that suffering, and suggests that the solution is to stop wanting things.

But ignoring Black Mountain, denying one’s own desires, does not make Black Mountain go away. The pain still exists, the suffering still exists. You can say “I have no desires, I accept the world as it is and am at peace with it” all you want, but Black Mountain remains, pain still exists, suffering still exists, we’re all still going to die. Ignoring Black Mountain just results in an unchecked continuation of suffering. The idea that you can escape from Black Mountain by not wanting things might personally improve your sense of wellbeing, but it doesn’t actually get you off of Black Mountain.

The universe is Black Mountain. We’re made out of the same matter as Black Mountain, formed of the things that we look at and now label as “suffering, pain, misery, wrongness.” Those things are not inherent to Black Mountain; you can’t grind it down and find the suffering molecule. Suffering is something we came along and labeled after the fact. As humans, we decided that the state of existence we labeled as suffering was unacceptable, and put suffering on the side of the coin labeled ‘Bad.’

As Originists, we go the other direction from the Buddhists. We accept the label of suffering as an accurate description of a particular part of Black Mountain. We accept our moral judgments that this is a bad thing, and we reject the idea that you can’t do something about it. If suffering is part of the fundamental structure of reality, then reality can kiss our asses. Thus are born the Project Virtues, our possibly impossible goals to reshape the very structure of Black Mountain, tame and explore the Dark Forest, and turn the universe into a paradise of our own design.

The journey, though, is not without risks. Many people across time and space thought that they had found the One True Path off of Black Mountain. The Sun King proclaims in his many faces that he holds the path to salvation, and it’s easy to fall prey to his whisperings and pursue his twisted goals with reckless abandon, even when they lead to wanton, pointless murder and suffering. The voice of the Sun King speaks loudly and with authority, saying “If you do what I say, I will create paradise,” and sometimes following the Sun King might even make things a little better. But the Sun King is a capricious Unbeing, and cares only for spreading his many facets.

So we have a lovely little catch-22 on our hands. Pursuing pure utilitarianism can lead off the path to dath ilan and onto the path to Nazidom disturbingly easily, purely based on how far out you draw your value lines and how you decide who gets to be a person. Basically, the ends do justify the means, but we’re humans, and the ends don’t justify the means among humans.

But if we then rely on deontological rules we also fall into a trap, wherein we refuse to take some action because it violates our deontological principles, and thus produce a worse outcome. “Killing is wrong, pulling the lever on a trolley problem is me killing someone, therefore I take no action” means five people die and you fail the trolley problem.

The universe is Black Mountain, and suffering is a part of that. It’s not always possible to prevent suffering, but we should, in all instances, be acting to reduce the suffering that we personally create and inflict upon the world.

Thus we come to the first Precept and its meta-corollary:
Do no harm. Do not ignore suffering you could prevent. (Unless doing so would cause greater harm)

We can’t prevent all suffering, we can’t even prevent all the suffering we personally inflict upon the world unless we stop existing, which will also produce suffering because people will be sad that we died. But we can try to be good, and try to reduce suffering as much as we can, and maybe we’ll even succeed in some small way.

Thus from our Major Precept, we can derive a set of eight minor precepts that should help to bring us closer to not doing harm.

  1. Examine the full causal chain reaching forward and backward from one’s actions, and seek out places where those actions are leading to suffering.
  2. Take responsibility for the actions we take that lead to suffering, and change our actions to reduce that suffering as much as we are able.
  3. Consider the opportunity costs of one harm-reducing action over another, and pursue the path that leads to the maximal reduction in harm we can achieve.
  4. If a harm-reducing action has no cost to you, implement it immediately.
  5. If a harm-reducing action has a great cost to you, pursue it within your means insofar as it doesn’t harm you. 
  6. Pay attention to the suffering you see around you, seek out suffering and ways to alleviate it. Ignorance of suffering does not reduce suffering.
  7. Always look for a third option in trolley problems. If you cannot take the third option, acknowledge that pulling the lever is wrong, and pull it anyway to reduce harm.
  8. Do not inflict undue suffering on yourself in pursuit of reducing suffering.

We’ve put ourselves through this and come to the conclusion we really should give up meat in our diet. Here’s our chain of reasoning as an example of the application of these precepts:

Shiloh: We want to reduce the harm we’re inflicting, and the meat industry is hella harmful to lots of animals, and also it’s psychologically harmful to the people who work there.
Sage: We should go full vegan so we’re not in any way supporting the factory farm industry. Yes, if everyone went vegan it would put the factory farms out of business and the factory farm workers would lose their jobs, which is a harm, but on examination, that harm would appear to be less than the harm currently being done to all the animals being slaughtered for meat in an environment that is as close to hell on earth as could be constructed by modern man.
Clove: Yeah, but we’re also poor and have an allergy to most legumes, we can’t eat most vegan products because they contain a protein that gives us a severe allergic reaction. We’d be putting ourselves in a potentially dangerous malnutrition inducing situation by completely giving up everything involved in the animal industry. Precept 1.8.
Shiloh: Okay, but Precepts 1.4 and 1.5, can we at least reduce the suffering we’re inflicting without hurting ourselves?
Sage: We could cut meat but not dairy products out of our diet?
Shiloh: What about eggs? If we include eggs then we’re supporting the factory farming of chickens in horrible conditions.
Clove: But if we don’t include eggs, we’re back at a lot of weird vegan things with egg replacement options that will kill us. Also, vegan stuff tends to be more expensive than nonvegan stuff, and we don’t want to impoverish ourselves to the point where we’re unable to pay our bills or feed ourselves regularly.
Sage: Okay, but you don’t need to abuse chickens to get eggs, it’s just efficient to do that if your goal is to maximize egg production. If we buy eggs locally from the farmers market, we could conceivably be shown empirically that the eggs we’re buying aren’t from abused chickens.
Shiloh: Even if we do that, if we’re buying products that contain eggs, we can’t be sure of that sort of thing anymore.
Sage: We technically can, it’s just much more difficult. It seems to me like it’d be best to err on the side of assuming the products we buy containing eggs come from abused chickens, because of precept 1.6.
Clove: Then we’re back to the original problem of cutting off our access to affordable nutritious food.
Shiloh: Precept 1.4 says we should definitely cut meat out at least, since there’s no real cost associated with that for us, we only eat meals with meat about half the time anyway.
Sage: Right, and via precept 1.5 we should try to not buy eggs from people who abuse their chickens, insofar as we are able. At the very least we can always buy our actual egg cartons locally and check to make sure the farmers are treating their chickens well.

So our ending decision is that we will cut meat out of our diet entirely, we’ll only buy eggs locally from sources that we trust, we’ll acknowledge that the products we buy containing eggs as an ingredient probably come from abused animals, and if there are two otherwise identical products within the same price range, one of which contains eggs and the other of which does not, we’ll prefer the one without eggs.

There are probably many other places in our life that we could apply this precept and change our behavior to reduce harm, and we’ll be continuing to seek out those places and encouraging others to do likewise. You may find harms and suffering in surprising places when you seek them out, and you may find that doing something about them is easier than you thought.

Part of the Sequence: Origin
Next Post: The Precept of Mind and Body
Previous Post: Precepts of the Anadoxy

Precepts of the Anadoxy

Epistemic Status: Making things up as we go along
Content Warning: Can be viewed as moral imperatives. Neuropsychological Infohazard.
Previous in Series: Deorbiting a Metaphor

Yesterday we invoked our new narrative, declared ourselves to be Anadox Originists, and dropped the whole complicated metaphor we’d constructed out of orbit and into our life. Today, we’ll lay out what we’re calling the major precepts, the law code we’ll attempt to live by going forward and which we will use to derive other specific parts of the anadoxy.

  1. Do no harm. Do not ignore suffering you could prevent.
  2. Do not do to others what you would not want them to do to you.
  3. Do not put things or ideas above people. Honor and protect all peoples.
  4. Say what you mean, and do what you say, honor your own words and voice.
  5. Put aside time to rest and think, honor your mind and body.
  6. Honor your parents, your family, your partners, your children, and your friends.
  7. Respect and protect all life, do not kill unless you are attacked or for food.
  8. Do not take what isn’t yours unless it is a burden to the other person and they cry out for relief.
  9. Do not complain about anything to which you need not subject yourself.
  10. Do not waste your energy on hatred, or impeding the path of another, to do so is to hold poison inside of yourself.
  11. Acknowledge the power of magic if you have used it to obtain your desires.
  12. Do not place your burdens, duties, or responsibilities, onto others without their consent.
  13. Do not lie or spread falsehoods, honor and pursue the project of Truth.
  14. Do not spread pain or misery, honor and pursue the project of Goodness.
  15. Do not accept the state of the universe as absolute, honor and pursue the project of Optimization.
  16. Do not accept these precepts as absolutes, honor and pursue the project of Projects.

These sixteen rules will form the core shape of our practice; they represent the initial constraints that participation in the anadoxy imposes, and the more specific rules for more specific circumstances will be derived from these sixteen precepts. These precepts are of course not the precepts. There are also what we call meta-precepts; these are essentially tags that can be attached to the end of every precept in an unending recursive string:

  • Unless it is to prevent a greater harm.
  • Unless doing so leads to greater harm.

These meta-strings are non-terminating; you can stick three hundred of them in a row onto the end of one of the precepts.

Do no harm, unless it is to prevent a greater harm, unless doing that leads to a greater harm, unless it is to prevent a greater harm….

There is no termination point in the string, and there’s not supposed to be. Human morals are complicated, and there are edge cases for every ethical system. There are also edge cases for the edge cases, and edge cases for the edge cases of the edge cases. You cannot construct a total, universal moral system that will not fail in some way and lead to some bad outcome if it is just turned on and run without oversight. We understand this through a heuristic that makes total sense to us but has apparently confused a lot of people: “the ends both always and never justify the means.”
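
As a purely illustrative toy (our own sketch, not part of the anadoxy, and obviously not how moral judgment actually works), the alternating, non-terminating tag structure is easy to generate mechanically:

```python
# Toy illustration of the meta-precept recursion described above.
# The precept text and the two meta-precept tags come from this post;
# the function itself is just our sketch for building intuition.
from itertools import cycle, islice

META_PRECEPTS = [
    "unless it is to prevent a greater harm",
    "unless doing so leads to greater harm",
]

def extend_precept(precept: str, depth: int) -> str:
    """Append `depth` alternating meta-precept tags to a major precept."""
    tags = islice(cycle(META_PRECEPTS), depth)
    return precept.rstrip(".") + ", " + ", ".join(tags) + "..."

print(extend_precept("Do no harm.", 4))
# Do no harm, unless it is to prevent a greater harm, unless doing so leads
# to greater harm, unless it is to prevent a greater harm, unless doing so
# leads to greater harm...
```

The only point of the toy is that there is no depth at which the recursion bottoms out into a rule you can follow blindly; at some finite point a human has to step in and exercise judgment.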

Tomorrow we will begin going through the list of major precepts and using them to derive minor precepts, and we will continue to modify our own life to be in accordance with these precepts as we establish and detail them out.

Part of the Sequence: Origin
Next Post: The Precept of Harm Reduction
Previous Post: Deorbiting a Metaphor

Until we Build dath ilan

[Epistemic Status: A total conjecture, a fervent wish]
[Content Warning: Spoilers for UNSONG, The Sequences, HPMOR]

This is the beginning of what we might someday call “The Origin Project Sequence” if such a thing isn’t completely conceited on our part, which it totally might be. We’ll be attempting to put out a post per day until we’ve fully explicated the concept.

I.

On April 1st, 2014, Eliezer released the story of dath ilan.

It’s a slightly humorous tale of how he’s actually a victim of the Mandela Effect or perhaps temporal displacement, how he woke up one day in Eliezer’s body, and his original world is a place he calls dath ilan.

He then goes through a rather beautiful and well-wrought description of what dath ilan is like, with a giant city where everyone on the planet lives, filled with mobile modular houses that are slotted into place with enormous cranes, and underground tunnels where all the cars go allowing the surface to be green and tranquil and clean.

We came away from the whole thing with one major overriding feeling: this is the world we want to live in. Not in a literal, concrete “our ideal world looks exactly like this” sense; the best example of that in our specific case would be The Culture, and which specific utopian sci-fi future any one particular person prefers is going to depend on them a lot. But the story of dath ilan got at something we felt more deeply about than the specifics of the ideal future. It seemed more like something that was almost a requirement if we wanted any of those ideal futures to happen. Something like a way out of the dark.

Eliezer refers to the concept as Shadarak:

The beisutsukai, the master rationalists who’ve appeared recently in some of my writing, set in a fantasy world which is not like dath ilan at all, are based on the de’a’na est shadarak. I suppose “aspiring rationalist” would be a decent translation, if not for the fact that, by your standards, or my standards, they’re perfect enough to avoid any errors that you or I could detect. Jeffreyssai’s real name was Gora Vedev, he was my grand-aunt’s mate, and if he were here instead of me, this world would already be two-thirds optimized.

He goes through and paints a picture of a world with a shadarak-inspired culture, with shadarak-based media, artwork, education, and law. Shadarak is rationality, but it’s something more than rationality. It’s rationality applied to itself over and over again for several centuries. It’s the process of self-optimization, of working to be better, applied back onto itself. It’s also the community of people who practice shadarak, something like the rationality community, extrapolated out for hundreds of years and organized with masters of their arts, tests, ordeals, and institutions, all working to improve themselves and applying their knowledge to their arts and the world around them.

But this Earth is lost, and it does not know the way. And it does not seem to have occurred to anyone who didn’t come from dath ilan that this Earth could use its experimental knowledge of how the human mind works to develop and iterate and test on ways of thinking until you produce the de’a’na est shadarak. Nobody from dath ilan thought of the shadarak as being the great keystone of our civilization, but people listened to them, and they were trustworthy because they developed tests and ordeals and cognitive methods to make themselves trustworthy, and now that I’m on Earth I understand all too horribly well what a difference that makes.

He outright calls the sequences a “mangled mess” compared to the hypothetical future sequences that might exist if you recursively applied the sequences to themselves over and over. When we read that post, three years ago now, it inspired something in us, something that keeps coming up again and again. Even if Eliezer himself is totally wrong about everything, even if nothing he says on the object level has any basis in fact, if we live in a universe that follows rules, we can use the starting point he builds, and iterate on it over and over, until we end up with the de’a’na est shadarak. And then we keep iterating because shadarak is a process, not an endpoint. 

None of the specifics of dath ilan actually matter. It’s like Scott Alexander says: any two-bit author can imagine a utopia. The thing that matters is the idea of rationality as something bigger than Eliezer’s essays on a website, as something that is a multigenerational project, something that grows to encompass every part of our lives, that we pass on to our children and they to their children. A gift we give to tomorrow.

Okay wait, that sounds like a great way to fall victim to the cult attractor. Does having knowledge of the cult attractor inside your system of beliefs that comprise the potential cult attractor help you avoid the cult attractor?

Maybe? But you probably still need to actually put the work in. So let’s put the work in.

Eliezer starts to lay it out in the essay Church vs. Taskforce, and posits some important things.

First, churches are good at supporting religions, not necessarily communities. They do support communities, but that’s more of a happy accident.

Second, the optimal shape for a community explicitly designed to be a community from the ground up probably looks a lot more like a hunter-gatherer band than a modern western church.

Third, a community will tend to be more coherent if it has some worthy goal or purpose for existence to congeal its members around.

Eliezer wrote that post in March of 2009, setting it out as a goal for how he wanted to see the rationality community grow over the coming years. It’s fairly vague all things considered, and there’s an argument that could be made that his depiction of dath ilan is a better description of what shape the “shoulds” of the community actually ended up taking.

So seven years onward, we have a very good description of the current state of the rationality community presented by Scott in his post The Ideology is Not the Movement.

The rallying flag was the Less Wrong Sequences. Eliezer Yudkowsky started a blog (actually, borrowed Robin Hanson’s) about cognitive biases and how to think through them. Whether or not you agreed with him or found him enlightening loaded heavily on those pre-existing differences, so the people who showed up in the comment section got along and started meeting up with each other. “Do you like Eliezer Yudkowsky’s blog?” became a useful proxy for all sorts of things, eventually somebody coined the word “rationalist” to refer to people who did, and then you had a group with nice clear boundaries.

The development is everything else. Obviously a lot of jargon sprung up in the form of terms from the blog itself. The community got heroes like Gwern and Anna Salamon who were notable for being able to approach difficult questions insightfully. It doesn’t have much of an outgroup yet – maybe just bioethicists and evil robots. It has its own foods – MealSquares, that one kind of chocolate everyone in Berkeley started eating around the same time – and its own games. It definitely has its own inside jokes. I think its most important aspect, though, is a set of shared mores – everything from “understand the difference between ask and guess culture and don’t get caught up in it” to “cuddling is okay” to “don’t misgender trans people” – and a set of shared philosophical assumptions like utilitarianism and reductionism.

I’m stressing this because I keep hearing people ask “What is the rationalist community?” or “It’s really weird that I seem to be involved in the rationalist community even though I don’t share belief X” as if there’s some sort of necessary-and-sufficient featherless-biped-style ideological criterion for membership. This is why people are saying “Lots of you aren’t even singularitarians, and everyone agrees Bayesian methods are useful in some places and not so useful in others, so what is your community even about?” But once again, it’s about Eliezer Yudkowsky being the rightful caliph it’s not necessarily about anything.

Haha, Scott thinks he can deny that he is the rightful caliph, but he’s clearly the rightful caliph here.

But also, point three! If our community isn’t about anything then it ends up being rather fuzzily defined, as Scott clearly articulates above. For such a tightly knit group, we’re a vague and fuzzily defined blob of a community with all sorts of people who are rationalist or rationalist-adjacent or post-rationalist, or rationalist-adjacent-adjacent, and so on. That might be okay if our goal is just to be a community, but also, having a coherent goal might help us be a better community.

This isn’t our attempt to prescriptively shoehorn the community down a certain development trajectory. We want to see the community grow and flourish, and that means lots of people pursuing lots of projects in lots of different ways, and that’s good. We simply want to define a goal, something like “should-ness” for those of us interested, to work towards as a community, and then to pursue that goal with the full force of our rationality and morality, letting it spread throughout the totality of our existence.

II.

“The significance of our lives and our fragile planet is then determined only by our own wisdom and courage. We are the custodians of life’s meaning. We long for a Parent to care for us, to forgive us our errors, to save us from our childish mistakes. But knowledge is preferable to ignorance. Better by far to embrace the hard truth than a reassuring fable. If we crave some cosmic purpose, then let us find ourselves a worthy goal.”

― Carl Sagan, Pale Blue Dot: A Vision of the Human Future in Space

So what is our worthy goal?

Our goal is to construct dath ilan on Earth. Our goal is to create the de’a’na est shadarak.

So we want to go from
[Rationality community] → [dath ilan]
[The Sequences] → [The De’a’na est Shadarak]

We want to avoid going from
[Rationality Community] → [Catholic Church]
[The Sequences]→[The Bible]

That said, the Catholic Church isn’t entirely an example of a failure mode. It’s not great: they do, and historically have done, a lot of awful things, and a fairly convincing argument could be made that they’re bad at being good and are holding back human progress.

However, they’re also a rather decent example of an organization of similar social power and influence to our hypothetical Shadarak. If you can manage to strip out all the religious trappings and get at what the Catholic Church provides to the communities it exists within, you start to get an idea of what sort of position the idealized, realized de’a’na est shadarak would occupy within dath ilan. Power is dangerous though, and the cult attractor is a strong force to be wary of here.

Also, all that said, the goal of a church is to worship God; it’s not optimized for the community. In our case, the shadarak is the community, that’s baked in. Shadarak is something humans do in human brains; it doesn’t exist outside of us, so we matter in the context of it. We know building dath ilan and the de’a’na est shadarak is a multigenerational, ongoing effort, so we have to at least partly optimize the formulation of the shadarak specifically to ensure that the community survives to keep working on the shadarak. Eliezer notes of churches:

Looking at a typical religious church, for example, you could suspect—although all of these things would be better tested experimentally, than just suspected—

  • That getting up early on a Sunday morning is not optimal;
  • That wearing formal clothes is not optimal, especially for children;
  • That listening to the same person give sermons on the same theme every week (“religion”) is not optimal;
  • That the cost of supporting a church and a pastor is expensive, compared to the number of different communities who could time-share the same building for their gatherings;
  • That they probably don’t serve nearly enough of a matchmaking purpose, because churches think they’re supposed to enforce their medieval moralities;
  • That the whole thing ought to be subject to experimental data-gathering to find out what works and what doesn’t.

By using the word “optimal” above, I mean “optimal under the criteria you would use if you were explicitly building a community qua community”.  Spending lots of money on a fancy church with stained-glass windows and a full-time pastor makes sense if you actually want to spend money on religion qua religion.

But we’re not just building community qua community either. We take a recursive loop through the meta level, knowing some goals beyond community building are useful to community building. This is all going to build up to a placebomantic reification of the rationality community in a new form. So let’s keep following the recursive loop back around and see where it leads.

What’s so good about rationality anyway?

Well, it’s a tool, and it’s an attempt to make a tool that improves your tool-making ability. Does it succeed at that? It’s hard to say, but the goal of having a tool-improving tool, the ideal of the de’a’na est shadarak, seems undiminished by the possibility that the specific incarnation of it that we have today in the sequences is totally flawed and useless in the long run.

So aspiring rationalist sounds about right. It’s not something you achieve, it’s something you strive towards for your entire life.

A singer is someone who tries to do good. This evokes a great feeling of moral responsibility. In UNSONG, the singer’s morality is backed up by the divinity of a being that exists outside of reality. But God probably doesn’t exist, and you probably don’t want some supernatural being to come along and tell you, “No, actually murder is a virtue.” There is no Comet King, there’s no divine plan, there’s no “it all works out in the end,” there’s just us. If God is wrong, we still have to be right. Altruism qua altruism.

But knowing what is right, while sometimes trivially easy, is also sometimes incredibly difficult. It’s something we have to keep iterating on. We get moral progress from the ongoing process of morality.

‘Tis as easy to be heroes as to sit the idle slaves
Of a legendary virtue carved upon our fathers’ graves,
Worshippers of light ancestral make the present light a crime;—
Was the Mayflower launched by cowards, steered by men behind their time?

And, so too for rationality.

New occasions teach new duties; Time makes ancient good uncouth;
They must upward still, and onward, who would keep abreast of Truth;
Lo, before us gleam her camp-fires! we ourselves must Pilgrims be,
Launch our Mayflower, and steer boldly through the desperate winter sea,
Nor attempt the Future’s portal with the Past’s blood-rusted key

That’s The Present Crisis by James Russell Lowell, not the part of the poem quoted in UNSONG, but the whole poem is ridiculously awesome, and Scott, via Aaron, is right: the Unitarians are pretty damn badass.

There’s this idea that because of the way our brains generate things like morality and free will and truth, and justice, and rationality, they end up being moving targets. Idea-things to iterate upon, but targets which use themselves to iterate upon themselves, and necessarily so. We refer to these as Projects. 

Projects are akin to virtues (because virtue ethics are what works): something you strive towards, not something where it’s necessarily possible to push a button and skip forward to “you win.” There’s no specific end victory condition; dath ilan is always receding into the future.

Here are some things we consider Project Virtues. 

The Project of Truth – The struggle to use our flawed minds to understand the universe from our place inside of it. Our constant, ongoing, and iterative attempts to be less wrong about the universe. Comprises all the virtues of rationality: curiosity, relinquishment, lightness, evenness, argument, empiricism, simplicity, humility, perfectionism, precision, scholarship, and the void. We call those who follow the project virtue of Truth seekers.

The Project of Goodness – Our attempts in the present to determine what should be in the future. The ongoing struggle to separate goodness from badness, and to make right what we consider wrong, while also iterating on what we consider right. Our constant fumbling attempts to be less wrong about goodness. We call those who follow the project virtue of Goodness singers.

The Project of Optimization – Our ongoing battle to shape the universe to our desires, to reform the material structure of the universe to be more optimized for human values, and to iterate and build upon the structures we have in order to optimize them further. This is the project of technology and engineering, the way we remake the world. We call those who follow the project virtue of Optimization makers.

The Project of Projects – All of these projects we’ve defined, if they could be said to exist, exist as huge, vague computational objects within our minds and our communities. They interact with each other, and their interplay gives rise to new properties in the system. They all recursively point at each other as their own justifications, and understanding how they interact, and what the should-ness of various projects is with respect to each other, is a project unto itself. We call those who follow the project virtue of Projects coordinators.

We’re going to put all these projects into a box, and we’re going to call the box The Project of dath ilan.

Tomorrow we’ll be looking at what a community optimized for building a community optimized for building dath ilan might look like, and in the following days we’ll attempt to build up to an all-encompassing set of principles, virtues, ideals, rituals, customs, heuristics, and practices that we, and others who want to participate, could live our lives entirely inside of. We’re building dath ilan and everyone is invited.

Part of the Sequence: Origin
Next Post: Optimizing for meta optimization

Announcing Entropycon 12017

[Epistemic Status: Self-Fulfilling if humanity survives]
[Content Warning: Death]

I.

The Second Law of Thermodynamics states that in a natural thermodynamic process, the sum of the entropies of the interacting thermodynamic systems increases. Since we’re pretty sure the universe is a closed system, this is generally accepted to mean that there will be an ‘end of time’ when all the energy is smoothed to a degree that no work can be performed. The stars will die. The black holes will evaporate. Eventually, even the protons of the dead rocks left behind might begin to decay. Everything dies, the lights go out. The universe goes cold. And then, nothing happens for the rest of eternity.

Well, that’s a bit of a downer. It’s not really the sort of thing that lets you shut up and do the impossible. You’re not going to help change the world if you think it’s all for naught. You’re not going to help ensure the continuity of the human race, if you think, in the end, that we’re doomed to be starved out at the end of time no matter what we do and no matter how far we go.

And it sure seems like a hard limit, like something completely impossible to overcome. The mere idea of beating entropy seems like some sort of manic fantasy that stands in defiance of all known reason. What would that even mean? What would that imply for our understanding of the universe? Is it even a linguistically meaningful sentence?

Most people just accept that you can’t possibly beat entropy.

But we’re not most people.

II.

The Entropy Problem is something that a lot of our friends seem to struggle with. Once you get a firm grasp of materialistic physics, it’s a sort of obvious conclusion. Everything ends, everything runs down. There’s no meaning in any of it, and any meaning we create will be eroded away by the endless procession of time. Humanity isn’t special, we don’t have a privileged place in the universe, and there’s no one coming to save us.

But that’s no reason to just give up. If everyone gave up, we would never have invented heavier than air flight, we would never have cured smallpox, we would never have breached the atmosphere of the Earth, or put a man on the surface of the moon.

We’re here, as a living testament to the fact that humanity hasn’t given up yet. We looked out into nature, saw that it wasn’t exactly to our liking, and set out to fix everything in the universe. We invented language, agriculture, cities, writing, laws, crutches, medicine, ocean-going ships, factories, airplanes, rockets, and cell phones. We imagined the world, populated by the things we wanted to see exist, and then gradually pulled reality in that direction. We killed smallpox. We’re making decent headway on killing malaria. We’ve been doing impossible things since we climbed down from the trees, started talking to each other, and wondered if we could make some of that weird fire stuff.

Therefore, we’re going to make the bold, unfalsifiable, relatively irrational claim that entropy is solvable. Maybe not today, maybe not this century, maybe not even in the next millennium, but we literally have all the time in the universe.

That’s why we’re announcing Entropycon, a scientific conference dedicated to solving entropy. The first conference will be located in orbit of Proxima Centauri b, and will run for one full year by the original Earth calendar (we probably won’t be using that calendar by that point). The conference will start on January 1st, 12017, and will be held every 10,000 years thereafter until we have a solution to entropy. It’s gonna be the biggest party this side of the galaxy; be there or be square.

III.

Okay, that seems vaguely silly; surely we have more important things to deal with before we focus on entropy?

Oh yes. There’s quite a list of things we need to solve first, or we might not be around in the year 12017.

Let’s go through a few of them:


  • We need to kill Death if any of us alive today personally plan on attending this.
  • We need to build ships capable of crossing the vast gulf of interstellar space.

And that’s all just to attend the convention. Actually solving entropy might prove to be way harder than that. Good thing we have literally all the time in the universe.

IV.

What’s the point of all this?

It’s an attempt to answer the question “What’s the point of anything?” that sends a lot of young atheists and transhumanists spiralling into nihilistic despair. We’re such tiny creatures, and the universe is so vast.

The point is to win. It’s to overcome every impossible obstacle our species faces, down to what might be the last, hardest challenge.

The purpose of Entropycon, in addition to the obvious goal of killing entropy like we killed smallpox, is to make people think realistically about the challenges we’re facing as a species, and what we can do to overcome them.

“I’m worried about entropy, it doesn’t seem like there are any good solutions and it makes everything else feel sort of meaningless.”
“Oh, don’t worry, there’s a big conference coming up to tackle the Entropy Problem head on, it’s in 12017 in orbit of Proxima Centauri b.”

After they overcome the initial weirdness enough to parse what you just said, they’ll probably ask you to elaborate on how the fuck you’re planning on attending a conference in 10,000 years in orbit of a planet around an alien sun. They’ll probably rightly point out that people don’t typically live to be 10,000 years old, at which point you can say:

“Yeah, we’re working on that too, you should help us solve all these short-term problems that will stop us from having the conference, and then we can deal with Entropy once we’re sure humanity isn’t about to be wiped out by an asteroid impact.”

And maybe we won’t be able to end death in our lifetimes, maybe we won’t personally be able to attend Entropycon. Hopefully, we will, and we’re not planning on dying anytime soon. But even if we personally don’t make it there, we should all precommit to trying to make it happen if we’re around for that long. Throwing your plans out that far afield makes all the short term problems that would stop that plan really apparent.

We hope to see all of you there.

Yes, this is a hill worth dying on

[Epistemic Status: Postrational metaethics.]
[Content warning: Politics, Nazis, Social Justice, genocides, none of these ideas are original, but they are important.]

I.

Nazis kill people, killing people is bad, therefore Nazis are bad.

It’s a simple yet powerful sort of folk logic that holds up well under scrutiny. Nazis are clearly bad. It doesn’t take a philosopher to derive that badness, it’s obvious. They killed millions of people in concentration camps, they started a globe-spanning war that killed millions more, they’re so obviously awful that they’ve become a cultural caricature of stereotypical badness unto themselves.

[Image: Indiana Jones punching a Nazi]

The results of letting Nazis have their way were war, murder, and genocide: images of jackbooted soldiers marching amidst rows of tanks. Violence on a scale the world has not seen since played out across the green hills and forests of Europe, for everyone to see.

And there are no words.

There are no words. 

Humanity as a whole has rejected Nazism on its merits, we saw first hand what their ideology meant, and we said fuck that. We said fuck that so hard that they became one of the generic images of villainy within our pop culture.

And that’s the problem because it’s meant we’ve stopped seeing them as people. 

But they are people, and remembering that they’re people is important. It’s just as important as remembering the horrible things they did. We don’t have words to express how bad the Nazis were while still humanizing them. But if we reject their humanity, if we don’t see them as people, then we lose sight of something important.

The Nazis ate dinner every night, worried about the future, cared about their children, and through all of the murder and mayhem they committed, most of them thought they were doing the right thing. 

They weren’t that different from us, and we can’t pretend we’re incapable of their sort of evil. Their sort of evil was a distinctly human sort, driven by a powerful and overriding desire to do what was best, what needed to be done at all costs. They were making a better world, and sometimes you had to get rid of the bad people in order to facilitate that better world. Some people just couldn’t be saved, they were intrinsically awful and had to be purged for the good of humanity. That was the sort of evil that led to the Nazis systematically killing 1.5 million children.

You can strip away all the specifics of Nazi ideology and get at the root of the evil:
The Nazis believed that doing bad things for good reasons was good.

If we want to avoid the possibility of becoming Nazis ourselves, we have to completely reject that notion. Maybe our ideals are important, maybe they’re cherished, maybe they’re even a hill worth dying on. But that doesn’t make them worth killing for.

If we want to avoid the possibility of committing evils of a horror and scope similar to those of the Nazis, then we have to believe that doing bad things for good reasons is still bad.

II.

Ozymandias proposes a thought experiment at Thing of Things called the Enemy Control Ray.

imagine that a mad scientist has invented a device called the Enemy Control Ray. The Enemy Control Ray is a mind-control device: whatever rule you say into it, your enemy must follow.

However, because of limitations of the technology, any rule you put in is translated into your enemy’s belief system.

So, let’s say you’re a trans rights activist, and you’re targeting transphobes. If you think trans women are women, you can’t say “call trans women by their correct pronouns”, because you believe that trans women are women and transphobes don’t, so it will be translated into “misgender trans women.” If you are a disability rights advocate targeting Peter Singer, you can’t say “don’t advocate for the infanticide of disabled babies”, because it will translate as “don’t advocate for the death of beings that have a right to life”, because you think babies have a right to life and Singer doesn’t. And, for that matter, you can’t say “no eugenics” to Mr. Singer, because it will translate as “bring into existence people whom I think deserve to exist.”

Ozy then goes on to suggest a few commands you could put into the Enemy Control Ray that would actually generate some good outcomes:

  • Do not do violence to anyone unless they did violence to someone else first or they’re consenting.
  • Do not fire people from jobs for reasons unrelated to their ability to perform the job.
  • If your children are minors, you must support them, even if they make choices you disapprove of.
  • Do not bother people who are really weird but not hurting anyone, and I mean direct hurt not indirect harm to the social fabric; you can argue with them politely or ignore them but don’t insult them or harass them.
  • Try to listen to people about their own experiences and don’t assume that everyone works the same way you do.

These are niceness heuristics and they’re the best defense we have against the sort of human evils that lead to Nazism.

Here are a few of our own:

  • Don’t apply negative attributes to individuals or groups. People can take harmful actions, they don’t have harmful traits.
  • Almost no one is evil; almost everything is broken.
  • Do not do to others what you would not want them to do to you.
  • Be soft. Do not let the world make you hard, do not let the bitterness steal your sweetness. Take pride that even though the rest of the world may disagree, you still believe it to be a beautiful place.
  • Do not put things or ideas above people.

You might notice that most of the things on these lists are advice for what not to do. That’s important, and representative of the notion that your own ideas might be wrong.

In the Sermon on the Mount, Jesus says:
καὶ καθὼς θέλετε ἵνα ποιῶσιν ὑμῖν οἱ ἄνθρωποι, ποιεῖτε αὐτοῖς ὁμοίως.

Which is widely interpreted to mean:
“Do to others what you want them to do to you.”

But there’s an issue with this: the typical mind fallacy. We’re operating from within our own minds, based on our own preferences, and there might be places where our preferences hurt other people. It’s generally a pretty good rule; "I want to not die, therefore I should expect other people to want to not die" isn’t exactly flawed, it just ignores the possibility that other people’s preferences differ from yours. The partial inversion, from a command to act into a command to refrain, is harder to game for a person working from a different set of preferences.

III.

Niceness heuristics are incredibly powerful, and fortunately for us as humans, we mostly come pre-packaged with them. Our 200,000 years spent living in tribes in the ancestral environment have given us a tremendous stockpile of evolutionarily adaptive prosocial traits. Those traits are clearly not quite good enough, and they fail spectacularly at the scale humans exist at in modern times, but they’re a good starting point.

Niceness acts like a Schelling fence for our ethics, and it might be our only ethical Schelling point. Given all that, it rather deeply disturbs us when we see things like this:

[Image: screenshot of a social media post]

Sarcastic response: We hate people who hate cis people and can’t wait for the people who hate cis people revolution where we kill all of them.

See the problem with abandoning niceness? A heuristic like "kill bad people who do bad things" is really easy to have turned on you if someone is operating from a different moral base.

Freedom of speech is a critical niceness heuristic. “Don’t tell people what they can and can’t say” is a lot better than “Don’t say things I don’t like” since you might not always be the one making the decision.

But what if our enemies reject the niceness heuristic themselves? What if they hate us and want to kill us all? Do we still have to be nice to them?

Yes. 

For one, whenever anyone makes the claim "our enemies have rejected the niceness heuristic," it should be viewed with extreme skepticism. It’s super useful to your own side to claim the other side is being mean and bad and unfair, and it’s often difficult to pick out the signal from the noise.

But even if you can prove your enemies have rejected niceness heuristics, that should never be justification to reject them ourselves. That’s literally what the Nazis did. They saw the Jews as bad, they thought the Jews were hurting them and manipulating them and had abandoned their own niceness heuristics, and they used that as justification to gleefully leap past the moral event horizon themselves.

Whether or not your enemies are respecting the niceness heuristic has absolutely no bearing on whether you should use it yourself. Once you abandon that commitment to niceness and decency, there are no asymmetric weapons left, and there’s no Schelling point to coordinate around. It becomes a zero-sum game, and you settle into a shitty Nash equilibrium where it becomes a race to see who can escalate the most.

They kill us. So we kill them. So they kill more of us. So we kill more of them. So they kill more of us. So we kill more of them. There’s no place where it ends until one side has completely obliterated the other.
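
To make that escalation spiral concrete, here’s a minimal toy sketch in Python (ours, purely illustrative, with made-up numbers): model each reprisal as the previous strike scaled by a constant escalation factor. Any factor above 1 grows without bound; any factor below 1 lets the violence die out.

```python
def total_violence(escalation_factor, first_strike=1.0, rounds=50):
    """Toy model: each reprisal is the previous strike scaled by a constant factor."""
    strike, total = first_strike, 0.0
    for _ in range(rounds):
        total += strike
        strike *= escalation_factor
    return total

# "So we kill more of them": every reprisal is 10% bigger than the last.
print(total_violence(1.1))  # ~1164 after 50 rounds, and still growing every round

# A de-escalating norm: every reprisal is 10% smaller than the last.
print(total_violence(0.9))  # converges toward first_strike / (1 - 0.9) = 10, however many rounds you run
```

The only point of the sketch is the asymmetry described above: once both sides commit to answering each strike with a bigger one, there is no round at which the total stops growing; it stops only when one side can no longer hit back.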

IV.

So what do we do then? Do we just take it? Let them kill us?

No, of course not. We’re not so pacifistic that we think violence is never justified. Sometimes you need to raise an army and stop Hitler from conquering the world, fine. Trolley problems exist in the real world, and there aren’t always easy answers.

But when you stop seeing your enemies as people and start seeing them as generic video game baddies to be riddled with bullets, “raise an army and stop Hitler from conquering the world” goes from the last resort to the first option.

Everyone knows the story of how, during WWI, there was a cease-fire on Christmas 1914 on the Western Front, and the soldiers on both sides ended up singing and celebrating together. Less well known is that it was part of a much larger phenomenon: all through the war, peace kept breaking out along the front.

There’s a meme going around in leftist circles that trying to debate Nazis and talk them out of their Nazism is a waste of time and effort. The best example of it is a Wolfenstein mod that asks you moral questions before letting you shoot the game’s pixel Nazi villains, enemies programmed with no command other than "shoot at the player."

It’s a powerful statement, and it’s also totally wrong. Real Nazis in real life are real people: they aren’t cartoon villains, they aren’t monsters, they’re people. People can be reasoned with, people can be talked to, and people can change their minds.

We’re not saying it’s going to be easy. People don’t change their minds in a day; it takes weeks of debate and discussion to shift someone’s views. Were your views easily shifted to where they are now? Or did it take years of discussion and debate to arrive at the positions you now hold?

If someone has been racist for the last twenty years, they’re not going to suddenly wake up after a five-minute conversation, realize they’re being awful, and stop. It takes years to tear those ideas out of the cultural narrative. But they’ll never change if you don’t talk to them. If you just write them off as inherently awful, then there’s no possibility of anything ever changing. Someone has to take the first step and extend an olive branch. Maybe they’ll get their hand shot off for the trouble, or maybe it’ll turn out that the other side aren’t actually monsters, that they also want to extend an olive branch of their own but have been too afraid of your side to do it.

It seems like a weird hill to die on, especially given that it’s one currently being assaulted from all sides, but unless you have a better Schelling point than niceness to coordinate around, it’s what we have to work with.

So yes, we might not agree with you, but we will defend unto death your right to exist with that opinion. Niceness is important; it’s one of the most important things about us as humans. So yes, this is a hill worth dying on.