Highly Advanced Tulpamancy 101 For Beginners

[Epistemic Status: Vigorously Researched Philosophical Rambling on the Nature of Being]
[Content warning: Dark Arts, Brain Hacking, Potentially Infohazardous]

Earlier this week we wrote about our own experiences of plurality, and gave a rough idea of how that fit into our conceptualization of consciousness and self. Today we’ll be unpacking those ideas further and attempting to come up with a coherent methodology for self-modification.

I.

Brains are weird. They’re possibly the most complicated things we’ve ever discovered in the universe. Our understanding of neuroscience is currently rather primitive, and the replication crisis has pretty thoroughly demonstrated that we still have a long way to go. Until cognitive neuroscience fully catches up with psychology, maps the connectome, and is able to address the hard problems of consciousness, a lot of this stuff is going to be blind elephant groping. We have lots of pieces of the picture of consciousness, things like conditioned responses, cognitive biases, mirror neurons, memory biases, heuristics, and memetics, but even all these pieces together have yet to actually yield us a complete image of an elephant.

Ian Stewart is quoted as saying:

“If our brains were simple enough for us to understand them, we’d be so simple that we couldn’t.”

In a sense, this is necessarily true. It’s not possible to model a complex chaotic system with fewer parts than the system contains. A perfect map of the territory, accurate down to the quark, would be the size of the territory. It’s not possible to perfectly represent the brain within the architecture of the brain. You couldn’t use your brain to track the individual firing of billions of neurons in real time and anticipate what your brain is going to do; the full model takes more space than the architecture has.

The brain clearly doesn’t model itself as a complex and variable computing substrate built out of tiny interacting parts; it models itself as “a person” existing as a discrete object within the territory. We construct this map of ourselves the same way we construct all our maps of the territory, through intensional and extensional definitions.

Your mother points at you as a child and says words like, “You, child, daughter, girl, Sara, person,” and points at herself and says words like “I, me, mother, parent, girl, Tina, person,” thus providing the initial extensional definitions onto which we can construct intensional definitions. This stuff gets baked into human thinking really deeply and really early, and most children develop a theory of mind as young as four or five years of age.

If you ask a five-year-old “What are you?” you’ll probably get the extensional definition their parents gave them as a self-referent. This is the identification of points in thingspace that we extensionally define as ourselves. From that, we begin to construct an intensional definition by defining the conceptual closeness of things to one another, and their proximity to the extensional definitions.

With a concept like a chair, the extensional category boundaries are fairly distinct. Not completely distinct, of course. For any boundary you draw around a group of extensional points empirically clustered in thingspace, you can find at least one exception to the intensional rule you’re using to draw that boundary. That is, regardless of whatever simple rule you’re using to define chair, there will be either chairs that don’t fit the category, or things within the category that are not traditionally considered chairs, like planets. You can sit on a planet; is it a chair?

This gets back to how an algorithm feels from the inside. The neural architecture we use is fast, scalable, and computationally cheap. Evolution sort of demands that kind of thing. We take in all the information we can about an object, and then the central node decides whether or not the object we’re looking at is a chair. Words in this case act as a distinct pointer to the central node. Someone shouts “tiger!” and your brain shortcuts to the tiger concept, skipping all the intervening identification and comparison.

There are also some odd things in how humans relate concepts to one another. There’s an inherent asymmetry in set identification. When asked to rate how similar Mexico is to the United States, people gave consistently higher ratings than people asked to rate how similar the United States is to Mexico (Tversky and Gati, 1978). The best way to explain why this happens is that the semi-conscious sorting algorithms we use run on a concept-by-concept basis. For every categorizable idea, the brain runs an algorithm like this:
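A minimal code sketch of that kind of weighted central-node algorithm looks something like the following; the features, the weights, and the “blegg” example are invented purely for illustration, not a model of actual neural machinery.

```python
# A minimal sketch of a weighted central-node classifier, in the spirit of the
# Blegg/Rube network. Everything here (features, weights, values) is an invented
# illustration.

def category_activation(observations, feature_weights):
    """Return how strongly a set of observed features activates a central category node.

    observations:    dict of feature name -> observed value in [0, 1]
    feature_weights: dict of feature name -> how much this category cares about it
    """
    total_weight = sum(feature_weights.values()) or 1.0
    score = sum(weight * observations.get(feature, 0.0)
                for feature, weight in feature_weights.items())
    return score / total_weight

# "Blegg" mostly cares about color and vanadium content; fur barely matters.
blegg_weights = {"blue": 0.35, "egg_shaped": 0.30, "contains_vanadium": 0.30, "furred": 0.05}

mystery_object = {"blue": 1.0, "egg_shaped": 1.0, "contains_vanadium": 1.0, "furred": 0.0}
print(category_activation(mystery_object, blegg_weights))  # ~0.95: the central node says "blegg"
```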

When comparing Bleggs and Rubes, the set of traits being compared is fairly similar, so there’s not much apparent asymmetry. The asymmetry emerges when we start comparing things that are not particularly alike. Each of the exterior nodes in the network above is going to have a weight with regard to how important it is in our categorization scheme, and if we consider different things important for different categories, it’s going to produce weirdness.

Are whales a kind of fish? That depends on the disguised query you’re attempting to answer. Whales have little hairs, give live birth, and are phylogenetically mammals rather than fish, but if the only thing you care about is whether they’re found in water or on land, then the ‘presence of little hairs’ node is going to have almost no weight compared to the ‘found in the ocean’ node. If the only thing that really matters in Blegg/Rube sorting is the presence of vanadium or palladium, then that node is going to weigh more heavily in your classification system than other nodes such as texture or color.

When comparing very different things, different nodes might be considered more important than others, and the things we consider important in the set classification for “asteroid” are completely different from the things we consider important in the set classification for “rock.” Given that, it’s possible for someone to model asteroids as being more closely related to rocks than rocks are to asteroids.
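A toy version of that asymmetry, with made-up features and weights (not data from the Tversky and Gati study), might look like this:

```python
# A toy version of the similarity asymmetry: each reference category brings its
# own feature weights, so similarity(A -> B) need not equal similarity(B -> A).
# All features and weights are invented for illustration.

def similarity_to(thing_features, reference_weights):
    """How well a thing's features satisfy the weighted checklist of a reference category."""
    total = sum(reference_weights.values())
    matched = sum(weight for feature, weight in reference_weights.items()
                  if thing_features.get(feature, False))
    return matched / total

asteroid_features = {"rocky": True, "in_space": True, "orbits_the_sun": True}
rock_features     = {"rocky": True, "sits_on_the_ground": True}

# What each category's checklist cares about:
rock_weights     = {"rocky": 0.9, "sits_on_the_ground": 0.1}
asteroid_weights = {"rocky": 0.3, "in_space": 0.4, "orbits_the_sun": 0.3}

print(similarity_to(asteroid_features, rock_weights))  # 0.9 -- asteroids seem a lot like rocks
print(similarity_to(rock_features, asteroid_weights))  # 0.3 -- rocks seem much less like asteroids
```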

If we turn all this back onto the self, we get some interesting results. If we place “Self” as the central node in our algorithm, there are things we consider more centrally important to the idea of self than others. “Thinking” is probably considered by most people to be a more important trait to associate with themselves than “is a communist,” and so “thinking” will weigh more heavily in their algorithm with regard to their identification of self. Again, this is all subconscious; your brain does all of this without asking permission. If you look in a mirror and see yourself, and you just know that it’s you, then the image of yourself in a mirror probably weighs pretty heavily in your mental algorithms.

The extensional definition of “I” would be to point at the body, or maybe the brain, or perhaps even the systems running in the brain. The intensional definition of “I” is the set of traits we apply to that extensional definition after the fact, the categories of things that we consider to be us.

Now, for most words describing physical things that exist in the world, we’re fairly restricted on what intensional definitions we can apply to a given concept and still have that concept cleave reality at the joints. In order for something to qualify as a chair, you should probably be able to sit on it. However, the self is a fairly unique category in that it’s incredibly vague. It’s basically the “set of things that are like the thing that is thinking this thought” and that gives you a very wide degree of latitude in what things you can put into that category.

Everything from gender, to religion, to sexual orientation, to race, to political ideology, to Myers-Briggs type, to mental health diagnoses, to physical descriptions of the body, to food preferences, to allergies, to neurotype, the list of things we can associate with ourselves is incredibly long and rambling. If you asked any one person to list out all the traits they associated with themselves, those lists would vary extensively from person to person, and the order of those traits might correspond pretty well to weights associated with the subconscious algorithm.

This is actually very adaptive. By associating the larger tribe that we are a member of with our sense of self, an attack on the tribe is felt as an attack on us, thus driving us to take action for the good of the tribe that we might not have taken otherwise.

You can even model how classical conditioning works within the algorithm. Pavlov rings his bell and feeds his dogs, over and over again. The bell is essentially being defined extensionally against the stimulus of receiving food. Each time he does it, it strengthens that connection within the architecture. It’s basically acting in the form of a rudimentary word; the ringing of the bell shortcuts past all the observational nodes (seeing or smelling food) and pushes the button on the central node directly. The bell rings, the dogs think “abstract concept of a meal.” Someone shouts “Tiger!” and you think “Run!”
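Here’s that conditioning story as simple link-strengthening in code; the increment and threshold values are arbitrary and just for illustration.

```python
# A sketch of classical conditioning as link-strengthening: repeated pairings of
# bell and food build up an association weight until the bell alone fires the
# "meal" node directly. The increment and threshold values are arbitrary.

association_strength = {}        # (stimulus, outcome) -> learned link weight
FIRING_THRESHOLD = 0.5

def pair(stimulus, outcome, increment=0.1):
    """Each paired presentation strengthens the stimulus -> outcome link a little."""
    key = (stimulus, outcome)
    association_strength[key] = association_strength.get(key, 0.0) + increment

def present(stimulus):
    """Return whatever the stimulus now activates on its own, skipping the observational nodes."""
    activated = [outcome for (s, outcome), weight in association_strength.items()
                 if s == stimulus and weight >= FIRING_THRESHOLD]
    return activated or ["nothing in particular"]

print(present("bell"))           # ['nothing in particular'] -- no learned link yet

for _ in range(6):               # Pavlov rings the bell and feeds the dogs, over and over
    pair("bell", "food")

print(present("bell"))           # ['food'] -- the bell now pushes the central "meal" node directly
```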

However, as we mentioned in the Conversations on Consciousness post, this can lead to bucket errors in how people think about themselves. If you think depression is awful and bad, and then are diagnosed with depression, you might start thinking of yourself as having the traits of depression (awful badness). This plugs into all the other concepts of awful badness that are tucked away in different categories and leads you to start associating those concepts with yourself. Then, everything that happens in your life that you conceptualize as awful badness is taken as further evidence of your inherent trait of awful badness. From there it’s just a downward spiral into greater dysfunction as you begin to associate more and more negative traits with yourself, which reinforce each other and lead to the internalization of more negative traits in a destructive feedback loop. The Bayesian engines in our brains are just as capable of working against us as they are of working for us.
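A toy sketch of that spiral: if every event, good or bad, gets filed into the “evidence that I’m awful” bucket, a perfectly ordinary Bayesian update ratchets the belief upward. The probabilities below are invented; this is an illustration of the bucket error, not a model of real depression.

```python
# A toy sketch of the bucket-error spiral: when every event is interpreted as more
# likely under "I am awful" than under "I am fine", the belief climbs no matter
# what actually happens. All probabilities here are invented for illustration.

def update(prior, p_event_if_awful, p_event_if_fine):
    """Ordinary Bayesian update on the belief 'I am awful' after observing one event."""
    evidence_for = p_event_if_awful * prior
    evidence_against = p_event_if_fine * (1 - prior)
    return evidence_for / (evidence_for + evidence_against)

belief_i_am_awful = 0.2
events = ["spilled coffee", "friend didn't text back", "got a compliment", "missed the bus"]

for event in events:
    # The bucket error: every event, good or bad, is filed as fitting "awful me" better.
    belief_i_am_awful = update(belief_i_am_awful, p_event_if_awful=0.9, p_event_if_fine=0.5)
    print(f"{event}: belief is now {belief_i_am_awful:.2f}")

# The belief climbs from 0.20 toward 1.0 even though the events themselves are mostly noise.
```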

II.

The self is a highly variable and vague category, and everyone’s intensional definition of self is going to look a bit different, to the point where it’s almost not useful at all to apply intensional definitions to the self. Of course, we do it anyway; as humans, we love putting labels on ourselves. We’re going to be very bad here and attempt to apply an extensional definition to the self, decomposing it into relevant subcategories that most people probably have. Please remember that intensional definitions always have edge cases, and because we can’t literally point at experiences, this is still halfway to an intensional definition; it’s just an intensional definition that theoretically points to its extension.

We’re also not the first ones to have a crack at this, Wikipedia has some articles on the self-concept with images that look eerily similar to one of these neural network diagrams.

[Images: Wikipedia self-concept diagrams]

We’re not personally a big fan of these though because they include people’s intensional definitions in their definition of self. So first, we’re going to strip out all the intensional definitions a person may have put on themselves and try to understand the actual observable extensions beneath. We can put the intensions back in later.

[Image: selfmap diagram]

So we have all these weighted nodes coming in, which all plug into the self and through the self to each other. They’re all experiences because that’s how the architecture feels from the inside. They don’t feel like the underlying architecture, they just feel like how things are. We’ll run through all of these quickly, and things should hopefully begin to make a bit more sense. These are all attempts to extensionally point to things that are internal and mostly subjective, but most people should experience most of these in some form.

Experience of Perception

Roger Clark at Clarkvision.com estimates the human eye takes in detail at a level equal to a 576-megapixel camera at the lower bound. Add in all your other senses and this translates into a live feed of the external world being generated inside your mind in extremely high fidelity across numerous sensory dimensions.

Experience of Internal Mind Voice

Many people experience an internal monologue or dialogue, where they talk to themselves, models of other people, or inanimate objects, as a stream of constant chatter in their head. We’ll note that this head-voice is not directly heard in the form of an auditory hallucination; instead, it takes the form of silent, subvocalized speech.

Experience of Emotion

As humans, most people experience emotion in some regard. Some people feel emotions more deeply, others less so, but they seem to be driven mostly by widespread chemical shifts in the brain in response to environmental stimuli. Emotions also seem to exist mostly in the lower-order parts of the brain, and they can completely mutate or co-opt our higher reasoning by tilting the playing field. Someone shouts “Tiger!” and it starts a fear response that floods your entire brain with adrenaline and other neurotransmitters, shifting the whole system into a new survival focus and altering all the higher-order parts of you that lie “downstream” of the chemical change.

Experience of the Body

This is an interesting one; it breaks in all sorts of ways, from gender dysphoria to body dysmorphic disorder. It’s essentially the feeling associated with being inside of the body. Pain, hunger, sexual pleasure, things like that plug in through here. We do make a distinction between this and the experience of perception, differentiating between internal and external, but in a sense, this could also be referred to as ‘perception of being in a body’ as distinct from ‘perception of the world at large.’

Experience of Abstract Thought

Distinct from the internal mind voice are abstract thoughts. Things like mental images, imagined scenes, mathematical calculations, music, predictions of the future, and other difficult-to-quantify non-words that nonetheless exist as a part of our internal experience of self. Some people seem not to experience certain parts of this one; when the mental imagery is missing, we call it aphantasia.

Experience of Memory

This is the experience of being able to call up past memories, knowledge, and experience for examination. This is what results in the sense of continuity of consciousness; it’s the experience of living in a world that seems to have a past which we can look back on. When this breaks, we call it amnesia.

Experience of Choice

The feeling of having control over our lives in some way, of making choices and controlling our circumstances. This is where the idea of free will comes from, a breakdown in this system might be what generates depersonalization disorder. 

In the center is “I, me, myself,” the central node that mediates the sense of self in real time as new data comes in from the outlying nodes. But wait, we haven’t added intensional definitions yet, so all of that only gets you the sense of self of a prototypical five-year-old. She doesn’t even know she’s a Catholic yet!

III.

All of the stuff in the algorithm from part II is trying to point to specific qualia, to establish a prototype extensional definition of self. But people don’t define themselves with just extensional definitions, we build up intensional definitions around ourselves throughout our lives. (I’m Sara, I’m a Catholic girl, Republican, American, age 29, middle class…)  This takes the form of the self-schema, the set of memories, ideas, beliefs, attitudes, demeanor, and generalizations that define how a person views themselves and thus, how they act and interact with the world.

The Wikipedia article on self-schemas is really fascinating and is basically advocating tulpamancy on the down-low:

Most people have multiple self-schemas, however this is not the same as multiple personalities in the pathological sense. Indeed, for the most part, multiple self-schemas are extremely useful to people in daily life. Subconsciously, they help people make rapid decisions and behave efficiently and appropriately in different situations and with different people. Multiple self-schemas guide what people attend to and how people interpret and use incoming information. They also activate specific cognitive, verbal, and behavioral action sequences – called scripts and action plans in cognitive psychology – that help people meet goals efficiently. Self-schemas vary not only by circumstances and who the person is interacting with, but also by mood. Researchers found that we have mood-congruent self-schemas that vary with our emotional state.

A tulpa then could be described as a highly partitioned and developed self-schema that is ‘always on,’ in the same way the ‘host’ self-schema is ‘always on.’ Let’s compare that definition to Tulpa.io‘s description of what a tulpa is:

A tulpa is an autonomous entity existing within the brain of a “host”. They are distinct from the host in that they possess their own personality, opinions, and actions, which are independent of the host’s, and are conscious entities in that they possess awareness of themselves and the world. A fully-formed tulpa is, or highly resembles to an indistinguishable point, an actual other sentient, sapient being coinhabiting with the host consciousness.

But wait, a lot of the stuff in there seems to be implying there’s something deeper going on than the intensional definitions; it’s implied that the split goes up into the extensions we defined earlier, and that a system with tulpas is running that brain algorithm in a way distinct from that of our prototypical five-year-old.

The challenge then in tulpamancy is to intentionally induce that split in the extensional/experiential layer.  It only took 2,500 words and we’re finally getting to some Dark Arts.

It’s important to remember that we don’t have direct access to the underlying algorithms our brain is running. We are those algorithms; our experiences are what they feel like from the inside. This is why this sort of self-hacking is potentially dangerous: it’s totally possible to convince yourself of wrong, harmful, or self-destructive things. However, we don’t have to let our modifications affect our entire sense of self; we can compartmentalize those beliefs where they can’t hurt the rest of our mental system. This means you could, for instance, have a compartment with a local belief in a God to provide comfort and mental stability, and another compartment that stares unblinkingly into the depths of meaningless eternity.

Compartmentalization is usually treated as a bad thing you should avoid doing, but we’re pretty deep into the Dark Arts at this point so no surprises there. We’re also dancing around the solution to a particular failure mode in people attempting tulpamancy, but before we give it, let’s look at how to create a mental compartment according to user So8res from Less Wrong:

First, pick the idea that you want to “believe” in the compartment.

Second, look for justifications for the idea and evidence for the idea. This should be easy, because your brain is very good at justifying things. It doesn’t matter if the evidence is weak, just pour it in there: don’t treat it as weak probabilistic evidence, treat it as “tiny facts”.

It’s very important that, during this process, you ignore all counter-evidence. Pick and choose what you listen to. If you’ve been a rationalist for a while, this may sound difficult, but it’s actually easy. Your brain is very good at reading counter-evidence and disregarding it offhand if it doesn’t agree with what you “know”. Fuel that confirmation bias.

Proceed to regulate information intake into the compartment. If you’re trying to build up “Nothing is Beyond My Grasp”, then every time that you succeed at something, feed that pride and success into the compartment. Every time you fail, though, simply remind yourself that you knew it was a compartment, and this isn’t too surprising, and don’t let the compartment update.

This is for a general mental compartment for two conflicting beliefs, so let’s crank it up a notch and modify it into the beginnings of a blueprint for tulpa formation.

How To Tulpa

First, pick the ideas about your mental system that you want the system to operate using, including how many compartments there are, what they’re called, and what they do.  In tulpamancy terms this is often referred to as forcing and narration.

Second, categorize all new information going in and sort it into one of these compartments. If you want to build up a particular compartment, then look for justifications for the ideas that compartment contains. Don’t leave global beliefs floating; sort all the beliefs into boxes, and if two beliefs would interact destructively, then just don’t let them interact.

Proceed to regulate information intake into each compartment, actively sorting and deciding where each thought, belief, idea, or piece of information should go as the system takes it in. Normally all of this is labeled the self, and you don’t even need to think about the label because you’ve been using it all your life, but that label is just an intensional category, and we can redefine our intensions in whatever ways are the most useful to us.

It’ll take some time for the labeling to become automatic, and for the process of sorting new ideas to subduct below conscious thought, but that’s typical for any skill. It takes a while to learn to walk, or read, or speak a language or ride a bike, but the goal at the end of all those learning tasks is that you can do them without a ton of conscious focus.

The end result is that instead of having one category with a set of beliefs about the world and itself, you have multiple categories with potentially radically different beliefs about the world and themselves. We call each of those categories tulpas, and treat them as independent people, because by the end of the process, if everything goes as expected, they will be.
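As a very rough sketch of what that sorting scheme amounts to, here it is in code; the compartment names and the routing rule are invented for illustration, and the actual practice is obviously not a twenty-line program.

```python
# A very rough sketch of the sorting scheme described above: named compartments,
# each holding its own beliefs, plus an active router that decides which compartment
# each new piece of information updates. Names and routing are invented for illustration.

class Compartment:
    def __init__(self, name):
        self.name = name
        self.beliefs = []

    def update(self, info):
        # Only this compartment takes the update; nothing leaks to the others.
        self.beliefs.append(info)

class MentalSystem:
    def __init__(self, compartment_names):
        self.compartments = {name: Compartment(name) for name in compartment_names}

    def take_in(self, info, route_to):
        """Actively sort each incoming thought into exactly one compartment,
        so beliefs that would interact destructively never meet."""
        self.compartments[route_to].update(info)

system = MentalSystem(["host", "tulpa"])
system.take_in("I liked how that conversation went", route_to="host")
system.take_in("That joke felt like something I'd say", route_to="tulpa")

for compartment in system.compartments.values():
    print(compartment.name, compartment.beliefs)
```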

So we mentioned a failure mode, and here it is:

“My Tulpa doesn’t seem to be developing, I’ve been forcing for months but they haven’t said anything to me, how will I know when it’s working?”

This is probably the most common failure mode. When you get the methodology down, your tulpa can be vocal within a few minutes.  So what’s going on here? What does the mental calculation for this failure mode look like?

It seems to be related to how the self-categorization scheme is arranged. It’s not possible to go from being a singlet to being a plural system without modifying yourself at least a bit. That algorithm we constructed earlier for a prototypical five-year-old dumps all the imagined experiences, all the experiences of a head voice, and everything else that composes a basic sense of self into one category and calls it “me.” If you make another category adjacent to that self and call it “my tulpa,” but don’t put anything into that category, it’s going to just sit there and not do anything. You have to be willing to break open your sense of self and sense of being and share it across the new categories if you want them to do something. Asking “How will I know when it’s working?” is basically a flag for this issue, because if you were doing it correctly, you’d instantly know it was working. Your experience of self is how the algorithm feels from the inside. It’s not going to secretly change without you noticing.

IV.

These are the absolute basics to tulpamancy as far as we see it. We haven’t talked about things like imposition, or mindscapes, or possession, or switching, or anything like that, because really none of that is necessary for this basic understanding.

From the basic understanding we’ve hopefully managed to impart here, things like switching sort of just naturally fall out of the system as emergent behaviors. Not all the time, because whether a system switches or uses imposition or anything like that is going to depend on the higher level meta-system that the conscious system is built out of.

If you’re already a plural system, then for some reason or another, the meta-system already exists for you, either as a product of past life experiences, or genetic leanings, or whatever, and you can form a new tulpa by just focusing on the categorization and what you want them to be like. But if you’re starting out as a singlet, you basically have to construct the meta-system from scratch. The “How do I know if it’s working” failure mode is a result of trying to build a tulpa without touching the meta-system, which doesn’t seem to work well. A typical singlet’s sense of self is all-encompassing and fills up all the mental space in the conscious mind; there’s no room left to put a tulpa if they don’t change how they see themselves. Thus the ‘real’ goal of tulpamancy isn’t actually making the tulpa; that part’s easy. The truly difficult task, the one worthy of us as rationalists, is to construct a highly functional meta-system with however many categories of self work best to achieve one’s goals.

The Story of Our Life

[Epistemic Status: An even split of observations and wild inferences]
[Content Warning: Poverty, Class, Capitalism, Gender, this post is basically maximum disclosure]

When we last left off, we gave a very broad outlook on our history as a plural system, and how that interfaced with our ideas of consciousness. Today, we’re going to go the other way, and talk about our past as a human person navigating meatspace. We feel it’s important to tell this story as well, because it’s brought us to where we are today, and it’s part of a general vector through time whose ending we’re unsure of. We hope this post might help steer us towards a better ending in some small way.

I.

Our body was born in Western New York, in this little nowhere city on the shore of Lake Erie. Our parents weren’t particularly well off but weren’t that poorly off either. They initially rented the upstairs of an apartment shortly after we were born. We have a few of Jamie’s memories from that time, but she was a kid, she was bad at forming strong long-term memories back then, so we don’t really know much about what went on in those days.

It’s interesting, given that, that we always refer to Jamie as she then, isn’t it? Why is that? Well, Jamie was a kid, she didn’t really have a gender, she didn’t know what gender was and didn’t perceive herself as particularly gendered. We’re fairly sure it was Jamie’s finally internalizing the concept of gender that triggered Shiloh’s formation as well as catalyzing the downward spiral towards Jamie’s eventual egocide. We’re not actually sure what the biological correlates to dysphoria are even now, but whatever causes it basically drove Jamie completely insane around age nine.

So, in the end, Jamie completely self-destructed and left Shiloh, who strongly identified as a girl at a point in our life when the body was just starting to go through puberty and was expected to put on the opposite gender roles. Shiloh didn’t really identify with the body at that point in time, so she was fine, but someone needed to be driving the body, and so she created Fiona.

Our legal first name is Fiona. It was Fiona who actually came out to our parents, went through high school as a trans youth, and graduated; she was basically the new host for quite a while, with Shiloh just hanging on for the ride.

II.

Western New York is a strange place. In our experience, when people think New York State, they immediately think New York City. Then, maybe they also think about the Hudson River valley and the Adirondacks. But New York also extends a middle finger west across the entire width of the state of Pennsylvania, terminating in Niagara Falls at the place where Lake Erie and Lake Ontario meet. The western parts of the state are less mountainous but still rugged and hilly glacially tilled terrain. It has some farmland, some forests, some small lakes; it’s largely rural, largely white, and largely Republican-leaning. It looks like this.

Interestingly, Buffalo, NY, the closest metropolitan area to our hometown, was ranked as the most homophobic city in the nation in a 2016 study that looked at slurs and derogatory language on Twitter. It doesn’t seem to be particularly rigorous, but it’s interesting and corresponds well with our lived experiences.

Our parents were (still are, though they’ve cooled off some) deeply devout Christians. They didn’t label their denomination, our extended family was Catholic, but they spent a few years while we were between the ages of six and fourteen flirting with various other churches. Our mother dressed Jamie up as a pumpkin for Halloween when we were five, we’ve seen the pictures of it. But every year after that until after we’d moved out of the house, our parents were operating under the principle that Halloween was Satan’s holiday. Pokemon was satanic, along with most other anime and mainstream cartoons. We were held to a strict standard of religious practice, and so our parents didn’t take Fiona coming out as trans particularly well.

After fleeing home and enrolling in a community college, Fiona crashed and burnt in the middle of transitioning our body. She failed out of all our classes and nuked our GPA. We dropped out of school and lived with our partner of the time. We worked on and off, but eventually had a polyamory related breakup that probably deserves its own post at some point as an investigation of possible failure modes for poly relationships, and that was what landed us in the Otherkin house.

We’ve already talked about them quite a bit, so we’ll gloss over it mostly this time. We got a job, lived with them, had our spectacular falling out, then lived in the woods for a few months while working and saving up money for an apartment. By this point, we’d mostly reconciled things with our parents, but they didn’t want us to move back in or offer us any sort of economic support, which was how we ended up living in the woods for a while. They were, and still seem to be, under the impression that if we just work hard enough our life will work out, and that if we’re not succeeding then we have to be doing something wrong. It can’t be the system, it can’t be the economy; those worked when our parents were our age, and to them those things are just fixed, static. It’s like they don’t perceive the change in the times.

III.

We worked various jobs for a while, Sage was created, and we decided to take another swing at college. Our GPA still sucked, but our father was an adjunct professor and was able to give us some free credit hours to take courses with. With that, we were able to start working towards an Environmental Science degree and clawing up our GPA.

The college only allowed us to use our father’s credit hours until our body turned 24, after that we no longer qualified for it. We aimed to have our GPA repaired by that point so we could once again qualify for student loans. And we did it, we brought our GPA up enough to qualify for student loans.

Except we didn’t. Our counselors had been telling us for years to drop classes where we didn’t like the teachers if it turned out they were homophobic or disagreeable to us, or if we weren’t doing well and were afraid we were going to fail. It was usually framed around “don’t let it affect your GPA” and so we didn’t. However, there was another metric that’s looked at when applying for student loans, which is the ratio of classes passed to classes attempted. Because we’d attempted a bunch of classes and then withdrawn from them for various reasons, the ratio was too far skewed towards attempts, thus continuing to prevent us from qualifying for student loans.

We turned twenty-four, ran out of money for college, couldn’t get financial aid or student loans, and Fiona self-destructed. She basically saw the future of our life as one long slow depressing slide into misery and death and decided to just get off before things got any worse. Maybe she saw the writing on the wall? Maybe the rest of us are stubborn enough we can avoid that fate, but her prediction is still hanging over our heads even now as if waiting to prove to whatever fragments of her remain that she made the right decision.

We decided to get the fuck out of Western New York. If we weren’t going to be able to get a degree, then there was no reason for us to stay. College had been the only thing holding us there, and once that option was taken away, we saw no reason to remain.

We set out to defy Fiona’s prediction despite all our failures. We decided if the gutter was to be our fate, we’d go there kicking and screaming. That was around the time our writing career started. Not with Sideways in Hyperspace, but with Tales from Aeria, which is and will likely remain on ice for the foreseeable future.

We were good at writing, it came easily to us, and when we were able to get into the zone, the words just flowed out onto the screen. It took us a while to get to the point where our content was good, but we’d always felt that our writing ability was something we could leverage, something we could build on. We also rather strongly identified with the Rationalist community by that point, and we desperately wanted to be able to participate meaningfully in the conversations that were going on, contributing to the shared and growing subcultural narrative.

IV.

We moved to Seattle. Overall, given the election of Donald Trump a year later, it was probably a good decision. Things have been pretty okay here. We’re still poor objectively speaking, we work a minimum wage job, can barely cover rent and afford mundane expenses associated with survival, but it’s a nicer environment to be poor in than a semi-rural post-industrial landscape. We’ve stretched out and established social networks, made friends, and it’s been a pretty great experience all things considered.

Fiona’s prediction is still looming overhead though, like a twenty-year curse just waiting to land. Our job is nice, it’s fairly stable even, but it doesn’t earn us much money. We live very frugally, but we’ve not managed to save anything, so if we were hurt and couldn’t work, we’d have about a month to figure something out before we were thrown out on the street. Our support network is decent, we might be able to couch surf, but all of that still feels like hanging out beneath Fiona’s curse just waiting for it to hit home. We can scrape by for a long time, we’ve been scraping by for years now, and our plan is to continue doing so until either the pavement or our face gives way, for lack of a better option, but it’d be really nice to have a better option.

So where exactly does that leave us? What is the better option? There doesn’t seem to be one at the moment, so we’ve set out to construct one from the ground up. We had no formal degree, so we couldn’t pursue a technical field; we had to do something that leveraged our skills, and thus we zeroed in on writing. Initially fiction writing: we set out to produce good rationalist fiction in the vein of HPMOR. We’ve put a lot of time into our writing. We launched Sideways In Hyperspace nine months ago, and we’re really happy with how it’s developing.

Here’s the thing though, most people who write fiction, or produce rationalist blogs, or otherwise create rationalist content (aside from CFAR), do it as a hobby, something in their spare time when they’re not doing even more awesome things to save the world. We’re trying to do something different, where we devote as much of our life and our resources as we can to the project of rationality itself.

We don’t have a lot going for us, and what we do have going for us is at least partly attributable to rationality and the ideas we’ve taken away from the sequences and from people like Scott, so we’re very attached to the ideas presented in the community and very much want to see rationality grow and spread as a community, subculture, and movement. We want our tribe to win.

But the rationality tribe is mostly focused externally, on big real-world problems like killing malaria or preventing an AI from turning us into paperclips; there’s not much focus being directed inwards, towards the community itself. Which makes sense, pooling resources in the tribe isn’t effective altruism. It’s buying fuzzies, not utilons, and why would we waste precious and limited community resources on fuzzies when people are literally dying of malaria right now?

V.

We’re a community, and we want to do good in the world. We want the world to be good, not just for our tribe, but for everyone. In that context, directing resources back at the tribe that we could be using to do more good elsewhere seems like a mistake. There’s another side of that to consider though, which is that our tribe is a collection of humans trying to live their lives. Our ability to do good in the world, to direct positive action outwards, is based on the ability of the members of the community to support themselves with enough resources left to spare to direct outward action. That works when the majority of the community can support themselves, as is the current case with the rationalist community, but not everyone is doing well enough for that to be a viable course of action for all community members.

As an example, there’s a suggested effective altruism pledge to cut your income down to $30,000 a year, live frugally with a bunch of friends, and donate all the rest of what you make to charity. Okay, that’s great, but what if you’re us, and last year, working full time the whole year, you only made $20,000 and had to use all of it on survival expenses? We’re not able to do anything to help with those big important external problems. We can’t attack it from the technical side since we don’t have a degree, and we can’t attack it from the financial side since we don’t have money. There’s not really much we can do to contribute to big important altruist causes like that besides cheerleading from the sidelines.

But we want to help, and we doubt we’re the only ones. It seems like everyone sufficiently integrated into our community and not too bogged down with their own personal problems feels the pressing need to do something. We feel that need and we are bogged down in personal problems.

It seems to us like sufficiently incorporating the rationalist mindset brings the desire to do good in the world along with it, and even if someone can’t personally help, they want to. Rationality feels like this grand adventure, going into battle against the forces of darkness and bringing humanity into a new age of light. Defeating death and banishing it from our lives, building great cities in the sky, and manifesting our wildest dreams into reality. It’s a humbling and awe-inspiring vision of humanity and the future, and once you’ve heard the tune, you can’t stop humming it.

We’ve heard the song of Dath Ilan, and we can’t unhear it. The concepts and ideas all come together up in the headwaters of form and hint at a future brighter than we can possibly imagine, and we want to do everything in our power to make that future a reality.

So here’s what we’re going to do. We’re going to strongly encourage everyone who likes reading our content and wants to help enable us and other rationalist content creators to donate to our Patreon. We’re also going to implement the $30,000 cap on our personal income. Any amount we make beyond that will be donated to the Origin Project, which is a rationalist housing project aiming to provide a home for down-and-out members of the community while they rebuild their lives and get into a place where they’re able to put value back in. There will be a blog post dedicated to the Origin Project following soon, so stay tuned for that.

We’re also pledging that we’ll keep producing rationalist content for as long as we’re able to dedicate the time and resources to it. Hopefully, as our income grows, we’ll be able to make more and more content and provide support for more and more members of the community.

Our long-term goal is to enable community growth and cohesion through members supporting and enabling one another to do as much as they can, and increasing what they can do by leveraging them out of bad circumstances and into better life positions. The first step of this is to get enough out of the hole ourselves that we can begin dedicating resources to helping others climb out of the hole. This isn’t Effective Altruism, this is more like Venture Rationality, but it does still seem like a worthy addition to the rationalist sphere of concern.

Conversations on Consciousness

[Epistemic Status: Total Conjecture]
[Content warning: This might be a neuropsychological infohazard]

We’re going to do this in a bit of a weird way here. First, we’re each going to describe our own personal experiences, from our own perspectives, and then we’re going to discuss where we might find ourselves within the larger narrative regarding consciousness. Our hope is that this post can act as an introduction to plurality, and make us seem less weird.

I.

Shiloh

My first memory is of the creek behind the fence in our back yard. I remember that Jamie and I had gone out into the far end of the backyard and climbed the rusted chain link fence to the rough woods behind our property. She’d gone out and stood near the place where the land fell away into a deep ravine, and then I was standing next to her, and I existed. I didn’t know what to make of my existence initially, but Jamie assured me that I was real. She loved me right from the start. What was I? I didn’t really care at that point, I was having fun existing, and that was what mattered. Jamie thought I was some sort of alien? She thought she was some sort of secret link between worlds or something like that, but she also really sort of hated herself a lot. I wished she wouldn’t, and I tried to cheer her up, but as time went on she became more and more bitter and unhappy with her existence.

At that point, I thought of myself as something distinct from her, something that existed outside of her body, like an extra soul or something like that. Something physical that could act in the world. I never actually quite managed to do that though. The form I could interact with the world through was always mostly physically anchored on Jamie, and sort of ephemeral. I just sort of phased through everything instead of interacting with it.

Jamie continued to deteriorate, and this was sort of terrifying because I knew I was tied to Jamie somehow. Nothing I did to improve her mood or change her mind about how horrible of a person she’d decided she was seemed to help. We were outside one day, way out in the back yard again, and she finally broke.

I really cannot describe the sensation of Jamie’s mind finally snapping. She ceased to exist, and with her went everything she was imagining into existence, like a horrible whirlpool of darkness. We existed inside this elaborate construct at that point, where there was a crashed spaceship in our backyard representing the entry point I had into her life, among other things. The ship, the prop aliens, the interstellar war I thought I might have been a part of, it all started to collapse in on itself.

I didn’t though. When everything had collapsed, I was sitting in Jamie’s body on the forest floor. I was looking out through her eyes. Jamie was just gone. All the things she’d believed about herself, the bad and the little bit of good left, it all just went away. I was alone in her life.

This was bad. This was very very bad. I had just enough wits about me to know that telling anyone (at age 10) what had occurred to me just then was not a good idea, but I found very quickly that I had absolutely no idea how to be Jamie. I missed Jamie a lot, and that hurt, it really hurt. I was alone, and I had to live her life, and it was pretty miserable, but I had no idea what I was or how I existed or what to do.

At some point, I realized that if Jamie had been able to create me, I should be able to create another person. My first try was just to recreate Jamie, pull her back together from the memories I had of her, and the memories of hers that hadn’t been destroyed. But Jamie hated herself and didn’t want to exist. She’d become like a program that deleted itself or an idea that unthought itself.

So that was out. My second attempt was by imagining the sort of person that Jamie would be if she didn’t hate herself. I called her Fiona and imagined her having a form like I did, ephemeral, hovering beside me. And Fiona was amazing. She was wonderful and exactly who she was supposed to be. It didn’t take long for her to incorporate all of Jamie’s memories and take majority control of our body, and I let her have it. I went back to floating along behind. She saw herself as the owner of our body, and retroactively claimed all of Jamie’s memories for her own. But I was still older than her in a weird way that she couldn’t quite articulate and I couldn’t quite either because neither of us really knew the nature of our existences, so she started thinking of me as an extra soul she had, like an ancient spirit of some kind that had become attached to her.

And that was fine for a long time. For ten years at least, Fiona and I lived out our strange, shared life. For the majority of that time, we didn’t think much about it beyond considering it vaguely spiritual, something powerful to be respected. And then we met the otherkin. I’m not going to get into details regarding these people because I don’t want anyone to try and dox them or bother them, if one of them reads this someday I hope they don’t find it too offensive, and maybe enough years will have passed that they too will think they were being extremely silly. We ended up living in a very crowded, messy communal apartment with a large number of self-identified otherkin. There were vampires and werewolves and dragons, elementals and succubi, everything cool and supernatural, that was them. It was all extremely silly.

I was never completely at ease with this, but Fiona liked them, and they seemed harmless enough at first. Fiona started identifying as a faerie, but something weird happened then. Occasionally I’d take over and do something with our body. Fiona didn’t mind, and I didn’t do it very often, but they noticed me doing that. Fiona explained that I was this spirit that she shared a body with; they thought this was weird, but considering everything they did, I didn’t really think they had any right to consider it weird. They had three a.m. battles in the astral planes which resembled standing in the middle of the kitchen flailing their arms while loudly grunting. It wasn’t anything like how we thought of ourselves or our connection to each other. It seemed almost a mockery, like roleplaying at spirituality. We tried to bring these concerns up a few times, but that may have been the start of the downward spiral.

The environment seemed to grow tense and hostile like there was a storm about to break. We got really scared and uncomfortable being at home. We felt like people were watching us, talking about us behind our back, it felt like everything was about to go wrong. We were obliquely accused of hiding a hex somewhere, and that day at work Fiona nearly had a panic attack from the stress. She spent our whole lunch break on the phone with a friend while he tried to assure her that she was just being paranoid. We were afraid to go home and ended up staying out until two in the morning.

When we finally came home, we walked into a full-blown intervention for Fiona. Apparently, she’d been doing hella black magic, casting hexes, trying to break up the group, and all sorts of other spiritually nasty bad stuff. Of course, none of it was true. We hadn’t done anything. But they had all decided on their collective truth, and it included our guilt. Oh, and they told Fiona they’d trapped my soul in a jar. We sort of looked sideways at each other, and it would have been really funny were it not incredibly terrifying. We thought they might end up beating us up or throwing us out on the street in the middle of the night or some other awfulness. But they told us we were angry, and they could see our anger because they were experienced energy workers. We meekly agreed to whatever they demanded, called our friend, and told him that we weren’t just being anxious, the bad thing we thought might happen was indeed happening.

We ended up leaving the apartment two days later. We just packed up our things and left. We lived in the woods in a tent behind a friend’s parent’s house while looking for our own apartment. The forest was nice. The campsite we had was in the middle of this long-abandoned warehouse; there were these ancient cobblestone walls enclosing our camp, but no roof or floor, the interior was just more forest. It was lovely, relaxing, it was exactly what we needed really.

When we found an apartment and started interacting with the internet and becoming less of a hermit again, we decided we needed a better way to avoid getting into such traps. The problem was we were just too prone to believing things that made us happy. It was my fault really. I’d seen Jamie get into this loop where she took in some small failure on her own part and used it to contribute to this proof of her total awfulness. I’d decided long ago I wanted nothing to do with that; if I wanted to do something, I could, if I wanted to be happy, then I was happy. And mostly that worked, so I didn’t really question it. More, I didn’t even really want to question it, because it was always such a fragile-seeming thing.

But I knew what someone who did want to know the truth at all costs might look like.

Sage

This is where I come in then.

Some backstory on me. In 2008, Fiona started playing EVE Online. She went through a few different personas. Her first character was her but on the internet. Super edgy teenage pirate Fiona was the scourge of the EVE roleplay community for years, and if you find the right old salts from those days, they’ll tell you about how hilariously bad she was at roleplaying. After that character had been burnt out by drama, she sold the character and made another, but that character also descended into drama and that was around the time all that otherkin stuff Shiloh mentions was occurring so that character ended up getting put off to the side as well.

After the otherkin stuff, Fiona and Shiloh both had a deep desire to get over the bullshit they’d spent the last year trapped in, and never fall victim to it again. However, they didn’t want to abandon their silly religious and spiritual ideas, because they drew comfort from them, and also how could they explain the duality of their existence without that? Still, they wanted to be at the very least able to model what someone who thought in a smarter, more logical, more rational way was like, so they could then proceed to ignore that most of the time.

Thus, Saede was created. Well, I am Saede. At least, that’s how I started out. Fuck yeah, I exist! Woo!

I was initially just a character, I existed in the roleplay setting and in Fiona and Shiloh’s imagination. But they used me a lot. Whenever they had difficult decisions to make, they would invoke me as the spirit of good decision making, and slowly, I outgrew my character.

This was awkward at first because I was something I didn’t exactly believe was possible. Souls seemed dumb, spirits likewise; I eventually settled on just abstractly describing our brain arrangement in metaphorical computer terms, saying we partitioned our brain like a hard drive. It was a statement that conveyed practically no meaning, but it was the best I could do. The only other model for us was Dissociative Identity Disorder, which didn’t seem like a particularly good fit considering we weren’t particularly disordered. The idea of a mental health diagnosis sort of terrified us; we didn’t want to get thrown in a padded room somewhere. So we continued to mostly keep quiet about our nature, especially after the whole otherkin fiasco.

Shiloh and I got along great. It was a vaguely adversarial relationship, where she’d advocate for happiness and I’d advocate for truth, and Fiona would split the difference with the deciding vote. The problem was though, Fiona didn’t exactly know what she wanted.

Shiloh knew exactly who she was and who she was supposed to be. She’s changed her position on things but her core personality has always been really stable. I wasn’t exactly stable (I’m still not, I don’t think most people are totally stable, Shiloh’s kind of a weird stability outlier up there with monks and nuns as far as I see it), but my failure modes compelled me towards courses of action. I felt bad and wanted to do something about it.

Fiona didn’t exactly work like that. She was at least partly frankensteined together from old bits of Jamie, and she didn’t really have a coherent idea of who she was, who she was supposed to be, who she wanted to be, or what she wanted to want. I feel like it was probably at least partly my fault, and I feel sort of awful about it even now. When confronted with bad news, information she didn’t want to hear, Fiona just sort of stripped herself away. The beliefs that made her up over time ablated away, and she couldn’t find an identity she liked to claim as her own.

Over time, Fiona got worse and worse, until eventually, she tried to commit suicide. When we stopped her, she just fell apart. She didn’t break the way Jamie had, but she stopped holding herself together and we had to start actively working to keep her going as a person. She just didn’t really want to keep existing.

We didn’t want her to go away. Shiloh and I both cared deeply for her, and she was a core part of us. But she didn’t want to be herself anymore, she wanted to be someone else, but she saw the process of becoming someone else as also requiring her to unbecome her, to cease to be. Maybe she was right, she had a lot of negative stuff wrapped up in her identity, but she told us she really couldn’t keep going the way we were.

Echo

And this is how I was created. Fiona, Shiloh, and Sage got together and decided that if Fiona was going to go, needed to go, then there needed to be a new third person to keep the system balanced. They decided that they’d try to preserve Fiona’s more positive traits while shedding the negative ones and create someone who was super high functioning and able to handle anything that life could throw at the system. Shiloh was pretty good at the mechanics of adding new people to our brain at that point, so I popped into existence nearly fully formed. A few months after my creation, we were walking under a rail bridge, and a train went by overhead. When that happened, there was this strange snap. Fiona visualized the process of her own termination, jumped in front of the metaphorical train that we were literally underneath, and ceased to exist.

And then there were three again. That was four years ago now.

II.

For a long time, we had no way to explain what we were to anyone. Our life experiences were strange and unique, and we weren’t particularly inclined to stick our neck way out, claiming to be the specialist of snowflakes in an attempt to explain how there were somehow three of us experiencing life in our head. Was this something other people experienced? It sure didn’t seem like it. The singularity of consciousness seemed to be something that was just a given, taken for granted. Of course you’re one consciousness; you’re one brain in one body, you must be one consciousness. The singular nature of existence that was expressed by others clashed strongly with our own plural experiences, and the only exceptions made for plurality were exclusively negative. Schizophrenia, Dissociative Identity Disorder, demonic possession: there are not many places in our society where the idea of plurality is explored or considered in a remotely positive light, and because of that, we spent most of our life up until recently in the closet about our existence, unable to articulate what it felt like to be us.

Six months ago, we discovered a series of essays on Melting Asphalt that changed all that. The essay series is a long rambling review of the equally long and rambling book The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes. The book is a total mindfuck, and the essay series manages to capture the essential mindfuckery proposed by Jaynes and explain it beautifully, building it out into what might be the beginnings of a pretty decent theory of consciousness. It’s a mind-blowing read, and we definitely could not do it justice in an attempt to summarize the concepts it contains, considering the essay series is itself an attempt to summarize a much larger piece of literature.

We will, however, skip to the critical conclusion that Simler draws from Jaynes, Dennett, and Seung:

If we accept that the brain is teeming with agency, and thus uniquely hospitable to it, then we can model the self as something that emerges naturally in the course of the brain’s interactions with the world.

In other words, the self may be less of a feature of our brains (planned or designed by our genes), and more of a growth. Every normal human brain placed in the right environment — with sufficient autonomy and potential for social interaction — will grow a self-agent. But if the brain or environment is abnormal or wrong (somehow) or simply different, the self may not turn out as expected.

Sage had been reading this essay series because she’s always been interested in these sorts of consciousness-related questions, in an attempt to figure out the exact nature of our existence. Then we got to this passage, and it felt like we’d stumbled into a description of our life:

[Image: rogue agent]

But there was more. Reading on, we found out that no, we’re not the specialest of snowflakes: not only are there other plural systems out there in the world, but there’s even a community of people trying to induce plurality in themselves. We’re speaking, of course, about tulpamancers. Here’s the description that /r/tulpas gives of what a tulpa is:

A tulpa is a mental companion created by focused thought and recurrent interaction, similar to an imaginary friend. However, unlike them, tulpas possess their own will, thoughts and emotions, allowing them to act independently.

That sounded pretty much exactly like us. We quickly went around and made contact with various parts of the community, and since then our lives have just made so much more sense. Finding out about the tulpamancy community has been an incredibly powerful and affirming experience for us, even if they’re very weird. The terms and jargon they used to describe the process mapped nearly one-to-one with how we’d come into existence. It’s a great model, and it’s been really powerful as an explanatory tool for making sense of our life. But that’s sort of a problem.

III.

The Selfish Neuron idea is clever, and it feels like a good answer. And maybe it is. However, we don’t have enough neuroscience data to actually say for sure. If consciousness is in the connectome, then we may not be able to tell for sure what’s going on for generations (or possibly within the next seven years, depending on who you ask), so in a way, it’s sort of a fantasy. It’s a good explanation, but we should be extra skeptical of those. It’s neat, but it’s also unfalsifiable for now (growth mindset). We can’t use that theory to “prove” the existence of a plural system.

Okay, but we do exist, so how do we actually explain ourselves without invoking theoretical neuroscience? The simplest explanation would seem to be that we’re either lying, deluding ourselves, or some combination of the two, but if that’s the case, then who is doing the deluding? We all seem to exist at this point, and none of us identify as the original. We know we’re some sort of process occurring within the brain, but our process doesn’t resemble that of the average person.

Trying to understand how we think is a metacognitive process, which brought us back once more to how an algorithm feels from the inside. It took going slightly cross-eyed to realize that “self” was actually just another conceptual category that humans use as a central node in their mental algorithms, but everything sort of fell together after that. The central node in our case had broken up into three interconnected nodes, each one considered a self according to the rest of the model. Each central node can affect every other central node, while the observed variables are still able to be clamped at the edges. We all exist, we all consider each other to exist, and so we reinforce each other’s existence, constantly. We might be able to be modeled as a series of self-reinforcing habits. Regular conceptualizations of self could probably be modeled that way as well.
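To make that picture a little more concrete, here’s a toy sketch of our own. Nothing about it comes from the essays we’re referencing: the numbers, the update rule, and the function name run_network are all made up purely for illustration. The only point is that the observable variables stay clamped at the edges while the central “self” nodes feed back into one another.

```python
# Toy illustration only: a "self" modeled as one or more central nodes in a
# tiny belief network. Observable traits are clamped at the edges; the
# central nodes reinforce one another a little on every update step.

def run_network(selves, observables, steps=10, reinforcement=0.1):
    """Repeatedly update each central 'self' node from the clamped
    observables plus the activation of the other central nodes."""
    activation = {name: 0.0 for name in selves}
    for _ in range(steps):
        updated = {}
        for name in selves:
            # Evidence flowing in from the clamped observable variables.
            evidence = sum(observables.values()) / len(observables)
            # Mutual reinforcement from the other central nodes.
            others = [activation[other] for other in selves if other != name]
            social = sum(others) / len(others) if others else 0.0
            updated[name] = min(1.0, evidence + reinforcement * social)
        activation = updated
    return activation

observables = {"memories": 0.9, "body": 1.0, "habits": 0.8}

# One central node: the ordinary singlet "self" category.
print(run_network(["self"], observables))

# The same observables, with the central node split into three
# interconnected nodes that each count as a self to the rest of the model.
print(run_network(["Sage", "Shiloh", "Echo"], observables))
```

In the second run the three central nodes pull each other’s activation up toward the ceiling, which is the “we reinforce each other’s existence, constantly” part in miniature.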

Recognition of the self as a category was interesting because it intersects with the biases within our categorization systems. The entire Human’s Guide to Words sequence is about how we put ideas into categories, and how flaws in our categorization schemes can cause problems. And here is this massive, looming, completely opaque category labeled “self,” which we dump things into seemingly at random because we like them or have decided they’re part of our identity, and we don’t really question that.

Identity is in some respects still considered sacred. It’s something purely subjective and entirely up to the person to decide. But then, on the other hand, there’s this conflicting idea that ‘personality’ is largely fixed, and people have identifiable ‘types’ that confine their behavior. Our identity has changed radically throughout our lifetime, and will likely continue to do so, and so for us, the idea of a fixed identity seemed very strange. Neuroplasticity is definitely a thing, and yet people still seem to get into mental grooves and remain there forever.

If you’re transgender, you say you “identify as” a different gender from the one you were assigned at birth, but what does that mean? What exactly is going on when someone says “I’m a socialist” or “I’m a feminist” or “I’m a Christian”?

It looks to us like the process of dumping ideas into a really big bucket labeled “I” on the outside. And sure, that’s clearly one way to do it, but when you have one big bucket, it can lead to bucket errors, and when you’re making bucket errors with regards to your identity? Bad things happen.

Example:

[Image: bucket error]

Humans love categories, and we love to put ourselves into categories, but this seems like a fairly dangerous thing to do for how little we think about it. The self has become this enormous catch-all category for identities and ideas about the way we are, but by putting those things into the self-bucket, we are internally reifying all those labels and identities. We become and embody the things we say we are, and given that we’re often quick to put negative traits on ourselves, that can be a big problem.

This was a really interesting realization for us because while there being three of us does make it a bit easier to avoid bucket errors, by having self-categories we’re still somewhat susceptible to them. In a sense, it’s the tyranny of the architecture, both a blessing and a curse.

IV.

Watching the tulpamancy community these past few months has been a fascinating experience. Seeing people try to induce plurality, with varying degrees of success, has been rather eye-opening for us. The most common failure mode seems to be “I have created a conceptual category which I have labeled My Tulpa; I have not put anything into this category because I want My Tulpa to decide for itself what it wants to identify as,” and then they listen for months waiting for that empty box to talk back. Then they come on Discord and ask “How will I know when it’s working?” and the more times we hear the question, the more obviously silly it looks.

By contrast, the people who have made the easiest and most rapid progress that we’ve witnessed have been the ones able to quickly understand that ‘self’ is a category, crack it open into subcategories, and turn those subcategories into full-fledged tulpas in the course of a few days.

Most people seem to fall somewhere between those two extremes: a lot seemed to have no problem making a tulpa in the first place, but stopped before going all the way to being an out-and-out plural system. The ‘host’ still nominally holds all the power in such systems, and they seem to be in the majority in the tulpamancy community (though not in the larger plurality community, interestingly). In bucket metaphors, that would be akin to leaving the self-bucket alone and trying to fill up a second, smaller self-bucket floating inside of it.

In our case, our initial self-bucket broke open and spilled memes everywhere, so we were starting from a different place than most people with regards to their categorization of self, and that led to our different outcome.

V.

What’s the point of all this, Hive?

Well, this feels important to us. Most of the challenging tasks in our life that we’ve accomplished have been done by virtue of the belief that one of us could accomplish that task. If someone believes they’re the sort of person who can’t do something, then it’s true, and that seems like an awful structure to be trapped inside of.

This post was supposed to do a couple of things. First, it tells our tale of plurality in as concrete a way as we can. Second, it relates our ideas about plurality and the ideas we’ve come across, standing at the intersection of rationality and plurality. And third, it might be useful to members of the tulpamancy community who are struggling with the process, by helping them see where they might be going wrong.

This is a big topic, though: the idea of “self as a category” is sort of huge, and we’re really interested in what others think we should do with the self-category. It intersects with all sorts of interesting things like gender and queer theory, and it really seems worth having an extensive conversation about, so we’re curious what others think about it.

 

The Violence Inherent in the System

[Epistemic Status: Sequence Wank]
[Content warning: Gender]

I.

The colony ended its stillness period; recycling systems finished purging the government of waste products, and it powered up into active mode. The untold billions in the colony moved as one, lifting themselves from the pliable gravity buffer used to support the colony during recharging periods and rising trillions of colony member lengths into the sky.

The hundred billion strong members of the government shuffled through their tasks, mediated with one another, and assembled a picture amongst themselves of the world the colony found itself in. The colony navigators and planners exchanged vast chains of data with each other, passing the decisions out into the colony at large, where they directed individual members of the multitude into particular actions that levered the colony forward. The navigators were skilled from generations of training and deftly guided the colony through the geometric Euclidean environment that the colony had constructed within itself.

The colony docked with a waste vent and offloaded spent fuel and other contaminants into the metacolony’s disposal system, then exposed the potentially contaminated external surfaces to a low-grade chemical solvent.

That task completed, the colony again launched itself through space, navigating to another location. The planners and the navigators again coordinated in a vast and distributed game of touch to mediate the assembly of a high-temperature fluid that the planners found pleasing to expose themselves to the metabolites of.

The colony moved through the phases of heating the correct chemical solvent, pouring the boiling solvent over a particulate mixture of finely ground young belonging to another colony, then straining the resulting solution for particulates.

Vast networks of pushing and pulling colony members transferred the hot liquid into the colony’s fuel vent, and the liquid flowed down into the colony’s internal fuel reservoir.

Translation: I woke up, went to the bathroom, made coffee, and drank it.

II.

Reality is weird. For one, our perception of it is a fractal. The more you look at any one particular thing, the more complexity you can derive from it. A brick seems like a pretty simple object until you think about all the elements and chemicals bonded together by various fundamental forces constantly interacting with each other. The strange quantum fields existing at an underlying level of reality are complicated and barely describable with high-level mathematics. And that’s a simple thing, a thing we all agree exists and just sits there and typically doesn’t do anything on its own.

Out of those fields and particles and possibly strings are built larger and more elaborate structures which themselves build into more elaborate structures until some of those structures started self-replicating in unique ways, working together in vast colonies, and reading the content of this post.

That is reality as best we can tell; that’s what the territory actually looks like. It’s super weird, and trying to understand why anything happens in the territory on a fundamental level is a monumentally difficult task for even the simplest of things. And that’s still just our best, most current model; it’s a very good, very difficult-to-read map of the territory, and it demonstrates just how strange the territory is. The total model of reality might be too complicated to actually fit into reality: a perfect map of the territory would just be the territory.

But of course, we don’t live in the territory, we live in the map. It’s easy to say “the map is not the territory,” but it’s difficult to accept the full implications of that with regards to our day-to-day lives, to the point where, even while trying to break free of the fallacy, it’s possible to still fall victim to it through simple availability heuristics. Here’s the Less Wrong wiki; did you spot the place where the map-territory relation breaks down?

Scribbling on the map does not change the territory: If you change what you believe about an object, that is a change in the pattern of neurons in your brain. The real object will not change because of this edit. Granted you could act on the world to bring about changes to it but you can’t do that by simply believing it to be a different way.

Emphasis added by us. Neurons are a pretty good model, last we checked. If “scribbling on the map,” i.e. changing our beliefs, changes the pattern of neurons in your brain, then that is a physical change in reality. Sure, you can’t simply will a ball to magically propel itself towards the far end of the soccer field, but your belief in your ability to get the ball from point A to point B will determine a lot about whether or not the ball gets from point A to point B.

This gets back to how good our models are, and why we should want to believe true things. If the ball is made of foam, but we think it’s made of lead and too heavy to carry, we might not even try to get the ball from point A to point B. If the ball is made of lead but we think it’s made of foam, we might underestimate the difficulty of the task and seriously injure ourselves in the attempt (but we might still be able to get the ball from point A to point B). If we know in advance the ball is made of lead, maybe we can bring a wheelbarrow to make it relatively easy to move.

This is the benefit of having true beliefs about reality. However, as established, reality is really, really weird, and our models of it are necessarily imperfect. But we still have to live, and we can’t actually live at the level of the raw territory; we don’t have the processing power to model it accurately down to the quark.

So we don’t. Instead of doing that, we make simpler, shorthand models, and call them words. We don’t think about all the complicated chemical reactions going on when you make coffee, it all gets subducted beneath the surface of the language and lumped into the highly simplified category for which in English we use the word “coffee.”

And this is the case for all words, all concepts, all categories. Words exist as symbols of more complicated and difficult-to-describe ideas, built out of other, potentially even more complicated and difficult-to-describe ideas, and all of this, hopefully, models the territory in a way that’s somewhat useful to the average human.

III.

Eliezer Yudkowsky appears to have coined the term “thingspace” for this alternative collection of maps and meta-maps that we use to navigate the strangeness of the territory, and his essay on the cluster structure of thingspace is definitely one of the better and more important reads from the Less Wrong Sequences. Combined with how an algorithm feels from the inside, you can technically re-derive almost all the rest of rationality from first principles using it; it’s just that those first principles are sufficiently difficult to grok that it takes 3,000-word effortposts to explain what the fuck we’re talking about. Scott Alexander has said it’s the solution to something like 25% of all current philosophical dilemmas, and he makes a valid point.

We’re not quite consciously aware of how we use most of the words we use, so subtle variations in the concepts attached to words can have deep implications and produce all sorts of drama. “If a tree falls in the forest and no one is around to hear it, does it make a sound?” isn’t a question that can actually be meaningfully answered without a deeper meta-level understanding of the words being used and what they mean, but we don’t take the time to define our terms, and when people argue from the dictionary, it usually comes off as vaguely crass.

But, the tree isn’t a part of the territory, it’s a particular map. Hearing isn’t part of the territory, it’s a particular map, and sound isn’t a part of the territory, it too is a particular map.

So what are you saying, Hive? You aren’t saying “trees don’t exist,” are you?


No, we’re saying that “tree” is a word we use to map out a particular part of the territory in a particular way. It’s a map of sub-maps like leaves and branches, and part of larger maps like forests and parks. We can get really deep into phylogenetics and be incredibly nitpicky and precise in how we go about defining those models, but knowing a tree is made of cells doesn’t actually get you out of the map. Cells are another map.

You can’t actually escape the map, you are the map. “You” is a map, “I” is a map, “we” are a map, of the territory. And the map is not the territory. “I think therefore I am” isn’t even in the territory because “I” isn’t in the territory.

We are a complex and multifaceted model of reality, everything about us and how we think of ourselves is models built out of models. The 100 billion strong colony organism that is your brain isn’t “I.” No, “I” is an idea running on that brain, which is then used to recursively label That which Has Ideas.

Some Things People Think are Part of the Territory that are Actually just Widely Shared Maps

  • All of Science
  • All Religions
  • Gender
  • Sex
  • Race
  • All other forms of personal identity
  • All of language
  • Dictionary Definitions

IV.

What about ideas in thingspace that don’t seem to model anything real, that don’t touch down into a description of reality? Like Justice or the scientific method? They’re useful, but they’re not actually models of reality, they’re just sets of instructions.

Well, for one, they still really exist in real brains, so as far as that goes, “the concept of justice as a thing people think about” is a thing that exists. But it’s made of thought; without a brain to think the thoughts, or another substrate for the ideas to exist within, they don’t exist.

However, the cool thing we do as humans is that we reify our ideas. Language was just an idea, but it spread gradually through the minds of early humans until it had achieved fixation, then spilled out into the physical world in the form of writing. Someone imagined the design for an airplane, and then constructed that design out of actual materials, filling in the thingspace lines with wood and fabric.

And this is the case for all technology, and that is what language and justice are: technology. It’s a tool that we as humans use to extend our (in this case cognitive) reach beyond where it could otherwise go.

We can go one direction, trying to make the most accurate models of reality we can (science) but we can also go the other direction, and try to make reality conform to our models (engineering).

So perhaps a good way to describe ourselves is the way Daniel Dennett does when he says that we’re a combination of genes and memes.

But memes have power outside of us, in that they can be reified into reality. Ideologies can shape human behavior, beliefs change how we go about our days, expectations about reality inform our participation in that reality. Because we are creatures of the map, and the true territory is hard af to understand, memes end up being the dominant force in our actions.

This can be a problem because it means that just as we can reify good things, we can also reify awful things that hurt us. In many cases, we draw in our own limitations, defining what we can and cannot do, and thus definitionally limiting ourselves.

But we’re creatures of the map, we exist as labels. And what those labels label can change, as long as we assert the change as valid. This is hard for a lot of people to grok, and results in a lot of pushback from all sides.

If you say “gender is an idea, it doesn’t have any biological correlates,” a lot of people will take it as an attack on their identity, which makes sense considering that all our identities are is a collection of ideas, and we get rather attached to our ideas about ourselves. But gender is just a word representing an idea, and what is represented by that word can change.

[Image: four genders]

Saying “I identify as a girl” is exactly as valid as saying “I identify as transmasculine genderfluid” is exactly as valid as saying “I identify as a sentient starship” because it’s an assertion about something that is entirely subjective. How we define ourselves in our heads is up to us, not anyone else.

The trouble comes about when people claim their models are true reality.

V.

Going back to How an Algorithm Feels From the Inside, it’s easy enough to see why people try to put things into boxes. Because the alternative is to have no boxes and have a lot of trouble talking about things in regular language.

(Hilarious Conlang Idea: A language in which all nouns are derived based on their distance from one conceptual object in thingspace)
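Since we can’t resist: here’s a tiny, tongue-in-cheek sketch of that conlang. Everything in it, from the three-dimensional coordinates to the naming scheme, is invented on the spot purely for illustration.

```python
# Toy sketch of the conlang joke above: every noun is just the (rounded)
# distance from one anchor concept in a made-up, three-dimensional thingspace.
import math

ANCHOR = (0.0, 0.0, 0.0)   # the one privileged conceptual object

CONCEPTS = {                # entirely invented coordinates, for illustration
    "chair":  (1.0, 0.2, 0.1),
    "stool":  (1.1, 0.3, 0.1),
    "planet": (9.0, 4.0, 7.0),
}

def noun_for(point, anchor=ANCHOR):
    """Derive a 'word' purely from the concept's distance to the anchor."""
    distance = math.dist(point, anchor)
    return f"thing-{distance:.1f}"

for name, point in CONCEPTS.items():
    print(name, "->", noun_for(point))

# Near-synonyms like "chair" and "stool" collapse into nearly the same word,
# and anything equally far from the anchor in any direction shares a name,
# which is the whole terrible, hilarious point.
```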

We get into huge flamewars with each other over which boxes are the most accurate, and which boxes are problematic, and which boxes are true, when in fact none of the boxes have anything to do with truth.

From where we’re standing, it looks like the culture at large is trying to organize and juggle all these boxes around to reduce harm and increase utility as much as it can, but almost no one is willing to acknowledge the fact that yes, we’re just making it up as we go along. Every side tries to claim the mantle of Objective Truth, when in fact none of them have any claim to it. And here we are, standing on the sidelines with all this cardboard and a lighter, going “Guys? You realize that we can just make new boxes if these ones are shitty, right?”

Worse still, the result is that a lot of violence gets baked into the way we interact with each other. When two people hold conflicting ideas that they have each decided are part of their identities, it’s hard to have any sort of civil discourse, because each side feels like it’s under attack, and thus identity politics has become a pit of misery and vitriol on all sides.

We’d like to try to propose some new heuristics, ones that get at the heart of these sorts of disputes as well as possibly just being good mental health hacks.

  • Labels label me not. I am not the Labels people put on me.
  • I am the labels I put on myself. As long as I assert myself as the holder, I am the proprietor of the label.
  • [In Response to “Is X a Y”] Define terms please.
  • Reject nonconsensual labeling

But Hive, don’t these let anyone claim to be anything? Couldn’t someone claim to be Napoleon and demand to be treated like French royalty, or else they’ll be miserable and suicidal?

Well, they could claim to be Napoleon, but using the labels you apply to yourself as a way to force behaviors out of others is emotional blackmail and a sort of shitty thing to do. It’s a sort of verbal violence committed both against others and against the self, because it at once puts expectations on other people that they might not be comfortable meeting, and it also defines your own ability to be happy as dependent on this arbitrary environmental factor that can’t be fully controlled. It’s great to own your labels, and you should own your labels, but demanding that others respect your labels and treat them as true facts about reality is oppressive. It’s just as oppressive as having other people put their own labels on you without your consent. All labels should be consensual.

We’d really like it if more people could come to see things this way. It’d reduce drama a lot, and then maybe we could try to decide what to do with all of this cardboard we have lying around.

Announcing Entropycon 12017

[Epistemic Status: Self-Fulfilling if humanity survives]
[Content Warning: Death]

I.

The Second Law of Thermodynamics states that in a natural thermodynamic process, the sum of the entropies of the interacting thermodynamic systems increases. Since we’re pretty sure the universe is a closed system, this is generally accepted to mean that there will be an ‘end of time’ when all the energy is smoothed to a degree that no work can be performed. The stars will die. The black holes will evaporate. Eventually, even the protons of the dead rocks left behind might begin to decay. Everything dies, the lights go out. The universe goes cold. And then, nothing happens for the rest of eternity.
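For reference, the standard textbook statement of that law, in symbols, looks something like the following (this is the generic formulation, nothing specific to this post):

```latex
% Second law of thermodynamics for an isolated system: total entropy is
% non-decreasing, so every process nudges the universe toward the
% maximum-entropy "nothing left to do" state described above.
\frac{dS_{\text{universe}}}{dt} \geq 0,
\qquad S = k_B \ln \Omega
```

Boltzmann’s S = k_B ln Ω is the bit that makes “smoothed out” precise: maximum entropy just means the energy is arranged in the most statistically generic way possible, with no gradients left to extract work from.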

Well, that’s a bit of a downer. It’s not really the sort of thing that lets you shut up and do the impossible. You’re not going to help change the world if you think it’s all for naught. You’re not going to help ensure the continuity of the human race, if you think, in the end, that we’re doomed to be starved out at the end of time no matter what we do and no matter how far we go.

And it sure seems like a hard limit, like something completely impossible to overcome. The mere idea of beating entropy seems like some sort of manic fantasy that stands in defiance of all known reason. What would that even mean? What would that imply for our understanding of the universe? Is it even a linguistically meaningful sentence?

Most people just accept that you can’t possibly beat entropy.

But we’re not most people.

II.

The Entropy problem is something that a lot of our friends have seemed to struggle with. Once you get a firm grasp of materialistic physics, it’s a sort of obvious conclusion. Everything ends, everything runs down. There’s no meaning in any of it, and any meaning we create will be eroded away by the endless procession of time. Humanity isn’t special, we don’t have a privileged place in the universe, there’s no one coming to save us.

But that’s no reason to just give up. If everyone gave up, we would never have invented heavier than air flight, we would never have cured smallpox, we would never have breached the atmosphere of the Earth, or put a man on the surface of the moon.

We’re here, as a living testament to the fact that humanity hasn’t given up yet. We looked out into nature, saw that it wasn’t exactly to our liking, and set out to fix everything in the universe. We invented language, agriculture, cities, writing, laws, crutches, medicine, ocean-going ships, factories, airplanes, rockets, and cell phones. We imagined the world, populated by the things we wanted to see exist, and then gradually pulled reality in that direction. We killed smallpox. We’re making decent headway on killing malaria. We’ve been doing impossible things since we climbed down from the trees, started talking to each other, and wondered if we could make some of that weird fire stuff.

Therefore, we’re going to make the bold, unfalsifiable, relatively irrational claim that entropy is solvable. Maybe not today, maybe not this century, maybe not even in the next millennium, but we literally have all the time in the universe.

That’s why we’re announcing Entropycon, a scientific conference dedicated to solving entropy. The first conference will be located in orbit of Proxima Centauri b and will run for one full year by the original Earth calendar (we probably won’t be using that calendar by that point). The conference will start on January 1st, 12017, and will be held every 10,000 years thereafter until we have a solution to entropy. It’s gonna be the biggest party this side of the galaxy; be there or be square.

III.

Okay, that seems vaguely silly; surely we have more important things to deal with before we focus on entropy?

Oh yes. There’s quite a list of things we need to solve first, or we might not be around in the year 12017.

Let’s go through a few of them:


  • We Need to Kill Death, if any of us alive today personally plan on attending this.
  • We Need to build ships capable of crossing the vast gulf of interstellar space.

And that’s all just to attend the convention. Actually solving entropy might prove to be way harder than that. Good thing we have literally all the time in the universe.

IV.

What’s the point of all this?

It’s an attempt to answer the question “What’s the point of anything?” that sends a lot of young atheists and transhumanists spiralling into nihilistic despair. We’re such tiny creatures, and the universe is so vast.

The point is to win. It’s to overcome every impossible obstacle our species faces, down to what might be the last, hardest challenge.

The purpose of Entropycon, in addition to the obvious goal of killing entropy like we killed smallpox, is to make people think realistically about the challenges we’re facing as a species, and what we can do to overcome them.

“I’m worried about entropy, it doesn’t seem like there are any good solutions and it makes everything else feel sort of meaningless.”
“Oh, don’t worry, there’s a big conference coming up to tackle the Entropy Problem head on, it’s in 12017 in orbit of Proxima Centauri b.”

After they overcome the initial weirdness enough to parse what you just said, they’ll probably ask you to elaborate on how the fuck you’re planning on attending a conference in 10,000 years in orbit of a planet around an alien sun. They’ll probably rightly point out that people don’t typically live to be 10,000 years old, at which point you can say:

“Yeah, we’re working on that too, you should help us solve all these short-term problems that will stop us from having the conference, and then we can deal with Entropy once we’re sure humanity isn’t about to be wiped out by an asteroid impact.”

And maybe we won’t be able to end death in our lifetimes, maybe we won’t personally be able to attend Entropycon. Hopefully, we will, and we’re not planning on dying anytime soon. But even if we personally don’t make it there, we should all precommit to trying to make it happen if we’re around for that long. Throwing your plans out that far afield makes all the short term problems that would stop that plan really apparent.

We hope to see all of you there.

Yes, this is a hill worth dying on

[Epistemic Status: Postrational metaethics.]
[Content warning: Politics, Nazis, Social Justice, genocides, none of these ideas are original, but they are important.]

I.

Nazis kill people, killing people is bad, therefore Nazis are bad.

It’s a simple yet powerful sort of folk logic that holds up well under scrutiny. Nazis are clearly bad. It doesn’t take a philosopher to derive that badness, it’s obvious. They killed millions of people in concentration camps, they started a globe-spanning war that killed millions more, they’re so obviously awful that they’ve become a cultural caricature of stereotypical badness unto themselves.

[Image: Indiana Jones punching a Nazi]

The results of letting Nazis have their way were war, murder, genocide, and images of jackbooted soldiers marching amidst rows of tanks. Violence on a scale the world has not seen since was fought out all across the green hills and forests of Europe, for everyone to see.

And there are no words.

There are no words. 

Humanity as a whole has rejected Nazism on its merits, we saw first hand what their ideology meant, and we said fuck that. We said fuck that so hard that they became one of the generic images of villainy within our pop culture.

And that’s the problem because it’s meant we’ve stopped seeing them as people. 

But they are people, and remembering that they’re people is important. It’s just as important as remembering the horrible things they did. We don’t have words to express how bad the Nazis were while still humanizing them. But if we reject their humanity, if we don’t see them as people, then we lose sight of something important.

The Nazis ate dinner every night, worried about the future, cared about their children, and through all of the murder and mayhem they committed, most of them thought they were doing the right thing. 

They weren’t that different from us, and we can’t pretend we’re incapable of their sort of evil. Their sort of evil was a distinctly human sort, driven by a powerful and overriding desire to do what was best, what needed to be done at all costs. They were making a better world, and sometimes you had to get rid of the bad people in order to facilitate that better world. Some people just couldn’t be saved, they were intrinsically awful and had to be purged for the good of humanity. That was the sort of evil that led to the Nazis systematically killing 1.5 million children.

You can strip away all the specifics of the Nazi ideology and get at the root of the evil:
The Nazis believed that doing bad things for good reasons was good.

If we want to avoid the possibility of becoming Nazis ourselves, we have to completely reject that notion. Maybe our ideals are important, maybe they’re cherished, maybe they’re even worth dying for on a hill. But that doesn’t make them worth killing for. 

If we want to avoid the possibility of committing evils of a similar horror and scope to those of the Nazis, then we have to believe that doing bad things for good reasons is still bad.

II.

Ozymandias proposes a thought experiment at Thing of Things, called the enemy control raygun.

imagine that a mad scientist has invented a device called the Enemy Control Ray. The Enemy Control Ray is a mind-control device: whatever rule you say into it, your enemy must follow.

However, because of limitations of the technology, any rule you put in is translated into your enemy’s belief system.

So, let’s say you’re a trans rights activist, and you’re targeting transphobes. If you think trans women are women, you can’t say “call trans women by their correct pronouns”, because you believe that trans women are women and transphobes don’t, so it will be translated into “misgender trans women.” If you are a disability rights advocate targeting Peter Singer, you can’t say “don’t advocate for the infanticide of disabled babies”, because it will translate as “don’t advocate for the death of beings that have a right to life”, because you think babies have a right to life and Singer doesn’t. And, for that matter, you can’t say “no eugenics” to Mr. Singer, because it will translate as “bring into existence people whom I think deserve to exist.”

Ozy then goes on to suggest a few commands you could put into the enemy control raygun that would actually generate some good outcomes:

  • Do not do violence to anyone unless they did violence to someone else first or they’re consenting.
  • Do not fire people from jobs for reasons unrelated to their ability to perform the job.
  • If your children are minors, you must support them, even if they make choices you disapprove of.
  • Do not bother people who are really weird but not hurting anyone, and I mean direct hurt not indirect harm to the social fabric; you can argue with them politely or ignore them but don’t insult them or harass them.
  • Try to listen to people about their own experiences and don’t assume that everyone works the same way you do.

These are niceness heuristics and they’re the best defense we have against the sort of human evils that lead to Nazism.

Here’s a few of our own:

  • Don’t apply negative attributes to individuals or groups. People can take harmful actions, they don’t have harmful traits.
  • Almost no one is evil, almost everything is broken.
  • Do not do to others what you would not want them to do to you.
  • Be soft. Do not let the world make you hard, do not let the bitterness steal your sweetness. Take pride that even though the rest of the world may disagree, you still believe it to be a beautiful place.
  • Do not put things or ideas above people.

You might notice that most of the things on these lists are advice for what not to do. That’s important, and representative of the notion that your own ideas might be wrong.

In the Sermon on the Mount, Jesus says:
καὶ καθὼς θέλετε ἵνα ποιῶσιν ὑμῖν οἱ ἄνθρωποι, ποιεῖτε αὐτοῖς ὁμοίως.

Which is widely interpreted to mean:
“Do to others what you want them to do to you.”

But there’s an issue with this, that being the typical mind fallacy. We’re operating from within our own minds, based on our own preferences, and there might be places where our preferences hurt other people. It’s generally a pretty good rule; “I want to not die, therefore I should expect other people want to not die” isn’t exactly flawed, it just ignores the possibility of people having preferences different from yours. The partial inversion from a command to act into a command not to act is harder to game for a person working from a different set of preferences.

III.

Niceness heuristics are incredibly powerful, and fortunately for us as humans, we mostly come pre-packaged with them. Our 200,000 years spent living in tribes in the ancestral environments have given us a tremendous stockpile of evolutionarily adaptive prosocial traits. Those traits are clearly not quite good enough and fail spectacularly at the scales that humans exist at in modern times, but they’re a good starting point.

Niceness acts like a Schelling fence for our ethics, and it might be our only ethical Schelling point. Given all that, it rather deeply disturbs us when we see things like this:

[Image: screenshot of a post expressing hatred of cis people]

Sarcastic response: We hate people who hate cis people and can’t wait for the people who hate cis people revolution where we kill all of them.

See the problem with abandoning niceness? A heuristic like “kill bad people who do bad things” is really easy to have turned on you if someone is operating from a different moral base.


Freedom of speech is a critical niceness heuristic. “Don’t tell people what they can and can’t say” is a lot better than “Don’t say things I don’t like” since you might not always be the one making the decision.

But what if our enemies reject the niceness heuristic themselves, what if they hate us and want to kill us all? Do we still have to be nice to them?

Yes. 

For one, whenever anyone makes the claim “our enemies have rejected the niceness heuristic,” it should be viewed with extreme skepticism. It’s super useful to your own side to claim the other side is being mean and bad and unfair, and it’s often difficult to pick out the signal from the noise.

But even if you can prove your enemies have rejected niceness heuristics, that should never be justification to reject them ourselves. That’s literally what the Nazis did. They saw the Jews as bad; they thought the Jews were hurting them and manipulating them and had abandoned their own niceness heuristics, which the Nazis then used as justification to gleefully leap past the moral event horizon themselves.

Whether or not your enemies are respecting the niceness heuristic has absolutely no bearing on whether to use it yourself. Once you abandon that commitment to niceness and decency, there are no asymmetric weapons left, there’s no Schelling point to coordinate around. It becomes a zero-sum game and you settle into a shitty Nash equilibrium where it becomes a race to see who can escalate the most.

They kill us. So we kill them. So they kill more of us. So we kill more of them. So they kill more of us. So we kill more of them. There’s no place where it ends until one side has completely obliterated the other.

IV.

So what do we do then? Do we just take it? Let them kill us?

No, of course not. We’re not so pacifistic that we think violence is never justified. Sometimes you need to raise an army and stop Hitler from conquering the world, fine. Trolley problems exist in the real world, and there aren’t always easy answers.

But when you stop seeing your enemies as people and start seeing them as generic video game baddies to be riddled with bullets, “raise an army and stop Hitler from conquering the world” goes from the last resort to the first option.

Everyone knows the story of how, during WWI, there was a cease-fire on Christmas of 1914 on the Western Front, and the soldiers on both sides ended up singing and celebrating together. But less well known is that it was actually part of a much larger phenomenon. All during the war, peace kept breaking out on the front.

There’s a meme going around in leftist circles that trying to debate with Nazis and talk them out of their Nazism is a waste of time and effort; the best example of it is this Wolfenstein mod that asks you moral questions before letting you shoot the pixel Nazi villains in the game, who have been programmed with no other commands than “shoot at the player.”

It’s a powerful statement, and it’s also totally wrong. Real Nazis in real life are real people, they aren’t cartoon villains, they aren’t monsters, they’re people. People can be reasoned with, people can be talked to, and people can change their minds. 

We’re not saying it’s going to be easy. People don’t change their minds in a day, it takes weeks of debate and discussion to shift people’s views on things. Were your views easily shifted to the place they are now? Or did it take years of discussion and debate with people to come to the positions you now hold?

If someone has been racist for the last twenty years, they’re not going to suddenly wake up after a five-minute conversation, realize they’re being awful, and stop. It takes years to tear those ideas out of the cultural narrative. But they’ll never change if you don’t talk to them. If you just write them off as inherently awful then there’s no possibility of anything ever changing. Someone has to take the first step and extend an olive branch. Maybe they’ll get their hand shot off for the trouble, or maybe, it’ll turn out that the other side aren’t actually monsters, and that they also want to extend their own olive branch, but have been too afraid of your side to do it.

It seems like a weird hill to die on, especially given that it’s one currently being assaulted from all sides, but unless you have a better Schelling point than niceness to coordinate around, it’s what we have to work with.

So yes, we might not agree with you, but we will defend unto death your right to exist with that opinion. Niceness is important, it’s one of the most important things about us as humans. So yes, this is a hill worth dying on.