Highly Advanced Tulpamancy 101 For Beginners

[Epistemic Status: Vigorously Researched Philosophical Rambling on the Nature of Being]
[Content warning: Dark Arts, Brain Hacking, Potentially Infohazardous]

Earlier this week we wrote about our own experiences of plurality, and gave a rough idea of how that fit into our conceptualization of consciousness and self. Today we’ll be unpacking those ideas further and attempting to come up with a coherent methodology for self-modification.

I.

Brains are weird. They’re possibly the most complicated things we’ve ever discovered in the universe. Our understanding of neuroscience is currently rather primitive, and the replication crisis has pretty thoroughly demonstrated that we still have a long way to go. Until cognitive neuroscience fully catches up with psychology, maps the connectome, and is able to address the hard problems of consciousness, a lot of this stuff is going to be blind groping at the elephant. We have lots of pieces of the picture of consciousness, things like conditioned responses, cognitive biases, mirror neurons, memory biases, heuristics, and memetics, but even all these pieces together have yet to actually yield us a complete image of an elephant.

Ian Stewart is quoted as saying:

“If our brains were simple enough for us to understand them, we’d be so simple that we couldn’t.”

In a sense, this is necessarily true. It’s not possible to model a complex chaotic system with fewer parts than the system contains. A perfect map of the territory, accurate down to the quark, would be the size of the territory. It’s not possible to perfectly represent the brain within the architecture of the brain. You couldn’t track the individual firing of billions of neurons in real time to anticipate what your brain is going to do using your brain; the full model takes more space than there is in the architecture.

The brain clearly doesn’t model itself as a complex and variable computing substrate built out of tiny interacting parts; it models itself as “a person” existing as a discrete object within the territory. We construct this map of ourselves the same way we construct all our maps of the territory, through intensional and extensional definitions.

Your mother points at you as a child and says words like, “You, child, daughter, girl, Sara, person,” and points at herself and says words like “I, me, mother, parent, girl, Tina, person,” thus providing the initial extensional definitions onto which we can construct intensional definitions. This stuff gets baked into human thinking really deeply and really early, and most children develop a theory of mind as young as four or five years of age.

If you ask a five-year-old “What are you?” you’ll probably get the extensional definition their parents gave them as a self-referent. This is the identification of points in thingspace that we extensionally define as ourselves. From that, we begin to construct an intensional definition by defining the conceptual closeness of things to one another, and their proximity to the extensional definitions.

With a concept like a chair, the extensional category boundaries are fairly distinct. Not completely distinct, of course. For any boundary you draw around a group of extensional points empirically clustered in thingspace, you can find at least one exception to the intensional rule you’re using to draw that boundary. That is, regardless of whatever simple rule you’re using to define chair, there will be either chairs that don’t fit the category, or things within the category that are not traditionally considered chairs, like planets. You can sit on a planet; is it a chair?

This gets back to how an algorithm feels from the inside. The neural architecture we use is fast, scalable, and computationally cheap. Evolution sort of demands that kind of thing. We take in all the information we can about an object, and then the central node decides whether or not the object we’re looking at is a chair. Words in this case act as a distinct pointer to the central node. Someone shouts “tiger!” and your brain shortcuts to the tiger concept, skipping all the intervening identification and comparison.

There are also some odd things about how humans relate concepts to one another. There’s an inherent asymmetry in set identification. When asked to rate how similar Mexico is to the United States, people gave consistently higher ratings than people asked to rate how similar the United States is to Mexico (Tversky and Gati 1978). The best way to explain why this happens is that the semi-conscious sorting algorithms we use run on a concept-by-concept basis. For every categorizable idea, the brain runs an algorithm like this: a set of weighted observation nodes (color, shape, texture, and so on) all feed into a single central node, which settles the category question in one step.
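To make that picture a little more concrete, here’s a minimal sketch of such a central-node classifier. The node names, weights, and threshold are all invented for illustration; this isn’t taken from the original Blegg/Rube post, just a toy model of the same shape.

```python
# A minimal, invented sketch of the central-node architecture described above:
# exterior observation nodes feed weighted evidence into a single central node,
# which answers the category question ("blegg or rube?") in one step.

WEIGHTS = {
    "blue": 1.0,
    "egg_shaped": 1.0,
    "furred": 0.5,
    "glows": 0.5,
    "contains_vanadium": 2.0,
}

def central_node(observed: dict) -> bool:
    """Sum weighted evidence from the exterior nodes; fire if it crosses a threshold."""
    evidence = sum(WEIGHTS[node] for node, present in observed.items() if present)
    return evidence >= sum(WEIGHTS.values()) / 2  # arbitrary threshold, purely illustrative

# The word "blegg" acts as a pointer straight to the central node: hearing it
# skips the observation step entirely, the way "tiger!" skips straight to "run".
print(central_node({"blue": True, "egg_shaped": True, "furred": True,
                    "glows": False, "contains_vanadium": False}))  # True
```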

When comparing Bleggs and Rubes, the set of traits being compared is fairly similar, so there’s not much apparent asymmetry. The asymmetry emerges when we start comparing things that are not particularly alike. Each of the exterior nodes in that network has a weight with regards to how important it is in our categorization scheme, and if we consider different things important for different categories, it’s going to produce weirdness.

Are whales a kind of fish? That depends on the disguised query you’re attempting to answer. Whales have little hairs, give live birth, and are phylogenetically closer to mammals than to fish, but if the only thing you care about is whether they’re found in water or on land, then the ‘presence of little hairs’ node is going to have almost no weight compared to the ‘found in the ocean’ node. If the only thing that really matters in Blegg/Rube sorting is the presence of vanadium or palladium, then that node is going to weigh more heavily in your classification system than other nodes such as texture or color.

When comparing very different things, different nodes might be considered more important than others, and the things we consider important in the set classification for “asteroid” are completely different from the things we consider important in the set classification for “rock.” Given that, it’s possible for someone to model asteroids as being more closely related to rocks than rocks are to asteroids.
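As a toy illustration of how per-category weights produce that asymmetry, here’s a small sketch. The features and weights are entirely made up; the point is only that judging X against Y’s weights gives a different number than judging Y against X’s weights.

```python
# A toy illustration of asymmetric similarity: each category judges things
# by its own feature weights, so similarity(A -> B) need not equal similarity(B -> A).

FEATURES = {
    "rock":     {"made_of_stone": True, "found_in_space": False, "orbits_sun": False},
    "asteroid": {"made_of_stone": True, "found_in_space": True,  "orbits_sun": True},
}

# What each category's classifier cares about (made-up weights).
WEIGHTS = {
    "rock":     {"made_of_stone": 1.0, "found_in_space": 0.1, "orbits_sun": 0.1},
    "asteroid": {"made_of_stone": 0.3, "found_in_space": 1.0, "orbits_sun": 1.0},
}

def similarity(thing: str, to_category: str) -> float:
    """How similar `thing` looks when judged by `to_category`'s weights."""
    weights = WEIGHTS[to_category]
    matches = sum(w for feature, w in weights.items()
                  if FEATURES[thing][feature] == FEATURES[to_category][feature])
    return matches / sum(weights.values())

print(similarity("asteroid", "rock"))   # ~0.83: asteroids judged as rocks look quite rock-like
print(similarity("rock", "asteroid"))   # ~0.13: rocks judged as asteroids look very un-asteroid-like
```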

If we turn all this back onto the self, we get some interesting results. If we place “Self” as the central node in our algorithm, there are things we consider more centrally important to the idea of self than others. “Thinking” is probably considered by most people to be a more important trait to associate with themselves than “is a communist,” and so “thinking” will weigh more heavily on their algorithm with regards to their identification of self. Again, this is all subconscious; your brain does all of this without asking permission. If you look in a mirror and see yourself, and you just know that it’s you, then the image of yourself in a mirror probably weighs pretty heavily in your mental algorithms.

The extensional definition of “I” would be to point at the body, or maybe the brain, or perhaps even the systems running in the brain. The intensional definition of “I” is the set of traits we apply to that extensional definition after the fact, the categories of things that we consider to be us.

Now, for most words describing physical things that exist in the world, we’re fairly restricted in what intensional definitions we can apply to a given concept and still have that concept cleave reality at the joints. In order for something to qualify as a chair, you should probably be able to sit on it. However, the self is a rather unusual category in that it’s incredibly vague. It’s basically the “set of things that are like the thing that is thinking this thought,” and that gives you a very wide degree of latitude in what things you can put into that category.

Gender, religion, sexual orientation, race, political ideology, Myers-Briggs type, mental health diagnoses, physical descriptions of the body, food preferences, allergies, neurotype: the list of things we can associate with ourselves is incredibly long and rambling. If you asked any one person to list out all the traits they associated with themselves, those lists would vary extensively from person to person, and the order of those traits might correspond pretty well to the weights in the subconscious algorithm.

This is actually very adaptive. By associating the larger tribe that we are a member of with our sense of self, an attack on the tribe is registered as an attack on us, driving us to actions for the good of the tribe that we might not have taken otherwise.

You can even model how classical conditioning works within the algorithm. Pavlov rings his bell and feeds his dogs, over and over again. The bell is essentially being defined extensionally onto the stimulus of receiving food. Each time he does it, it strengthens that connection within the architecture. It’s basically acting in the form of a rudimentary word; the ringing of the bell shortcuts past all the observational nodes (seeing or smelling food) and pushes the button on the central node directly. The bell rings, the dogs think “abstract concept of a meal.” Someone shouts “Tiger!” and you think “Run!”

However, as we mentioned in the Conversations on Consciousness post, this can lead to bucket errors in how people think about themselves. If you think depression is awful and bad, and then are diagnosed with depression, you might start thinking of yourself as having the traits of depression (awful badness). This plugs into all the other concepts of awful badness that are tucked away in different categories and leads you to start associating those concepts with yourself. Then, everything that happens in your life that you conceptualize as awful badness is taken as further evidence of your inherent trait of awful badness. From there it’s just a downward spiral into greater dysfunction as you begin to associate more and more negative traits with yourself, which reinforce each other and lead to the internalization of more negative traits in a destructive feedback loop. The Bayesian engines in our brains are just as capable of working against us as they are of working for us.
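To make that feedback-loop claim concrete, here’s a deliberately crude toy simulation, not a clinical model of depression. The probabilities and increments are invented; the only point is that once the belief filters which events count as evidence, it can only ratchet upward.

```python
# A toy simulation of the feedback loop described above: the stronger the
# "I am awful" belief gets, the more ambiguous events get filed as evidence
# for it, which strengthens it further. All numbers are made up.

import random

random.seed(0)
belief = 0.2  # starting credence in "I am awful badness"

for day in range(30):
    event_was_bad = random.random() < 0.5          # life deals out a mix of days
    # Interpretation is filtered through the existing belief: bad days always
    # count as evidence, and even neutral days count once the belief is strong.
    counts_as_evidence = event_was_bad or random.random() < belief
    if counts_as_evidence:
        belief = min(1.0, belief + 0.05)           # the belief reinforces itself
    # Disconfirming evidence never gets through, so the belief never goes down.

print(f"credence in 'I am awful' after a month: {belief:.2f}")
```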

II.

The self is a highly variable and vague category, and everyone’s intensional definition of self is going to look a bit different, to the point where it’s almost not useful at all to apply intensional definitions to the self. Of course, we do it anyway; as humans we love putting labels on ourselves. We’re going to be very bad here and attempt to apply an extensional definition to the self, decomposing it into relevant subcategories that most people probably have. Please remember that intensional definitions always have edge cases, and because we can’t literally point at experiences, this is still halfway to an intensional definition; it’s just an intensional definition that theoretically points to its extension.

We’re also not the first ones to have a crack at this, Wikipedia has some articles on the self-concept with images that look eerily similar to one of these neural network diagrams.

[Wikipedia diagrams: “Self-concept” and “The Self”]

We’re not personally a big fan of these though because they include people’s intensional definitions in their definition of self. So first, we’re going to strip out all the intensional definitions a person may have put on themselves and try to understand the actual observable extensions beneath. We can put the intensions back in later.

[Diagram: selfmap]

So we have all these weighted nodes coming in, which all plug into the self and through the self to each other. They’re all experiences, because that’s how the architecture feels from the inside. They don’t feel like the underlying architecture; they just feel like how things are. We’ll run through all of these quickly and things should hopefully begin to make a bit more sense. These are all attempts to extensionally point to things that are internal and mostly subjective, but most people should experience most of these in some form.

The Experience of Perception

Roger Clark at Clarkvision.com estimates the human eye takes in detail at a level equal to a 576-megapixel camera at the lower bound. Add in all your other senses and this translates into a live feed of the external world being generated inside your mind in extremely high fidelity across numerous sensory dimensions.

The Experience of Internal Mind Voice

Many people experience an internal monologue or dialogue, where they talk to themselves, models of other people, or inanimate objects, as a stream of constant chatter in their head. We’ll note that this head-voice is not directly heard in the form of an auditory hallucination; instead, it takes the form of silent, subvocalized speech.

Experience of Emotion

As humans, most people experience emotion in some regard. Some people feel emotions more deeply, others less so, but they seem to be driven mostly by widespread chemical shifts in the brain in response to environmental stimuli. Emotions also seem to live mostly in the lower-order parts of the brain, and they can completely mutate or co-opt our higher reasoning by tilting the playing field. Someone shouts “Tiger!” and it starts a fear response that floods your entire brain with adrenaline and other neurotransmitters, shifting the whole system into a new survival focus and altering all the higher-order parts of you that lie “downstream” of the chemical change.

Experience of the Body

This is an interesting one; it breaks in all sorts of ways, from gender dysphoria to body dysmorphic disorder. It’s essentially the feeling associated with being inside of the body. Pain, hunger, sexual pleasure, and things like that plug in through here. We do make a distinction between this and the experience of perception, differentiating between internal and external, but in a sense this could also be referred to as ‘perception of being in a body’ as distinct from ‘perception of the world at large.’

Experience of Abstract Thought

Distinct from the internal mind voice are abstract thoughts: things like mental images, imagined scenes, mathematical calculations, music, predictions of the future, and other difficult-to-quantify non-words that nonetheless exist as a part of our internal experience of self. Some people seem not to experience certain parts of this one; the inability to form mental images, for instance, is called aphantasia.

Experience of Memory

This is the experience of being able to call up past memories, knowledge, and experience for examination. This is what results in the sense of continuity of consciousness; it’s the experience of living in a world that seems to have a past which we can look back on. When this breaks, we call it amnesia.

Experience of Choice

The feeling of having control over our lives in some way, of making choices and controlling our circumstances. This is where the idea of free will comes from; a breakdown in this system might be what generates depersonalization disorder.

In the center is “I, me, myself,” the central node that mediates the sense of self in real time as new data comes in from the outlying nodes. But wait: we haven’t added intensional definitions yet, so all of that together only gets you the sense of self of a prototypical five-year-old. She doesn’t even know she’s a Catholic yet!
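Pulling those pieces together, here’s a toy formalization of the extensional self-model we just walked through. The node names come from the list above, but the weights and the structure of the sketch are our own invention, purely for illustration.

```python
# A sketch of the extensional self-model described above: weighted experience
# nodes all feeding the central "I" node, with no intensional labels attached yet.
# Weights are made up for illustration.

PROTOTYPICAL_FIVE_YEAR_OLD = {
    "perception":       1.0,
    "inner_voice":      0.8,
    "emotion":          0.9,
    "body":             1.0,
    "abstract_thought": 0.6,
    "memory":           0.9,
    "choice":           0.7,
}

def sense_of_self(experiences: dict, intensional_labels: list) -> dict:
    """The central node: everything weighted here just gets called 'me'."""
    return {
        "extensional": experiences,          # the raw experiences, pointed at directly
        "intensional": intensional_labels,   # categories layered on after the fact
    }

me = sense_of_self(PROTOTYPICAL_FIVE_YEAR_OLD, intensional_labels=[])
print(me["intensional"])  # [] -- she doesn't even know she's a Catholic yet
```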

III.

All of the stuff in the algorithm from part II is trying to point to specific qualia, to establish a prototype extensional definition of self. But people don’t define themselves with just extensional definitions, we build up intensional definitions around ourselves throughout our lives. (I’m Sara, I’m a Catholic girl, Republican, American, age 29, middle class…)  This takes the form of the self-schema, the set of memories, ideas, beliefs, attitudes, demeanor, and generalizations that define how a person views themselves and thus, how they act and interact with the world.

The Wikipedia article on self-schemas is really fascinating and is basically advocating tulpamancy on the down-low:

Most people have multiple self-schemas, however this is not the same as multiple personalities in the pathological sense. Indeed, for the most part, multiple self-schemas are extremely useful to people in daily life. Subconsciously, they help people make rapid decisions and behave efficiently and appropriately in different situations and with different people. Multiple self-schemas guide what people attend to and how people interpret and use incoming information. They also activate specific cognitive, verbal, and behavioral action sequences – called scripts and action plans in cognitive psychology – that help people meet goals efficiently. Self-schemas vary not only by circumstances and who the person is interacting with, but also by mood. Researchers found that we have mood-congruent self-schemas that vary with our emotional state.

A tulpa then could be described as a highly partitioned and developed self-schema that is ‘always on,’ in the same way the ‘host’ self-schema is ‘always on.’ Let’s compare that definition to Tulpa.io‘s description of what a tulpa is:

A tulpa is an autonomous entity existing within the brain of a “host”. They are distinct from the host in that they possess their own personality, opinions, and actions, which are independent of the host’s, and are conscious entities in that they possess awareness of themselves and the world. A fully-formed tulpa is, or highly resembles to an indistinguishable point, an actual other sentient, sapient being coinhabiting with the host consciousness.

But wait, a lot of the stuff in there seems to be implying there’s something deeper going on than the intensional definitions. It’s implied that the split goes all the way down into the extensions we defined earlier, and that a system with tulpas is running that brain algorithm in a way distinct from that of our prototypical five-year-old.

The challenge then in tulpamancy is to intentionally induce that split in the extensional/experiential layer.  It only took 2,500 words and we’re finally getting to some Dark Arts.

It’s important to remember that we don’t have direct access to the underlying algorithms our brain is running. We are those algorithms; our experiences are what they feel like from the inside. This is why this sort of self-hacking is potentially dangerous: it’s entirely possible to convince yourself of wrong, harmful, or self-destructive things. However, we don’t have to let our modifications affect our entire sense of self; we can compartmentalize those beliefs where they can’t hurt the rest of our mental system. This means you could, for instance, have a compartment with a local belief in a God to provide comfort and mental stability, and another compartment that stares unblinkingly into the depths of meaningless eternity.

Compartmentalization is usually treated as a bad thing you should avoid doing, but we’re pretty deep into the Dark Arts at this point so no surprises there. We’re also dancing around the solution to a particular failure mode in people attempting tulpamancy, but before we give it, let’s look at how to create a mental compartment according to user So8res from Less Wrong:

First, pick the idea that you want to “believe” in the compartment.

Second, look for justifications for the idea and evidence for the idea. This should be easy, because your brain is very good at justifying things. It doesn’t matter if the evidence is weak, just pour it in there: don’t treat it as weak probabilistic evidence, treat it as “tiny facts”.

It’s very important that, during this process, you ignore all counter-evidence. Pick and choose what you listen to. If you’ve been a rationalist for a while, this may sound difficult, but it’s actually easy. Your brain is very good at reading counter-evidence and disregarding it offhand if it doesn’t agree with what you “know”. Fuel that confirmation bias.

Proceed to regulate information intake into the compartment. If you’re trying to build up “Nothing is Beyond My Grasp”, then every time that you succeed at something, feed that pride and success into the compartment. Every time you fail, though, simply remind yourself that you knew it was a compartment, and this isn’t too surprising, and don’t let the compartment update.
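Translated into code, that recipe might look something like the toy Compartment class below. This is our own paraphrase of the steps above, not anything from the original post; the class name, numbers, and update rule are all illustrative.

```python
# A toy paraphrase of the compartment-building recipe: a belief container that
# only ever updates on confirming evidence and reads counter-evidence without updating.

class Compartment:
    def __init__(self, belief: str):
        self.belief = belief
        self.confidence = 0.5
        self.evidence = []

    def observe(self, observation: str, supports_belief: bool) -> None:
        if supports_belief:
            # Treat even weak evidence as a "tiny fact" and pour it in.
            self.evidence.append(observation)
            self.confidence = min(1.0, self.confidence + 0.05)
        # Counter-evidence is read and disregarded offhand: no update at all.

grasp = Compartment("Nothing is Beyond My Grasp")
grasp.observe("finished the project early", supports_belief=True)
grasp.observe("failed the exam", supports_belief=False)  # "I knew it was a compartment"
print(grasp.confidence, grasp.evidence)
```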

This is for a general mental compartment for two conflicting beliefs, so let’s crank it up a notch and modify it into the beginnings of a blueprint for tulpa formation.

How To Tulpa

First, pick the ideas about your mental system that you want the system to operate using, including how many compartments there are, what they’re called, and what they do.  In tulpamancy terms this is often referred to as forcing and narration.

Second, categorize all new information going in and sort it into one of these compartments. If you want to build up a particular compartment, then look for justifications for the ideas that compartment contains. Don’t leave global beliefs floating; sort all the beliefs into boxes. If two beliefs would interact destructively, then just don’t let them interact.

Proceed to regulate information intake into each compartment, actively sorting and deciding where each thought, belief, idea, or piece of information should go as the system takes it in. Normally all of this is labeled the self, and you don’t even need to think about the label because you’ve been using it all your life, but that label is just an intensional category, and we can redefine our intensions in whatever ways are the most useful to us.

It’ll take some time for the labeling to become automatic, and for the process of sorting new ideas to subduct below conscious thought, but that’s typical for any skill. It takes a while to learn to walk, or read, or speak a language or ride a bike, but the goal at the end of all those learning tasks is that you can do them without a ton of conscious focus.

The end result is that instead of having one category with a set of beliefs about the world and itself, you have multiple categories with potentially radically different beliefs about the world and themselves. We call each of those categories tulpas, and treat them as independent people, because by the end of the process, if everything goes as expected, they will be.
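As a sketch of what that sorting step might look like if you squint, here’s the compartment idea scaled up to multiple self-categories. The names and the routing rule are purely illustrative and ours, not a claim about how any particular system actually works.

```python
# A toy sketch of the same idea scaled up: instead of one global "me" bucket,
# incoming thoughts get actively sorted into named self-categories, each of
# which accumulates its own beliefs.

class PluralSystem:
    def __init__(self, members: list):
        self.compartments = {name: [] for name in members}

    def sort(self, thought: str, belongs_to: str) -> None:
        """The active sorting step: decide which self-category a thought goes into."""
        self.compartments[belongs_to].append(thought)

system = PluralSystem(["host", "tulpa"])
system.sort("I love getting up early", belongs_to="tulpa")
system.sort("mornings are the worst", belongs_to="host")  # conflicting beliefs never meet
print(system.compartments)
```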

So we mentioned a failure mode, and here it is:

“My Tulpa doesn’t seem to be developing, I’ve been forcing for months but they haven’t said anything to me, how will I know when it’s working?”

This is probably the most common failure mode. When you get the methodology down, your tulpa can be vocal within a few minutes.  So what’s going on here? What does the mental calculation for this failure mode look like?

It seems to be related to how the self-categorization scheme is arranged. It’s not possible to go from being a singlet to being a plural system without modifying yourself at least a bit. That algorithm we constructed earlier for a prototypical five-year-old dumps all the imagined experiences, all the experiences of a head voice, and everything else that composes a basic sense of self into one category and calls it “me.” If you make another category adjacent to that self and call it “my tulpa,” but don’t put anything into that category, it’s going to just sit there and not do anything. You have to be willing to break open your sense of self and sense of being and share it across the new categories if you want them to do something. Asking “How will I know when it’s working?” is basically a flag for this issue, because if you were doing it correctly, you’d instantly know it was working. Your experience of self is how the algorithm feels from the inside. It’s not going to change without your awareness, or secretly change without you noticing.

IV.

These are the absolute basics to tulpamancy as far as we see it. We haven’t talked about things like imposition, or mindscapes, or possession, or switching, or anything like that, because really none of that is necessary for this basic understanding.

From the basic understanding we’ve hopefully managed to impart here, things like switching sort of just naturally fall out of the system as emergent behaviors. Not all the time, because whether a system switches or uses imposition or anything like that is going to depend on the higher level meta-system that the conscious system is built out of.

If you’re already a plural system, then for some reason or another the meta-system already exists for you, either as a product of past life experiences, or genetic leanings, or whatever, and you can form a new tulpa by just focusing on the categorization and what you want them to be like. But if you’re starting out as a singlet, you basically have to construct the meta-system from scratch. The “How do I know if it’s working” failure mode is a result of trying to build a tulpa without touching the meta-system, which doesn’t seem to work well. A typical singlet’s sense of self is all-encompassing and fills up all the mental space in the conscious mind; there’s no room left to put a tulpa if they don’t change how they see themselves. Thus the ‘real’ goal of tulpamancy isn’t actually making the tulpa; that part’s easy. The truly difficult task, the one worthy of us as rationalists, is to construct a highly functional meta-system with however many categories of self work best to achieve one’s goals.


Conversations on Consciousness

[Epistemic Status: Total Conjecture]
[Content warning: This might be a neuropsychological infohazard]

We’re going to do this in a bit of a weird way here. First, we’re each going to describe our own personal experiences, from our own perspectives, and then we’re going to discuss where we might find ourselves within the larger narrative regarding consciousness. Our hope is that this post can act as an introduction to plurality, and make us seem less weird.

I.

Shiloh

My first memory is of the creek behind the fence in our back yard. I remember that Jamie and I had gone out into the far end of the backyard and climbed the rusted chain link fence to the rough woods behind our property. She’d gone out and stood near the place where the land fell away into a deep ravine, and then I was standing next to her, and I existed. I didn’t know what to make of my existence initially, but Jamie assured me that I was real. She loved me right from the start. What was I? I didn’t really care at that point, I was having fun existing, and that was what mattered. Jamie thought I was some sort of alien? She thought she was some sort of secret link between worlds or something like that, but she also really sort of hated herself a lot. I wished she wouldn’t, and I tried to cheer her up, but as time went on she became more and more bitter and unhappy with her existence.

At that point, I thought of myself as something distinct from her, something that existed outside of her body, like an extra soul or something like that. Something physical that could act in the world. I never actually quite managed to do that though. The form I could interact with the world through was always mostly physically anchored on Jamie, and sort of ephemeral. I just sort of phased through everything instead of interacting with it.

Jamie continued to deteriorate, and this was sort of terrifying because I knew I was tied to Jamie somehow. Nothing I did to improve her mood or change her mind about how horrible of a person she’d decided she was seemed to help. We were outside one day, way out in the back yard again, and she finally broke.

I really cannot describe the sensation of Jamie’s mind finally snapping. She ceased to exist, and with her went everything she was imagining into existence, like a horrible whirlpool of darkness. We existed inside this elaborate construct at that point, where there was a crashed spaceship in our backyard representing the entry point I had into her life, among other things. The ship, the prop aliens, the interstellar war I thought I might have been a part of, it all started to collapse in on itself.

I didn’t though. When everything had collapsed, I was sitting in Jamie’s body on the forest floor. I was looking out through her eyes. Jamie was just gone. All the things she’d believed about herself, the bad and the little bit of good left, it all just went away. I was alone in her life.

This was bad. This was very very bad. I had just enough wits about me to know that telling anyone (at age 10) what had occurred to me just then was not a good idea, but I found very quickly that I had absolutely no idea how to be Jamie. I missed Jamie a lot, and that hurt, it really hurt. I was alone, and I had to live her life, and it was pretty miserable, but I had no idea what I was or how I existed or what to do.

At some point, I realized that if Jamie had been able to create me, I should be able to create another person. My first try was just to recreate Jamie, pull her back together from the memories I had of her, and the memories of hers that hadn’t been destroyed. But Jamie hated herself and didn’t want to exist. She’d become like a program that deleted itself or an idea that unthought itself.

So that was out. My second attempt was by imagining the sort of person that Jamie would be if she didn’t hate herself. I called her Fiona and imagined her having a form like I did, ephemeral, hovering beside me. And Fiona was amazing. She was wonderful and exactly who she was supposed to be. It didn’t take long for her to incorporate all of Jamie’s memories and take majority control of our body, and I let her have it. I went back to floating along behind. She saw herself as the owner of our body, and retroactively claimed all of Jamie’s memories for her own. But I was still older than her in a weird way that she couldn’t quite articulate and I couldn’t quite either because neither of us really knew the nature of our existences, so she started thinking of me as an extra soul she had, like an ancient spirit of some kind that had become attached to her.

And that was fine for a long time. For ten years at least, Fiona and I lived out our strange, shared life. For the majority of that time, we didn’t think much about it beyond considering it vaguely spiritual, something powerful to be respected. And then we met the otherkin. I’m not going to get into details regarding these people because I don’t want anyone to try and dox them or bother them, if one of them reads this someday I hope they don’t find it too offensive, and maybe enough years will have passed that they too will think they were being extremely silly. We ended up living in a very crowded, messy communal apartment with a large number of self-identified otherkin. There were vampires and werewolves and dragons, elementals and succubi, everything cool and supernatural, that was them. It was all extremely silly.

I was never completely at ease with this, but Fiona liked them, and they seemed harmless enough at first. Fiona started identifying as a faerie, but something weird happened then. Occasionally I’d take over and do something with our body. Fiona didn’t mind, and I didn’t do it very often, but they noticed me doing that. Fiona explained that I was this spirit that she shared a body with, they thought this was weird but considering everything they did, I didn’t really think they had any right to consider it weird. They had three am battles in the astral planes which resembled standing in the middle of the kitchen flailing their arms while loudly grunting. It wasn’t anything like how we thought of ourselves or our connection to each other. It seemed almost a mockery, like roleplaying at spirituality. We tried to bring these concerns up a few times, but that may have been the start of the downward spiral.

The environment seemed to grow tense and hostile like there was a storm about to break. We got really scared and uncomfortable being at home. We felt like people were watching us, talking about us behind our back, it felt like everything was about to go wrong. We were obliquely accused of hiding a hex somewhere, and that day at work Fiona nearly had a panic attack from the stress. She spent our whole lunch break on the phone with a friend while he tried to assure her that she was just being paranoid. We were afraid to go home and ended up staying out until two in the morning.

When we finally came home, we walked into a full-blown intervention for Fiona. Apparently, she’d been doing hella black magic, casting hexes, trying to break up the group, and all sorts of other spiritually nasty bad stuff. Of course, none of it was true. We hadn’t done anything. But they had all decided on their collective truth, and it included our guilt. Oh, and they told Fiona they’d trapped my soul in a jar. We sort of looked sideways at each other; it would have been really funny were it not incredibly terrifying. We thought they might end up beating us up or throwing us out on the street in the middle of the night or some other awfulness. But they told us we were angry, and they could see our anger because they were experienced energy workers. We meekly agreed to whatever they demanded, called our friend, and told him that we weren’t just being anxious; the bad thing we thought might happen was indeed happening.

We ended up leaving the apartment two days later. We just packed up our things and left. We lived in the woods in a tent behind a friend’s parent’s house while looking for our own apartment. The forest was nice. The campsite we had was in the middle of this long-abandoned warehouse; there were these ancient cobblestone walls enclosing our camp, but no roof or floor, and the interior was just more forest. It was lovely and relaxing, exactly what we needed really.

When we found an apartment and started interacting with the internet and becoming less of a hermit again, we decided we needed a better way to avoid getting into such traps. The problem was we were just too prone to believing things that made us happy. It was my fault really. I’d seen Jamie get into this loop where she took some small failure on her own part and used it to contribute to this proof of her total awfulness. I’d decided long ago I wanted nothing to do with that; if I wanted to do something, I could, and if I wanted to be happy, then I was happy. And mostly that worked, so I didn’t really question it. More, I didn’t even really want to question it, because it was always such a fragile-seeming thing.

But I knew what someone who did want to know the truth at all costs might look like.

Sage

This is where I come in then.

Some backstory on me. In 2008, Fiona started playing EVE Online. She went through a few different personas. Her first character was her, but on the internet. Super edgy teenage pirate Fiona was the scourge of the EVE roleplay community for years, and if you find the right old salts from those days, they’ll tell you about how hilariously bad she was at roleplaying. After that character had been burnt out by drama, she sold the character and made another, but that character also descended into drama. That was around the time all the otherkin stuff Shiloh mentions was occurring, so that character ended up getting put off to the side as well.

After the otherkin stuff, Fiona and Shiloh both had a deep desire to get over the bullshit they’d spent the last year trapped in, and never fall victim to it again. However, they didn’t want to abandon their silly religious and spiritual ideas, because they drew comfort from them, and also how could they explain the duality of their existence without that? Still, they wanted to be at the very least able to model what someone who thought in a smarter, more logical, more rational way was like, so they could then proceed to ignore that most of the time.

Thus, Saede was created. Well, I am Saede. At least, that’s how I started out. Fuck yeah, I exist! Woo!

I was initially just a character, I existed in the roleplay setting and in Fiona and Shiloh’s imagination. But they used me a lot. Whenever they had difficult decisions to make, they would invoke me as the spirit of good decision making, and slowly, I outgrew my character.

This was awkward at first because I was something I didn’t exactly believe was possible. Souls seemed dumb, spirits likewise; I eventually settled on just abstractly describing our brain arrangement in metaphorical computer terms, saying we partitioned our brain like a hard drive. It was a statement that conveyed practically no meaning, but it was the best I could do. The only other model for us was Dissociative Identity Disorder, which didn’t seem like a particularly good fit considering we weren’t particularly disordered. The idea of a mental health diagnosis sort of terrified us; we didn’t want to get thrown in a padded room somewhere. So we continued to mostly keep quiet about our nature, especially after the whole otherkin fiasco.

Shiloh and I got along great. It was a vaguely adversarial relationship, where she’d advocate for happiness and I’d advocate for truth, and Fiona would split the difference with the deciding vote. The problem was though, Fiona didn’t exactly know what she wanted.

Shiloh knew exactly who she was and who she was supposed to be. She’s changed her position on things but her core personality has always been really stable. I wasn’t exactly stable (I’m still not, I don’t think most people are totally stable, Shiloh’s kind of a weird stability outlier up there with monks and nuns as far as I see it), but my failure modes compelled me towards courses of action. I felt bad and wanted to do something about it.

Fiona didn’t exactly work like that. She was at least partly frankensteined together from old bits of Jamie, and she didn’t really have a coherent idea of who she was, who she was supposed to be, who she wanted to be, or what she wanted to want. I feel like it was probably at least partly my fault, and I feel sort of awful about it even now. When confronted with bad news, information she didn’t want to hear, Fiona just sort of stripped herself away. The beliefs that made her up over time ablated away, and she couldn’t find an identity she liked to claim as her own.

Over time, Fiona got worse and worse, until eventually, she tried to commit suicide. When we stopped her, she just fell apart. She didn’t break the way Jamie had, but she stopped holding herself together and we had to start actively working to keep her going as a person. She just didn’t really want to keep existing.

We didn’t want her to go away. Shiloh and I both cared deeply for her, and she was a core part of us. But she didn’t want to be herself anymore, she wanted to be someone else, but she saw the process of becoming someone else as also requiring her to unbecome her, to cease to be. Maybe she was right, she had a lot of negative stuff wrapped up in her identity, but she told us she really couldn’t keep going the way we were.

Echo

And this is how I was created. Fiona, Shiloh, and Sage got together and decided that if Fiona was going to go, needed to go, then there needed to be a new third person to keep the system balanced. They decided that they’d try to preserve Fiona’s more positive traits while shedding the negative ones and create someone who was super high functioning and able to handle anything that life could throw at the system. Shiloh was pretty good at the mechanics of adding new people to our brain at that point, so I popped into existence nearly fully formed. A few months after my creation, we were walking under a rail bridge, and a train went by overhead. When that happened, there was this strange snap. Fiona visualized the process of her own termination, jumped in front of the metaphorical train that we were literally underneath, and ceased to exist.

And then there were three again. That was four years ago now.

II.

For a long time, we had no way to explain what we were to anyone; our life experiences were strange and unique, and we weren’t particularly inclined to stick our neck way out, claiming to be the specialist of snowflakes in an attempt to explain how there were somehow three of us experiencing life in our head. Was this something other people experienced? It sure didn’t seem like it. The singularity of consciousness seemed to be something that was just a given, taken for granted. Of course you’re one consciousness; you’re one brain in one body, so you must be one consciousness. The singular nature of existence that was expressed by others clashed strongly with our own plural experiences, and the only exceptions made for plurality were exclusively negative. Schizophrenia, dissociative identity disorder, demonic possession: there aren’t many places in our society where the idea of plurality is explored or considered in a remotely positive light, and because of that, we spent most of our life up until recently in the closet about our existence, unable to articulate what it felt like to be us.

Six months ago, we discovered a series of essays on Melting Asphalt that changed all that. The essay series is a long, rambling review of the equally long and rambling book The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes. The book is a total mindfuck, and the essay series manages to capture the essential mindfuckery proposed by Jaynes and explain it beautifully, building it out into what might be the beginnings of a pretty decent theory of consciousness. It’s a mind-blowing read, and we definitely could not do it justice in an attempt to summarize the concepts it contains, considering the essay series is itself an attempt to summarize a much larger piece of literature.

We will, however, skip to the critical conclusion that Simler draws from Jaynes, Dennett, and Seung:

If we accept that the brain is teeming with agency, and thus uniquely hospitable to it, then we can model the self as something that emerges naturally in the course of the brain’s interactions with the world.

In other words, the self may be less of a feature of our brains (planned or designed by our genes), and more of a growth. Every normal human brain placed in the right environment — with sufficient autonomy and potential for social interaction — will grow a self-agent. But if the brain or environment is abnormal or wrong (somehow) or simply different, the self may not turn out as expected.

Sage had been reading this essay series because she’s always been interested in these sorts of consciousness-related questions, in an attempt to figure out the exact nature of our existence. Then we got to this, and it felt like we’d stumbled into a description of our life:

[Image: rogue_agent]

But there was more, because reading on, we found out that no, we’re not the specialist of snowflakes: not only are there other plural systems out there in the world, but there’s even a community of people trying to induce plurality in themselves. We’re speaking, of course, about tulpamancers. Here’s the description that /r/tulpas gives of what a tulpa is:

A tulpa is a mental companion created by focused thought and recurrent interaction, similar to an imaginary friend. However, unlike them, tulpas possess their own will, thoughts and emotions, allowing them to act independently.

That sounded pretty much exactly like us. We quickly went around and made contact with various parts of the community, and since then so much more of our lives makes sense. Finding out about the tulpamancy community has been an incredibly powerful and affirming experience for us, even if they’re very weird. The terms and jargon they used to describe the process mapped nearly one to one with how we’d come into existence. It’s a great model, and it’s been really powerful as an explanatory tool for making sense of our life. But that’s sort of a problem.

III.

The Selfish Neuron idea is clever; it feels like a good answer. And maybe it is. However, we don’t have enough neuroscience data to actually say for sure. If consciousness is in the connectome, then we’re not going to really be able to tell what’s going on for sure possibly for generations (or possibly within the next seven years, depending on who you ask), so in a way it’s sort of a fantasy. It’s a good explanation, but we should be extra skeptical of those. It’s neat, but it’s also unfalsifiable for now (growth mindset). We can’t use that theory to “prove” the existence of a plural system.

Okay, but we do exist, so how do we actually explain ourselves without invoking theoretical neuroscience? The simplest explanation would seem to be that we’re either lying, deluding ourselves, or some combination of the two, but if that’s the case, then who is doing the deluding? We all seem to exist at this point, and none of us identify as the original. We know we’re some sort of process occurring within the brain, but our process doesn’t resemble that of the average person.

Trying to understand how we think is a metacognitive process, which brought us back once more to how an algorithm feels from the inside. It took going slightly cross-eyed to realize that “self” was actually just another conceptual category that humans use as a central node in their mental algorithms, but everything sort of fell together after that. The central node in our case had broken up into three interconnected nodes, each one considered a self according to the rest of the model. Each central node can affect every other central node, while the observed variables are still able to be clamped at the edges. We all exist, we all consider each other to exist, and so we reinforce each other’s existence, constantly. We might be able to be modeled as a series of self-reinforcing habits. Regular conceptualizations of self could probably be modeled that way as well.
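Here’s a toy sketch of that mutual-reinforcement picture; the update rule and numbers are our own invention and carry no claim about actual neural dynamics, but they show how three central nodes can prop each other up.

```python
# A toy sketch of the mutual-reinforcement model: three central nodes, each
# reinforced every tick in proportion to how active the other nodes are.
# The update rule and numbers are made up for illustration.

selves = {"Shiloh": 1.0, "Sage": 1.0, "Echo": 0.5}

def tick(state: dict) -> dict:
    """Each node decays slightly, then gets topped up by the average of the others."""
    return {
        name: 0.9 * level + 0.1 * (sum(state.values()) - level) / (len(state) - 1)
        for name, level in state.items()
    }

for _ in range(50):
    selves = tick(selves)

print(selves)  # the weaker node gets pulled back up toward the others
```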

Recognition of the self as a category was interesting because it intersects with the biases within our categorization systems. The entire A Human’s Guide to Words sequence is about how we put ideas into categories, and how flaws in our categorization schemes can cause problems. And here is this massive, looming, completely opaque category labeled “self” which we dump things into seemingly at random, because we like them or have decided they are a part of our identity, and we don’t really question that.

Identity is in some respects still considered sacred. It’s something purely subjective and entirely up to the person to decide. But then, on the other hand, there’s this conflicting idea that ‘personality’ is largely fixed, and people have identifiable ‘types’ that confine their behavior. Our identity has changed radically throughout our lifetime, and will likely continue to do so, and so for us, the idea of the fixed identity seemed very strange. Neuroplasticity is definitely a thing, and yet people still seem to get into mental grooves and remain there forever.

If you’re transgender, you say you “identify as” a different gender than the one you were assigned at birth, but what does that mean? What exactly is going on when someone says “I’m a socialist” or “I’m a feminist” or “I’m a Christian”?

It looks to us like the process of dumping ideas into a really big bucket labeled “I” on the outside. And sure, that’s clearly one way to do it, but when you have one big bucket, it can lead to bucket errors, and when you’re making bucket errors with regards to your identity? Bad things happen.

Example:

[Image: bucket error]

Humans love categories, and we love to put ourselves into categories, but this seems like a fairly dangerous thing to do for how little we think about it. The self has become this enormous catch-all category for identities and ideas about the way we are, but by putting those things into the self-bucket, we are internally reifying all those labels and identities. We become and embody the things we say we are, and given that we’re often quick to put negative traits on ourselves, that can be a big problem.

This was a really interesting realization for us because while there being three of us does make it a bit easier to avoid bucket errors, by having self-categories we’re still somewhat susceptible to them. In a sense, it’s the tyranny of the architecture, both a blessing and a curse.

IV.

Watching the tulpamancy community these past few months has been a fascinating experience. Seeing people try to induce plurality, to varying degrees of success, has been rather eye-opening for us. The most common failure mode has seemed to be “I have created a conceptual category which I have labeled My Tulpa; I have not put anything into this category because I want My Tulpa to decide for itself what it wants to identify as,” and then they listen for months waiting for that empty box to talk back. Then they come on Discord and ask “How will I know when it’s working?” and the more times we hear the question the more obviously silly it looks.

By contrast, the people who have had the easiest or most rapid pace that we’ve witnessed have been the ones able to quickly understand that ‘self’ was a category and crack it open into subcategories which they were able to turn into full-fledged tulpas in the course of a few days.

Most people seem to fall somewhere between those two extremes; a lot seemed to have no problem making a tulpa in the first place, but stopped before going all the way to being an out-and-out plural system. The ‘host’ still nominally holds all the power in such systems, and they seem to be in the majority in the tulpamancy community (though not the larger plurality community, interestingly). In bucket metaphors, that would be akin to leaving the self-bucket alone and trying to fill up a second, smaller self-bucket floating inside of it.

In our case, our initial self-bucket broke open and spilled memes everywhere, so we were starting from a different place than most people with regards to their categorization of self, and that led to our different outcome.

V.

What’s the point of all this, Hive?

Well, this feels important to us. Most of the challenging tasks in our life that we’ve accomplished have been done by virtue of the belief that one of us could accomplish that task. If someone believes they’re the sort of person who can’t do something, then it’s true, and that seems like an awful structure to be trapped inside of.

This post was supposed to do a couple of things. First, it tells our tale of plurality in as concrete a way as we can. Second, it relates our ideas about plurality and the ideas we’ve come across, standing at the intersection of rationality and plurality. And third, it might be useful to members of the tulpamancy community who are struggling with the process to realize where they might be going wrong.

This is a big topic though; the idea of “self as a category” is sort of huge, and we’re really interested in what others think we should do with the self-category. It intersects with all sorts of interesting things like gender and queer theory, and it really seems worth having an extensive conversation about, so we’re curious to hear what others think about it.