Highly Advanced Tulpamancy 101 For Beginners

[Epistemic Status: Vigorously Researched Philosophical Rambling on the Nature of Being]
[Content warning: Dark Arts, Brain Hacking, Potentially Infohazardous]

Earlier this week we wrote about our own experiences of plurality, and gave a rough idea of how that fit into our conceptualization of consciousness and self. Today we’ll be unpacking those ideas further and attempting to come up with a coherent methodology for self-modification.

I.

Brains are weird. They’re possibly the most complicated things we’ve ever discovered existing in the universe. Our understanding of neuroscience is currently rather primitive, and the replication crisis has pretty thoroughly demonstrated that we still have a long way to go. Until cognitive neuroscience fully catches up with psychology, maps the connectome, and is able to address the hard problems of consciousness, a lot of this stuff is going to be blind men groping at an elephant. We have lots of pieces of the picture of consciousness, things like conditioned responses, cognitive biases, mirror neurons, memory biases, heuristics, and memetics, but even all these pieces together have yet to actually yield us a complete image of the elephant.

Ian Stewart is quoted as saying:

“If our brains were simple enough for us to understand them, we’d be so simple that we couldn’t.”

In a sense, this is necessarily true. It’s not possible to model a complex chaotic system with fewer parts than the system contains. A perfect map of the territory, accurate down to the quark, would be the size of the territory. It’s not possible to perfectly represent the brain within the architecture of the brain. You couldn’t use your brain to track the individual firing of its billions of neurons in real time and anticipate what it’s going to do; the full model takes more space than there is in the architecture.

The brain clearly doesn’t model itself as a complex and variable computing substrate built out of tiny interacting parts; it models itself as “a person” existing as a discrete object within the territory. We construct this map of ourselves the same way we construct all our maps of the territory, through intensional and extensional definitions.

Your mother points at you as a child and says words like, “You, child, daughter, girl, Sara, person,” and points at herself and says words like “I, me, mother, parent, girl, Tina, person,” thus providing the initial extensional definitions onto which we can construct intensional definitions. This stuff gets baked into human thinking really deeply and really early, and most children develop a theory of mind as young as four or five years of age.

If you ask a five-year-old “What are you?” you’ll probably get the extensional definition their parents gave them as a self-referent. This is the identification of points in thingspace that we extensionally define as ourselves. From that, we begin to construct an intensional definition by defining the conceptual closeness of things to one another, and their proximity to the extensional definitions.

With a concept like a chair, the extensional category boundaries are fairly distinct. Not completely distinct, of course. For any boundary you draw around a group of extensional points empirically clustered in thingspace, you can find at least one exception to the intensional rule you’re using to draw that boundary. That is, regardless of whatever simple rule you’re using to define chair, there will be either chairs that don’t fit the category, or things within the category that are not traditionally considered chairs, like planets. You can sit on a planet; is it a chair?

This gets back to how an algorithm feels from the inside. The neural architecture we use is fast, scalable, and computationally cheap. Evolution sort of demands that kind of thing. We take in all the information we can about an object, and then the central node decides whether or not the object we’re looking at is a chair. Words in this case act as a distinct pointer to the central node. Someone shouts “tiger!” and your brain shortcuts to the tiger concept, skipping all the intervening identification and comparison.

There are also some odd things in how humans relate concepts to one another. There’s an inherent asymmetry in set identification. When asked to rate how similar Mexico is to the United States, people gave consistently higher ratings than people asked to rate how similar the United States is to Mexico (Tversky and Gati, 1978). The best way to explain why this happens is that the semi-conscious sorting algorithms we use run on a concept-by-concept basis. For every categorizable idea, the brain runs an algorithm like this:
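As a bare sketch in code (the features and weights here are invented for illustration, not taken from anywhere): weighted observable features feed a single category node in the middle, and the central node’s activation is the “is it a chair?” feeling.

```python
# Toy sketch of a central-node categorizer: weighted observable
# features (the exterior nodes) feed one category node in the middle.
# The features and weights are invented for illustration.

def category_activation(observed, weights):
    """Sum the weighted evidence from each exterior feature node."""
    return sum(w for feature, w in weights.items() if observed.get(feature))

# Exterior nodes for the category "chair", with made-up weights.
CHAIR_WEIGHTS = {"can_sit_on_it": 0.4, "has_legs": 0.2,
                 "has_back": 0.2, "human_scale": 0.2}

office_chair = {"can_sit_on_it": True, "has_legs": True,
                "has_back": True, "human_scale": True}
planet = {"can_sit_on_it": True, "has_legs": False,
          "has_back": False, "human_scale": False}

print(category_activation(office_chair, CHAIR_WEIGHTS))  # 1.0 -> "chair!"
print(category_activation(planet, CHAIR_WEIGHTS))        # 0.4 -> probably not
```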

When comparing Bleggs and Rubes, the set of traits being compared is fairly similar, so there’s not much apparent asymmetry. The asymmetry emerges when we start comparing things that are not particularly alike. Each of the exterior nodes in the network above is going to have a weight with regards to how important it is in our categorization scheme, and if we consider different things important for different categories, it’s going to produce weirdness.

Are whales a kind of fish? That depends on the disguised query you’re attempting to answer. Whales have little hairs, give live birth, and are phylogenetically mammals, but if the only thing you care about is whether they’re found in water or on land, then the ‘presence of little hairs’ node is going to have almost no weight compared to the ‘found in the ocean’ node. If the only thing that really matters in Blegg/Rube sorting is the presence of vanadium or palladium, then that node is going to weigh more heavily in your classification system than other nodes such as texture or color.

When comparing very different things, different nodes might be considered more important than others, and the things we consider important in the set classification for “asteroid” are completely different from those for “rock.” Given that, it’s possible for someone to model asteroids as being more closely related to rocks than rocks are to asteroids.
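One toy way to get that asymmetry out of the node picture (the features and weights below are invented): score “how similar is A to B?” using B’s weights, so the two directions of the question run on different weightings.

```python
# Toy sketch of asymmetric similarity: each concept carries its own
# feature weights, and "how similar is A to B?" is scored with B's
# weights. Features and weights are invented for illustration.

ASTEROID = {"features": {"rocky", "in_space", "orbits_sun"},
            "weights": {"in_space": 0.6, "orbits_sun": 0.3, "rocky": 0.1}}
ROCK = {"features": {"rocky", "on_ground", "mineral"},
        "weights": {"on_ground": 0.4, "mineral": 0.3, "rocky": 0.3}}

def similarity(subject, referent):
    """Weight the shared features by how much the referent cares about them."""
    shared = subject["features"] & referent["features"]
    return sum(referent["weights"].get(f, 0.0) for f in shared)

print(similarity(ASTEROID, ROCK))  # 0.3: "rocky" matters a lot to rocks
print(similarity(ROCK, ASTEROID))  # 0.1: "rocky" barely matters to asteroids
```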

If we turn all this back onto the self, we get some interesting results. If we place “Self” as the central node in our algorithm, there are things we consider more centrally important to the idea of self than others. “Thinking” is probably considered by most people to be a more important trait to associate with themselves than “is a communist,” so “thinking” will weigh more heavily on their algorithm with regards to their identification of self. Again, this is all subconscious; your brain does all of this without asking permission. If you look in a mirror and see yourself, and you just know that it’s you, then the image of yourself in a mirror probably weighs pretty heavily in your mental algorithms.

The extensional definition of “I” would be to point at the body, or maybe the brain, or perhaps even the systems running in the brain. The intensional definition of “I” is the set of traits we apply to that extensional definition after the fact, the categories of things that we consider to be us.

Now, for most words describing physical things that exist in the world, we’re fairly restricted in what intensional definitions we can apply to a given concept and still have that concept cleave reality at the joints. In order for something to qualify as a chair, you should probably be able to sit on it. However, the self is an unusual category in that it’s incredibly vague. It’s basically the “set of things that are like the thing that is thinking this thought,” and that gives you a very wide degree of latitude in what things you can put into that category.

From gender, to religion, to sexual orientation, to race, to political ideology, to Myers-Briggs type, to mental health diagnoses, to physical descriptions of the body, to food preferences, to allergies, to neurotype: the list of things we can associate with ourselves is incredibly long and rambling. If you asked any one person to list out all the traits they associated with themselves, those lists would vary extensively from person to person, and the order of those traits might correspond pretty well to the weights in the subconscious algorithm.

This is actually very adaptive. By associating the larger tribe that we are a member of with our sense of self, an attack on the tribe is experienced as an attack on us, driving us to actions for the good of the tribe that we might not have taken otherwise.

You can even model how classical conditioning works within the algorithm. Pavlov rings his bell and feeds his dogs, over and over again. The bell is essentially being defined extensionally to the stimulus of receiving food. Each time he does it, it strengthens that connection within the architecture. It’s basically acting in the form of a rudimentary word; the ringing of the bell shortcuts past all the observational nodes (seeing or smelling food) and pushes the button on the central node directly. The bell rings, the dogs think “abstract concept of a meal.” Someone shouts “Tiger!” and you think “Run!”
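We’re not committing to a specific learning rule here, but the textbook toy model for this kind of strengthening is the Rescorla–Wagner update, where each bell-plus-food pairing closes a fraction of the gap between the current association and its maximum:

```python
# Rescorla-Wagner sketch of Pavlov's bell: each bell-plus-food pairing
# strengthens the association V by a fraction of the remaining distance
# to its maximum. The parameter values are illustrative.

alpha, beta = 0.3, 1.0  # salience of the bell, strength of the food
lam = 1.0               # maximum association the food can support
V = 0.0                 # current bell -> food association

for trial in range(10):
    V += alpha * beta * (lam - V)  # ring bell, present food, update
    print(f"trial {trial + 1}: V = {V:.3f}")

# V climbs toward 1.0: eventually the bell alone pushes the button on
# the central "meal" node, and the dogs salivate.
```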

However, as we mentioned in the Conversations on Consciousness post, this can lead to bucket errors in how people think about themselves. If you think depression is awful and bad, and then are diagnosed with depression, you might start thinking of yourself as having the traits of depression (awful badness). This plugs into all the other concepts of awful badness that are tucked away in different categories and leads you to start associating those concepts with yourself. Then, everything that happens in your life that you conceptualize as awful badness is taken as further evidence of your inherent trait of awful badness. From there it’s just a downward spiral into greater dysfunction as you begin to associate more and more negative traits with yourself, which reinforce each other and lead to the internalization of more negative traits in a destructive feedback loop. The Bayesian engines in our brains are just as capable of working against us as they are of working for us.
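A crude sketch of that spiral (all the numbers are invented): if the strength of the “evidence of awful badness” you read into an ambiguous day grows with how strongly you already believe it, the belief ratchets itself upward.

```python
# Toy sketch of the bucket-error feedback loop, in the odds form of
# Bayes: posterior odds = prior odds * likelihood ratio. The bias is
# that the likelihood ratio read into an ambiguous event grows with
# the current belief. All numbers are invented for illustration.

def spiral(p, days=10, bias=1.0):
    for day in range(days):
        odds = p / (1 - p)
        lr = 1.0 + bias * p  # stronger belief -> events read as stronger evidence
        odds *= lr
        p = odds / (1 + odds)
        print(f"day {day + 1}: p(I am awful) = {p:.3f}")

spiral(p=0.2)  # a mild initial belief ratchets steadily upward
```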

II.

The self is a highly variable and vague category, and everyone’s intensional definition of self is going to look a bit different, to the point where it’s almost not useful at all to apply intensional definitions to the self. Of course, we do it anyway; as humans we love putting labels on ourselves. We’re going to be very bad here and attempt to apply an extensional definition to the self, decomposing it into relevant subcategories that most people probably have. Please remember that intensional definitions always have edge cases, and because we can’t literally point at experiences, this is still halfway to an intensional definition; it’s just an intensional definition that theoretically points to its extension.

We’re also not the first ones to have a crack at this; Wikipedia has some articles on the self-concept with images that look eerily similar to one of these neural network diagrams.

[Images: Wikipedia’s “Self-concept” and “The Self” diagrams]

We’re not personally big fans of these, though, because they include people’s intensional definitions in their definition of self. So first, we’re going to strip out all the intensional definitions a person may have put on themselves and try to understand the actual observable extensions beneath. We can put the intensions back in later.

[Image: self-map diagram of weighted experience nodes feeding into a central “I” node]

So we have all these weighted nodes coming in, which all plug into the self and through the self to each other. They’re all experiences, because that’s how the architecture feels from the inside. They don’t feel like the underlying architecture, they just feel like how things are. We’ll run through all of these quickly, and things should hopefully begin to make a bit more sense. These are all attempts to extensionally point to things that are internal and mostly subjective, but most people should experience most of these in some form.

Experience of Perception

Roger Clark at Clarkvision.com estimates the human eye takes in detail at a level equal to a 576-megapixel camera at the lower bound. Add in all your other senses and this translates into a live feed of the external world being generated inside your mind in extremely high fidelity across numerous sensory dimensions.

Experience of Internal Mind Voice

Many people experience an internal monologue or dialogue, where they talk to themselves, models of other people, or inanimate objects, as a stream of constant chatter in their head. We’ll note that this headvoice is not directly heard in the form of an auditory hallucination; instead, it takes the form of silent, subvocalized speech.

Experience of Emotion

Most people experience emotion in some regard. Some people feel emotions more deeply, others less so, but emotions seem to be driven mostly by widespread chemical shifts in the brain in response to environmental stimuli. Emotions also seem to live mostly in the lower-order parts of the brain, and they can completely mutate or co-opt our higher reasoning by tilting the playing field. Someone shouts “Tiger!” and it starts a fear response which floods your entire brain with adrenaline and other neurotransmitters, shifting the whole system into a new survival focus and altering all the higher-order parts of you that lie “downstream” of the chemical change.

Experience of the Body

This is an interesting one; it breaks in all sorts of ways, from gender dysphoria to body dysmorphic disorder. It’s essentially the feeling associated with being inside of the body. Pain, hunger, sexual pleasure, things like that plug in through here. We do make a distinction between this and the experience of perception, differentiating between internal and external, but in a sense this could also be referred to as ‘perception of being in a body’ as distinct from ‘perception of the world at large.’

Experience of Abstract Thought

Distinct from the internal mind voice are abstract thoughts: things like mental images, imagined scenes, mathematical calculations, music, predictions of the future, and other difficult-to-quantify non-words that nonetheless exist as a part of our internal experience of self. Some people seem not to experience certain parts of this one; when the mental imagery is missing, we call it aphantasia.

Experience of Memory

This is the experience of being able to call up past memories, knowledge, and experience for examination. This is what results in the sense of continuity of consciousness; it’s the experience of living in a world that seems to have a past which we can look back on. When this breaks, we call it amnesia.

Experience of Choice

The feeling of having control over our lives in some way, of making choices and controlling our circumstances. This is where the idea of free will comes from; a breakdown in this system might be what generates depersonalization disorder.

In the center is “I, me, myself,” the central node that mediates the sense of self in real time as new data comes in from the outlying nodes. But wait: we haven’t added intensional definitions yet, so all of that only gets you the sense of self of a prototypical five-year-old. She doesn’t even know she’s a Catholic yet!
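Pulling part II together as a bare sketch (the channel names follow the list above; the weights are invented for illustration):

```python
# Bare sketch of the extensional self-map: experience channels, each
# with a weight, feeding a central "I" node. Channel names follow the
# list above; the weights are invented for illustration.

SELF_MAP = {
    "perception": 0.9,  # live feed of the external world
    "mind_voice": 0.8,  # silent, subvocalized speech
    "emotion":    0.7,  # chemical tilts of the playing field
    "body":       0.8,  # pain, hunger, being-in-a-body
    "abstract":   0.6,  # images, music, math, predictions
    "memory":     0.9,  # continuity with a remembered past
    "choice":     0.7,  # the feeling of steering
}

def sense_of_self(incoming):
    """Central node: weigh each channel's current activity (0 to 1)."""
    return sum(SELF_MAP[channel] * level for channel, level in incoming.items())

# A scalar "how much of me is online right now". No intensional labels
# anywhere in here: this is the prototypical five-year-old, before she
# learns she's a Catholic.
print(sense_of_self({"perception": 1.0, "mind_voice": 0.5, "emotion": 0.3,
                     "body": 0.9, "abstract": 0.4, "memory": 0.8, "choice": 0.6}))
```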

III.

All of the stuff in the algorithm from part II is trying to point to specific qualia, to establish a prototype extensional definition of self. But people don’t define themselves with just extensional definitions, we build up intensional definitions around ourselves throughout our lives. (I’m Sara, I’m a Catholic girl, Republican, American, age 29, middle class…)  This takes the form of the self-schema, the set of memories, ideas, beliefs, attitudes, demeanor, and generalizations that define how a person views themselves and thus, how they act and interact with the world.

The Wikipedia article on self-schemas is really fascinating and is basically advocating tulpamancy on the down-low:

Most people have multiple self-schemas, however this is not the same as multiple personalities in the pathological sense. Indeed, for the most part, multiple self-schemas are extremely useful to people in daily life. Subconsciously, they help people make rapid decisions and behave efficiently and appropriately in different situations and with different people. Multiple self-schemas guide what people attend to and how people interpret and use incoming information. They also activate specific cognitive, verbal, and behavioral action sequences – called scripts and action plans in cognitive psychology – that help people meet goals efficiently. Self-schemas vary not only by circumstances and who the person is interacting with, but also by mood. Researchers found that we have mood-congruent self-schemas that vary with our emotional state.

A tulpa, then, could be described as a highly partitioned and developed self-schema that is ‘always on,’ in the same way the ‘host’ self-schema is ‘always on.’ Let’s compare that definition to Tulpa.io‘s description of what a tulpa is:

A tulpa is an autonomous entity existing within the brain of a “host”. They are distinct from the host in that they possess their own personality, opinions, and actions, which are independent of the host’s, and are conscious entities in that they possess awareness of themselves and the world. A fully-formed tulpa is, or highly resembles to an indistinguishable point, an actual other sentient, sapient being coinhabiting with the host consciousness.

But wait, a lot of the stuff in there seems to imply there’s something deeper going on than the intensional definitions; it’s implied that the split goes up into the extensions we defined earlier, and that a system with tulpas is running that brain algorithm in a way distinct from that of our prototypical five-year-old.

The challenge then in tulpamancy is to intentionally induce that split in the extensional/experiential layer.  It only took 2,500 words and we’re finally getting to some Dark Arts.

It’s important to remember that we don’t have direct access to the underlying algorithms our brain is running. We are those algorithms; our experiences are what they feel like from the inside. This is why this sort of self-hacking is potentially dangerous: it’s totally possible to convince yourself of wrong, harmful, or self-destructive things. However, we don’t have to let our modifications affect our entire sense of self; we can compartmentalize those beliefs where they can’t hurt the rest of our mental system. This means you could, for instance, have a compartment with a local belief in a God to provide comfort and mental stability, and another compartment that stares unblinkingly into the depths of meaningless eternity.

Compartmentalization is usually treated as a bad thing you should avoid doing, but we’re pretty deep into the Dark Arts at this point so no surprises there. We’re also dancing around the solution to a particular failure mode in people attempting tulpamancy, but before we give it, let’s look at how to create a mental compartment according to user So8res from Less Wrong:

First, pick the idea that you want to “believe” in the compartment.

Second, look for justifications for the idea and evidence for the idea. This should be easy, because your brain is very good at justifying things. It doesn’t matter if the evidence is weak, just pour it in there: don’t treat it as weak probabilistic evidence, treat it as “tiny facts”.

It’s very important that, during this process, you ignore all counter-evidence. Pick and choose what you listen to. If you’ve been a rationalist for a while, this may sound difficult, but it’s actually easy. Your brain is very good at reading counter-evidence and disregarding it offhand if it doesn’t agree with what you “know”. Fuel that confirmation bias.

Proceed to regulate information intake into the compartment. If you’re trying to build up “Nothing is Beyond My Grasp”, then every time that you succeed at something, feed that pride and success into the compartment. Every time you fail, though, simply remind yourself that you knew it was a compartment, and this isn’t too surprising, and don’t let the compartment update.
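As a bare sketch, that recipe is just an update rule with a one-way valve (the Compartment class and the numbers here are hypothetical illustrations, not anything from So8res’ post):

```python
# Bare sketch of the compartment recipe as a one-way valve:
# confirming evidence always updates the compartment, counter-evidence
# is read and disregarded. The class and numbers are hypothetical.

class Compartment:
    def __init__(self, belief, confidence=0.5):
        self.belief = belief
        self.confidence = confidence

    def observe(self, supports_belief, strength=0.1):
        if supports_belief:
            # "treat it as tiny facts": pour every confirmation in
            self.confidence = min(1.0, self.confidence + strength)
        # else: "you knew it was a compartment, this isn't too
        # surprising" -- the compartment doesn't update on failures

grasp = Compartment("Nothing is Beyond My Grasp")
for succeeded in [True, True, False, True, False, True]:
    grasp.observe(succeeded)
print(grasp.confidence)  # 0.9: four successes counted, two failures ignored
```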

This is for a general mental compartment for two conflicting beliefs, so let’s crank it up a notch and modify it into the beginnings of a blueprint for tulpa formation.

How To Tulpa

First, pick the ideas about your mental system that you want the system to operate using, including how many compartments there are, what they’re called, and what they do.  In tulpamancy terms this is often referred to as forcing and narration.

Second, categorize all new information going in and sort it into one of these compartments. If you want to build up a particular compartment, then look for justifications for the ideas that compartment contains. Don’t leave global beliefs floating; sort all the beliefs into boxes. If two beliefs would interact destructively, then just don’t let them interact.

Proceed to regulate information intake into each compartment, actively sorting and deciding where each thought, belief, idea, or piece of information should go as the system takes it in. Normally all of this is labeled the self, and you don’t even need to think about the label because you’ve been using it all your life, but that label is just an intensional category, and we can redefine our intensions in whatever ways are the most useful to us.

It’ll take some time for the labeling to become automatic, and for the process of sorting new ideas to subduct below conscious thought, but that’s typical for any skill. It takes a while to learn to walk, or read, or speak a language or ride a bike, but the goal at the end of all those learning tasks is that you can do them without a ton of conscious focus.

The end result is that instead of having one category with a set of beliefs about the world and itself, you have multiple categories with potentially radically different beliefs about the world and itself. We call each of those categories tulpas, and treat them as independent people, because by the end of the process if everything goes as expected, they will be.
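As a bare sketch of that sorting step (the MentalSystem class, the compartment names, and the example thoughts are all made up for illustration):

```python
# Bare sketch of the sorting step: every incoming thought is actively
# routed into exactly one compartment, so no global beliefs are left
# floating. The compartment names and example thoughts are made up.

class MentalSystem:
    def __init__(self, names):
        self.compartments = {name: [] for name in names}

    def sort(self, thought, target):
        """Actively decide which box this thought belongs in."""
        # Only the target compartment ever sees the thought, so two
        # beliefs that would interact destructively simply never meet.
        self.compartments[target].append(thought)

system = MentalSystem(["host", "tulpa"])
system.sort("that reply felt like it came from her, not me", "tulpa")
system.sort("I picked what to eat for lunch", "host")
print({name: len(box) for name, box in system.compartments.items()})
```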

So we mentioned a failure mode, and here it is:

“My Tulpa doesn’t seem to be developing, I’ve been forcing for months but they haven’t said anything to me, how will I know when it’s working?”

This is probably the most common failure mode. When you get the methodology down, your tulpa can be vocal within a few minutes.  So what’s going on here? What does the mental calculation for this failure mode look like?

It seems to be related to how the self-categorization scheme is arranged. It’s not possible to go from being a singlet to being a plural system without modifying yourself at least a bit. That algorithm we constructed earlier for a prototypical five-year-old dumps all the imagined experiences, all the experiences of a head voice, and everything else that composes a basic sense of self into one category and calls it “me.” If you make another category adjacent to that self and call it “my tulpa,” but don’t put anything into that category, it’s going to just sit there and not do anything. You have to be willing to break open your sense of self and sense of being and share it across the new categories if you want them to do something. Asking “How will I know when it’s working?” is basically a flag for this issue, because if you were doing it correctly, you’d instantly know it was working. Your experience of self is how the algorithm feels from the inside; it’s not going to secretly change without you noticing.

IV.

These are the absolute basics to tulpamancy as far as we see it. We haven’t talked about things like imposition, or mindscapes, or possession, or switching, or anything like that, because really none of that is necessary for this basic understanding.

From the basic understanding we’ve hopefully managed to impart here, things like switching sort of just naturally fall out of the system as emergent behaviors. Not all the time, because whether a system switches or uses imposition or anything like that is going to depend on the higher level meta-system that the conscious system is built out of.

If you’re already a plural system, then for some reason or another the meta-system already exists for you, either as a product of past life experiences, or genetic leanings, or whatever, and you can form a new tulpa by just focusing on the categorization and what you want them to be like. But if you’re starting out as a singlet, you basically have to construct the meta-system from scratch. The “How do I know if it’s working?” failure mode is a result of trying to build a tulpa without touching the meta-system, which doesn’t seem to work well. A typical singlet’s sense of self is all-encompassing and fills up all the mental space in the conscious mind; there’s no room left to put a tulpa if they don’t change how they see themselves. Thus the ‘real’ goal of tulpamancy isn’t actually making the tulpa; that part’s easy. The truly difficult task, the one worthy of us as rationalists, is to construct a highly functional meta-system with however many categories of self work best to achieve one’s goals.
