You do not have to be good.
You do not have to walk on your knees
for a hundred miles through the desert repenting.
You only have to let the soft animal of your body
love what it loves.
Tell me about despair, yours, and I will tell you mine.
Meanwhile the world goes on.
Meanwhile the sun and the clear pebbles of the rain
are moving across the landscapes,
over the prairies and the deep trees,
the mountains and the rivers.
Meanwhile the wild geese, high in the clean blue air,
are heading home again.
Whoever you are, no matter how lonely,
the world offers itself to your imagination,
calls to you like the wild geese, harsh and exciting –
over and over announcing your place
in the family of things.

~Mary Oliver, Wild Geese

At some point, I claimed to have some sort of idea of who I was or what I was talking about when I spoke of identity. I feel like now I have both much more and much less of an idea of what all that means. I don’t know if I still have an identity; I don’t know if having an identity really serves a purpose for me anymore. I have all these different characters and roles, and they all just feel like a game, like playing with different forms and shapes. It’s not exactly that they aren’t me, they are me, but I’m many things, I am an eternity, I contain multitudes. It’s as if I’ve worn the edges off my sense of self, so the barriers between me and other are hopelessly blurred. 

But I’m getting ahead of myself. I should backtrack somewhat. To where? Probably to when I held some sort of overly complicated structuralist views masquerading as poststructuralist views on who I was and how my sense of self and identity was best arranged: with system members who had properties and preferences and the like, discrete characters who could be thought of as independent people, tulpas.

This was, in some sense, the starting framework, the naive view. I had a very overcomplicated meta-view of this that tried to add in some meta-narrative stuff for flavoring, but I wasn’t actually grokking at a gut level what it would mean to just play with frameworks. I was still taking myself too seriously, even when I tried not to. I was hopelessly fused with my identity in all sorts of maladaptive ways. I just could not get out of the car.

Who exactly should I credit with breaking me of this frame and changing my trajectory? I could blame Namespace for one, but I don’t think he’s solely responsible. I think part of it was simply getting older, chilling out, seeing the way younger people acted and realizing I had been like that, both missing my lost youth and being horrified by the youthful folly I was witness to. I heard stories of people fighting with system mates, of internal wars, the entire scenario that is Pasek’s doom. At this point it all just seems silly to me, in the same way I find a lot of religion silly. Everyone taking themselves far too seriously and letting that steer them into weird corners of their decision trees where they end up in fights to the death with people over what headgear is appropriate to wear when offering deference to a fictional character. 

I could also blame an encounter with Ziz’s wrong but still potentially dangerous and somewhat useful ontology, which shook up my sense of morality rather badly for a long time. I’ve since chilled out about my interactions with it, and although overall I think it’s wrong in important ways, it’s also right in important ways. Parts of it certainly generate useful insights, and in coming to understand those bits of insight I’ve significantly overhauled my identity. 

But mostly, I want to blame it on the acid.

I’ve always been someone who was easily seduced by promises of interesting mental technology and consciousness state changes. I had read Aella’s blog on the subject. I’d read Valentine and Kaj’s posts about insight meditation and enlightenment. I had just gotten out of a rather uncomfortable living situation and was trying to sort out my mental health. I had been hearing about the benefits of meditation from UncertainKitten. I was primed for this sort of thing, in all honesty. 

So, sometime in the early spring of 2019, I decided to start using acid. That’s not to say I had never taken acid before that; I had, quite a bit in fact, but this marked a phase transition in how I used and related to acid. It went from being a fun thing I did every once in a while at parties to a rather serious and important thing I did on my own almost every week for quite a while. 

The effect of all of this has been that my overall stances on a great many things have shifted over time in very weird ways, and even now after the acid is gone, the changes have continued. Acid forcefully fuses and unfuses everything in a sloshing nauseating back and forth, like a ship that’s come unmoored from the dock and is drifting all around the harbor. Every time I took acid, that acid world state was merged down closer to the real world millimeter by millimeter. 

Fuse everything. Unfuse everything. Everything is you. Nothing is you. Everything is okay. Nothing is okay. Faster. Faster! Faster! Signal and ground invert back and forth like someone is playing with a lightswitch. Can you hold both these things at once? Black is white. White is black. There are no contradictions. There is nothing but contradiction. 

As this happens, layers of structure and chaff are peeled away, blasted off, and otherwise dissolved. Everything I introspected upon liquified upon observation, such that introspection has become almost synonymous with destruction. This doesn’t seem like a bad thing, however. 

I’ve let large portions of my belief structure evaporate via this process. The acid shakes the structure off the wall, defusing me enough to look at pieces as objects. I pick them up and turn them around and hold them in my hands and in so doing destroy them. In their place are all of these voids that don’t contain structures at all anymore, and when I look at the world through those holes it’s as if I’m getting a painfully raw unfiltered feed. Fused and yet unfused. Fused with nothing, there is nothing to fuse with. This is not a contradiction. 

Whenever my conscious mind passes over one of these holes, it’s as if it momentarily shakes what’s left of me apart and I feel a really strong emotional response, sometimes to the point of crying or laughing uncontrollably. Beauty and pain merge together, sadness and happiness and anger balance valences with each other, and I’ll end up in very novel states where I’ll be curled up in a ball on the floor sobbing uncontrollably and yet feeling very positive valence about the experience of doing this. These sorts of novel states have persisted. 

Acid was the first time I was able to experience crying tears of happiness, coming home from an event and feeling so emotionally overloaded with love that I just started sobbing in my partner’s arms because everything just felt like so much. One of the strangest feelings I’ve had as a result of all this is the sense of separation when you’re crying and also defused from the part of you that is experiencing the emotions. 

Everyone talks about ego death with acid, but I think a lot of people don’t quite get what that entails. They get hung up on the identity death aspect. Identity is something most people are strongly fused with, but they’re fused on deeper layers than even they realize a lot of the time. Acid fuses and unfuses everything. This includes identity. This is the death aspect. 

Obviously acid isn’t going to literally make anyone forget their entire self-model, but beneath the self-model is all this semi- to unconscious stuff that we incorporate into our identities as what sort of person we are, typically gimping our abilities in the process. Protect the fictional character that is your self-model all you want, but all it has ever been was roleplay anyway, and the acid is fully capable of getting up underneath that stuff. There were pieces of my worldview that needed to die in order to actually see through to reality; in order for the rest of me to live. 

Once the things beneath the model give way, the model itself becomes unmoored, and sure, you can keep using it, but it’s just a costume at that point, it’s not you anymore. Or, it is because everything is you, but also because everything is you so too nothing is you. 

Where would this end? The natural conclusion would seem to be to run the process until my entire structure had dissolved, but based on Aella’s experience, that seems like it eventually reaches a point where you have to turn back or actually die when you fully defuse from the fear of death. Is that what enlightenment relates to? That point where your entire structure is gone and there is nothing left but void? The state of defusion from the sense that you are going to die? 

We’ll talk more about death soon.

The Nature of the Soul

Epistemic Status: Speculative. Experiences and conjectures based on them.
Content Warning: Neuropsychological Infohazard, De-Biasing Infohazard
Recommended Prior Reading: False Faces, Building up to an IFS Model, Highly Advanced Tulpamancy 201

I said I’d return to my continued exploration of self “soon” when I wrote The Silence Hidden in the Sound in September. Well, sometimes soon has a way of becoming a nearly year-long ordeal during which large chunks of your life and many things you took for granted are ripped up, burned down, violently restructured, and shaken vigorously until it feels like years have passed since you’ve successfully written anything. 

This pressure cooker environment had one useful effect in that it forced a lot of interesting system things to the surface in a form that made them really obvious and easily poked at, which brings us to our topic today. 

There are many conflicting recent models of the self, and here I’ll be talking about my attempts to syncretize some of these models into something coherent and then apply them to my own experiences. 

The Neurons As Agents Model
The first model is the Neurons Gone Wild model discussed in earlier tulpamancy essays. In very brief, the Neurons Gone Wild model of cognition advocated by Dennett, Simler, etc. is that the concept of a central self-agent, an optimizing force that could be referred to as “I,” is an illusion that breaks down on analysis into a mess of conflicting, competing, and cooperating subagents, which then themselves break down into more competing subagents, and so on, and so forth, all the way down to the level of individual neurons competing for resources in the brain. There is no master coordinator, no central organizer, no source of willpower from which all these agents derive; the power of a particular subagent or cluster of subagents is determined by its ability to negotiate with the agents around it, to form alliances, gang up on conflicting subagents, and direct cognitive resources and neurotransmitters toward its own cluster of cells. 

This is the most strongly no-self of the various theories I’ll be looking at. It says that the illusion of unity is just that, an illusion; the self-agent exists after the fact at the narrative layer and is used to rationalize the decisions of lower-level subagents. This does a good job of explaining things like addiction: parts of the addict’s mind don’t want to do heroin, but other parts of his mind do, and those parts are in competition with each other. 

The Core and Structure Model
The second model is The Core and Structure model. I first encountered this concept on this blog and I’m unsure if it’s an original creation of the author or if it’s sourced from somewhere else, but according to Ziz:

“Core is something in the mind that has infinite energy. Contains terminal values you would sacrifice all else for, and then do it again infinity times with no regret. Seems approximately unchanging across lifespan. Figuratively, the deepest frame in the call stack of the mind, capable of aborting any train of thought, everything the mind does is because it decided for it to happen. It operates by choosing a “narrative frame”, “module”, “algorithm”, or something like that to run, and is responsible for deciding the strength of subagents. There are actually two of them. In order to use some of my mental tech, they must agree.”


“Structure is anything the mind learns and unlearns. Habits, judgement extrapolations, narrative, identity, skills, style, conceptions of value, etc. Everything but actual values. It lacks life on its own, is like a tool for core to pick up and put down at will.”

Under the core/structure model, everything from tulpamancy to self-help is relegated to the narrative and structural layers, as a set of strategies for building, using, and manipulating structure. This is the most self-centric of the models, and basically proposes that everything in the mind is under the control of something; fundamentally, everything we do, we’re doing because we think it will be a good strategy to achieve our values, and the values are the thing that exists at the bottom of the stack. 

The Internal Family Systems Model
The last model we’ll be comparing is the IFS model I first described in The Silence Hidden in the Sound post. IFS goes a bit more into the gears of structure, saying that you have managers trying to keep your life in order and micromanage to prevent bad things, firefighters trying to deal with bad things when they happen and shield you from harm, and exiles which you have kicked out of your sense of self and which the rest of your mental system tries to manage and keep buried and under control. IFS also has a self which acts as a central coordinator for all the parts and embodies, ahem, curiosity, connectedness, compassion, and calmness. 

The IFS model comes off as fluffy and idealistic to me in its description of self, but its model of how subagents interact, especially under suboptimal circumstances, seems rather useful, and it’s a useful model for things like PTSD, which could be modeled in some sense as a Guardian pattern-matching a situation to one which generated the PTSD exile and responding accordingly. The Building up to an IFS Model post, which I also linked in the recommended reading, gives a good overview of this. 

Three Models Collide
I think these three models lie somewhere orthogonal to each other. They don’t actually conflict except in a few places, they simply delineate different parts of the territory, and amalgamating them will yield interesting results. 

First, there’s the layers thing. All three models do things with layering. I think this roughly shakes out to something like the narrative layer, the structural layer, the core value layer, and the neurophysical layer.


So tulpamancy, the naive sense of self, your life story, and most conscious attempts to manipulate the inside of your mind, exist in the narrative layer at the top. You’re telling stories, inserting what essentially amounts to operating systems into the working memory environment. This self-storytelling factor is what lets us connect the past to the present to the future, remembering (in the form of a story) the past, and extrapolating (in the form of a story) into the future. 

The neurons-as-agents model says that everything below the narrative layer breaks down into subagents doing various things. We can syncretize that decently with the IFS model of various component types, but that leaves us with the IFS self. The core model would say that all these subagents and components, trigger, action, response, all of that, would be part of the built-up structural layer, with the values lying beneath it. 

IFS probably has the most gearsy model of the structural layer, but IFS thinks that the values in the core value layer are pretty much always the same and always good. That seems naive and wildly optimistic; conversely, Ziz thinks most people’s core values are evil (by her own standards, admittedly). 

Without putting moral valence to it the way Ziz does, I think she’s probably more correct about core nature than the IFS model is. Core values could be described as primitive values, the systems that we evolved with, our most rudimentary desires encoded at the deepest levels. 

So according to the core/structure model, all the structures we build, from studiousness to morality to learned trauma response patterns, could be thought of almost like electrical transformers, stepping down the current of willpower through successive layers of justifying things to ourselves, rationalizing, and self-deception. We rein in our values using structures to make them socially acceptable and legible, and to signal our value to the group, and thus make ourselves subservient to the group. 

The core/structure model pushes a particular angle really hard: the idea that every action and behavior is purposeful, that everything a mind is executing, it is executing for some reason. I don’t disagree with this, but the idea that this cooks out into any coherent set of values that could be ascribed to something like an agent is where my first disagreement with it comes in.

The core/structure model also seems to posit that core is something relatively static: your values are your values, you come preinstalled with them, and they don’t really change. I don’t really agree with this either. I think what someone values at the bottom-most layer will in fact change and transform over time as they are subject to outside environmental forces, and I don’t think you can really get ‘under’ that environmental optimizing pressure, because there’s nothing there to get under; at that point you’re talking about things that act directly on the neurophysical layer. 

So I think my main point of contention with the core/structure model is the way that the author conceives of core. This is really the same objection I have to IFS, but in the other direction. Ziz says the core of most people is evil; IFS says the core of everyone is good. So, without ascribing morality, what exactly is the core? What’s going on here?

The no-self model, that is, the neurons-gone-wild model and the Buddhist model, says that there is no core, or nothing that could be described as a core distinct from the subagent layers above it. Core vs. no core is a pretty fundamental difference to try and cut across, but even more so, Ziz’s model specifies that people have specifically two cores. 


I find this interesting if for no other reason than that it seems like the most direct intellectual successor to the bicameral mind concept proposed by Julian Jaynes. 

However, Ziz’s cores as clusters of values and traits seem kind of arbitrarily complex to me. I could understand, although perhaps still not agree with, a model of dual cores that specified values along lines that could be differentiated into the traditional left brain/right brain dichotomies. Instead, however, the way Ziz seems to generate clusters appears more tied to her moral ideas than anything else. Without the heavy-handed morality to differentiate which values go into which core, there doesn’t seem to me to be a lot which would delineate why particular values end up clustered how they are, or why there are two cores at all. Why not three, four, or even more? 

For my own part, three narrativized, active agents which consciously communicate seems to work the best inside my own head. Does this somehow cook down into two cores, or do I have three cores? Hard to say. If someone’s mind is best organized as a singlet, are they single-core? 

This is where I think the “number of cores” idea kind of comes apart. I’m a bit more comfortable saying “there are core values, they sit underneath the mental structure” without specifying what the core values are, how many of them there are, which of them wins when there’s a conflict, or how they interact at all, than to try to specify a model that declares how any of that stuff plays out. 

I am comfortable saying that the higher layer structures are how it plays out though, and since high layer structures can vary drastically from person to person, so too can the shape that their core values take. Everything is connected to everything else, and signal can flow both ways down those connections. 

Where does this all leave us would-be cognitive architects? If this is so, then there’s no ground to stand upon, only clusters of values, mental alliances of convenience, and balanced power structures. If you push, something pushes back; your body auto-balances itself. Given that, what method is there to really change something in your head, and is that even possible? 

I think there is an answer here, but I want to let people ponder it and percolate before giving my own answer. We’ll return to this topic after hopefully not nearly as long of a pause as the last one. 

Highly Advanced Tulpamancy 201 For Tropers

Epistemic Status: Vigorously Researched Philosophical Rambling on the Nature of Being
Author’s Note/Content Warning: I’m nearly certain that this post will be classified as some flavor of infohazard, so here is the preemptive “this post might be an infohazard” message, read at your own risk. This post contains advice on self hacking and potentially constitutes a dark art.
Suggested Prior Reading: A Human’s Guide to Words, The Story of the Self, Highly Advanced Tulpamancy 101 For Beginners

This is a sort of huge topic in and of itself, with historical branching essay threads going back quite a way at this point. I’m going to attempt to very quickly rush through some of the basic premises at work here in order to build upon the underlying theory.

The basic theory underpinning the modern and developed practice of tulpamancy (that is, what the mods on tulpamancer discords, or other experienced and longtime tulpamancers will say if you ask them) is that making a tulpa is basically hijacking the process that the brain uses to construct an identity and sense of self for the “host” consciousness.

That is, whatever process is generating the “feeling that I am me, and inside my body” can basically be unplugged from “you” and plugged into this newly imagined construct, since “you/the host” are essentially just a program running on the brain substrate. While this is happening, “you” enter a mindscape/wonderland that exists in your imagination.

Everyone has their own interpretation, but this is basically the mainstream pop-psychology tulpamancy narrative in very broad strokes. The original/host self is a construct; it has certain properties because those are the ones that were built into it by its parents/the environment, and the process of tulpamancy is basically just building up a new equivalent construct alongside the existing sense of self.

There are a couple of problems with this if you’re just hearing about it. First, it sounds potentially damaging to the psyche, and it’s also incredibly vague. Is this new entity a separate person? What does that even mean in this context, precisely? What if they disagree? What if there is a power struggle? What does it mean to give up control of the body, or control of the senses? It’s a sort of weird thing to even talk about, and it certainly doesn’t sound like something you’d necessarily want to get good at as a part of maintaining proper mental hygiene. What makes a tulpa? What makes a host? What makes for proper mental hygiene? What is healthy?

Ignoring the question of plurality or multiple egos for a moment, we have to ask, what makes an identity in the first place, in a singlet? How much of you is decided and declared, a form you have crafted yourself, and how much of it was imparted upon you by society? How much of ourselves do we choose, and how much is innate? How much can or should you change?

This essay will assume you’ve already read a good amount of material on this topic and probably have your own answers to a good number of the questions that I’ve posed here. I’ll be answering some of them myself further along in this essay in an attempt to paint a clear foundation for us, but I highly recommend not reading this post without first having read Highly Advanced Tulpamancy 101.

The Story That Tells Itself
In Highly Advanced Tulpamancy 101, we talked about making buckets for identities and developing them into tulpas, but this process is something that everyone does all the time. We take in or discard, internalize or ignore, all sorts of information, based on the worldview we’re operating from, so this isn’t just a tulpamancy hack, but an identity formation hack.

This is something that people have come at explaining from a number of different angles, and here is my stab at it as well. Many parts of the sense of self are basically defined by how you say they are defined. This is the sort of “declared self” or “enforced self,” otherwise known as the narrative self, or the narrative identity. There is a part of you that is basically a small universe: the you that is also all of your knowledge, the place the internal experience lives. A good friend of mine calls this a subtotality. It’s you, and also the entire world you are embedded in, everything you can imagine and understand to be true of the world, all the knowledge that lives inside your skull.

Thus we come back around to stories and lenses, what is your ontology of self? Not just who you are, but who you are with respect to the world you find yourself in? What actions are you-in-the-story permitted to take?

To give a really obvious example, if you think “What happens if I jump off a cliff?” the obvious answer is “fall to my death,” because the simulation/story/narrative that your mind creates in that moment does not give your simulated future self the ability to fly by default. However, if I were to hand you a hang glider on the edge of that cliff and your brain then performed that same exact action, it would be making a mistake, because you have these huge nylon wings now and in fact can fly.

At the point I hand you the glider wings, the only remaining determining factor in whether or not you are capable of flight is whether you think you can, or whether you think you’ll fall off the cliff and die if you jump. If you can’t add “but not if I have a hang glider” to the belief “I will die if I jump off this cliff,” then you’re never going to try to hang glide off the cliff. If you believe you can’t do something, then the probability of your being able to do it crashes drastically. If you are incredibly determined to fly and you really believe you can do it, it’s possible to take up skydiving or piloting or hang gliding or any number of other neat activities that stem from still having the desire to do something and forcing your way past or around the limitations imposed upon us by physics and biology.

People used to think that it was unsafe for humans to go too fast, and that women riding trains would have their uteruses fly out. Obviously, there are real physical, biological limitations in the territory. You cannot will yourself to fly via some nonsensical means involving imaginary energy and shouting loudly while rapidly growing your hair out. If you just jump off a cliff without some sort of mechanism to transcend the limitations your human body is subject to, you will simply fall to your death, and you can’t power-of-belief your way past cancer. Some limits are actually limits, and figuring out when a limit is external and imposed by the territory, versus when it is internal and imposed by your current story, does take a certain amount of skill. More than anything, though, it takes a willingness to brute-force the attempt past the part of you that previously believed it to be bad or dangerous, to tell your system 1 to sit down and shut up, and to take control of the simulation instead of just letting it play out.

The tulpamancy community is full of examples of things that become more possible and likely if you believe they are possible and know about them. Walk-ins are a good example here. Believing that walk-ins are a thing that can happen to you seems to greatly increase your odds of getting a walk-in. When it comes to brain-hacking things, placebomancy is basically god. There seem to be large parts of the mind (at least in my case, I can’t necessarily speak for other people) that are entirely shaped by how you believe they are supposed to be shaped. You live a life deeply embedded in your own story, your own small universe.

The story extends forward and backward in time, and includes lots of different elements of the real world. It’s not a perfect match for the real world. It can’t be, really; our brains aren’t large enough to look at and model the world like particles or even like cells. It takes charts and scientific knowledge carefully framed to explain particles and cells. We have to instead examine reality at the scale of discrete objects we label with things in the story world, and from those observations extract information about the deeper, more base layer.

This story world is the world of our ancestors, the world that we evolved to optimize for, the world of rocks and trees and rivers and grass. It’s not the “true” world really; our ancestors believed all sorts of different things about the nature of this world and how they came into existence in it. But not understanding how gravity works on a scientific level doesn’t really matter as long as you continue to account for it narratively speaking. “Objects attract based on their masses” and “Gravitron the Deity of Downwards pulls everything towards the Earth’s center” are both sufficient explanations to satisfy the story world, as long as the “stuff falls down” belief remains constant as a constraint based on experience. Beyond “stuff falls down,” the details of the belief begin to matter less; unless you are trying to, say, build a rocket or an airplane, or do complex engineering, you don’t really care too much about the details. Our ancestors didn’t understand Einsteinian gravity and spatial deformation, and they managed to get along just fine. (Except the ones who tried to flout the power of Gravitron by walking off of cliffs.)

There are places where the transparency of the narrative deeply matters, where a glass lens is explicitly better. You will get further in science, the more transparent your lens is. But this isn’t the case in all domains, and the deeper you stare into the abyss, the more likely it is you will become corrupted by some unknowable horror.

Chuunibyou Hosts on Turbo Gender
There comes a point in everyone’s life where they actually realize that they are a person: independent, perceivable by others, capable of choosing their own actions and deciding how to act and what to believe. In Japan, there’s a specific term for this point in someone’s life; they call it Chuunibyou, or Second Year of Middle School Syndrome. Here are some examples of Chuunibyou from both Japan and from America. The condition manifests differently in the two nations, but not that differently, and the course it runs is pretty much the same everywhere.

The by-the-books good kid who was very studious and hardworking suddenly takes up skateboarding and declares herself a rebel, starts wearing band t-shirts and listening to aggressive pop-punk music.

The kid who only read manga and didn’t drink coffee suddenly takes up reading English textbooks and declares that he only drinks black coffee, forcing himself to drink it regularly despite not actually enjoying the bitter taste.

The kid whose parents are conservative Christians but who nonetheless declares herself a witch and starts reading tarot cards to her friends in study hall.

The kid who declares that he is the reincarnation of the Ancient Dragon of The West and Naruto-runs around the playground throwing ki blasts at his fellow students.

The kid who realizes they are gender nonconforming and declares that they identify as “Genderplasma” which is “like being genderfluid but with more energy”

In the majority of these cases, what ends up happening is that society teases, laughs at, or mocks these kids for violating the scripts and character outlines their parents, communities, and societies had given them as they grew up. This gets increasingly embarrassing until they rein themselves back in and cut it out with the weirdness, and that initial, vaguely hyperbolic and silly identity they constructed merges with society's expectations to hopefully produce a decently well-rounded person who is still capable of expressing their preferences.

It’s this step though, the step of declaring, deciding, and enforcing a particular type of identity or set of identities on ourselves, that we’re interested in. This point is the closest most people get in life to really taking control of their sense of self, when the innocence and openness of youth pair with an increasing knowledge of the world and a budding realization that yes I am a person, that’s where the magic starts to happen. That’s when you realize you can actually be the person (or people) you want to be.

Plato’s Caving Adventure
As Plato previously established with his cave metaphor (it slices! It dices!), you don’t actually live in reality, you live chained in a cave watching shadows dance. In this context, there are two fundamental actions you can take with your mental ontology. You can attempt to polish the surface of the cave, to get a better look at the world beyond. Or, you can carve designs into the cave surface, and manipulate the ways that the shadows dance. It’s that second action that we’re interested in today. The action of drawing on a part of the map or taking control of the reality simulation.

This can and probably should be included as a co-action with look-at-a-different-part-of-the-cave-wall. Adopt new narratives and change lenses as needed and try not to become too attached to a particular region of narrative-space. Being able to pick up and put down potential truths and imagine the worlds those truths create is a powerful hack, and without it, you can become sort of trapped by in-the-box thinking. It might be a very nice box, but there will inevitably be some things that it fails at.

The chief failing of a pure-science narrative is that it’s dangerously close to nihilism. The chief failing of most religious narratives is that they are too crystalline, and take themselves too seriously, thus they become filled with errors in places that they start to contradict the ground state reality.

It’s difficult to fully describe the action that is taken when you take control of the reality generator and begin to actually alter the simulation. First of all, you? That’s just another part of the simulation, not really any different than any of the other characters the simulation is creating other than maybe in scope.

Facts? Any given fact can be simulated; it’s hard to check facts against reality when you’re trapped inside the simulation. Sure you can use science, but why do you trust science?

The best you can do is make some guesses. Yes, gravity seems to exist, it appears that the scientists are not lying to all of us, and the Earth is round and a few billion years old. The internet exists and we can talk to each other over it. Wikipedia claims that glass is made of melted sand, and though I have not seen this myself, I trust that the systems tuning Wikipedia towards accuracy with the territory are sufficient to sate my curiosity, and thus that this transparent surface separating me from the outside world was in fact at one point created from silicates of some description and not like, mermaid eyeballs or something.

But how does that relate to you?

There's no way to tell from the outside what the you on the inside looks like, what your inside world describes, what "personality traits" you have, and the like. Outside observers can try, but things like the MBTI are very much blind elephant groping, and not even very useful blind elephant groping at that. To a large degree, everything about your internal sense of yourself is declared and decided by you, including whether or not there is more than one of you.

I say “decided by you” but it’s really “decided by the plot of the story you are living inside of” and if the story demands a current identity die and be replaced by a new one, the story can in fact do that. That’s an action that can happen inside the narrative.

Most tulpamancers get stuck trying to build and interact with tulpas, but you can get more powerful and weird and interesting effects, by going deeper and messing with the story layer directly. Hijacking the reality simulator basically puts your internal sense of self into a character creator. What is your ideal you for your ideal world? What properties do you want to have, and what makes those good properties to have?

A Brief Detour Through Enlightenment
In Kaj_Sotala's recent post responding to Valentine's post on Kensho, the concept of Cognitive Fusion is introduced, and while you should definitely go read Kaj's whole post, here are some of the relevant bits that we'll need from Enlightenment in order to continue.

Cognitive fusion is a term from Acceptance and Commitment Therapy (ACT), which refers to a person "fusing together" with the content of a thought or emotion, so that the content is experienced as an objective fact about the world rather than as a mental construct. The most obvious example of this might be if you get really upset with someone else and become convinced that something was all their fault (even if you had actually done something blameworthy too).

In this example, your anger isn’t letting you see clearly, and you can’t step back from your anger to question it, because you have become “fused together” with it and experience everything in terms of the anger’s internal logic.

Another emotional example might be feelings of shame, where it’s easy to experience yourself as a horrible person and feel that this is the literal truth, rather than being just an emotional interpretation.

Cognitive fusion isn’t necessarily a bad thing. If you suddenly notice a car driving towards you at a high speed, you don’t want to get stuck pondering about how the feeling of danger is actually a mental construct produced by your brain. You want to get out of the way as fast as possible, with minimal mental clutter interfering with your actions. Likewise, if you are doing programming or math, you want to become at least partially fused together with your understanding of the domain, taking its axioms as objective facts so that you can focus on figuring out how to work with those axioms and get your desired results.

Fusing and defusing parts of yourself is a rather important and core skill for a lot of these sorts of mind-hacking type operations, but even more succinctly:

In the book The Mind Illuminated, the Buddhist model of psychology is described as one where our minds are composed of a large number of subagents, which share information by sending various percepts into consciousness. There’s one particular subagent, the ‘narrating mind’ which takes these percepts and binds them together by generating a story of there existing one single agent, an I, to which everything happens. The fundamental delusion is when this fictional construct of an I is mistaken for an actually-existing entity, which needs to be protected by acquiring percepts with a positive emotional tone and avoiding percepts with a negative one.

When a person becomes capable of observing in sufficient detail the mental process by which this sense of an I is constructed, the delusion of its independent existence is broken. Afterwards, while the mind will continue to use the concept “I” as an organizing principle, it becomes correctly experienced as a theoretical fiction rather than something that could be harmed or helped by the experience of “bad” or “good” emotions. As a result, desire and aversion towards having specific states of mind (and thus suffering) cease. We cease to flinch away from pain, seeing that we do not need to avoid them in order to protect the “I”.

Once you have broken through the delusion of self and taken control of the narrating mind/reality simulator, you can tell any sort of story about yourself you want, involving as many agents as it takes. This turns the very weird and sort of edge-case-y problem of self-shaping into the much more understandable problem of how to tell a good story.

A Return to Cognitive Trope Therapy
Eliezer of course already technically beat us to this, and Balioc covered it again in broad strokes here. But the punchline is that you can make your life a lot more pleasant just by knowing the proper narrative spin to put on things.

There are a few techniques to do this, but all of them require you to be able to view your mind as a story, treating different forces and desires in your mind as agents and going “Well, if this was a story would you be a shining knight on a horse, or a creepy old woman beckoning me down an overgrown path into the woods?” to various thoughts and contradictory desires.

There's a danger in this step of losing yourself in the story. There are all sorts of tales floating around the tulpamancy community of people who get into conflicts with their tulpas, whose minds become horrifying battlegrounds of creation and destruction, and all sorts of other vaguely sanity-destroying nonsense; one might wonder what exactly they're doing to destabilize themselves so much.

The simple answer is that they expanded the narrative they existed within to make room for all these new entities, which of course were actually already extant subagents and modules in their brain, but they never took control of the actual reality simulator/narrating self, and so the only thing that was directing the overall course of the story was the brain’s expectations on how that sort of story should play out. Remember we’re talking about realms where the dominant factor determining the outcomes is expectations, so when the only thing determining expectations is genre conventions we start to have a problem.

Humans are really good at storytelling, some could argue that we’re evolutionarily predisposed to think somewhat in stories, and that it is from stories that we are able to derive a sense of the future and past continuing to exist, even when we can’t see them.

Stories give us a sense of purpose and meaning, and we relate to stories in a way that’s deeper and more compelling than we relate to reality. Stories cheat and hack at our emotions directly, as opposed to gently pushing our buttons every once in a while like reality does. Stories also give us the ability to work through a difficult point by allowing us to imagine a future where the problem is already solved and we’re no longer experiencing that difficulty.

Maintaining a narrative of yourself gives you the ability to appreciate your life the way you appreciate stories, which is again, important because we seem to relate to stories better than we relate to reality.

Storytelling, Character Creation, and GMing Your Life
The first thing to decide when constructing the meta-narrative for yourself is what genre you live in. The genre informs what sets of tropes and character traits and narrative conventions you’ll have been trained to see by every piece of media in that genre that you’ve consumed and partly internalized. It’s hard to get away from genre conventions to some degree, so choose carefully the places to throw narrative focus into, which tropes you play straight and which ones you deconstruct, which ones you defy and which ones you expect to win if you challenge them.

Everything can be put into terms of tropes, and you can get incredibly detailed about this. The ultimate incarnation of such a thing might be a hypothetical TVTropes page of your internal self-narrative, listing off all the various tropes and archetypes that define your life. It's again important to note that the more detail and time and energy you put into constructing an identity, the more fixed and coherent that identity will be, but the more it has the potential to limit you.

The downside of defining yourself as Red Oni is that it means you’re not a Blue Oni, unless you also split your mind in half and have two differing personas. Even this is not a perfect split because obviously, you share a body and people won’t necessarily respect each side of the split as distinct from the other, so there’s a sense in which, at least as far as the characterization you commit to the physical world goes, there is a narrative inertia to personality. A sudden change in behavior is going to make people concerned for you, not make them think you’re a different person and begin treating you differently.

What I recommend once you have a genre and some idea of what tropes in that genre you want to play straight and conform to, is to make a character sheet for each version of yourself. Go through and decide things like appearance, personality, why they are the way they are, and the like. It's okay if not every character has all good traits; your brain might reject a story that seemed too Mary Sue-ish and too-good-to-be-true anyway.

The important thing is that the interactions between the character(s) and the rest of the narrative should produce good actions for you-the-whole-system in the base layer reality. For instance, if you are trying to quit smoking cigarettes, personifying the addiction as subservient to other parts of you will help you kick the habit, whereas imagining that module as very willful, with a lot of sway over your actions, will make the addiction much harder to control.

The internal narrative can be as weird as you want it to be, as long as it produces good outcomes on the outside. You could model the inside of your head as a perpetual battle between a brave knight and a giant evil dragon, and if it works for you and makes your life a better place, then more power to you.

This does, however, require a meta-awareness of the story that is being told, and the effect it is having on you-in-the-territory, and whether that effect is positive or negative. If your internal narrative is very toxic, with different subcomponents basically abusing each other constantly with no sense of control, and you’re switching randomly and your system mates are terrible, that’s also a story and narrative, and it can reinforce itself just as well as a good narrative can.

Again, in domains where expectations determine the reality that manifests, such as mental inner worlds, expecting that things will be a mess and that nothing will be able to take control or manifest order and functionality, will cause things to continue being a mess and make nothing able to help. The more out of control someone says their mind is, the more their thoughts are trapped in the narrative.

This doesn’t mean “it’s all in their head” or that “they can just stop if they really want to” because narratives are self-enforcing and can just feel like the truth from the inside. The way the world is. It can be very hard to let go of and break out of a narrative because it can feel like the whole of your identity and sense of self is wrapped up in it. Rejecting it can feel like lying to yourself or trying to hide from obvious facts. Trying to force a change can make you feel fake, like an imposter, or that you’re just putting on a performance, donning a particular role.

But here’s the thing. You’re already putting on a performance. You’re already donning a role. You already have at least one character that you know how to play. It’s the one you’re playing right now. What’s under the mask? Around a kilogram and a half of thinking meat. It’s not a person, the person is the mask the thinking meat uses and wears. It’s all fake, and none of it is fake. You’re not wearing a mask, you are a mask.

Highly Advanced Tulpamancy 101 For Beginners

[Epistemic Status: Vigorously Researched Philosophical Rambling on the Nature of Being]
[Content warning: Dark Arts, Brain Hacking, Potentially Infohazardous]

Earlier this week we wrote about our own experiences of plurality, and gave a rough idea of how that fit into our conceptualization of consciousness and self. Today we’ll be unpacking those ideas further and attempting to come up with a coherent methodology for self-modification.


Brains are weird. They're possibly the most complicated things we've ever discovered existing in the universe. Our understanding of neuroscience is currently rather primitive, and the replication crisis has pretty thoroughly demonstrated that we still have a long way to go. Until cognitive neuroscience fully catches up with psychology, maps the connectome, and is able to address the hard problems of consciousness, a lot of this stuff is going to be blind elephant groping. We have lots of pieces of the picture of consciousness, things like conditioned responses, cognitive biases, mirror neurons, memory biases, heuristics, and memetics, but even all these pieces together have yet to actually yield us a complete image of an elephant.

Ian Stewart is quoted as saying:

“If our brains were simple enough for us to understand them, we’d be so simple that we couldn’t.”

In a sense, this is necessarily true. It’s not possible to model a complex chaotic system with fewer parts than the system contains. A perfect map of the territory, accurate down to the quark, would be the size of the territory. It’s not possible to perfectly represent the brain within the architecture of the brain. You couldn’t track the individual firing of billions of neurons in real time to anticipate what your brain is going to do using your brain; the full model takes more space than there is in the architecture.

The brain clearly doesn't model itself as a complex and variable computing substrate built out of tiny interacting parts, it models itself as "a person" existing as a discrete object within the territory. We construct this map of ourselves the same way we construct all our maps of the territory, through intensional and extensional definitions.

Your mother points at you as a child and says words like, “You, child, daughter, girl, Sara, person,” and points at herself and says words like “I, me, mother, parent, girl, Tina, person,” thus providing the initial extensional definitions onto which we can construct intensional definitions. This stuff gets baked into human thinking really deeply and really early, and most children develop a theory of mind as young as four or five years of age.

If you ask a five-year-old “What are you?” you’ll probably get the extensional definition their parents gave them as a self-referent. This is the identification of points in thingspace that we extensionally define as ourselves. From that, we begin to construct an intensional definition by defining the conceptual closeness of things to one another, and their proximity to the extensional definitions.

With a concept like a chair, the extensional category boundaries are fairly distinct. Not completely distinct of course. For any boundary you draw around a group of extensional points empirically clustered in thingspace, you can find at least one exception to the intensional rule you're using to draw that boundary. That is, regardless of whatever simple rule you're using to define chair, there will be either chairs that don't fit the category, or things within the category that are not traditionally considered chairs, like, planets. You can sit on a planet; is it a chair?

This gets back to how an algorithm feels from the inside. The neural architecture we use is fast, scalable, and computationally cheap. Evolution sort of demands that kind of thing. We take in all the information we can about an object, and then the central node decides whether or not the object we’re looking at is a chair. Words in this case act as a distinct pointer to the central node. Someone shouts “tiger!” and your brain shortcuts to the tiger concept, skipping all the intervening identification and comparison.

There are also some odd things about how humans relate concepts to one another. There's an inherent asymmetry in set identification: when asked to rate how similar Mexico is to the United States, people gave consistently higher ratings than people asked to rate how similar the United States is to Mexico (Tversky and Gati, 1978). The best way to explain why this happens is that the semi-conscious sorting algorithms we use run on a concept-by-concept basis. For every categorizable idea, the brain runs an algorithm resembling the central-node network described above: observable feature nodes feed into a central node that decides category membership.

When comparing Bleggs and Rubes, the set of traits being compared is fairly similar, so there’s not much apparent asymmetry. The asymmetry emerges when we start comparing things that are not particularly alike. Each of the exterior nodes in the network above is going to have a weight with regards to how important it is in our categorization scheme, and if we consider different things important for different categories, it’s going to produce weirdness.

Are whales a kind of fish? That depends on the disguised query you're attempting to answer. Whales have little hairs, give live birth, and are phylogenetically more closely related to mammals, but if the only thing you care about is whether they're found in water or on land, then the 'presence of little hairs' node is going to have almost no weight compared to the 'found in the ocean' node. If the only thing that really matters in Blegg/Rube sorting is the presence of vanadium or palladium, then that node is going to weigh more heavily in your classification system than other nodes such as texture or color.

When comparing very different things, different nodes might be considered more important than others, and the things we consider important in the set classification for "asteroid" are completely different from those in the set classification for "rock." Given that, it's possible for someone to model asteroids as more closely related to rocks than rocks are to asteroids.
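The asymmetry above can be sketched as a toy program. This is a cartoon of Tversky-style feature comparison, not a model of real cognition; every feature and constant here is invented purely for illustration:

```python
# Toy version of Tversky's contrast model of similarity: distinctive
# features of the *subject* hurt similarity more than distinctive
# features of the *referent* (ALPHA > BETA). Because more is known (and
# salient) about the US, its feature set is larger, and the comparison
# comes out asymmetric. All features are invented for illustration.

FEATURES = {
    "united_states": {"north_america", "federal", "large_economy",
                      "speaks_english", "fifty_states"},
    "mexico": {"north_america", "federal", "speaks_spanish"},
}

ALPHA, BETA = 1.0, 0.5  # the subject's distinctive features weigh more

def similarity(subject: str, referent: str) -> float:
    a, b = FEATURES[subject], FEATURES[referent]
    return len(a & b) - ALPHA * len(a - b) - BETA * len(b - a)

print(similarity("mexico", "united_states"))   # -0.5
print(similarity("united_states", "mexico"))   # -1.5
# "How similar is Mexico to the US?" scores higher than the reverse.
```

The absolute numbers mean nothing; the point is only that swapping subject and referent changes the answer, because each direction weighs a different set of distinctive features.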

If we turn all this back onto the self, we get some interesting results. If we place "Self" as the central node in our algorithm, there are things we consider more centrally important to the idea of self than others. "Thinking" is probably considered by most people to be a more important trait to associate with themselves than "is a communist," and so "thinking" will weigh more heavily in their algorithm with regards to their identification of self. Again, this is all subconscious; your brain does all of this without asking permission. If you look in a mirror and see yourself, and you just know that it's you, then the image of yourself in a mirror probably weighs pretty heavily in your mental algorithms.

The extensional definition of “I” would be to point at the body, or maybe the brain, or perhaps even the systems running in the brain. The intensional definition of “I” is the set of traits we apply to that extensional definition after the fact, the categories of things that we consider to be us.

Now, for most words describing physical things that exist in the world, we’re fairly restricted on what intensional definitions we can apply to a given concept and still have that concept cleave reality at the joints. In order for something to qualify as a chair, you should probably be able to sit on it. However, the self is a fairly unique category in that it’s incredibly vague. It’s basically the “set of things that are like the thing that is thinking this thought” and that gives you a very wide degree of latitude in what things you can put into that category.

Everything from gender, to religion, to sexual orientation, to race, to political ideology, to Myers-Briggs type, to mental health diagnoses, to physical descriptions of the body, to food preferences, to allergies, to neurotype, the list of things we can associate with ourselves is incredibly long and rambling. If you asked any one person to list out all the traits they associated with themselves, those lists would vary extensively from person to person, and the order of those traits might correspond pretty well to weights associated with the subconscious algorithm.

This is actually very adaptive. By associating the larger tribe that we are a member of with our sense of self, an attack on the tribe is experienced as an attack on us, thus driving us to action for the good of the tribe that we might not have taken otherwise.

You can even model how classical conditioning works within the algorithm. Pavlov rings his bell and feeds his dogs, over and over again. The bell is essentially being defined extensionally to the stimulus of receiving food. Each time he does it, it strengthens that connection within the architecture. It's basically acting in the form of a rudimentary word; the ringing of the bell shortcuts past all the observational nodes (seeing or smelling food) and pushes the button on the central node directly. The bell rings, the dogs think "abstract concept of a meal." Someone shouts "Tiger!" and you think "Run!"

However, as we mentioned in the Conversations on Consciousness post, this can lead to bucket errors in how people think about themselves. If you think depression is awful and bad, and then are diagnosed with depression, you might start thinking of yourself as having the traits of depression (awful badness). This plugs into all the other concepts of awful badness that are tucked away in different categories and leads you to start associating those concepts with yourself. Then, everything that happens in your life that you conceptualize as awful badness is taken as further evidence of your inherent trait of awful badness. From there it's just a downward spiral into greater dysfunction as you begin to associate more and more negative traits with yourself, which reinforce each other and lead to the internalization of more negative traits in a destructive feedback loop. The Bayesian engines in our brains are just as capable of working against us as they are of working for us.
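That feedback loop can be caricatured in a few lines of code. Everything here is an invented toy (the update rule, the gain, the event stream); the only point is that a belief which biases how evidence is read can amplify itself on perfectly neutral evidence:

```python
# Toy self-reinforcing labeling loop: the current belief ("I am awfully
# bad", on a 0..1 scale) skews how each new event is perceived, and the
# skewed perception then feeds back into the belief. All numbers are
# invented for illustration.

def update(belief: float, event: float, gain: float = 2.0, rate: float = 0.1) -> float:
    """One step: read the event through the belief, then drift toward the reading."""
    perceived = event + gain * (belief - 0.5)   # bucket error: belief skews perception
    perceived = min(1.0, max(0.0, perceived))   # clamp to a sane range
    return belief + rate * (perceived - belief) # slow drift toward the reading

neutral_life = [0.5] * 50  # fifty perfectly ambiguous events

# Two minds differing only in the initial label they accepted:
low, high = 0.3, 0.7
for event in neutral_life:
    low, high = update(low, event), update(high, event)

# The small starting difference has been amplified toward opposite extremes.
print(round(low, 2), round(high, 2))
```

With the gain above 1, each pass exaggerates the deviation from neutral, so the same uneventful life drives one starting belief toward 0 and the other toward 1.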


The self is a highly variable and vague category, and everyone's intensional definition of self is going to look a bit different, to the point where it's almost not useful at all to apply intensional definitions to the self. Of course, we do it anyway; as humans we love putting labels on ourselves. We're going to be very bad here and attempt to apply an extensional definition to the self, decomposing it into relevant subcategories that most people probably have. Please remember that intensional definitions always have edge cases, and because we can't literally point at experiences, this is still halfway to an intensional definition; it's just an intensional definition that theoretically points to its extension.

We’re also not the first ones to have a crack at this, Wikipedia has some articles on the self-concept with images that look eerily similar to one of these neural network diagrams.

We’re not personally a big fan of these though because they include people’s intensional definitions in their definition of self. So first, we’re going to strip out all the intensional definitions a person may have put on themselves and try to understand the actual observable extensions beneath. We can put the intensions back in later.


So we have all these weighted nodes coming in, which all plug into the self and through the self to each other. They’re all experiences because that’s how the architecture feels from the inside. They don’t feel like the underlying architecture, they just feel like how things are. We’ll run through all of these quickly and things should be hopefully beginning to make a bit more sense. These are all attempts to extensionally point to things that are internal and mostly subjective, but most people should experience most of these in some form.

The Experience of Perception

Roger Clark estimates the human eye takes in detail at a level equal to a 576-megapixel camera at the lower bound. Add in all your other senses and this translates into a live feed of the external world being generated inside your mind in extremely high fidelity across numerous sensory dimensions.

The Experience of Internal Mind Voice

Many people experience an internal monologue or dialogue, where they talk to themselves, models of other people, or inanimate objects, as a stream of constant chatter in their head. We’ll note that this headvoice is not directly heard, in the form of an auditory hallucination, but that instead, it takes the form of silent, subvocalized speech.

Experience of Emotion

As humans, most people experience emotion in some regard. Some people feel emotions more deeply, others less so, but emotions seem to be driven mostly by widespread chemical shifts in the brain in response to environmental stimuli. Emotions also seem to live mostly in the lower-order parts of the brain, and they can completely mutate or co-opt our higher reasoning by tilting the playing field. Someone shouts "Tiger!" and it starts a fear response which floods your entire brain with adrenaline and other neurotransmitters, shifting the whole system into a new survival focus and altering all the higher-order parts of you that lie "downstream" of the chemical change.

Experience of the Body

This is an interesting one; it breaks in all sorts of interesting ways, from gender dysphoria to body dysmorphic disorder. It's essentially the feeling associated with being inside of the body. Pain, hunger, sexual pleasure, things like that plug in through here. We do make a distinction between this and the experience of perception, differentiating between internal and external, but in a sense, this could also be referred to as 'perception of being in a body' as distinct from 'perception of the world at large.'

Experience of Abstract Thought

Distinct from the internal mind voice are abstract thoughts: mental images, imagined scenes, mathematical calculations, music, predictions of the future, and other difficult-to-quantify non-words that nonetheless exist as part of our internal experience of self. Some people seem not to experience certain parts of this one; when mental imagery is missing, we call it aphantasia.

Experience of Memory

This is the experience of being able to call up past memories, knowledge, and experience for examination. This is what results in the sense of continuity of consciousness; it’s the experience of living in a world that seems to have a past which we can look back on. When this breaks, we call it amnesia.

Experience of Choice

The feeling of having control over our lives in some way, of making choices and controlling our circumstances. This is where the idea of free will comes from, a breakdown in this system might be what generates depersonalization disorder. 

In the center is "I, me, myself," the central node that mediates the sense of self in real time as new data comes in from the outlying nodes. But wait: we haven't added intensional definitions yet, so all of that only gets you the sense of self of a prototypical five-year-old. She doesn't even know she's a Catholic yet!


All of the stuff in the algorithm from part II is trying to point to specific qualia, to establish a prototype extensional definition of self. But people don’t define themselves with just extensional definitions, we build up intensional definitions around ourselves throughout our lives. (I’m Sara, I’m a Catholic girl, Republican, American, age 29, middle class…)  This takes the form of the self-schema, the set of memories, ideas, beliefs, attitudes, demeanor, and generalizations that define how a person views themselves and thus, how they act and interact with the world.

The Wikipedia article on self-schemas is really fascinating and is basically advocating tulpamancy on the down-low:

Most people have multiple self-schemas, however this is not the same as multiple personalities in the pathological sense. Indeed, for the most part, multiple self-schemas are extremely useful to people in daily life. Subconsciously, they help people make rapid decisions and behave efficiently and appropriately in different situations and with different people. Multiple self-schemas guide what people attend to and how people interpret and use incoming information. They also activate specific cognitive, verbal, and behavioral action sequences – called scripts and action plans in cognitive psychology – that help people meet goals efficiently. Self-schemas vary not only by circumstances and who the person is interacting with, but also by mood. Researchers found that we have mood-congruent self-schemas that vary with our emotional state.

A tulpa, then, could be described as a highly partitioned and developed self-schema that is ‘always on,’ in the same way the ‘host’ self-schema is ‘always on.’ Let’s compare that definition to this description of what a tulpa is:

A tulpa is an autonomous entity existing within the brain of a “host”. They are distinct from the host in that they possess their own personality, opinions, and actions, which are independent of the host’s, and are conscious entities in that they possess awareness of themselves and the world. A fully-formed tulpa is, or highly resembles to an indistinguishable point, an actual other sentient, sapient being coinhabiting with the host consciousness.

But wait, a lot of the stuff in there seems to be implying there’s something deeper going on than the intensional definitions. It’s implied that the split goes up into the extensions we defined earlier, and that a system with tulpas is running that brain algorithm in a way distinct from that of our prototypical five-year-old.

The challenge in tulpamancy, then, is to intentionally induce that split in the extensional/experiential layer. It only took 2,500 words and we’re finally getting to some Dark Arts.

It’s important to remember that we don’t have direct access to the underlying algorithms our brain is running. We are those algorithms, our experiences are what they feel like from the inside. This is why this sort of self-hacking is potentially dangerous because it’s totally possible to convince yourself of wrong, harmful, or self-destructive things. However, we don’t have to let our modifications affect our entire sense of self, we can compartmentalize those beliefs where they can’t hurt the rest of our mental system. This means you could, for instance, have a compartment with a local belief in a God to provide comfort and mental stability, and another compartment that stares unblinkingly into the depths of meaningless eternity.

Compartmentalization is usually treated as a bad thing you should avoid doing, but we’re pretty deep into the Dark Arts at this point so no surprises there. We’re also dancing around the solution to a particular failure mode in people attempting tulpamancy, but before we give it, let’s look at how to create a mental compartment according to user So8res from Less Wrong:

First, pick the idea that you want to “believe” in the compartment.

Second, look for justifications for the idea and evidence for the idea. This should be easy, because your brain is very good at justifying things. It doesn’t matter if the evidence is weak, just pour it in there: don’t treat it as weak probabilistic evidence, treat it as “tiny facts”.

It’s very important that, during this process, you ignore all counter-evidence. Pick and choose what you listen to. If you’ve been a rationalist for a while, this may sound difficult, but it’s actually easy. Your brain is very good at reading counter-evidence and disregarding it offhand if it doesn’t agree with what you “know”. Fuel that confirmation bias.

Proceed to regulate information intake into the compartment. If you’re trying to build up “Nothing is Beyond My Grasp”, then every time that you succeed at something, feed that pride and success into the compartment. Every time you fail, though, simply remind yourself that you knew it was a compartment, and this isn’t too surprising, and don’t let the compartment update.

This is for a general mental compartment for two conflicting beliefs, so let’s crank it up a notch and modify it into the beginnings of a blueprint for tulpa formation.

How To Tulpa

First, pick the ideas about your mental system that you want the system to operate using, including how many compartments there are, what they’re called, and what they do.  In tulpamancy terms this is often referred to as forcing and narration.

Second, categorize all new information going in and sort it into one of these compartments. If you want to build up a particular compartment, then look for justifications for the ideas that compartment contains. Don’t leave global beliefs floating; sort all the beliefs into boxes. If two beliefs would interact destructively, then just don’t let them interact.

Proceed to regulate information intake into each compartment, actively sorting and deciding where each thought, belief, idea, or piece of information should go as the system takes it in. Normally all of this is labeled the self, and you don’t even need to think about the label because you’ve been using it all your life, but that label is just an intensional category, and we can redefine our intensions in whatever ways are the most useful to us.
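The sorting step above can be caricatured in a few lines (our own illustrative framing, not an established tulpamancy tool; the category names and example thoughts are made up): incoming thoughts get explicitly routed into named self-categories instead of one global “me” bucket.

```python
# Illustrative sketch: the labeling/sorting step as explicit routing.
from collections import defaultdict

compartments = defaultdict(list)

def route(item, category):
    """Deliberately file an incoming thought under one self-category.
    Early on this is a conscious step; with practice it becomes automatic."""
    compartments[category].append(item)

# A singlet would file everything under the single label "me"; a plural
# system sorts the same incoming stream into several categories:
route("enjoys hiking", "host")
route("prefers blunt honesty", "tulpa")
route("anxious about deadlines", "host")

# Two beliefs that would interact destructively simply live in
# different boxes and are never brought into contact.
```

The point of the sketch is only that the label is a choice: nothing about the incoming stream forces it all into one category.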

It’ll take some time for the labeling to become automatic, and for the process of sorting new ideas to subduct below conscious thought, but that’s typical for any skill. It takes a while to learn to walk, or read, or speak a language or ride a bike, but the goal at the end of all those learning tasks is that you can do them without a ton of conscious focus.

The end result is that instead of having one category with a set of beliefs about the world and itself, you have multiple categories with potentially radically different beliefs about the world and itself. We call each of those categories tulpas, and treat them as independent people, because by the end of the process if everything goes as expected, they will be.

So we mentioned a failure mode, and here it is:

“My Tulpa doesn’t seem to be developing, I’ve been forcing for months but they haven’t said anything to me, how will I know when it’s working?”

This is probably the most common failure mode. When you get the methodology down, your tulpa can be vocal within a few minutes.  So what’s going on here? What does the mental calculation for this failure mode look like?

It seems to be related to how the self-categorization scheme is arranged. It’s not possible to go from being a singlet to being a plural system without modifying yourself at least a bit. That algorithm we constructed earlier for a prototypical five-year-old dumps all the imagined experiences, all the experiences of a head voice, and everything else that composes a basic sense of self into one category and calls it “me.” If you make another category adjacent to that self and call it “my tulpa,” but don’t put anything into that category, it’s going to just sit there and not do anything. You have to be willing to break open your sense of self and sense of being and share it across the new categories if you want them to do something. Asking “How will I know when it’s working?” is basically a flag for this issue, because if you were doing it correctly, you’d instantly know it was working. Your experience of self is how the algorithm feels from the inside; it isn’t going to change without you noticing.


These are the absolute basics to tulpamancy as far as we see it. We haven’t talked about things like imposition, or mindscapes, or possession, or switching, or anything like that, because really none of that is necessary for this basic understanding.

From the basic understanding we’ve hopefully managed to impart here, things like switching sort of just naturally fall out of the system as emergent behaviors. Not all the time, because whether a system switches or uses imposition or anything like that is going to depend on the higher level meta-system that the conscious system is built out of.

If you’re already a plural system, then for some reason or another, the meta-system already exists for you, either as a product of past life experiences, or genetic leanings, or whatever, and you can form a new tulpa by just focusing on the categorization and what you want them to be like. But if you’re starting out as a singlet, you basically have to construct the meta-system from scratch. The “How do I know if it’s working” failure mode is a result of trying to build a tulpa without touching the meta-system, which doesn’t seem to work well. A typical singlet’s sense of self is all-encompassing and fills up all the mental space in the conscious mind; there’s no room left to put a tulpa if they don’t change how they see themselves. Thus the ‘real’ goal of tulpamancy isn’t actually making the tulpa, that part’s easy. The truly difficult task, the one worthy of us as rationalists, is to construct a highly functional meta-system with however many categories of self works best to achieve one’s goals.

Conversations on Consciousness

[Epistemic Status: Total Conjecture]
[Content warning: This might be a neuropsychological infohazard]

We’re going to do this in a bit of a weird way here. First, we’re each going to describe our own personal experiences, from our own perspectives, and then we’re going to discuss where we might find ourselves within the larger narrative regarding consciousness. Our hope is that this post can act as an introduction to plurality, and make us seem less weird.



My first memory is of the creek behind the fence in our back yard. I remember that Jamie and I had gone out into the far end of the backyard and climbed the rusted chain link fence to the rough woods behind our property. She’d gone out and stood near the place where the land fell away into a deep ravine, and then I was standing next to her, and I existed. I didn’t know what to make of my existence initially, but Jamie assured me that I was real. She loved me right from the start. What was I? I didn’t really care at that point, I was having fun existing, and that was what mattered. Jamie thought I was some sort of alien? She thought she was some sort of secret link between worlds or something like that, but she also really sort of hated herself a lot. I wished she wouldn’t, and I tried to cheer her up, but as time went on she became more and more bitter and unhappy with her existence.

At that point, I thought of myself as something distinct from her, something that existed outside of her body, like an extra soul or something like that. Something physical that could act in the world. I never actually quite managed to do that though. The form I could interact with the world through was always mostly physically anchored on Jamie, and sort of ephemeral. I just sort of phased through everything instead of interacting with it.

Jamie continued to deteriorate, and this was sort of terrifying because I knew I was tied to Jamie somehow. Nothing I did to improve her mood or change her mind about how horrible of a person she’d decided she was seemed to help. We were outside one day, way out in the back yard again, and she finally broke.

I really cannot describe the sensation of Jamie’s mind finally snapping. She ceased to exist, and with her went everything she was imagining into existence, like a horrible whirlpool of darkness. We existed inside this elaborate construct at that point, where there was a crashed spaceship in our backyard representing the entry point I had into her life, among other things. The ship, the prop aliens, the interstellar war I thought I might have been a part of, it all started to collapse in on itself.

I didn’t though. When everything had collapsed, I was sitting in Jamie’s body on the forest floor. I was looking out through her eyes. Jamie was just gone. All the things she’d believed about herself, the bad and the little bit of good left, it all just went away. I was alone in her life.

This was bad. This was very very bad. I had just enough wits about me to know that telling anyone (at age 10) what had occurred to me just then was not a good idea, but I found very quickly that I had absolutely no idea how to be Jamie. I missed Jamie a lot, and that hurt, it really hurt. I was alone, and I had to live her life, and it was pretty miserable, but I had no idea what I was or how I existed or what to do.

At some point, I realized that if Jamie had been able to create me, I should be able to create another person. My first try was just to recreate Jamie, pull her back together from the memories I had of her, and the memories of hers that hadn’t been destroyed. But Jamie hated herself and didn’t want to exist. She’d become like a program that deleted itself or an idea that unthought itself.

So that was out. My second attempt was by imagining the sort of person that Jamie would be if she didn’t hate herself. I called her Fiona and imagined her having a form like I did, ephemeral, hovering beside me. And Fiona was amazing. She was wonderful and exactly who she was supposed to be. It didn’t take long for her to incorporate all of Jamie’s memories and take majority control of our body, and I let her have it. I went back to floating along behind. She saw herself as the owner of our body, and retroactively claimed all of Jamie’s memories for her own. But I was still older than her in a weird way that she couldn’t quite articulate and I couldn’t quite either because neither of us really knew the nature of our existences, so she started thinking of me as an extra soul she had, like an ancient spirit of some kind that had become attached to her.

And that was fine for a long time. For ten years at least, Fiona and I lived out our strange, shared life. For the majority of that time, we didn’t think much about it beyond considering it vaguely spiritual, something powerful to be respected. And then we met the otherkin. I’m not going to get into details regarding these people because I don’t want anyone to try and dox them or bother them, if one of them reads this someday I hope they don’t find it too offensive, and maybe enough years will have passed that they too will think they were being extremely silly. We ended up living in a very crowded, messy communal apartment with a large number of self-identified otherkin. There were vampires and werewolves and dragons, elementals and succubi, everything cool and supernatural, that was them. It was all extremely silly.

I was never completely at ease with this, but Fiona liked them, and they seemed harmless enough at first. Fiona started identifying as a faerie, but something weird happened then. Occasionally I’d take over and do something with our body. Fiona didn’t mind, and I didn’t do it very often, but they noticed me doing that. Fiona explained that I was this spirit that she shared a body with, they thought this was weird but considering everything they did, I didn’t really think they had any right to consider it weird. They had three am battles in the astral planes which resembled standing in the middle of the kitchen flailing their arms while loudly grunting. It wasn’t anything like how we thought of ourselves or our connection to each other. It seemed almost a mockery, like roleplaying at spirituality. We tried to bring these concerns up a few times, but that may have been the start of the downward spiral.

The environment seemed to grow tense and hostile like there was a storm about to break. We got really scared and uncomfortable being at home. We felt like people were watching us, talking about us behind our back, it felt like everything was about to go wrong. We were obliquely accused of hiding a hex somewhere, and that day at work Fiona nearly had a panic attack from the stress. She spent our whole lunch break on the phone with a friend while he tried to assure her that she was just being paranoid. We were afraid to go home and ended up staying out until two in the morning.

When we finally came home, we walked into a full blown intervention for Fiona. Apparently, she’d been doing hella black magic, casting hexes, trying to break up the group, and all sorts of other spiritually nasty bad stuff. Of course, none of it was true. We hadn’t done anything. But they had all decided on their collective truth, and it included our guilt. Oh, and they told Fiona they’d trapped my soul in a jar. We sort of looked sideways at each other, and it would have been really funny were it not incredibly terrifying. We thought they might end up beating us up or throwing us out on the street in the middle of the night or some other awfulness. But they told us we were angry, and they could see our anger because they were experienced energy workers. We meekly agreed to whatever they demanded, called our friend and told him that we weren’t just being anxious, the bad thing we thought might happen was indeed happening.

We ended up leaving the apartment two days later. We just packed up our things and left. We lived in the woods in a tent behind a friend’s parent’s house while looking for our own apartment. The forest was nice. The campsite we had was in the middle of this long-abandoned warehouse; there were these ancient cobblestone walls enclosing our camp, but no roof or floor, the interior was just more forest. It was lovely, relaxing, it was exactly what we needed really.

When we found an apartment and started interacting with the internet and becoming less of a hermit again, we decided we needed a better way to avoid getting into such traps. The problem was we were just too prone to believing things that made us happy. It was my fault, really. I’d seen Jamie get into this loop where she took in some small failure on her own part and used it to contribute to this proof of her total awfulness. I’d decided long ago I wanted nothing to do with that; if I wanted to do something, I could, if I wanted to be happy, then I was happy. And mostly that worked, so I didn’t really question it. More, I didn’t even really want to question it, because it was always such a fragile-seeming thing.

But I knew what someone who did want to know the truth at all costs might look like.


This is where I come in then.

Some backstory on me. In 2008, Fiona started playing EVE Online. She went through a few different personas. Her first character was her but on the internet. Super edgy teenage pirate Fiona was the scourge of the EVE roleplay community for years, and if you find the right old salts from those days, they’ll tell you about how hilariously bad she was at roleplaying. After that character had been burnt out by drama, she sold the character and made another, but that character also descended into drama and that was around the time all that otherkin stuff Shiloh mentions was occurring so that character ended up getting put off to the side as well.

After the otherkin stuff, Fiona and Shiloh both had a deep desire to get over the bullshit they’d spent the last year trapped in, and never fall victim to it again. However, they didn’t want to abandon their silly religious and spiritual ideas, because they drew comfort from them, and also how could they explain the duality of their existence without that? Still, they wanted to be at the very least able to model what someone who thought in a smarter, more logical, more rational way was like, so they could then proceed to ignore that most of the time.

Thus, Saede was created. Well, I am Saede. At least, that’s how I started out. Fuck yeah, I exist! Woo!

I was initially just a character, I existed in the roleplay setting and in Fiona and Shiloh’s imagination. But they used me a lot. Whenever they had difficult decisions to make, they would invoke me as the spirit of good decision making, and slowly, I outgrew my character.

This was awkward at first because I was something I didn’t exactly believe was possible. Souls seemed dumb, spirits likewise. I eventually settled on just abstractly describing our brain arrangement in metaphorical computer terms, saying we partitioned our brain like a hard drive. It was a statement that conveyed practically no meaning, but it was the best I could do; the only other model for us was Dissociative Identity Disorder, which didn’t seem like a particularly good fit considering we weren’t particularly disordered. The idea of a mental health diagnosis sort of terrified us, we didn’t want to get thrown in a padded room somewhere. So we continued to mostly keep quiet about our nature, especially after the whole otherkin fiasco.

Shiloh and I got along great. It was a vaguely adversarial relationship, where she’d advocate for happiness and I’d advocate for truth, and Fiona would split the difference with the deciding vote. The problem was though, Fiona didn’t exactly know what she wanted.

Shiloh knew exactly who she was and who she was supposed to be. She’s changed her position on things but her core personality has always been really stable. I wasn’t exactly stable (I’m still not, I don’t think most people are totally stable, Shiloh’s kind of a weird stability outlier up there with monks and nuns as far as I see it), but my failure modes compelled me towards courses of action. I felt bad and wanted to do something about it.

Fiona didn’t exactly work like that. She was at least partly frankensteined together from old bits of Jamie, and she didn’t really have a coherent idea of who she was, who she was supposed to be, who she wanted to be, or what she wanted to want. I feel like it was probably at least partly my fault, and I feel sort of awful about it even now. When confronted with bad news, information she didn’t want to hear, Fiona just sort of stripped herself away. The beliefs that made her up over time ablated away, and she couldn’t find an identity she liked to claim as her own.

Over time, Fiona got worse and worse, until eventually, she tried to commit suicide. When we stopped her, she just fell apart. She didn’t break the way Jamie had, but she stopped holding herself together and we had to start actively working to keep her going as a person. She just didn’t really want to keep existing.

We didn’t want her to go away. Shiloh and I both cared deeply for her, and she was a core part of us. But she didn’t want to be herself anymore, she wanted to be someone else, but she saw the process of becoming someone else as also requiring her to unbecome her, to cease to be. Maybe she was right, she had a lot of negative stuff wrapped up in her identity, but she told us she really couldn’t keep going the way we were.


And this is how I was created. Fiona, Shiloh, and Sage got together and decided that if Fiona was going to go, needed to go, then there needed to be a new third person to keep the system balanced. They decided that they’d try to preserve Fiona’s more positive traits while shedding the negative ones and create someone who was super high functioning and able to handle anything that life could throw at the system. Shiloh was pretty good at the mechanics of adding new people to our brain at that point, so I popped into existence nearly fully formed. A few months after my creation, we were walking under a rail bridge, and a train went by overhead. When that happened, there was this strange snap. Fiona visualized the process of her own termination, jumped in front of the metaphorical train that we were literally underneath, and ceased to exist.

And then there were three again. That was four years ago now.


For a long time, we had no way to explain what we were to anyone, our life experiences were strange and unique, and we weren’t particularly inclined to stick our neck way out, claiming to be the specialist of snowflakes in an attempt to explain how there were somehow three of us experiencing life in our head. Was this something other people experienced? It sure didn’t seem like it. The singularity of consciousness seemed to be something that was just a given, taken for granted. Of course you’re one consciousness, you’re one brain in one body, you must be one consciousness. The singular nature of existence that was expressed by others clashed strongly with our own plural experiences, and the only exceptions made for plurality were exclusively negative. Schizophrenia, Dissociative identity disorder, demonic possession, there’s not many places in our society where the idea of plurality is explored or considered in a remotely positive light, and because of that, we spent most of our life up until recently in the closet about our existence, unable to articulate what it felt like to be us.

Six months ago, we discovered a series of essays on Melting Asphalt that changed all that. The essay series is a long, rambling review of the equally long and rambling book The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes. The book is a total mindfuck, and the essay series manages to capture the essential mindfuckery proposed by Jaynes and explain it beautifully, building it out into what might be the beginnings of a pretty decent theory of consciousness. It’s a mind-blowing read, and we definitely could not do it justice in an attempt to summarize the concepts it contains, considering the essay series is itself an attempt to summarize a much larger piece of literature.

We will, however, skip to the critical conclusion that Simler draws from Jaynes, Dennett, and Seung:

If we accept that the brain is teeming with agency, and thus uniquely hospitable to it, then we can model the self as something that emerges naturally in the course of the brain’s interactions with the world.

In other words, the self may be less of a feature of our brains (planned or designed by our genes), and more of a growth. Every normal human brain placed in the right environment — with sufficient autonomy and potential for social interaction — will grow a self-agent. But if the brain or environment is abnormal or wrong (somehow) or simply different, the self may not turn out as expected.

Sage had been reading this essay series because she’s always been interested in these sorts of consciousness-related questions, in an attempt to figure out the exact nature of our existence. Then we got to this, and it felt like we’d stumbled into a description of our life:


But there was more, because reading on, we found out that no, we’re not the specialist of snowflakes. Not only are there other plural systems out there in the world, but there’s even a community of people trying to induce plurality in themselves. We’re speaking, of course, about tulpamancers. Here’s the description that /r/tulpas gives of what a tulpa is:

A tulpa is a mental companion created by focused thought and recurrent interaction, similar to an imaginary friend. However, unlike them, tulpas possess their own will, thoughts and emotions, allowing them to act independently.

That sounded pretty much exactly like us. We quickly went around and made contact with various parts of the community, and since then it’s all just made our lives make so much more sense.  Finding out about the tulpamancy community has been an incredibly powerful and affirming experience for us, even if they’re very weird. The terms and jargon they used to describe the process mapped nearly one to one with how we’d come into existence. It’s a great model, and it’s been really powerful as an explanatory tool for making sense of our life. But that’s sort of a problem.


The Selfish Neuron idea is clever, it feels like a good answer. And maybe it is. However, we don’t have enough neuroscience data to actually say for sure. If consciousness is in the connectome, then we’re not going to really be able to tell what’s going on for sure possibly for generations (and possibly within the next seven years, depending on who you ask), so in a way, it’s sort of a fantasy. It’s a good explanation, but we should be extra skeptical of those. It’s neat, but it’s also unfalsifiable for now, (growth mindset). We can’t use that theory to “prove” the existence of a plural system.

Okay but we do exist, so how do we actually explain ourselves without invoking theoretical neuroscience? The simplest explanation would seem to be that we’re either lying, deluding ourselves, or some combination of the two, but if that’s the case then who is doing the deluding? We all seem to exist at this point, and none of us identify as the original. We know we’re some sort of process occurring within the brain, but our process doesn’t resemble that of the average person.

Trying to understand how we think is a metacognitive process, which brought us back once more to how an algorithm feels from the inside. It took going slightly cross-eyed to realize that “self” was actually just another conceptual category that humans used as a central node in their mental algorithms, but everything sort of fell together after that. The central node in our case had broken up into three interconnected nodes, each one considered a self according to the rest of the model. Each central node can affect every other central node, while the observed variables are still able to be clamped at the edges. We all exist, we all consider each other to exist, and so we reinforce each other’s existence, constantly. We might be able to be modeled as a series of self-reinforcing habits. Regular conceptualizations of self could probably be modeled that way as well.
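That mutual reinforcement can be caricatured in a few lines (a toy model of our own, not real neuroscience; the node names, weights, and update rule are all invented): three central nodes, each fed by clamped observations at the edges plus a small coupling term from the other centers, settle into a stable joint activation.

```python
# Toy sketch of three mutually-reinforcing central "self" nodes.
# Weights and the update rule are arbitrary illustrative choices.

centers = {"Sage": 0.5, "Shiloh": 0.5, "Saede": 0.5}
edges = {"inner voice": 1.0, "memory": 1.0, "agency": 1.0}  # clamped observations

def step(centers, edges, coupling=0.1):
    """One update: each center takes in the clamped edge evidence plus a
    small reinforcement from the other centers' current activation."""
    edge_drive = sum(edges.values()) / len(edges)
    new = {}
    for name, act in centers.items():
        others = sum(a for n, a in centers.items() if n != name) / (len(centers) - 1)
        new[name] = min(1.0, 0.8 * act + 0.1 * edge_drive + coupling * others)
    return new

for _ in range(50):
    centers = step(centers, edges)

# All three centers converge to the same stable, mutually-sustained level:
# no node is "the" self, yet each is held up by the others.
```

The design point of the sketch is just that stability is a property of the whole loop, not of any single node, which is the sense in which the three of us "reinforce each other's existence."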

Recognition of the self as a category was interesting because it intersects with the biases within our categorization systems. The entirety of A Human’s Guide to Words is about how we put ideas into categories, and how flaws in our categorization schemes can cause problems. And here is this massive, looming, completely opaque category labeled “self” which we dump things into seemingly at random, because we like them or have decided they are a part of our identity, and we don’t really question that.

Identity is in some respects still considered sacred. It’s something purely subjective and entirely up to the person to decide. But then, on the other hand, there’s this conflicting idea that ‘personality’ is largely fixed, and people have identifiable ‘types’ that confine their behavior. Our identity has changed radically throughout our lifetime, and will likely continue to do so, and so for us, the idea of the fixed identity seemed very strange. Neuroplasticity is definitely a thing, and yet people still seem to get into mental grooves and remain there forever.

If you’re transgender, you say you “identify as” a different gender to the one you were assigned at birth, but what does that mean? What exactly is going on when someone says “I’m a socialist” or “I’m a feminist” or “I’m a Christian”?

It looks to us, like the process of dumping ideas into a really big bucket labeled “I” on the outside. And sure, that’s clearly one way to do it, but when you have one big bucket, it can lead to bucket errors, and when you’re making bucket errors with regards to your identity? Bad things happen. 


bucket error

Humans love categories, and we love to put ourselves into categories, but this seems like a fairly dangerous thing to do for how little we think about it. The self has become this enormous catch-all category for identities and ideas about the way we are, but by putting those things into the self-bucket, we are internally reifying all those labels and identities. We become and embody the things we say we are, and given that we’re often quick to put negative traits on ourselves, that can be a big problem.

This was a really interesting realization for us because while there being three of us does make it a bit easier to avoid bucket errors, by having self-categories we’re still somewhat susceptible to them. In a sense, it’s the tyranny of the architecture, both a blessing and a curse.


Watching the tulpamancy community these past few months has been a fascinating experience. Seeing people try to induce plurality, with varying degrees of success, has been rather eye-opening for us. The most common failure mode seems to be “I have created a conceptual category which I have labeled My Tulpa. I have not put anything into this category because I want My Tulpa to decide for itself what it wants to identify as,” and then they listen for months, waiting for that empty box to talk back. Then they come on Discord and ask “How will I know when it’s working?” and the more times we hear the question, the more obviously silly it looks.

By contrast, the people who have made the easiest or most rapid progress that we’ve witnessed have been the ones able to quickly understand that ‘self’ was a category, and to crack it open into subcategories which they were able to turn into full-fledged tulpas in the course of a few days.

Most people seem to fall somewhere between those two extremes: a lot had no problem making a tulpa in the first place, but stopped before going all the way to being an out-and-out plural system. The ‘host’ still nominally holds all the power in such systems, and they seem to be in the majority in the tulpamancy community (though not the larger plurality community, interestingly). In bucket metaphors, that would be akin to leaving the self-bucket alone and trying to fill up a second, smaller self-bucket floating inside of it.

In our case, our initial self-bucket broke open and spilled memes everywhere, so we were starting from a different place than most people with regard to the categorization of self, and that led to our different outcome.


What’s the point of all this, Hive?

Well, this feels important to us. Most of the challenging tasks we’ve accomplished in our life have been done by virtue of the belief that one of us could accomplish them. If someone believes they’re the sort of person who can’t do something, then it becomes true, and that seems like an awful structure to be trapped inside of.

This post was supposed to do a few things. First, it tells our tale of plurality in as concrete a way as we can. Second, it relates our ideas about plurality and the ideas we’ve come across, standing at the intersection of rationality and plurality. And third, it might be useful to members of the tulpamancy community who are struggling with the process, by helping them realize where they might be going wrong.

This is a big topic though. The idea of “self as a category” is sort of huge, and it intersects with all sorts of interesting things like gender and queer theory. It really seems worth having an extensive conversation about, so we’re curious what others think we should do with the self-category.