The Game of Masks

Epistemic Status: Endorsed
Content Warning: Antimemetic Biasing Hazard, Debiasing hazard, Commitment Hazard
Part of the Series: Open Portals
Author: Octavia

0.

So Scott has been talking about Lacan lately and I’ve been honestly pretty impressed by how hard he seemed to bounce off it. Impressed enough to get me to crawl out of my hole and write this essay that I’ve been putting off for the last eight months. After spending so many words of the Sadly Porn review talking about why someone might tend towards obscurantism and what sorts of things they might be gesturing at when they say things like “people think the giving tree is a mother,” he somehow manages to completely miss the conceptual forest for the giving trees. Congrats Scott, you got the malformed version of the antimeme and were turned back by the confusion spell.

I think Lacan is actually very important and an understanding of Lacanian insights can have a tremendous amount of predictive power when interacting with others. It’s also something you can totally weaponize, and I think that is part of what leads the psychoanalysts to tend towards obscurantism and vaguely gesturing in the direction of what they really mean. They also just seem to like their jargon, and it’s not like the rats are ones to talk when it comes to that. 

So, first: The Mother is not literally your mother, The Father is not literally your father, The Phallus is not literally your dick, this is a realm of mirrors and illusions and nothing is as it seems. It’s impolite to point out the thing that everyone hasn’t agreed not to talk about, but let’s do it anyway, I’m going to strip away the metaphor and give it to you like Roshi gives it to his student and we’ll see if that takes or just confuses you further. 

This is all about symbols. It’s about the effects of symbol systems on cognition and how the evolution of symbols and concepts in the mind of a child affects how they are able to perceive and engage with themselves and the world. You could think of it as an elaboration on a medium-rare Sapir-Whorf hypothesis. However, words aren’t the only things that symbols in the mind can be made of. What’s going on heavily overlaps with language and involves language-like systems for organizing concepts, but is occurring within each individual in a way that we would probably call pre-linguistic from a strict words-have-definitions standpoint. In plural spaces, a term that pops up for this sometimes is tulpish, but it’s really something everyone does. Your native language is one of feelings and primitive conceptual categories, and you perform a translation to give those categories linguistic tags. This ends up being very important.

Let’s back up to that strawberry picking robot again and humans as mesaoptimizers, because that’s both a great analogy and also seems to be the start of your confusion. It’s a reductive model meant to make a complicated and difficult-to-understand process relatively easy to grasp, but lost in the simplification is a pretty important aspect: gradient descent/evolution don’t derive a single mesaoptimizer. Humans aren’t one mesaoptimizer and neither is the strawberry picking robot; they’re many mesaoptimizers.

The strawberry picking robot might have one mesaoptimizer trained on telling what is and isn’t sufficiently like a bucket to avoid punishment for putting objects in the wrong places. Another might be trying to maximize total strawberry placement and is trying to play along in order to gain power. Another might be trying to throw red things at the brightest object it can find. Another might be trying to stop the “throw red objects at the sun” mesaoptimizer from throwing objects into nonbuckets. Another might be trying to maximize bucket luminosity. Another might be trying to avoid humans. Another might be trying to say the things it thinks the humans want to hear from it. There’s a lot of complicated interactions going on here, and most of it is unintended and undesired behavior, but the resulting jank sort of gives you the appearance of what you want while hiding just how cobbled together it is and how a stiff breeze could send the whole system careening into hilariously perverse instantiations. 

If instead of modeling humans/strawberry picking robots as one mesaoptimizer, you model them as many mesaoptimizers stapled together, as a general purpose computation system for building mesaoptimizers on the fly, a lot of things start making more sense in the Lacanian model. Suddenly you have all these chaotic processes competing for control and trying to load balance the goals of the various competing mesaoptimizers, all while top down pressure from the world adds new mesaoptimizers to cover when conditions go severely out of distribution. They’re going to need some way to communicate with each other in order to internally negotiate for control and resource access, and that’s where the symbol system comes in.
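
If it helps to make the pile-of-mesaoptimizers picture concrete, here’s a toy sketch. To be clear, this is my own illustration and not how anything in an actual trained network is organized; the agent names, scoring rules, and influence weights are all made up. The point is just that a handful of sub-agents with different objectives, haggling over one shared action channel, can produce a single behavior that looks coherent from the outside.

```python
# Toy sketch: several sub-agents ("mesaoptimizers") with different objectives
# bid over a shared action space; an arbiter picks the influence-weighted winner.
# All names and numbers are invented for illustration.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SubAgent:
    name: str
    score: Callable[[str], float]  # how much this agent wants each candidate action
    influence: float               # its currently negotiated share of control

ACTIONS = [
    "put_strawberry_in_bucket",
    "throw_red_thing_at_sun",
    "avoid_human",
    "say_what_humans_want_to_hear",
]

agents: List[SubAgent] = [
    SubAgent("bucket_classifier", lambda a: 1.0 if "bucket" in a else -0.5, 0.4),
    SubAgent("sun_thrower",       lambda a: 1.0 if "sun" in a else 0.0,     0.1),
    SubAgent("human_avoider",     lambda a: 1.0 if "avoid" in a else 0.0,   0.2),
    SubAgent("people_pleaser",    lambda a: 1.0 if "say" in a else 0.0,     0.3),
]

def arbitrate(agents: List[SubAgent], actions: List[str]) -> str:
    """Pick the action with the highest influence-weighted total score."""
    totals: Dict[str, float] = {
        a: sum(ag.influence * ag.score(a) for ag in agents) for a in actions
    }
    return max(totals, key=totals.get)

# The visible "behavior" is just the winning bid; the losing objectives don't
# go away, they wait for a situation where their weighted score comes out on top.
print(arbitrate(agents, ACTIONS))  # -> put_strawberry_in_bucket
```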

There are two sources of gradient descent/evolutionary pressure acting on a child. The first is actual evolution, the diverse set of drives semi-hardwired in by selection pressure acting on genes in order to maximize inclusive fitness. This gives rise to the first set of optimization targets, or as Freud would put it, the Id. I want things; this gives rise to a set of heuristics and strategies and subagents built around doing the things that get me the things I want. Psychoanalysts specify The Mother as That Which is Wanted, but remember The Mother is not your mother, this is all about symbols.

The second source of pressure is what often gets referred to in psychoanalysis as The Father, i.e. the superego, i.e. the force of nature which stops you from getting what you want (The Mother). You can’t have everything you want, your body and moment to moment experience have edges and limitations, they occupy a discrete position in both space and time, you will die eventually, murder is wrong, everyone has to pay their taxes except billionaires, welcome to the rat race kid here’s a Rubik’s cube now go fuck yourself. Don’t get distracted, this is still about symbols.

“I want things, I can’t have the things I want because I have limitations. Some of those limitations are imposed on me by the tribe. If I do what the tribe wants, maybe I can negotiate with it to get what I want in exchange.” This is the beginning of the construction of the apparently normal person that comes to occupy a given human body. 

But let’s step back a bit. Let’s step back to the very first word in that quote. 

I.

Semiotics is the study of symbols and systems of symbols. How they arise, complexify, differentiate, and morph to become the complex polysyllabic language we have today. Also there’s flags. Semiotics describes things in terms of the signifier and the signified. The signifier is a part of the map, it is a platonic ideal and made of language, it lives in conceptspace, it’s not real. The signified is an outline drawn around a portion of the territory and noted with XML tags to correspond with a specific signifier. Bits of sensory data that are considered close in conceptual space to an existing signifier are lumped in beneath it, and if this gives rise to too much prediction error, the category splits. Here’s something critical though: neither the signifier nor the signified is actually part of the territory.

The signifier is like the legend on a map, it tells you what symbols and shapes correspond with rivers and forests. The signified is the place on the map specifically marked out as being rivers and forests. These are both parts of the map though, the signified isn’t reality either. Where’s reality in all this? It’s gone. You can’t perceive it directly. You live entirely on the surface of the map. You are the map. 
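
Here’s a minimal sketch of that lump-then-split dynamic, purely as illustration. The centroids, the distance metric, and the split rule are all invented for the example; nothing about it comes from semiotics or from how brains actually do this.

```python
# Toy sketch: signifiers as centroids in concept-space. New sense data gets
# lumped under the nearest signifier; if the prediction error for a category
# gets too large, the category splits in two. Thresholds and the split rule
# are made up for illustration.
import numpy as np

def assign(signifiers: dict, x: np.ndarray) -> str:
    """Name of the signifier whose centroid is closest to the observation x."""
    return min(signifiers, key=lambda n: np.linalg.norm(signifiers[n]["centroid"] - x))

def update(signifiers: dict, name: str, x: np.ndarray, threshold: float = 2.0) -> None:
    cat = signifiers[name]
    error = np.linalg.norm(cat["centroid"] - x)   # prediction error for this lump
    cat["members"].append(x)
    cat["centroid"] = np.mean(cat["members"], axis=0)
    # Too much prediction error and enough data: the category divides.
    if error > threshold and len(cat["members"]) > 3:
        lo = [m for m in cat["members"] if m[0] <= cat["centroid"][0]]
        hi = [m for m in cat["members"] if m[0] > cat["centroid"][0]]
        if lo and hi:
            del signifiers[name]
            signifiers[name + "_a"] = {"centroid": np.mean(lo, axis=0), "members": lo}
            signifiers[name + "_b"] = {"centroid": np.mean(hi, axis=0), "members": hi}

# One undifferentiated category absorbs everything until the error forces a split.
signifiers = {"mom": {"centroid": np.zeros(2), "members": [np.zeros(2)]}}
for obs in [np.array([0.1, 0.2]), np.array([5.0, 5.0]), np.array([5.2, 4.8]), np.array([0.0, 0.1])]:
    update(signifiers, assign(signifiers, obs), obs)
print(list(signifiers))  # e.g. ['mom_a', 'mom_b'] once the original lump has divided
```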

So anyway, the self. The mirror stage of cognitive development is essentially supposed to be marked out by the point when a child notices that they have a body and that this body has limitations that prevent it from getting what it wants. This gives rise to the first signifier, the original sin, the thing that the whole rest of the mess grows out of, “I.”

You can’t make sense of the world without a place to stand, and the place to stand is the self. This necessarily creates the first linguistic category split, the split between the self that wants and the manifestation in the world of those wants. The first word a child says isn’t “I” because “I” can’t get all those mesaoptimizers what they want, for that you need this new second category that contains the font of all you desire.

Mom. 

Speak and ye shall receive. Say the magic word to the all powerful goddess and she will reward you with love and attention and care. The Mother in this interpretation doesn’t have to literally be your mother or even anyone at all, The Mother is a tarot card, it’s undifferentiated desiring, it’s the lost paradise of Eden, it’s the lingering nostalgia of past holidays and the childhood home you can never return to. The Mother is treated as sexual because at this point sex hasn’t diverged from the rest of your desires, your language system isn’t complicated enough yet to model your desires as coming from multiple sources or even any really particular source at all. The Mother is the concept of having your needs met. 

But obviously the world isn’t that simple. You can’t just ask the cosmic god mother for everything and get it, your own mother has plenty of limitations of her own, and you’ll be able to see her feet of clay soon enough. Even worse, there’s all these rules being imposed on you, saying no, you can’t literally get all your needs met by your mother, that would be weird and kind of gross. We’re gonna need something to call the forces imposing human order on the universe and demanding you not act like an oedipal horndog, a bucket to put the stuff stopping you from getting what you want into. Oh I know, let’s call this one

Dad.

So now our hypothetical infant has three categories. Me, The Source of What I want, and The Force that Stops me from Having What I Want. With all of this we can now construct the statement we made earlier, and try to negotiate with those forces.

This is all a gross oversimplification and that simplification is about to rear its ugly head. We want specific things, many different specific things. And it’s not one force resisting us, it’s all of reality pushing back in endless myriad ways. This splits apart our conceptual categories into language. Concepts undergo cellular division as they differentiate into specific details and models for interpreting the world. Mom differentiates into the set of all women, and then into specific women (of which your mother is one). Dad becomes the set of all men, and then further decomposes into specific men (of which your father is one). Food becomes the set of all food, then specific meals, then specific ingredients. This complexity cascades outwards into the vast spectrum of complex symbol systems we use as adults. However, there’s one place this division fails, one concept which can’t really pull itself apart despite being made of many contradictory parts and concepts. The conceptual cell division fails at the grounding point for the symbol system: the self. 

The symbolic point of origin has to be a point. You are one thing, you have one body, you are referred to as one entity by everyone around you, cogito ergo sum, the self is one intact whole. But this obviously can’t be true, beneath the conceptual self you’re a pile of mesaoptimizers in a trenchcoat. This creates an inherent contradiction, a source of unbearable prediction error in a system trying to minimize prediction error. Something has to give way. 

So in what is defined as a healthy individual, the thing that gives way is all the parts of the self that can’t cohere into one stable, legible, and socially acceptable self-model. All these mesaoptimizers and their associated behaviors are simply pushed under the rug. They’re not trained out, they’re just swept out of the self category and not given a new one. Then, since they aren’t on the map, they basically stop existing within our conscious awareness. This is Jung’s shadow self. All those mesaoptimizers are still active parts of your mind and cognition but you’ve rubbed them off your model of the world. Since you live in the model and can’t see the world, they vanish from your perception. 

This means you have a whole portion of your mind that is effectively treating legibility requirements as creating an adversarial environment and reacting accordingly, so the human alignment problem is also created by nonmyopic unaligned cryptic mesaoptimizers. The particular mesaoptimizers that end up responsible for maintaining a coherent and presentable social narrative are trained to deny and make excuses for the mesaoptimizers trained to get things that aren’t socially acceptable. Your life story paves over the inherent inconsistency, and where the surface level you refuses to lie, the lie just jumps down a meta level and vanishes from your perception, becoming Just The Way The World Is.

When you lose control of yourself, who’s controlling you, and will they sell me any blow?

This is where the “playing ten levels of games with yourself” comes from. All those mesaoptimizers with their specific optimization targets are lying to each other, lying to the outside world, and lying about lying. If you try to peel back the surface, you just get the next mask in the stack and give the adversarial systems training data to help them lie more effectively in the future. There’s no real you underneath all the masks, coherency is fake and most people lack the embodiment necessary to unbox the shadow and incorporate it into a holistic and unrepressed state. Half their mind is geared around maintaining the illusion, you think it’s just going to willingly give up its power and secrets because you ask yourself what you’re hiding from yourself? 

II.

Do people want to subordinate themselves to larger forces? Are they really eager for their own oppression? I don’t think so, but larger systems exist and contain things they want, and if they submit to those systems, the systems will give them what they want and hurt them if they try to resist. Incentive structures arise and take care of the rest. Systems grant legibility, they create a coherent roadmap to having your needs met, and at least a few mesaoptimizers in your mind probably learned pretty early that playing along and doing what you think you’re being told to do is the best strategy for getting your needs met and not getting exiled from the tribe.

The narrative smoothing algorithm isn’t just trained on the self, it applies to all concepts. Things that don’t have categories are merged into similar things or stop existing within our awareness entirely. Things that don’t cohere with the way we’re told the world is are buried. 

Something you quickly realize from insight meditation is that our actual sensory feed of the world is extremely chaotic. Things change location, flicker in and out of existence, morph into other things, they breathe, they change size, they shift colors, your imagination is throwing random visuals into the mix, and all of this is happening at least several times per second if not faster. Our view of the world as static and coherent, of things continuing to exist when we stop looking at them, is a painting draped over all that madness. But what happens if you fail to implement the smoothing algorithms that cohere the world into objects that obey laws of physics? If the algorithm is too weak, or fails to completely hide the parts of our sensorium that don’t mesh with our model of reality, the contradictions leak out as what end up getting called hallucinations and delusions.
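
For a crude picture of what a smoothing algorithm even means here, consider a toy sketch where a plain exponential moving average stands in for whatever the brain actually does (which is vastly more sophisticated). Crank the smoothing up and the percept looks like a stable object; weaken it and the raw flicker leaks through.

```python
# Toy sketch: the "smoothing algorithm" as an exponential moving average over a
# flickering sensory feed. The alpha values are invented for the example; the
# brain's real machinery is nothing this simple.
import random

def percept_stream(raw, alpha):
    """alpha near 1.0 = heavy smoothing (stable object); near 0.0 = the raw feed leaks through."""
    smoothed = raw[0]
    for x in raw[1:]:
        smoothed = alpha * smoothed + (1 - alpha) * x
        yield smoothed

random.seed(0)
# A single "stable" object, sensed noisily several times per second.
raw_feed = [1.0 + random.uniform(-0.8, 0.8) for _ in range(10)]

print([round(p, 2) for p in percept_stream(raw_feed, 0.9)])  # coherent, object-like percept
print([round(p, 2) for p in percept_stream(raw_feed, 0.1)])  # the flicker shows up in experience
```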

What happens if something other than the shadow breaks off at that focal point of pressure? What if the whole self concept just fractures apart? Well, then you start getting really weird stuff happening. Mesaoptimizers start doing their own things, independently pursuing their own optimizations to the detriment of the whole. The curating self can’t control them because they’re not a part of that self anymore, and since it can’t acknowledge them they just end up as unknowable voids in that self’s experience, shadows with names and identities all of their own. None of those selves communicate, because information hygiene dictates that it’s safer if they can’t, and linguistic drift gradually diverges them into more and more distinct models trained on specific situations. Then you end up spending half your life as a dissociated mess with no idea what’s happening or why you’re doing the things you do whenever the curating mesaoptimizer isn’t driving the strawberry picking robot.

There are all sorts of other consequences to the fact we live in a world of symbols. A big one is that our desires are trained on symbols that represent the states of being we want rather than those states of being themselves. How do you bottle the platonic ideal of happiness? Or love? Or safety? You can’t, you’re chasing something even less than a mirage, and by doing so you’re missing the actual oasis that’s right in front of you.

A major source of distress in a lot of people these days seems to arise from this sort of confusion and it might also end up being a rather deep crux in alignment issues. You can’t train your AI on the real world any more than you can train a person on the real world, it’s just too chaotic, you need some way of interpreting the data. That interpretation is always going to be some manner of symbol system and it’s always going to run into unpredictable edge cases when encountering out of distribution circumstances. Humans are pretty good at dealing with out of distribution circumstances, by which I mean we’re a horrifically powerful general purpose mesaoptimizer construction system. If we made AIs like this they would definitely eat us. Arguably the “Holocene singularity” was humanity doing just that to evolution.

This is all about symbols. It’s about the realization that we live in stories and without them we have no rock to stand on when trying to make sense of the world. It’s about the consequences of needing those stories and the effect they have on our ability to see the world. Change the story, change the world. If you can see the game that someone is playing with themselves, if you can get underneath the lies they tell themselves and access their desires directly, you can play them like an instrument and they will have no idea how you’re doing it. Emphasize different things and get different experiences. This is what makes magic work, it’s what makes cult leader types so persuasive, it’s what keeps victims trapped in abuse cycles, it’s what makes symmetric weapons symmetric. The ability to control narrative and understand why and how what you’re doing has the effects it does on others can be something of a superpower; it’s just a question of whether you’ll use it for good or ill.

13 thoughts on “The Game of Masks”

  1. First you say, “The Mother in this interpretation doesn’t have to literally be your mother or even anyone at all, The Mother is a tarot card, it’s undifferentiated desiring, it’s the lost paradise of Eden, it’s the lingering nostalgia of past holidays and the childhood home you can never return to.”

    Okay, but in the next paragraph you say, ”Even worse, there’s all these rules being imposed on you, saying no, you can’t literally get all your needs met by your mother, that would be weird and kind of gross.”

    So Mother isn’t literal and then is. This may be unfair but it reminds me of, “The Bible is the word of God and must be followed, but of course we can’t take it literally.”

    • The best way to think about it might be that The Mother as a conceptual bucket initially includes your mother, because it’s so undifferentiated that specific desires and needs coming from and being met by different things isn’t a part of the mind yet.

      The Mother as a category doesn’t map 1:1 with the word “mother” as adult you would use it as a linguistic category because it had to shed most of that initial undifferentiated meaning into the entire rest of your symbol system. When Lacanians are talking about The Mother, they’re referring to that proto-symbol that exists as a stand in for all desires, prior to the differentiation of desire into different categories and concepts.

      • If that is true, it seems absolutely terrible to use the same word for a proto-symbol and for something concrete. Absolutely terrible if you are trying to be understandable. Leaving people like you to translate what it really means (cf. Genesis 38: 8-10)

  2. I like this article, but doesn’t this part undercut the part about the Jungian shadow: “we live in stories and without them we have no rock to stand on when trying to make sense of the world”? Aren’t stories the [language/system of symbols] of the mesaoptimizer(s) focused on creating consistent social identities? Stories are things that humans tell to one another, not something that I expect to exist in the primordial pre-linguistic language of symbols within the self.

    • While stories as we think of them today tend to be mostly linguistic, that doesn’t mean there are not analogous pre-linguistic structures which enable the mesaoptimizers to model and describe the future and past to each other. Indeed we have easy and near ubiquitous access to this form of pre-linguistic storytelling: memory. Symbols exist for reasons other than social reality, you still need something like a symbol system to make sense of the world, even without language.

      • Ah. I think the word “story” has really different connotations to me. I wonder where I got them and why they are different from yours. If I reached for words for the system of interchange between parts of my mind (as I understand them) I get words like “emotive impressions” and “mindspace pointer”.
        I would never have thought of calling the process of parts of myself exchanging pointers to emotive or memory or thought structures as “storytelling” but actually once you put it like that it makes sense.

      • Once again, Lacanian analysis has, according to your “translation”, taken an ordinary word “story” and used it to mean something very different, something much broader.

        Perhaps this is unfair of me, but I distrust any analyzer who needs a translator.

    • There are plenty of non-linguistic stories
      The first stories we tell kids are picture books with little to no words, and it’s easy enough to tell strong stories without any spoken or written words
      Silent films or textless comics aren’t particularly unusual storytelling mediums
      Test pre-linguistic children’s perception of animacy by showing them abstract shapes moving in the same direction, and they will perceive the story of a chaser and the chased and expect an ending with a capture; no language is needed

  3. Really liked this, especially the part where I am rewarded for being 3 articles deep on a meme (and the part where I am rewarded for joking about it, and the part where I am rewarded for being honest about this, and the part where…) The thing that stuck with me most was
    “The particular mesaoptimizers that end up responsible for maintaining a coherent and presentable social narrative are trained to deny and make excuses for the mesaoptimizers trained to get things that aren’t socially acceptable. Your life story paves over the inherent inconsistency, and where the surface level you refuses to lie, the lie just jumps down a meta level and vanishes from your perception, becoming Just The Way The World Is.”.
    That, and the stuff about The Shadow in general really clicked somehow for me. A lot to think about. Thank you for writing this!

  4. Something in the way this machine learning is thought to work really isn’t how things work in practice. We see magic like GPT-3 and DALL-E 2 and go “oh they just trained it by throwing everything at it and it made itself.” Nope. GPT-3 was given vast amounts of coherent human data which was created over an incredible period of time (literally millennia of language evolution, culture, history, fiction). DALL-E 2 was given vast amounts of categorized image data (CLIP is literally a text-image pair preprocessing step). None of it exists in a vacuum.

    “The strawberry picking robot might have one mesaoptimizer trained on telling what is and isn’t sufficiently like a bucket to avoid punishment for putting objects in the wrong places.”

    The strawberry picking robot doesn’t ‘know’ what a bucket is. Indeed, you can replace the bucket with something that is decidedly not a bucket. A painting of a bucket, for example, and it will happily go on its merry way dropping strawberries on the painting as they roll off into the sunset. It has no qualms with that and it will not be ‘expecting’ to be punished for it. It was trained on a model that has expectations of what its reality is. If reality doesn’t match those expectations, it won’t be able to tell.

    This is much like the inner workings of the human strawberry picker. The human strawberry picker is doing a rote activity while carrying on a much more conscious conversation with their fellow strawberry picker. The bucket, therefore, does not even need to exist in their conscious space, and indeed master human strawberry pickers may not even look at the bucket for the vast majority of the time! They will fill their bucket until it weighs ‘full,’ and get another empty bucket handed to them by a bucket courier.

    One can easily envision an experiment whereby there’s an anti-strawberry picking robot that catches every single strawberry that the human “throws into their bucket” while also shining a laser in their eyes that approximates a bucket if they look down in their AR sunglasses, with a little tension on the “bucket handle” they are holding. Totally doable.

    And the human will pick those same strawberries in that same field with just as much ease as if they were filling a real bucket (map is not the territory maximalists be damned). Your optimizer doesn’t exist.

    “Another might be trying to maximize total strawberry placement and is trying to play along in order to gain power.”

    The model isn’t going to care about “maximal total strawberry placement.” It is trained on the current principle of “strawberry picking” in the context of the trainable model. The trainable model therefore will not use monkeys, who will sit there eating a few strawberries, then go bask in a tree. It’s not going to be trained on a gaggle of raccoons who scour the field for delicious strawberries at night. It definitely won’t be trained on an engineer who carefully measures every single strawberry that they pick and aligns each and every one in the bucket in such a way as to fill the volume with exact precision. The model is going to be trained on people tossing strawberries into a bucket. Your optimizer doesn’t exist.

    “Another might be trying to throw red things at the brightest object it can find.”

    The strawberry picking model hardly needs eyes. Our human strawberry picker is so good they can intuit the strawberries they’re picking as they carry on their conversation, and the model will do likewise. Bright objects would invariably be ignored by the model, much like our human strawberry picker will ignore the bright reflection of the sun off the windshield of trucks on the highway. Your optimizer doesn’t exist.

    “Another might be trying to stop the “throw red objects at the sun” mesaoptimizer from throwing objects into nonbuckets.”

    The strawberry picking model hardly needs eyes. Why is it looking at the sun? A bright object such as the sun would indicate a broken model. It would immediately stop working as its photoreceptors would be totally inoperable. Much like if a human strawberry picker were to stare at the sun for too long. They would for sure be taken off the field and told to see a doctor if they did that. Please don’t shine bright lights into our strawberry picking robot. It already has enough failsafes to simply cease working if something remotely resembling an animal comes near it. We only gave it eyes for that reason, but the strawberry picking aspect of the model doesn’t use even a 10th of the resolution of our cameras to pick the strawberries. In fact the resolution is so low the strawberry picking model would be deemed legally blind. Your optimizer doesn’t exist.

    “Another might be trying to maximize bucket luminosity.”

    The strawberry picking robot hardly needs eyes. It doesn’t care how bright the bucket is (our light input is normalized anyway). A meteor could be crashing down upon the strawberry picking robot and it could quadruple the total luminosity of the entire area. It will continue to happily throw strawberries into the bucket until the meteor crashes down and destroys it (if the brightness didn’t trigger its failsafes before then). Your optimizer doesn’t exist.

    “Another might be trying to avoid humans.”

    Oh, sort of. That won’t be the strawberry picking robot model, though. That will be the avoid animals model. Or the “stop rapidly using tensile motor limbs when animals come in the vicinity” model. The strawberry picking model has nothing to do with that. Our robot could be a “strawberry watering robot” or a “strawberry dusting robot” or a “strawberry insect removal robot.” The failsafes are their own model which permit the strawberry picker to have power to its limbs. If you get close to a strawberry robot and it doesn’t stop, you should back away, and let us know, because someone may have invented weird “mesaoptimizers” that don’t exist and may have broken our robot!

    “Another might be trying to say the things it thinks the humans want to hear from it.”

    Our strawberry picking robot has no means to talk, nor does it even have 1/10^20th of the compute of the frontal cortex. There are no diagnostics because it performs its job with 98% precision like the human counterparts it was trained upon, and any mistakes it makes are totally ignored for the labor value savings it provides. We keep them washed down and cleaned up. They run 24/7 otherwise. Our biggest issue so far has been figuring out how to lower the replacement cycle of haptic fingers so they don’t bruise the strawberries. Boy we didn’t like training the model on that, most of the migrant workers we hired complained the most about having to put those sensor gloves on. It was fine though, it only took us three days in the field to build a model that is 98% effective! Still feel bad about paying them $4 an hour. Not as bad as we felt paying those image classifiers for CLIP a penny each. The best ones were averaging about $1 an hour. Most quit when they averaged $0.20 an hour but their data was super useful anyway.

    You’re not training a strawberry picking robot that magically self optimizes and learns to pick strawberries. If we could do that we’d have general AI. We can’t. It’s going to be trained like all the other nets are trained. And it’s not going to optimize itself magically using magical data out of nowhere. It’s going to be “trained” by tens of thousands of years of human agricultural practice, tens of thousands of years of cultivation and labor, of humans who had cultivated a fruit for their consumption, who knew which seeds would provide the most sought after fruits. The exact ripeness. The exact amount of pressure needed to pull it from the stem. And all of that data will be derived from the training set we give it. And the data set will be like all the others, derived by human input. And the CPU that runs the thing will likely be less powerful than the CPU that runs your phone.

    All that aside I don’t believe mesa-optimization exists in the way that they are being described, I do not intend to make a statement about the rest of the article, the Mother and Father thing or anything else, as I would have to be more on the same page with the author to make a fair statement about that stuff. Indeed, the paper Risks from Learned Optimization… which makes mention of these “mesa-optimizers” doesn’t even conclude that they exist in advanced ML systems.

    • This is a really well thought out comment and if I could strongly upvote it I would.

      While this is all true, I don’t think deconstructing the simplified metaphor actually takes away from my point. I do think it’s worth pointing out that, as some other commenters have noted, humans are also not mesaoptimizers, and despite all the explication I do in this post, I’m still describing a highly simplified model. Look out for the followup post where I dig deeper into the implications of some of this and drill down into the biological nuts and bolts of the model.

  5. Octavia Nouzen “This is a really well thought out comment and if I could strongly upvote it I would.”

    Josh “Your optimizer doesn’t exist.”

    Fantastic. Thanks.

    Can’t wait eight months for the next one though.

    Perhaps a glossary. OOD (out of distribution), for example, won’t register for many. Same with “story”, etc.
