Month: December 2023

On Simulation Theory

I’ve already written a post on simulation theory, but I just answered the question again on Quora. I’m posting that answer here because parts of it are better than the other post, and also so that more people see my ideas on the matter, since it seems most people who come across my blog only view the last few posts.


Someone on Quora asks, “Is it possible that our perception of reality isn’t real at all? Could our entire existence be nothing but a computer simulation from an advanced civilization?” Here’s my answer.

No. This consideration is merely a product of the technologistic mindset of our zeitgeist, due to the widespread proliferation and use of computers and technology in general. It’s better to just go out and “touch grass” and regard reality as being exactly as real as it seems to be (and thus also being potentially endlessly deep and nuanced, extending all the way down to the ineffable), and thus to maintain a worldview that’s fully rich, wholesome, open-ended, potentially magical, and true to the heart.

The view supposes that endless technological development of more and more complex systems is the norm for intelligent civilizations, but I think this view is in error. It’s just a projection from our own current industrial fever dream. Intelligent species probably realize eventually that technology—at least highly complex, ubiquitous/immersive, and whatever-the-opposite-of-holistic-and-organic-is technology—takes away more from our wellbeing than it gives. It’s beneficial in specific, overt, and immediate ways, while being much more deleterious in more subtle, long-term ways, so it’s a big trap.

People aren’t happier now than they were 50,000 years ago. In fact, most people are more often unhappy than they are happy, and depression in society is rampant, its frequency only increasing. Technology separates us from nature; hence it separates us from life, thus making us unfulfilled and therefore constantly, anxiously looking to fill the spiritual void with whatever superficial pleasures we can.

A Native American chief once remarked that the whites seemed mad, like they were constantly looking for something they didn’t currently have. And in those days, people were defecting to Native American culture in such large numbers that it became a problem for white society. Anyone who spent some time in Native American culture never wanted to go back to their previous way of life.

So, this technological frenzy of ours is probably, hopefully, just temporary, like a relatively long-running fad, and it probably isn’t the norm for alien civilizations, even/especially the more advanced/older ones. The Kardashev scale is pure fiction, the invention of a highly pathological mindset.

Another problem with simulation theory is that it violates Occam’s razor. The computer that simulates our universe would have to exist in a much larger universe in order to be big enough to store and run our entire universe. That introduces a massive number of assumed entities (or one huge assumed entity, depending on how you look at it) for which there’s no evidence and hence no need in our model/explanation of the world.

Another problem with simulation theory is that a computer simulation couldn’t possibly give rise to consciousness. There are various arguments I could make for this, but I’ll just include this one, a thought experiment for the purpose of reductio ad absurdum:

1. Take each individual calculation/opcode execution and separate them across a long span of time. Is the resulting “system” conscious?
2. Remove the computation element and just have a sequence of register and/or memory states. Is the resultant information conscious? What part actually matters?
3. Take the register and/or memory states, and maybe even the internal CPU/GPU states composing each individual computation, and encode them in etchings on a marble wall. Is the resulting state of affairs conscious?
4. Instead of etching the encodings into marble, encode them into patterns of water droplets in random places spread over many clouds. Is the resulting data conscious?
5. Just interpret whatever informational patterns that already exist in the water droplets spread over many clouds as the information contained in an AI according to whatever ad hoc encoding is necessary to do that, since the particular method of encoding is arbitrary anyway… are the clouds conscious?

(Maybe the clouds are conscious, but probably not for the reason that they can be arbitrarily interpreted as encoding the digital information of an AI…)

I make more arguments in this essay: On the Possibility of Artificial General Intelligence

And even if a simulation could give rise to consciousness, what would limit any consciousness/mind it creates to one particular character/body in the simulation? A character has no physical separation within the simulation, only a highly abstract, conceptual one; its actual constituent events are distributed and interspersed widely across the system in space and time. It makes no sense. If any consciousness could come out of it (which is already an absurd proposition), it would make more sense for it to be one consciousness/mind encompassing the whole system, so you wouldn’t have the experience of being an individual in a single body that you have now.

I’ve already written an essay linked to below about this question, but it probably doesn’t say anything I didn’t say here: No, We’re Not Living In a Simulation


On Validation-Seeking

Validation-seeking is a very common psychological pattern that is often admonished in the name of disseminating wisdom or being helpful. The imperative to stop seeking validation is given without any better alternative or any explanation of how to change, as if the person had simply never considered that seeking validation might be problematic and could just turn it off at the drop of a hat.

The dictum comes from a place of seeking out imperfections in others and trying to fix them in the most facile manner. It also comes, on an unconscious level, from a place of wanting to appear wiser than others, to seem to have one’s sh*t together, and of trying to glean psychic energy from the people one imagines will take one’s advice.

In truth, people are wiser in their decisions and actions than we think. People tend to do the best thing they can, given their own circumstances and psychological context and conditioning. In the case of validation-seeking, it’s a primal, natural drive to constantly seek wholeness in the only way one knows how.

It may seem futile on the face of it, because most of the time one can’t get enough validation: every time they receive it, it boosts their self-esteem for a minute; the next minute they’re back where they started, looking for the next nugget of validation to come their way. But to assume this is the long and the short of it is only cynicism, or at least black-and-white thinking; it’s not that simple. When validation hits just right, it has the potential to melt a person, giving them the opportunity to rearrange their insides and truly let in the self-worth. That’s what the validation-seeker is looking for.

Or maybe I’m wrong? Either way, may the very idea of this inspire you to engage in more nuanced, less absolutist thinking and to be less condemning in general.

On the Idea of Self-Love

I posted the following observation on the social network formerly known as Twitter, and it got some positive reactions, so I’ll post it here too.

The idea of self-love is a popular one, yet it always seemed convoluted to me, as if one is separating oneself into two parts: one that is the source of love, and one that is its recipient.

The closest thing to self-love that I can relate to, and think is likely more holy, is to know one’s own value.

One person responded to this idea by suggesting that self-love could be loving one’s “ego self.” I suppose this implies that the self is already split in actuality, into something like the “higher self” or whatever and the egoic self. This seems fair enough, though I can’t personally relate to it. I don’t know how to love from something that’s not my ego, or maybe my spirit and ego are so well-integrated that there’s no separating them.

He also compared self-love to self-compassion. While it’s closely related to self-love, this idea somehow sits a little better with me. I’ve tried practicing self-compassion now and then, and I think it’s actually useful, healing, and healthy, even though it does seem to imply creating a model of oneself and regarding it as a separate third person.

The idea of self-compassion seems analogous to the helpful idea that one should talk to, regard, and judge oneself the same way one would a good friend, especially in order to avoid the common psychological pitfall of being one’s own worst enemy.