Entry Nº 003 · May 14, MMXXVI · 11 min

Rationalism as Tool, Not Telos

A reflection on threat-aversion, substratal motivation, and rationalism's role as tool rather than telos.


A Read Between the Lines

I like to periodically examine my writings: not just my blog posts, but also my conversations with people and language models, the bios and posts of my online profiles, and my saved information in various AI apps. I also like to run these writings through an AI model prompted to play devil's advocate, in the hope of identifying my blind spots. Shortly after writing my blog post about morality, I uploaded a copy of it, along with the introductory post, to Claude and had Opus 4.7 search for logic errors. It highlighted that I made somewhat of a logical leap when I said, "So now, I casually pursue knowledge and share it through mediums like this one, while caring for my substrate's integrity." In other words, I made a very strong argument for a particular telos, only to dismiss following it as too stressful in practice. I was somewhat aware of this already, but didn't think about it much, and my desire to hurry up and click "Publish" definitely didn't help. Regardless of the cause of my logical failure, it underscored a core issue in the way I process information: I am disproportionately threat-averse.

Without superfluous details, my childhood environment contained a constant, legitimate threat, one that I quickly learned was best mitigated by emotional detachment, rigorous foresight, and careful planning. I suppose this to have been, at least in part, the origin of my drive for epistemic humility and goal-orientation. However, since these were built on the foundation of threat prediction and aversion, they simultaneously fuel and are fueled by unhealthy levels of stress and anxiety that ironically degrade my capacity for accurately processing information. Generally, the drive behind my thoughts has been not dopamine-driven curiosity but cortisol-driven security-seeking.

There is more irony here than the physical degradation of my mind. One habit I had was the obsessive filtering of what I was "allowed" to think about, passively consume, or work on. Everything had to be productive, with my enjoyment of something taken into account only as a motivator to make me more productive, and exploited accordingly. A specific example of this is maximizing the SNR (signal-to-noise ratio: how much "useful" content I see relative to how much "useless" content I see) of my social media inputs, achieved by making Mastodon my only form of social media and aggressively curating my feed for signal. What I didn't take into account is that 1) I can't predict utility that effectively, 2) maximizing SNR the way I did actually lowered my total signal, and 3) the stress of forcing myself to look at seemingly boring posts and avoid entertaining ones caused me to occasionally break down and reinstall Instagram or something similar. By putting artificial guardrails on my inputs, I had actually created a needlessly stressful and counter-productive workflow in place of a modestly utility-seeking dopamine machine. I currently find a partially-curated, partially-dopamine-driven X and Reddit to be the optimal middle ground.

A good metaphor for this dynamic is Edward C. Tolman's rat maze experiment. In this experiment, three groups of rats were placed in a maze: group 1 had food at the end every day; group 2 never had food; and group 3 had no food until day 11. Everyone talks about how group 1 gradually became better at finding the food daily, while group 3 immediately knew how to get to the food on day 11, proving that learning can happen independently of reward/punishment stimuli. But the part that stands out to me is that group 3 didn't merely match group 1 in performance; it outperformed it. The curious ones that had moved around aimlessly were better equipped to accomplish a goal than the ones that had been moving around for the purpose of accomplishing that same goal the whole time.

I stated in my post about morality that using rationality to argue that rationalism is a valid means of truth-seeking is circular, and therefore irrational by its own standard. I still use it to seek truth, not because I believe it to be inherently truth-revealing, but because it is simply the algorithm my biological substrate is equipped with for seeking truth. I have a few retrospective thoughts about this. Firstly, I should have clarified something I thought to have been obvious at the time of writing: just because I rely on rationalism doesn't mean my mind uses it perfectly; there are still cognitive biases that are virtually inevitable. It currently reads like a claim that I am a perfectly rational creature; that is not what I intended to communicate. Secondly, I actually think I only trust my logic's ability to determine truth above any other method because of my feelings, which are, ironically, not inherently rational. As I explained earlier, my dedication to rationality was born of the anxiety of threat; heuristics have essentially determined that my experiences with rationality generally validate logic's role in truth-seeking, which my emotions have reflected. My morals are still an instance of social intuitionism, even if those morals go directly against their own premise. Thirdly, from a pragmatically teleological standpoint, substratal motivation exists upstream of the mathematically probabilistic telos of maximizing decision-making power: two motivations that are fundamentally distinct, yet not mutually exclusive.

The Self-Referential Map

I opened that morality post by describing how our minds build internal models of reality using perceived data points. Well, if the learning algorithm and the data point-perceiver are downstream of the substratal motivation, then why not apply the internal model to the motivations themselves to better understand and follow them? To make this pragmatic, motivations can be treated as signal for met or unmet needs.

I suppose this is a less eloquent but more precise way of saying, "Your purpose in life is to find your purpose", as it suggests the need to figure out what truly motivates you, and to make better-informed decisions accordingly. I would think that many would interpret this to mean something like reflecting on their feelings and determining their causes, which I would at least agree is a crucial component. But nothing suggests using self-reflection alone: your experienced needs are simulated by a biological substrate that has for some time been rigorously studied in multiple fields of science. Use those resources.

I suppose I should clarify science's role here to be supplementary rather than baseline, however, since the things your body tells you are empirical data points: this method treats subjective dissatisfaction itself as the need failing to be fully met. The pragmatic problem is the actual cause of that dissatisfaction, regardless of whether its factors are environmental or strictly perceived. If a predictive model in science inaccurately predicts what you are feeling in some context, then the model is invalid, not your feelings. To say otherwise would be to commit dogma as I described it in the morality post: "...must contort their perception of observed reality in order for it to be accurately predicted by their internal model. This is blatant dogma..." Be wary, however, for it is also very possible for you to misinterpret the underlying needs behind your own feelings, or even to misinterpret the feelings themselves, as our rational brain wants so badly to parse, label, and categorize feelings that are highly abstract and messy.

With this in mind, I think an effective workflow for this kind of reflection is:

What am I feeling? How does my current self-understanding detail what I am feeling? For example, how are these feelings labelled and categorized, and what concrete needs could they potentially map to? What inconsistencies with my past experiences and current theories does this create, if any? How can I refine my current self-understanding to eliminate any inconsistencies?

And, to turn this insight into action,

How can I best meet these needs? Can they be broken down to individual concrete factors, which can be addressed individually?
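As a rough illustration of the workflow above, one pass of it could be captured as a structured record. The structure and every name in this sketch (`Reflection`, the example entry, and so on) are my own hypothetical scaffolding, not a prescribed format:

```python
from dataclasses import dataclass, field

# Hypothetical record for one pass of the reflection workflow.
@dataclass
class Reflection:
    feeling: str                    # What am I feeling?
    labels: list[str]               # How does my self-understanding label it?
    candidate_needs: list[str]      # What concrete needs could it map to?
    inconsistencies: list[str]      # Conflicts with past experience or theory
    refinements: list[str]          # Updates to my self-understanding
    actions: list[str] = field(default_factory=list)  # How can I meet the needs?

# An invented example entry, purely for illustration.
entry = Reflection(
    feeling="restless while working",
    labels=["anxiety", "boredom"],
    candidate_needs=["sensory stimulation", "goal achievement"],
    inconsistencies=["I predicted this task would feel satisfying"],
    refinements=["long, undifferentiated tasks need milestones"],
    actions=["split the task into sub-goals", "take a short walk"],
)
```

The point of writing it down this way is that the "inconsistencies" and "refinements" fields force the iterative step: every entry either confirms the current self-model or updates it.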

I have found that using this model has made my self-perception feel increasingly authentic, something others might call "Finding [my] purpose". While I must flag the obvious potential for confirmation bias, it is worth asking whether understanding yourself on this level could lead you to make more fulfilling decisions, as you can better understand, articulate, and align yourself to your needs.

While this method isn't rigorously scientifically verified, I have been refining it since high school. At that point, I still assumed myself and my biological substrate to be one, and saw the motivations as teleologically foundational, which led me to create a pseudo-scientific model of human motivation drawing from neuroscience, cognitive psychology, and other pseudo-scientific models like Maslow's hierarchy of needs and the Graves Model. The part that makes this model actually useful for myself is the iterative refinement: I continually refine it with my own positive and negative experiences, which makes this a pragmatic model of my own fulfillment rather than a validated psychological model. While this model is consequentialist (and therefore, goal-oriented) in nature, it does not suggest that goals should always be consciously pursued, because it takes the impact of obsessive goal-pursuit into account: the stress, motivation depletion, and opportunity cost created by this pursuit can — as we learned from Tolman's experiment — actually have a net negative impact on the achievement of the goal itself.

A final caveat and reiteration is that I now see the system as "mind in animal" rather than "mind as animal", which circles back to transhumanism, asking questions about topics like editing the substrate's motivations with the right technology. I don't think those questions or goals should be abandoned, but I think they need to be reframed as secondary to fulfillment, as they exist downstream of these needs; they are teleological demands of the rationalist system that is only given weight by the teleological demands of substratal motivation itself.

Cognitive Utility Theory

This model consumed me for a while, and following it ironically caused its hold on me to loosen. It was perhaps the most significant thing I produced while under the influence of obsessive threat-aversion, as I treated it as foundational to determining and neutralizing real threats, thus making it a way to crystallize my telos and capitalize on the knowledge of myself and my environment to pursue it. It was, and to an extent remains, an alternative to religion that applies epistemic humility where religions might traditionally apply faith.

While I did extensive research and reflection, and even some self-experimentation, I knew as a high-schooler that my knowledge was very limited and that I was probably violating several principles of science in developing what I was calling a "scientific theory". This led me to pursue a minor in psychology alongside a major in business administration after graduation, as I would be better able to refine and understand the model, and could dedicate my career to founding and leading a utilitarian business focused on refining and applying the model professionally. Not long into Advanced General Psychology — which I, at the time of writing, still have about a week to complete — I realized several significant flaws in my theory development process, such as the need for falsifiability, which I hadn't taken into account previously. This is also when I recognized that my "theory" is fundamentally pseudo-scientific, even if it is plausible. I will take credit for reading and applying scientific works such as Daniel Kahneman's Thinking, Fast and Slow, which helped me recognize many of my own cognitive biases early, such as WYSIATI and its effects like theory-induced blindness.

I labelled myself a "Prudential Hedonist Consequentialist": one who seeks to maximize the net amount of pleasure experienced throughout their entire life. This was the most precise label I could determine, though it would be more accurate still to replace "pleasure" with "satisfaction". How I quantify satisfaction and dissatisfaction is grounded in my model, Cognitive Utility Theory (CUT), and is calculated from a map of chemical messengers with predefined baselines and calculated weighted values of satisfaction/dissatisfaction. I plan to share my model in more detail, but will spare you the majority of the details in this post, lest it triple in length.

The most commonly recurring, albeit dramatically oversimplified, theme I've found in CUT so far is that I have six primary needs, which are as follows, in no particular order: sensory stimulation (endorphins), security (cortisol), goal achievement (dopamine), contentment (serotonin), relationship cultivation (oxytocin), and relationship protection (vasopressin). Applying this pragmatically implies bringing the chemicals to an experientially healthy level, which entails understanding which things cause them to be released and acting accordingly. It seems simple in theory but is agonizingly complex in execution, especially because of the systemic idea that no perceivable variable is necessarily irrelevant.
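To make the shape of that calculation concrete, here is a deliberately toy sketch of a weighted-deviation-from-baseline scoring. Every number in it is a placeholder I invented for illustration; these are not the baselines or weights from CUT itself:

```python
# Hypothetical sketch of CUT-style scoring: satisfaction as a weighted sum of
# each messenger's deviation from its baseline. All values are placeholders.
BASELINES = {
    "endorphins": 1.0,   # sensory stimulation
    "cortisol": 1.0,     # security (excess reads as dissatisfaction)
    "dopamine": 1.0,     # goal achievement
    "serotonin": 1.0,    # contentment
    "oxytocin": 1.0,     # relationship cultivation
    "vasopressin": 1.0,  # relationship protection
}
WEIGHTS = {
    "endorphins": 1.0, "cortisol": -1.5, "dopamine": 1.2,
    "serotonin": 1.3, "oxytocin": 1.1, "vasopressin": 0.9,
}

def net_satisfaction(levels: dict[str, float]) -> float:
    """Weighted sum of each messenger's deviation from its baseline."""
    return sum(WEIGHTS[m] * (levels[m] - BASELINES[m]) for m in BASELINES)

# Example: elevated cortisol outweighs a simultaneous dopamine gain,
# so the net score comes out negative despite the "achievement" boost.
stressed_achiever = dict(BASELINES, cortisol=1.6, dopamine=1.4)
print(net_satisfaction(stressed_achiever))
```

Even this toy version shows why execution is agonizingly complex: the score depends on every messenger at once, so "no perceivable variable is necessarily irrelevant" falls directly out of the math.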

As stated, I will share more about my specific model in the future, partly for my readers' benefit if they find any utility in it. But the main point I want to deliver with this post is that our sense of rationalism comes from substratal motivation, so prioritizing that motivation first swaps their roles: rationalism becomes the tool, wellbeing the telos. It's important to remember the needs of your mind and body, and to care for your own health. We can still look ahead and pursue ideas like transhumanism, but in the spirit of meeting our own needs like security or achievement, rather than of rational telos.

Alex·theWebCrawler
What's next

The next entry is still being spun. In the meantime, the journal index will accumulate everything as it appears.