Morality
A philosophical and psychological analysis of morality.
An Epistemic Acknowledgement
Our brains are continuously building personal, predictive models of reality. When we observe a phenomenon, we feel an often under-acknowledged need to determine its cause. In theory, after observing enough phenomena, our brains begin to reverse-engineer the laws of physics, understanding reality with increasing scope and precision. In practice, however, the mind can become trapped in loops that prevent it from accurately refining its model. Two such cognitive loops, which I find often go hand in hand, are the denial or skewing of observed empirical evidence and the explanation of physical mechanics as a black box.
The black box to which I am referring is faith: don't question the system, just trust that it works and perfectly explains any and all observed phenomena. The moment one starts pursuing faith is the moment they stop pursuing accuracy. Faith is a dangerous feedback loop of epistemic stagnancy, as it, by design, hushes the vital algorithm of predictive modeling.
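The contrast between genuine refinement and the faith loop can be sketched with a toy Bayesian update. This is my own illustration, not a formalism the post itself uses: a belief held with probability strictly between 0 and 1 moves with evidence, while a belief pinned at exactly 1 is inert by construction, no matter what is observed.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability of a hypothesis after one observation.

    prior: current credence in the hypothesis, in [0, 1].
    likelihood_if_true: probability of the observation if the hypothesis holds.
    likelihood_if_false: probability of the observation if it does not.
    """
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# An open model: starts uncertain, and surprising evidence moves it.
open_mind = bayes_update(0.5, 0.9, 0.1)        # ≈ 0.9

# A "faith" prior of exactly 1: the (1 - prior) term is zero, so
# no amount of contrary evidence can ever shift the posterior.
certain_mind = bayes_update(1.0, 0.1, 0.9)     # stays 1.0
```

The second call is the loop in miniature: once a model assigns certainty, the updating machinery still runs, but it can no longer do anything.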
Faith becomes even more dangerous when it is relied upon so heavily that its practitioner must contort their perception of observed reality in order for their internal model to predict it accurately. This is blatant dogma, and among apologists, it seems to be treated as a formal practice. For examples, my mind first goes to the age of the earth, the existence of macroevolution, and the contradictions between the four biblical gospels: from a rational standpoint, each of these topics has a most plausible answer, yet each is still hotly debated between faithful groups, or between a faithful group and the scientific community.
I use examples from traditionally Christian doctrine because I am most familiar with it and because it is one of the more ubiquitous dogmatic systems, at least in the United States. However, I wish to make a crucial clarification: spirituality is not inherently dogmatic, nor is secularism inherently rational. If the only explanation one can come up with for a phenomenon without creating inconsistencies is a supernatural one, then until they switch their perspective from "my current best explanation" to "the only valid explanation", they aren't really being irrational. Conversely, if an atheist makes the universal claim that a deity could never exist because everything is explainable by science, they are being irrational; a more epistemically humble position would be, "We should attempt to explore the causes of phenomena using science instead of defaulting to attributing them to a deity." An epistemic heuristic I've noticed to be fairly reliable is that agnosticism is safe, while gnosticism is dangerous; supernaturalism and naturalism (and by extension, theism and atheism) are irrelevant.
I must be prudent here: I grew up surrounded by highly faithful, highly indoctrinating influences that were epistemically oppressive to say the least. I recognize that at first, my bad experiences pushed me past the center of the bell curve of accuracy, from blind faith in the God of the Bible to vehement opposition to any and all spirituality. I suspect this was a mechanism I subconsciously developed as a teenager to avoid becoming susceptible to the ubiquitous influence. Epistemic humility, however, reminds me of what I stated in the previous paragraph: spirituality is not inherently irrational, nor is secularism inherently rational. I have to periodically ask myself whether a belief I hold is drawn from rationality or from a push away from my childhood influence, the latter of which is its own, similarly deleterious kind of dogma.
"To know that we know what we know, and that we do not know what we do not know, that is true knowledge."
— Confucius
The Psychological Foundation
Our brains, by design, recognize the concepts "good" and "bad". Though these concepts don't exist in physical reality, they are heavily ingrained in our perception of the world; therefore, they are highly relevant to our brains' reality-modeling process. According to Jonathan Haidt's moral foundations theory, which builds on his social intuitionist account of moral judgment, our minds assume six associations of good or bad with psychologically recognizable concepts. These foundational associations are called, quite simply, "moral foundations". They are as follows: Care/Harm, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, Sanctity/Degradation, and Liberty/Oppression.
The theory explains that while we are young, certain events and ideas we learn to associate with these foundations trigger "gut feelings" of right or wrong. Those feelings later develop into authentic moral beliefs, which are typically defended later in life with arguments that take the moral conclusion, rather than empirical evidence, as the starting point of reasoning, a pattern Haidt described as "the emotional dog and its rational tail".
For a rationalist, the obvious issue with this pattern is that morals are ultimately grounded in evolutionary mechanisms that prioritize reproductive utility while remaining indifferent to objectivity. I'd say this is fairly obvious in most deontological frameworks, while still being foundational to consequentialist and virtue-ethical ones as well. Think anywhere from the biblical commandment "Thou shalt not steal" (mostly Fairness/Cheating), to the US doctrine of "justifiable homicide" (Care/Harm; protect the vulnerable), to something as systemically complex as "the greatest amount of good for the greatest number of people" (Care/Harm). Yet these are all ultimately baseless; rationalism, at least within my current scope of knowledge, demands nihilism.
A Pivot to Probability
So what can we do? What teleological trajectory should we adopt within a moral framework that yields no direction? Here is the best answer I currently have: the pursuit of decision-making power. Since later empirical information could reveal a particular decision that ought to be made, the best logical use of our current understanding is to maximize our chances of being able to make that decision when the time comes.
But this introduces complications of its own: we need information to determine what kind of power actually constitutes decision-making power. I will loosely borrow the Graves Model to demonstrate what I mean. Suppose your first instinct is to become supremely strong, vigilant, and agile; you pursue physical power. However, when it comes time to make a decision, all your attempts may be thwarted by an institutional structure, such as law enforcement. Suppose instead you work your way up in a company, stockpiling wealth and gaining influence; you pursue institutional power. This gives you a much stronger bulwark against, and degree of control over, potentially rival institutions such as law enforcement. However, it depends entirely on the integrity of your institution, which leaves you vulnerable to systemic manipulators who can upset that integrity, such as a charismatic spokesperson who convinces the individuals at the base of the institutional hierarchy to abandon your institution. Suppose instead you track global trends, learning the global system and the skill of exploiting the butterfly effect; you pursue systemic power. It's a vicious cycle with no indication of an end.
To gain power within any of these apparent tiers, or even to understand the concept of maximizing power within a power-limited system, we must first pursue knowledge. Knowledge therefore behaves as its own sort of power, not fitting cleanly within the observed tiered power system. Thus, if our goal is to maximize power, then the pursuit of knowledge is, to an extent, a teleological necessity. What that extent should optimally be, and which types and areas of knowledge bear the most teleological weight, are questions whose answers also require the pursuit of knowledge. This is largely why I'm so fascinated by epistemology and teleology specifically: morally, which of the things we could know should we strive to know, and how do we go about knowing them?
How we spend our cognitive resources is only the first layer. Recognizing that these resources are limited is the second. We are only given so much time to pursue knowledge, as this mechanism is interruptible by the death of the conscious mind. We are also only given so much cognitive processing ability to use within this finite time. This is why I currently advocate for ideals related to the maximization of the perpetuity of consciousness, such as:
- Transhumanism: upgrade the substrate running the conscious algorithm to maximize cognitive processing ability and minimize physical fragility.
- Cosmism: spread consciousness throughout the cosmos to prevent a localized catastrophe from becoming a consciousness extinction event.
A Final Note
It's worth noting that throughout this analysis, rationality is assumed to be a valid determinant of truth; however, this assumption is only verified by rationality itself. Emotions may tell you otherwise and claim that they are the only valid means. I assume rationality as my base determinant not because it is verifiably true (arguably, nothing is, without some form of circularity) but because it is simply the algorithm being run by my biological substrate. In other words, rationality is the tool my mind is fundamentally employing to decipher truth, and regardless of the tool's merit, it is the tool being used to seek the very information shared in this post. I cannot change this, or at least I do not know how, or what would lead me to try.
I cannot see every variable, and consciously attempting to maximize how many I do see, while bearing the weight of maximizing the perpetuity of consciousness, puts so much strain on my biological hardware that it imposes an epistemic risk of its own. So for now, I casually pursue knowledge and share it through mediums like this one while caring for my substrate's integrity. In practice, this looks like engaging in intellectual activities, such as reading, writing, and debating, while tending to my physiological and non-physiological needs, such as good nutrition, healthy relationships, and financial security. One can, of course, take the biological hardware into account when making this conscious attempt, but that's a concept I'll leave for a future post. For now, I'll leave you with this: the best way I've found to live life so far is to pursue knowledge and contentment, accepting what you cannot control and acting wisely with what you can.