Defining Arrogance
What if you are a brain in a vat? Suppose scientists have invented a system in which they hook wires up to a human brain suspended in a vat of some special fluid, and these wires are connected to a computer that perfectly simulates the signals the brain would receive if it were in a real body in the real world. You can’t really, truly know you are not a brain in a vat, because you can’t step outside yourself to observe yourself. We are trapped in our bodies and in our brains. Everything we perceive, every observation we make about the outside world, is filtered through those brains and those bodies. Objective reality exists, but we can only perceive it subjectively: by the time it becomes an observation, it includes both objective and subjective components; it is no longer objective.
We’re probably not brains in vats [citation needed]. But we do have differences in the subjective filters our observations pass through. Just as there is variation in our sensory perception, there is variation in our internal processing (e.g., thinking and feeling). For example, consider risk aversion: there isn’t a single Right or Rational way to decide which risks are reasonable and which are unacceptable. Even in a rationalist framework where we try to quantify every aspect of a decision in terms of probabilities and expected value, there is always guesswork involved in assigning probabilities to real-world phenomena. People with higher risk tolerance have higher variance in their outcomes – some succeed wildly, some fail catastrophically – whereas people with lower risk tolerance tend to have outcomes clustered closer to the mean, with fewer outliers. In an individual case, there isn’t a clear way to determine which level of risk tolerance is Correct.
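The claim about risk tolerance and outcome variance can be illustrated with a toy simulation (the gambles, probabilities, and payoffs below are invented purely for illustration): two strategies with identical expected value but different risk profiles yield roughly the same average outcome, with very different spreads.

```python
import random
import statistics

def simulate(p_success, payoff, trials=10_000, seed=0):
    """Simulate a repeated gamble: each trial pays `payoff` with
    probability `p_success` and nothing otherwise.
    Returns the sample mean and population variance of the outcomes."""
    rng = random.Random(seed)
    outcomes = [payoff if rng.random() < p_success else 0.0
                for _ in range(trials)]
    return statistics.mean(outcomes), statistics.pvariance(outcomes)

# Two gambles with the same expected value (1.0 per trial):
# a "safe" bet that almost always pays a little,
# and a "risky" bet that rarely pays a lot.
safe_mean, safe_var = simulate(p_success=0.99, payoff=1.0 / 0.99)
risky_mean, risky_var = simulate(p_success=0.01, payoff=1.0 / 0.01)

print(f"safe:  mean={safe_mean:.3f}, variance={safe_var:.3f}")
print(f"risky: mean={risky_mean:.3f}, variance={risky_var:.3f}")
```

Both gambles average out to roughly the same value, but the risky one scatters its outcomes far more widely – which is the point: nothing in the expected value alone tells you which risk profile is Correct.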
I would like to extend this to “intelligence” and every other fuzzy concept we use to privilege our own processing-machines (brains+bodies) over others’ in terms of access to Truth. Any argument that depends on the objective superiority of one person’s subjective internal processes over another’s, is arrogant. I claim that arrogant arguments should be avoided, on both ethical and epistemological grounds – and any framework that can only be justified with arrogance, should be discarded.
The Scientific Method and Inter-Subjectivity
Even if we cannot evaluate objective Truth objectively, it’s clear that some things are False. Abstractions, like pure math, which define their own premises and logic systems, can tautologically be evaluated as True or False (1+1=2; no squares are circles). Claims that involve the material world are generally messier – there are confidence intervals and probability distributions. Peer-reviewed papers in the natural sciences generally do not make certain claims about Truth, but they do contribute to Knowledge: Our inter-subjective understanding of Truth. A conclusion from a set of inputs is inter-subjectively true if all subjects evaluating that set of inputs agree that it is true.
In theory, the peer-review process and the scientific method are not arrogant. The scientific method involves posing hypotheses, collecting data through reproducible experiments, and refining the hypotheses in response to the results. The reproducibility aspect is a guard against arrogance: A good scientific paper describes its methods and assumptions in detail. The paper then exists outside of any subjective mind – there is an objective, observable collection of words and diagrams. From that write-up, anybody should be able to carry out the same experiment and obtain the same results. If anybody carries out the same experiment and obtains inconsistent results, that calls the conclusions of the original paper into question. “You must have done it wrong because you’re not as smart as Dr. Freud” is not a valid scientific argument.
Similarly, specialized knowledge is not the same as arrogance. It’s not strictly true that absolutely anybody could pick up any science paper and reliably reproduce the experiment – You have to be fluent enough in the jargon and familiar enough with that field’s assumptions to be able to parse the paper. That is not the same thing as arrogance, because it is externally defined: If anybody with a biology Ph.D. can repeat the experiment and get the same results, then the biology Ph.D. becomes part of the input – a biology Ph.D. is something that is observable in the external world; it corresponds to a common set of experiences and an externally document-able body of knowledge; it does not depend on an intrinsic and un-measurable notion of “smartness”. Deferring to climate scientists on climate change is not stratified in the same way that deferring to Lacan on the causes of homosexuality would be, because if I went through the training to become a climate scientist and read the same materials and thought in depth about the same evidence, I could validate their conclusions myself – I do not need to trust their character or “smartness”, I need only trust their external training.
In practice, of course, science as a discipline is messier and does not always reflect these idealized principles. Richard Feynman’s “problem-solving algorithm”, though facetious, nicely illustrates a non-reproducible process, which would require an arrogant criterion to break ties among multiple bodies applying it to the same input and reaching different results:

1. Write down the problem.
2. Think very hard.
3. Write down the answer.
The biology Ph.D. example is dis-analogous because, unlike “think very hard”, the training that a Ph.D. adds is articulable: It could be written down as additional steps and then followed in a way that satisfies inter-subjective truth.
Implications for Power/Politics
You’ve heard it by now in some form or other: Effects matter more than intentions. The colloquial understanding of this maxim in social justice circles does not go far enough; much conflict and confusion could be avoided if we stopped caring about other people’s intentions entirely, except insofar as they predict behavior. What is intent? It is why someone did what they did. But what do we mean by “why”, and why do we care?
“Why x” can mean either “what caused x” or “for what purpose was x done”. I hold that both of these questions are valuable only insofar as they help predict and influence future events. We care what caused x because we want to tweak those causes so that x happens more or less frequently in the future. We care “for what purpose” because we want to know whether the person is likely, in general, to be working toward purposes we share, which affects how we interact with them in the future.
Nowhere in this is it necessary to ascribe moral judgment to someone as a person, nor is it necessary to second-guess someone’s self-report of their own motivations. Going beyond this functional evaluation into uncovering True motivations was the project of Freudian psychoanalysis. Freudian psychoanalysis as an empirical, clinical practice has essentially been debunked, but it has carried on under that name in critical theory, literary criticism, and postmodern philosophy.
The Inherent Arrogance of Psychoanalysis
Freudian psychoanalysis is all about knowing you better than you know yourself: Your subconscious mind, memories you’ve suppressed, secret desires you didn’t know you had, and sexual subtext for just about everything you do. Freud infamously proposed the Oedipus Complex: The idea that people in general (by which he meant men in general), as very small children, subconsciously want to have sex with their mothers and kill their fathers. It’s a non-falsifiable hypothesis: If that doesn’t resonate with you, it must be because you’ve suppressed your memory of it. Dr. Freud knows best!
At the height of its popularity in the United States, analysts “uncovered” memories of events that had supposedly been repressed, but which were proven never to have occurred. The psychological and psychiatric establishment moved on: The dominant mindset toward the DSM is that it is a guide for clustering sets of symptoms which, when they appear together, tend to respond to particular sets of treatments. There are echoes of the psychoanalytic approach in some of the diagnoses, particularly the “personality disorders”, but for the most part, modern psychology is agnostic to the underlying causes of behavior and is instead concerned with inputs (treatment/therapy) and outputs (behavior). There are advantages and disadvantages to this approach – it has corresponded with a trend toward individualistic rather than systemic analysis – but the critique of Freud and the switch to focusing on inputs and outputs are on point.
He analyzed, She analyzed
What happens when you put two psychoanalysts together and they disagree? They write very long and boring books, of course. Each tries to explain the root causes, the secret motivations, the subconscious desires inherent in the other’s theories. These processes need not converge: If neither concedes, there is no obvious way for an outsider to declare whose argument is correct. At each step, the psychoanalyst claims special access to what is really going on; they claim to know your subconscious better than you know it yourself. They claim to know each other’s subconscious better than each knows their own. These are unfalsifiable claims. If A and B analyze C, and reach opposite conclusions about C’s subconscious motivations, and C has yet a third view, potentially involving A’s relationship with A’s mother, who is right? The only answer psychoanalysis provides is an appeal to one’s own authority: The better analyst is right.
Who is the “better analyst”? Suppose B is the better analyst. B knows they are the better analyst because their superior analysis skills tell them so. But what if A thinks A is the better analyst? If A’s flawed analysis skills incorrectly tell them that A is the superior analyst, then A is every bit as justified in believing A is “better” as B is in believing B is “better”. Worse, since each considers themself superior, they see no need to reach inter-subjective consensus with the inferior analyst. There is no external, measurable, reproducible way to resolve disputes about truth within psychoanalysis. Psychoanalysis requires a stratified analyst-patient dynamic, with the analyst appealing to their own authority – it is fundamentally arrogant.
Let’s not be like Freud
Consider two CEOs, A and B, who run virtually identical companies with identically exploitative business practices. A is a reformist trying to work within the system; they feel bad for the people their business practices hurt, but they wouldn’t be able to be a CEO if they deviated too much from the norm, and if people like them weren’t CEOs, worse people would fill that role instead (think Walter White/Heisenberg from Breaking Bad). B is a shameless profiteer; they care only about themself, and they want to make as much money as possible no matter who gets hurt along the way. Of course, B knows that people tend not to like bald-faced selfishness and greed, and so B tells everybody they have the same motivations and framing as A, because that’s good for business.
There is no way an external observer would be able to tell which is A and which is B without knowing in advance. I claim that it doesn’t matter. We should not be in the business of weighing souls – if A’s naiveté makes them a better person than B, that’s between them and God. Since B mimics A’s stated motivations and A matches B’s behavior, we expect both CEOs to respond the same way to the same external pressures, regardless of their “true” motivations. Both stories have equal predictive value, so if we are only interested in motivations insofar as they affect our strategy, then we do not need to decide which is which.
When the inputs and outputs are identical, by definition any conclusion about which is A and which is B must be based on something other than those externally-observable inputs and outputs. This is where we become mini-Freuds (or mini-priests): Either we trust our own internal processing of those inputs more than we trust others’, even when exposed to each other’s reasoning; or we trust someone else’s internal processing above our own. Both are forms of arrogance and stratification, and both are antithetical to egalitarian movements.
Assume good faith
Where there are multiple narratives that are equally “true” (in predictive value and inter-subjective evaluation), I propose we break ties by looking at what is more useful, instead of resorting to arrogance in a quest for Truth. Our models of interlocking systems of domination/oppression work just as well, descriptively and predictively, even if we are overly generous and assume all oppressors have the mindset that A has. Giving B the benefit of the doubt can save a lot of fuss over whether A is telling the truth about their motivations or is really a Bad Person with Bad Intentions – those conversations just make it about A instead of about the harm A’s company is causing, which is what we are actually interested in addressing. Sometimes painting CEOs as B is useful shorthand, but I believe it builds better habits (against arrogance and against disposability) to be either agnostic or optimistic with regard to the motivations of individuals, at least in our own heads. That framing is a useful way to keep conversations from derailing into questions of individual moral worth.