In a recent article, I talked about the liberal morality crisis that sees modern liberals accepting certain morally abhorrent practices. I promised additional articles focused on teasing out some of the factors I think contribute to this.
I mentioned Armin Meiwes, a man who murdered, dismembered and ate a willing victim for sexual pleasure in 2001—and how researcher Jonathan Haidt found that liberal students couldn't condemn his behavior on moral grounds.
Here I'd like to explore one factor in liberals' analysis of this and other scenarios: faith in reason.
Historically, we were the party who favored Enlightenment values—reason, logic, and the scientific method—over other attempts at knowledge acquisition: religion, prayer, tradition, divination, and intuition, for example.
It's indisputable that reason has a better track record than many of these at establishing objective, verifiable facts. But reason must start with axioms, which it cannot generate on its own. An axiom is “a statement taken to be true, to serve as a premise or starting point for further reasoning and arguments.” Axioms are chosen by humans and are based on values. Values are not generated by reason, either, but arise from somewhere else—perhaps a collaboration of embodiment, intuition, experience and empathy.
This is why I employ the term faith: at best, reason is only as good as the axioms and values it uses as inputs. At worst, those of us who value reason are prone to overvaluing it—falling into black-and-white, all-or-nothing thinking; resisting nuance; and undervaluing intuition and other factors informing truth, especially when it comes to moral and social truths.
The debate between Kantian and Utilitarian ethics has raged on forever without resolution precisely because neither is perfect. We've yet to devise a perfect moral system, based on rationality or anything else. Thus, humility is in order.
Those who won't condemn sexual cannibalism, I believe, are probably thinking some combination of the following:
If I accept homosexual behavior as harmless, I must accept all other unusual sexual behavior as harmless
If I can't find proof of coercion, I can't consider any sexual behavior immoral
If I'm aroused by unusual things, I can't condemn anyone else's arousal patterns
If I don't want laws banning unusual sexual behavior, I can't object to any such behavior on moral grounds
But these statements depend upon unstated prior beliefs which may need revision. The first relies on the slippery-slope fallacy. The last assumes a need for consistency between morality and the law, an assumption that allowed Christians to outlaw sodomy in the first place.
Here’s an example that avoids the apparently confusing factor of sex. It requires that I introduce, for the benefit of those who did not spend the early days of the World Wide Web immersed in niche discussion forums, the LessWrong community.
The LessWrong community is a gathering of economists and philosophers who believe all the world's problems, including the moral ones, can be solved with math (particularly, Bayes' theorem). Its members are also, unrelatedly, incel-adjacent, transhumanist, and obsessed with overcoming death and disease through cryonics, even if it means eternal life as a brain in a jar.
Back in the day, when I read and occasionally contributed to this community, founding member Eliezer Yudkowsky posed the “Torture versus Dust Specks” dilemma. It goes like this: Which would be better? If one human being were “horribly tortured for fifty years without hope or rest”? Or if a large number of people (3^^^3, to be exact) got a “barely” noticeable speck of dust in their eye?
Using math, the author and at least one other charter member found torture to be the “obvious” choice. Something about utilitarianism and scope insensitivity and the net total harm done to the world.
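For readers curious what that math looks like, the utilitarian summation can be sketched in a few lines. Everything here is my own illustration: the tetration helper, the harm units, and the million-to-one torture-to-speck ratio are assumptions for demonstration, not figures from the original post.

```python
def tetration(base: int, height: int) -> int:
    """base^^height in Knuth up-arrow notation: a right-stacked power tower."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

# 3^^1 = 3, 3^^2 = 3^3 = 27, 3^^3 = 3^27 = 7,625,597,484,987.
# 3^^^3 = 3^^(3^^3) is a tower of ~7.6 trillion threes: far too large
# to compute, but provably larger than any fixed finite number.

# Illustrative, made-up harm units: suppose fifty years of torture is
# "worth" a million dust specks. Even the vastly smaller 3^^3 people
# already outweighs it under simple summation:
SPECK = 1
TORTURE = 1_000_000 * SPECK
people = tetration(3, 3)          # ~7.6 trillion
print(people * SPECK > TORTURE)   # True
```

Because 3^^^3 exceeds any finite ratio you could plausibly assign between torture and a dust speck, the summation always comes out in favor of the single torture. The arithmetic is correct; the question, as I argue below, is whether the summation was ever the right model.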
It was at this juncture that I left the community.
I believe somewhere on the LessWrong site—I can't find it now—the management asserts that reason is the best tool available for getting to right answers. But this is an axiomatic belief, and the “right answers” it seeks are also arbitrary, when we're working in the domain of morality. Thus, upon learning that our mathematical model prescribes torture, we could just as easily conclude “our mathematical model failed” as “we must accept torture.”
We can ponder what went wrong here. Perhaps the community was too focused on the correct calculations to revisit whether math was the tool for the job. Perhaps a man who'll never watch his mom endure fifty years of torture in the service of his thought experiment can bask easily in the luxury of his ignorance.
More than likely, key data was overlooked. For example, torture is more than the sum of its parts, affecting not just the body, but the will to live and the fitness of the world for living in.
I'd take a dust speck in the eye to save someone from torture, and I suspect many of the other 3^^^3 would do the same. What do we make of that? If the claim is that the dust specks are worse—worse according to whom?
Perhaps the avoidance of displeasure—especially at the scale of a dust speck in the eye—is not a worthwhile goal. We know it doesn't lead to happiness, even if we don't understand the mechanism at work. Haidt has observed that humans are “antifragile,” becoming stronger and happier in the face of duress. The malaise and ennui brought by modernity and its comforts confirm this claim, however invested we are in believing otherwise. The author himself notes that if his skydiving trip “causes the world to get warmer by 0.000001 degrees,” and everyone chooses to go skydiving anyway, “we all catch on fire.” But who wants to live in a bubble-wrapped world, lest the Earth warm imperceptibly or a speck fly into an eye? The quarantine of the last couple of years gave us a taste of safetyism at the cost of joy. Most of us were glad to see it end.
Something was overlooked in Yudkowsky's analysis. We don't need to determine exactly what it was before we reject torture.
In an undergrad course I was introduced, via an essay by philosopher Jonathan Bennett, to two moral actors whose reason required them to “override” their “human sympathies.” The first was the fictional Huck Finn, who helped his enslaved friend Jim escape to freedom, despite his inability to reconcile this action with the moral doctrine of his time. The second was Nazi officer Heinrich Himmler, whose sympathies made it difficult for him to carry out executions, but who ignored his feelings to perform a job he thought aligned with correct doctrine. In each case, reason led the actor astray, while moral intuition prescribed the compassionate course of action—or would have, had the actor heeded it. Bennett concludes that while we shouldn’t write our sympathies a “blank check,” we should give them “great weight,” suspicious of “any principle that conflicts with them.”
I recently heard fundamentalism defined as the belief that a single factor explains every phenomenon. The LessWrong community, in its defense of reason at all costs, is fundamentalist. Its moral framework, though a rationalist one, allowed for torture—just as the moral framework of religious fundamentalists has allowed for torture.
Reason is good. We can trust it for many applications. But we can also say, “I'm not comfortable with this,” even if we can't yet articulate why.
This is brilliant, plainly stated. Your last sentence coincides with a feeling I’ve had, that intellectuals can run verbal circles around “normies” and declare themselves winners because their opponents don’t have the vocabulary or the inclination to match them. That doesn’t make them right; it just makes them clever.
I also browsed the LessWrong community back in the day. The thought experiments were interesting, but the explanations and responses were sometimes unsettling. It became obvious that rationality alone, without any other guiding principle or solid grounding, could lead you to some bizarre and dangerous places.