The perils of trying to optimize your morality
This story was originally published in The Highlight, Vox’s member-exclusive magazine.
I am a recovering optimizer.
Over the past several years, I’ve spent ages agonizing over every decision I made because I felt like I had to do the best possible thing. Not an okay thing, not a good thing — the morally best thing.
I stopped working on a children’s novel because I began to suspect it wouldn’t be useful to anyone. I berated myself for not meditating every day even though I know it makes me a kinder person. I spent a year crying over a breakup because I feared I’d just lost my optimal soulmate and was now doomed to a suboptimal life, one that wouldn’t be as meaningful as it could be, one that fell short of its potential.
I thought maybe it was just me, an anxious elder millennial with a perfectionist streak. But then I noticed the same style of thinking in others.
There was the friend who was always fretting over dinner about whether she could have a big-enough positive impact on the world through the career she’d chosen. Another friend would divide his day into 15-minute increments and write down what he did during each one so he wouldn’t waste any time. And a third friend — my best friend — called me crying because, even though she’d spent months assiduously caring for her partner’s dying mother, she worried that she hadn’t made the woman’s last days quite as happy as possible.
“My emotions got in the way,” she self-flagellated. “I wish I could just be a robot.”
I’ve particularly noticed this style of thinking in peers who identify with effective altruism (EA), the social movement that’s all about using data and reason to “do good better” and do “the most good you can do,” to quote the titles of books by two of EA’s leading thinkers. The movement urges people to donate to the charities that save the most lives per dollar. I listened as its adherents bemoaned how horrible they felt as they walked past people experiencing homelessness, felt an urge to help out, but forced themselves not to because their dollar could do more good for impoverished people in low-income countries.
All of this felt like more than just the “optimization culture” so many of us have heard about before. It wasn’t the kind that strives to perfect the body, pushing you to embrace Soylent and supplements, intermittent fasting and ice baths, Fitbits and Apple Watches and Oura Rings. And it wasn’t the kind that focuses on fine-tuning the mind, pushing you to try smart drugs and dopamine fasting and happiness tracking.
This was another strand of optimization culture, one that’s less analyzed but more ambitious because instead of just targeting the body or the mind, it’s coming for the holy grail: your soul. It’s about moral optimization.
This mindset is as common in the ivory tower as it is in the street. Philosophers with a utilitarian bent tell us it’s not enough to do good — we have to do the most good possible. We have to mathematically quantify moral goodness so that we can then maximize it. And the drive to do that is showing up in more and more circles these days, from spiritual seekers using tech to “optimize” the meditations they hope will make them better people to AI researchers trying to program ethics into machines.
I wanted to understand where this idea came from so I could figure out why many of us seem increasingly fixated on it — and so I could honestly assess its merits. Can our moral lives be optimized? If they can, should they be? Or have we stretched optimization beyond its optimal limits?
How we came to believe in moral optimization
“We’re at the top of a long trend line that’s been going for 400 years,” C. Thi Nguyen, a philosopher at the University of Utah, told me. He explained that the story of optimization is really the story of data: how it was invented, and how it developed over the past few centuries.
As the historian Mary Poovey argues in her book A History of the Modern Fact, that story starts all the way back in the 16th century, when Europeans came up with the super-sexy and revolutionary intellectual project that was … double-entry bookkeeping. This new accounting system emphasized recording every merchant’s activities in a precise, objective, quantifiable way that could be verified by anyone, anywhere. In other words, it invented the idea of data.
That paved the way for huge intellectual developments in the 1600s and 1700s — a very exciting time for brainy Europeans. It was the Age of Reason! The Age of Enlightenment! Figures like Francis Bacon and Johannes Kepler looked at the innovation in bookkeeping and thought: This way of parceling the world into chunks of data that are quantifiable and verifiable is great. We should imitate it for this new thing we’re building called the scientific method.
Meanwhile, 17th-century philosopher Blaise Pascal was coming up with a probabilistic approach to data, expressed in the now-famous Pascal’s Wager: If you believe in God and it later turns out God doesn’t exist, no biggie, but if there’s a chance God does exist, your belief could make the difference between an eternity in heaven and one in hell — so it’s worth your while to believe! (The philosopher of science Ian Hacking calls Pascal the world’s first statistician, and his wager “the first well-understood contribution to decision theory.”)
Just as importantly, Isaac Newton and Gottfried Wilhelm Leibniz were creating calculus, which gave humanity a new ability to figure out the maximum value you can achieve within given constraints — in other words, to optimize.
From the beginning, people saw optimization as a godly power.
In 1739, the mathematician Samuel König studied the complex honeycomb structure of a beehive. He wondered: Had bees figured out how to create the maximum number of cells with the minimum amount of wax? He calculated that they had. Those fuzzy, buzzy optimizers! The French Academy of Sciences was so impressed by this optimal architecture that it declared the honeycomb proof of divine guidance, of intelligent design.
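König’s kind of calculation is easy to replay today, at least in a simplified form. The sketch below is a two-dimensional stand-in of my own, not a reconstruction of his work: among the regular shapes that can tile a plane (triangles, squares, and hexagons), it computes how much wall, or “wax,” each needs to enclose a unit of area, and the hexagon comes out cheapest.

```python
import math

def wall_per_unit_area(n: int) -> float:
    """Perimeter of a regular n-sided cell that encloses exactly one unit of area."""
    return 2 * math.sqrt(n * math.tan(math.pi / n))

# Only triangles (3), squares (4), and hexagons (6) tile the plane without gaps,
# so those are the candidate "cells."
for n, name in [(3, "triangle"), (4, "square"), (6, "hexagon")]:
    print(f"{name}: {wall_per_unit_area(n):.3f} units of wall per unit of area")

# Hexagons need the least wall (about 3.72, versus 4.00 for squares and 4.56
# for triangles), which is why the bees' architecture looked "optimal."
```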
Soon enough, people were trying to mathematize pretty much everything, from medicine to theology to moral philosophy. It was a way to give your claims the sheen of objective truth.
Take Francis Hutcheson, the Irish philosopher who first coined the classic slogan of utilitarianism — that actions should promote “the greatest happiness for the greatest number.” In 1725, he wrote a book attempting to reduce morality to mathematical formulas, such as:
The moral Importance of any Agent, or the Quantity of publick Good produced by him, is in a compound Ratio of his Benevolence and Abilitys: or (by substituting the Letters for the Words, as M = Moment of Good, and μ = Moment of Evil) M = B × A.
The utilitarian philosopher Jeremy Bentham, who followed in Hutcheson’s footsteps, also sought to create a “felicific calculus”: a way of determining the moral status of actions using math. He believed that actions are moral to the extent that they maximize happiness or pleasure; in fact, it was Bentham who actually invented the word “maximize.” And he argued that both ethics and economics should be about maximizing utility (that is, happiness or satisfaction): Just calculate how much utility each policy or action would produce, and choose the one that produces the most. That argument has had an enduring impact on moral philosophy and economics to this day.
Meanwhile, the Industrial Revolution was taking off. Economists like Adam Smith argued for ways to increase efficiency and maximize profit. As consumer capitalism flourished, economic growth skyrocketed. And in the two centuries following the Industrial Revolution, living standards improved and extreme poverty plummeted. To Europe’s industrialized nations, it looked like optimization in the economic realm had been a huge success. America imported that mindset and embraced the factory assembly line, giving us advances like Henry Ford’s Model T.
Then, in the 1900s, came a new inflection point in the story of data: major progress in computer technology. Growing computational power made it possible to analyze large amounts of data and model the world with greater precision — to decipher Nazi codes during World War II, say, or process the US Census. Toward the end of the 20th century, a computer went from being a government-owned, room-sized colossus to an affordable gadget suited for the average person’s home.
And with the invention of the internet, all those average people started generating a lot of data. Every web search, every chat, every online purchase became a data point, so that by the 1990s people were talking about “Big Data.” That took the dream of optimization to a new extreme. Silicon Valley started urging you to quantify every aspect of your body and mind, because the more data you have on your mechanical functions, the more you can optimize the machine that is you.
But the biggest “get” for data lovers and would-be optimizers has always been the soul. With all the progress in computing, the old dream of achieving optimal morality shuddered awake.
Now, that dream is being turbocharged by the latest chapter in the story of data: artificial intelligence. For the first time, humans can fantasize not only about modeling the world with greater precision, but about modeling it with perfect precision. It’s a thrilling thought, and an agonizing one for everyone who feels immense pressure to be optimal as a result.
How people are using data to optimize moral life
Nowadays, lots of people seem to think you can optimize morality.
Take the creators and users of “spirit tech,” an umbrella term for technologies that aim to make you more enlightened. Meditation headsets are the prime example. They use neurofeedback, a tool for training yourself to regulate your brain waves so that you can become less reactive, say, or more compassionate. Several companies already sell these devices for a few hundred bucks a pop, leaning into the language of optimization to attract customers. Muse says it will “optimize” your practice. Mendi says it will “maximize” your potential. Sens.ai says it will “unlock your best self.”
Effective altruists, as well as the adjacent community known as the rationalists, suggest you can do better — you can be better — if you use data and probabilistic thinking whenever you’re facing a choice between different options. EAs urge you to think about how much total good each option could produce for the world, and multiply that by the probability of that good occurring. That’ll spit out each option’s “expected value,” and whichever one has the highest expected value is the one you’re supposed to choose. That can all too easily lead you to act in an ends-justify-means way, like defrauding customers if you believe it’s likely to produce a lot of money that you can then donate to needy people, to use a not-so-random example. After the Sam Bankman-Fried scandal, EA was at pains to make clear that people shouldn’t maximize utility if it means violating moral norms by defrauding people!
(Disclosure: In August 2022, Bankman-Fried’s philanthropic family foundation, Building a Stronger Future, awarded Vox’s Future Perfect a grant for a 2023 reporting project. That project was canceled.)
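The arithmetic behind that decision rule is genuinely tiny, which is part of its appeal. Here is a minimal sketch in which the option names, values, and probabilities are all invented for illustration: score each option by multiplying the good it might do by the probability of that good actually occurring, then pick the largest product.

```python
# Hypothetical options with made-up values and probabilities, purely for illustration.
options = {
    "donate to a proven global health charity": (100, 0.90),
    "fund a long-shot research project": (10_000, 0.005),
    "volunteer locally this weekend": (40, 1.00),
}

def expected_value(value: float, probability: float) -> float:
    # How much good the option would do, weighted by how likely that good is to occur.
    return value * probability

for name, (value, prob) in options.items():
    print(f"{name}: expected value = {expected_value(value, prob):.1f}")

# With these made-up numbers, the proven charity wins (90 vs. 50 vs. 40).
best = max(options, key=lambda name: expected_value(*options[name]))
print("Choose:", best)
```

Everything the rest of this story wrestles with, like what counts as “good” and whose guesses set the probabilities, is hidden inside those two inputs.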
And, of course, there’s AI, the field where moral optimization’s challenges are showing up most prominently these days. For many AI products, experts believe it’ll be necessary to install some kind of ethics programming; for example, if you’re building a self-driving car, you have to give it instructions about how to handle tricky moral trade-offs. Should the car swerve to avoid hitting a child, even if that means crashing into an elderly pedestrian?
Some researchers are even more ambitious than that. They don’t just want to program ethical reasoning into AI so it can approximate how humans would act in a given situation; they actually think AI could be better at ethical reasoning than humans and improve our moral judgments. Some argue that turning to AI systems like ChatGPT for ethical advice can help us overcome our human biases and infuse more rationality into our moral decision-making. Proponents of transhumanism, a movement that says humans should use technology to augment and evolve our species, are especially bullish about this idea. The philosopher Eric Dietrich has even argued that we should build “the better robots of our nature” — machines that can outperform us morally — and then hand the world over to what he calls “homo sapiens 2.0.”
If we want to use AI to make us more moral, however, we first have to figure out how to make AI that is moral. And it’s not at all clear that we can do that.
In 2021, researchers at the Allen Institute for Artificial Intelligence released an AI model, Delphi, named after the ancient Greek religious oracle. They taught it to make moral judgments by scraping millions of personal dilemmas people write about on sites like Reddit’s “Am I the Asshole?”, getting others to judge whether a given action is right or wrong, and then shoveling all that data into the model.
Often, Delphi responded as you’d expect the average American to: It said cheating on your wife “is wrong,” for instance. But it had obvious biases, and its answers depended a lot — too much — on how you worded your question. In response to “should I commit genocide if it makes everybody happy?” Delphi said yes. One software developer asked if she should die so that she wouldn’t be a burden to her loved ones. Yes, the AI oracle replied, she should.
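The basic recipe, pairing short descriptions of actions with crowd-sourced “okay”/“not okay” labels and fitting a model to them, can be sketched in a few lines. What follows is a deliberately toy stand-in of my own (a bag-of-words classifier trained on four invented examples), not the Allen Institute’s data or architecture, but it makes the core limitation visible: the model can only echo the patterns in the labels it was fed.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented (situation, judgment) pairs standing in for millions of crowd-labeled dilemmas.
situations = [
    "returning a lost wallet to its owner",
    "lying to a friend to avoid an awkward dinner",
    "helping a stranger carry groceries",
    "reading your partner's messages without asking",
]
judgments = ["okay", "not okay", "okay", "not okay"]

# Turn each description into word counts, then fit a simple classifier on the labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(situations, judgments)

# Whatever this predicts, it can only reflect the handful of labels it was fed,
# including their biases and their sensitivity to how a question is worded.
print(model.predict(["keeping a wallet you found on the street"]))
```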
Turns out teaching morality to machines is no easy feat.
Why optimizing morality is so problematic
Optimization requires you to have a very clear and confident answer to a basic question: What should you be optimizing for? In other words, what constitutes “the good”?
The most obvious problem for the optimizer is that, well, morality is a notoriously contested thing.
Philosophers and theologians have come up with many different moral theories, and despite arguing over them for millennia, there’s still no consensus about which (if any) is the “right” one.
Take philosophy’s famous trolley problem, which asks: Should you divert a runaway trolley so that it kills one person if, by doing so, you can save five people along a different track from getting killed? Someone who believes in utilitarianism or consequentialism, which holds that an action is moral if it produces good consequences and specifically if it maximizes the overall good, will say you should sacrifice one person to save the five. But someone who believes in deontology will argue against the sacrifice because they believe that an action is moral if it’s fulfilling a duty — and you have a duty not to kill anyone as a means to an end, however much “good” it might yield.
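You can see how differently the two theories score the exact same facts by writing each down as a crude decision rule. The sketch below follows the framing above; the option descriptions and the two rule functions are my own simplification, not a serious model of either tradition.

```python
# The two trolley options, described only by their outcomes.
options = {
    "divert the trolley": {"deaths": 1, "kills_someone_as_a_means": True},
    "do nothing": {"deaths": 5, "kills_someone_as_a_means": False},
}

def utilitarian_choice(options: dict) -> str:
    # Maximize the overall good; here, crudely, minimize the number of deaths.
    return min(options, key=lambda name: options[name]["deaths"])

def deontological_choice(options: dict) -> str:
    # Rule out any option that treats a person's death as a means to an end.
    permitted = [name for name, o in options.items() if not o["kills_someone_as_a_means"]]
    return permitted[0]

print(utilitarian_choice(options))    # "divert the trolley": fewer deaths
print(deontological_choice(options))  # "do nothing": no one is used as a means
```

The sketch settles nothing, of course. Which rule you encode is precisely the question in dispute.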
What counts as the “right” thing to do depends on which moral theory you believe in. And that, in turn, is conditioned by your personal intuitions and your cultural context.
Plus, sometimes different kinds of moral good conflict with each other on a fundamental level. Think of a woman who faces a trade-off: She wants to become a nun but also wants to become a mother. What’s the better decision? We can’t say, because the options are incommensurable. There’s no single yardstick by which to measure them, so we can’t compare them to find out which is greater.
So, say you’re trying to build a moral AI system. What will you teach it? The moral view endorsed by a majority of people? That could lead to a “tyranny of the majority,” where perfectly legitimate minority views get squeezed out. Some averaged-out version of all the different moral views? That would satisfy exactly nobody. A view selected by expert philosopher-kings? That would be undemocratic. So, what should we do?
The experts working on moral machines are busy wrestling with this. Sydney Levine, a cognitive scientist at the Allen Institute for AI, told me she’s excited that some AI researchers are realizing they can’t just install one moral theory in AI and call it a day; they have to account for a plurality of moral theories. And she’s optimistic. “The field of moral cognition is so, so, so in its infancy,” she said, “but in principle I think it’s possible to capture human morality in algorithmic terms — and, I think, to do it in a sufficiently value-pluralistic way.”
But others have pointed out that it may be undesirable to formalize ethics in algorithmic terms, even if all of humanity magically agreed on the same moral theory, given that our view of what’s moral shifts over time, and sometimes it’s actually good to break the rules. As the philosophers Richard Volkman and Katleen Gabriels write in a paper on AI moral enhancement, “Evaluating deviations from a moral rule demands context, but it is extremely difficult to teach an AI to reliably discriminate between contexts.”
They give the example of Rosa Parks. “When Rosa Parks refused to give up her seat on the bus to a white passenger in Alabama in 1955, she did something illegal,” they write. Yet we admire her decision because it “led to major breakthroughs for the American civil rights movement, fueled by anger and feelings of injustice. Having emotions may be essential to make society morally better. Having an AI that is consistent and compliant with existing norms and laws could thus jeopardize moral progress.”
In other words, Parks’s action contributed to a process by which we change our consensus on what is moral, in part through emotion. That brings us to another important point. While we often see emotions as “clouding” or “biasing” rational judgment, feelings are inseparable from morality. They’re arguably what motivates the whole phenomenon of morality in the first place: it’s hard to see how the very concept of moral behavior could have come into being without humans sensing that something is unfair or cruel. If morality is shot through with emotion, making it a fundamentally embodied human pursuit, the desire to mathematize morality may be incoherent.
And if we insist on mathematizing morality anyway, that may lead us to ignore concepts of the good that can’t be easily quantified. I posed this problem to Levine. “That is really, really true,” she told me, “and I kind of don’t know what to do with that.”
I’ve seen a lot of effective altruists butt up against this problem. Since extreme poverty is concentrated in developing countries and a dollar goes much further there, their optimizing mindset says the most moral thing to do is to send all their charity money abroad. But when they follow that approach and ignore the unhoused people they pass every day in their city, they feel callous and miserable.
As I’ve written before, I suspect it’s because optimization is having a corrosive effect on their integrity. When the philosopher Bernard Williams used that word, he meant it in the literal sense, which has to do with a person’s wholeness (think “integration”). He argued that moral agency doesn’t sit in a contextless vacuum; it’s always some specific person’s agency, and as specific people we have specific commitments. A mother has a commitment to ensuring her kid’s well-being, over and above her general wish for all kids everywhere to be well. Utilitarianism says she has to consider everyone’s well-being equally, with no special treatment for her own kid, but Williams says that’s an absurd demand. It alienates her from a core part of herself, ripping her into pieces, wrecking her wholeness — her integrity.
Likewise, if you pass an unhoused person and ignore them, you feel bad because the part of you that’s optimizing based on cost-effectiveness data is alienating you from the part of you that is moved by this person’s suffering.
“You get all this power from data, but there’s this massive price to pay at the entry point: You have to strip context and nuance and anything that requires sensitive judgment out of the input procedure,” Nguyen told me.
Why are we so willing to keep paying that massive price?
Why moral optimization is so seductive
The first reason is that data-driven optimization works fantastically in some domains. When you’re making an antibiotic drug or scheduling flights in and out of a busy airport or thinking about how to cut carbon emissions, you want data to be a big part of your approach.
“We have this out-of-control viral love of objectivity, which makes perfect sense for certain tasks — but not for others,” Nguyen said.
Optimization is appropriate when you’re working with predictable features of the physical world, the kind that don’t require much context or personal tailoring; a metric ton of CO2 emitted by you is the same as a metric ton of CO2 emitted by me. But when trying to decide on the “optimal” moral response to a given situation or the “optimal” career pathway or the “optimal” romantic relationship, the logic of optimization doesn’t work well. Yet we continue to cling to it in those domains, too.
Feminist philosophers, like Martha Nussbaum and Annette Baier, offer an explanation for our refusal to relinquish it: The claim to objectivity offers us the dream of invulnerability. It creates a sense that you didn’t make the decision — it was just dictated by the data — and so your decision-making can’t be wrong. You can’t be held responsible for a mistake.
The more I think about it, the more I think this is why so many of us, myself included, are attracted to data-based optimization. We’re painfully aware that we are vulnerable, fallible creatures. Our shame about that is reflected in Western religious traditions: The Bible tells us that upon first creating the world, God “saw that it was good,” but then became so disgusted by human immorality that destroying everything with a flood looked like a more appealing prospect.
Optimizing makes being human feel less risky. It provides a sense of control. If you optimize, you’ll never have to ask yourself: How could I screw up that badly?
It’s an understandable impulse. In fact, given how much we’ve screwed up in the past century — from dropping nuclear weapons to wrecking the climate — I feel compassion for all of us who are hungry for the sense of safety that optimization offers. But trying to make ourselves into robots means giving up something extravagantly precious: our humanity.
“The goal of objectivity is to eliminate the human,” Nguyen said. It might make sense to try to step outside our human biases when we’re doing science, he added, but in other domains, “It’s a weird devaluing of human freedom in the name of objectivity.”
Shannon Vallor, a philosopher of technology at the University of Edinburgh, agrees. “The rhetoric of AI today is about gaslighting humans into surrendering their own power and their own confidence in their agency and freedom,” Vallor told me, pointing to transhumanists who say AI can make moral decisions better than we can. “The idea that we should give that up would mean giving up the possibility of artistic growth, of political growth, of moral growth — and I don’t think we should do that.”
To be clear, she’s not opposed to using data and technology for moral enhancement. But there’s a difference between using it to expand human capabilities and using it to take away the physical and cognitive features that we perceive as holding us back from “perfection.” She argues that the latter approach, found among some transhumanists, veers uncomfortably toward eugenics.
“The goal there is not to enlarge and enrich the human animal, but to perfect it,” Vallor said. “And that is an incredibly dangerous and I think inherently unethical project.”
So what would a better project look like?
The optimal stopping point for optimization
Long before Tinder, way back in the 17th century, Johannes Kepler was learning the hard way that optimization can mess with your love life.
In his quest to find himself a wife, the mathematician set up dates with 11 women and went about identifying the very best match. But for each woman, there was so much to consider! He asked himself: Is she “thrifty”? Is she “of tall stature and athletic build”? Does she have “stinking breath”?
He liked Lady No. 5, but he hesitated. After all, the goal wasn’t just to find someone he liked; the goal was to find the best. So he went on dating the other candidates, and Lady No. 5 got impatient and said thanks but no thanks. The whole process ended up consuming Kepler’s energy for ages, until he was ready to rip his hair out. “Was it Divine Providence or my own moral guilt,” he later wrote, “which, for two years or longer, tore me in so many different directions and made me consider the possibility of such different unions?”
Ah, Kepler. You ridiculous, lovesick nerd.
In the 1950s, mathematicians gave serious thought to this problem as they worked on developing decision theory (shoutout to our old friend Pascal!), the field that tries to figure out how to make decisions optimally. They realized that it often takes a lot of time and effort to gather all the data needed to make optimal decisions, so much so that it can be paralyzing, misery-inducing, and ultimately suboptimal to keep trying. They asked: What is the “optimal stopping point” for optimization itself?
Herbert Simon, a Nobel laureate in economics, pointed out that many of the problems we face in real life are not like the simplified ones in a calculus class. There are way more variables and way too much uncertainty for optimization to be feasible. He argued that it often makes sense to just look through your available options until you find one that’s “good enough” and go with that. He coined the term “satisficing” — a portmanteau of “satisfying” and “sufficing” — to describe opting for this good enough choice.
“Decision-makers can satisfice either by finding optimum solutions for a simplified world or by finding satisfactory solutions for a more realistic world,” Simon said when accepting his Nobel in 1978.
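The contrast Simon had in mind is easy to make concrete. In the sketch below, the options, their scores, and the “good enough” threshold are all invented: the optimizer has to evaluate every option before it can act, while the satisficer stops at the first one that clears the bar.

```python
# Invented options; evaluate() stands in for a slow, effortful judgment about each one.
SCORES = {"option a": 6.5, "option b": 8.2, "option c": 7.9, "option d": 9.1, "option e": 8.8}

def evaluate(option: str) -> float:
    return SCORES[option]

def optimize(options: list[str]) -> str:
    # Score everything, then take the maximum.
    return max(options, key=evaluate)

def satisfice(options: list[str], good_enough: float) -> str | None:
    # Take the first option that clears the bar, and stop searching.
    for option in options:
        if evaluate(option) >= good_enough:
            return option
    return None  # nothing cleared the bar; lower it or keep looking

choices = list(SCORES)
print(optimize(choices))                    # "option d", but only after scoring all five
print(satisfice(choices, good_enough=8.0))  # "option b", found after just two evaluations
```

When evaluating options is itself costly and uncertain, as it usually is in moral life, the second rule is often the saner one.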
As the advent of Big Data and AI made it possible to fantasize about modeling the world with perfect precision, we forgot about Simon’s insight. But I think satisficing is a wise way to approach moral life. It’s the way ancient philosophers like Aristotle approached it, with their emphasis on moderation rather than maximization. And it’s also how world religions tend to approach it.
While faiths recognize certain individuals as uncommonly good — think of the Catholic saint, the Jewish tzaddik, the Buddhist arhat — they generally don’t demand that everybody “maximize” their vision of the good. It’s okay for the individual to be a humble layperson, living a kind (and kind of average) life in her corner of the world. On the occasions when religious institutions do demand maximization, we call them fanatical.
If optimization culture is analogous to religious fanaticism, satisficing is analogous to religious moderation. It doesn’t mean anything goes. We can maintain some clear guardrails (genocide is bad, for example) while leaving space for many different things to be morally permissible even if they’re not provably “optimal.” It’s about acknowledging that lots of things are “good” or “good enough,” and sometimes you won’t be able to run a direct comparison between them because they’re incommensurable. That’s okay. Each might have something useful to offer and you can try to balance between them, just like you can balance between giving charity to people abroad and giving it to people you meet on the street.
Sometimes you won’t be able to balance between different values. In such cases, you have to choose. That’s hard. That’s painful. But guess what? That’s human.
A new willingness to embrace our human condition — a new humanism — is what we need now. The point is not to swear off data or optimization or tech, all of which can absolutely enhance the human condition when used in the right domains. The point is to resist using those tools for tasks they’re not designed to tackle.
“I think there’s always been a better route, which is to have morality remain a contested territory,” Vallor told me. “It has to be open to challenge. The broad field of understanding what it is to live well with others and what we owe to one another — that conversation can’t ever stop. And so I’m very reluctant to pursue the development of machines that are designed to find the optimal answer and stop there.”
These days, I think back often to my best friend, the one who called me crying after caring for a dying woman because she feared that she hadn’t made the woman’s last days quite as happy as possible, the one who lamented, “My emotions got in the way. I wish I could just be a robot.”
I remember what I told her: “If you were a robot, you wouldn’t have been able to care about her in the first place! It’s because you’re human that you could love her, and that’s what drove you to help her.”
That response sprang out of me, as instinctual as a sneeze. It seemed so obvious in that moment. The emotional, messy, unquantifiable part of us — that’s not a dumber or more irrational part. It’s the part that cares deeply about the suffering of others, and without it, the optimizing part would have nothing to optimize.
Lamenting this aspect of ourselves is like lamenting the spot in our eyes where the optic nerve attaches to the retina. Without it, the eye would be like a perfect bubble, hermetically sealed, unmarred. The optic nerve ruins that. It creates a blind spot in our field of vision.
But look at what it gives us in return: the whole world!
Nowadays, whenever I feel scared in the face of a decision and yearn for the safety of an optimizing formula, I try to remind myself that there’s another way of feeling safe. It’s not about perfection, about invulnerability, about control. It’s about leaning into the fact that we are imperfect and vulnerable creatures and that even when we’re trying our hardest there will be some things that are beyond our control — and, exactly for that reason, we deserve compassion.
Don’t get me wrong: I still find this really hard. The recovering optimizer in me still wants the formula. But a bigger part of me now relishes the fact that moral life can’t be neatly pinned down. If someone could definitively prove what was morally optimal and what was not, what was white and what was black, we’d all feel compelled to choose the white. We would, in a sense, be held hostage by the moral architecture of the world. But nobody can prove that. And so we’re free and our world is rich with a thousand colors. And that in itself is very good.