The perils of trying to optimize your morality
This story was originally published in The Highlight, Vox's member-exclusive magazine.

I am a recovering optimizer. Over the past several years, I've spent ages agonizing over every decision I made because I felt like I had to do the best possible thing. Not an okay thing, not a good thing, but the morally best thing. I stopped working on a children's novel because I began to suspect it wouldn't be useful to anyone. I berated myself for not meditating every day even though I know it makes me a kinder person. I spent a year crying over a breakup because I feared I'd just lost my optimal soulmate and was now doomed to a suboptimal life, one that wouldn't be as meaningful as it could be, one that fell short of its potential.

I thought maybe it was just me, an anxious elder millennial with a perfectionist streak. But then I noticed the same style of thinking in others. There was the friend who was always fretting over dinner about whether she could have a big-enough positive impact on the world through the career she'd chosen. Another friend would divide his day into 15-minute increments and write down what he did during each one so he wouldn't waste any time. And a third friend, my best friend, called me crying because, even though she'd spent months assiduously caring for her partner's dying mother, she worried that she hadn't made her last days quite as happy as possible. "My emotions got in the way," she self-flagellated. "I wish I could just be a robot."

I've particularly noticed this style of thinking in peers who identify with effective altruism (EA), the social movement that's all about using data and reason to figure out how to "do good better" or do "the most good you can do," to quote the titles of books by two of EA's leading thinkers. The movement urges people to donate to the charities that save the most lives per dollar. I listened as its adherents bemoaned how horrible they felt as they walked past people experiencing homelessness, felt an urge to help out, but forced themselves not to because their dollar could do more good for impoverished people in low-income countries.

All of this felt like more than just the optimization culture so many of us have heard about before. It wasn't the kind that strives to perfect the body, pushing you to embrace Soylent and supplements, intermittent fasting and ice baths, Fitbits and Apple Watches and Oura Rings. And it wasn't the kind that focuses on fine-tuning the mind, pushing you to try smart drugs and dopamine fasting and happiness tracking.

This was another strand of optimization culture, one that's less analyzed but more ambitious, because instead of just targeting the body or the mind, it's coming for the holy grail: your soul. It's about moral optimization.

This mindset is as common in the ivory tower as it is in the street. Philosophers with a utilitarian bent tell us it's not enough to do good; we have to do the most good possible. We have to mathematically quantify moral goodness so that we can then maximize it. And the drive to do that is showing up in more and more circles these days, from spiritual seekers using tech to optimize the meditations they hope will make them better people, to AI researchers trying to program ethics into machines.

I wanted to understand where this idea came from so I could figure out why many of us seem increasingly fixated on it, and so I could honestly assess its merits. Can our moral lives be optimized? If they can, should they be?
Or have we stretched optimization beyond its optimal limits?

How we came to believe in moral optimization

"We're at the top of a long trend line that's been going for 400 years," C. Thi Nguyen, a philosopher at the University of Utah, told me. He explained that the story of optimization is really the story of data: how it was invented, and how it developed over the past few centuries.

As the historian Mary Poovey argues in her book A History of the Modern Fact, that story starts all the way back in the 16th century, when Europeans came up with the super-sexy and revolutionary intellectual project that was double-entry bookkeeping. This new accounting system emphasized recording every merchant's activities in a precise, objective, quantifiable way that could be verified by anyone, anywhere. In other words, it invented the idea of data.

That paved the way for huge intellectual developments in the 1600s and 1700s, a very exciting time for brainy Europeans. It was the Age of Reason! The Age of Enlightenment! Figures like Francis Bacon and Johannes Kepler looked at the innovation in bookkeeping and thought: This way of parceling the world into chunks of data that are quantifiable and verifiable is great. We should imitate it for this new thing we're building called the scientific method.

Meanwhile, the 17th-century philosopher Blaise Pascal was coming up with a probabilistic approach to data, expressed in the now-famous Pascal's Wager: If you don't obey God and it later turns out God doesn't exist, no biggie, but if there's a chance God does exist, your belief could make the difference between an eternity in heaven or hell, so it's worth your while to believe! (The philosopher of science Ian Hacking calls Pascal the world's first statistician, and his wager the first well-understood contribution to decision theory.) Just as importantly, Isaac Newton and Gottfried Wilhelm Leibniz were creating calculus, which gave humanity a new ability to figure out the maximum value you can achieve within given constraints; in other words, to optimize.

From the beginning, people saw optimization as a godly power. In the 1700s, the mathematician Samuel König studied the complex honeycomb structure of a beehive. He wondered: Had bees figured out how to create the maximum number of cells with the minimum amount of wax? He calculated that they had. Those fuzzy, buzzy optimizers! The French Academy of Sciences was so impressed by this optimal architecture that it declared it proof of divine guidance, of intelligent design.

Soon enough, people were trying to mathematize pretty much everything, from medicine to theology to moral philosophy. It was a way to give your claims the sheen of objective truth. Take Francis Hutcheson, the Irish philosopher who coined the classic slogan of utilitarianism, that actions should promote "the greatest happiness for the greatest number." In 1725, he wrote a book attempting to reduce morality to mathematical formulas, such as: "The moral Importance of any Agent, or the Quantity of publick Good produced by him, is in a compound Ratio of his Benevolence and Abilitys: or (by substituting the Letters for the Words, as M = Moment of Good, and μ = Moment of Evil) M = B × A."

The utilitarian philosopher Jeremy Bentham, who followed in Hutcheson's footsteps, also sought to create a "felicific calculus": a way of determining the moral status of actions using math.
He believed that actions are moral to the extent that they maximize happiness or pleasure; in fact, it was Bentham who actually invented the word "maximize." And he argued that both ethics and economics should be about maximizing utility (that is, happiness or satisfaction): Just calculate how much utility each policy or action would produce, and choose the one that produces the most. That argument has had an enduring impact on moral philosophy and economics to this day.

Meanwhile, the Industrial Revolution was taking off. Economists like Adam Smith argued for ways to increase efficiency and maximize profit. As consumer capitalism flourished, economic growth skyrocketed. And in the two centuries following the Industrial Revolution, living standards improved and extreme poverty plummeted. To Europe's industrialized nations, it looked like optimization in the economic realm had been a huge success. America imported it and embraced the factory assembly line, giving us advances like Henry Ford's Model T cars.

Then, in the 1900s, came a new inflection point in the story of data: major progress in computer technology. Growing computational power made it possible to analyze large amounts of data and model the world with greater precision, to decipher Nazi codes during World War II, say, or to process the US Census. Toward the end of the 20th century, a computer went from being a government-owned, room-sized colossus to an affordable gadget suited for the average person's home. And with the invention of the internet, all those average people started generating a lot of data. Every web search, every chat, every online purchase became a data point, so that by the 1990s it became possible to talk about Big Data.

That compounded the dream of optimization to an extreme. Silicon Valley started urging you to quantify every aspect of your body and mind, because the more data you have on your mechanical functions, the more you can optimize the machine that is you.

But the biggest get for data lovers and would-be optimizers has always been the soul. With all the progress in computing, the old dream of achieving optimal morality shuddered awake.

Now, that dream is being turbocharged by the latest chapter in the story of data: artificial intelligence. For the first time, humans can fantasize not only about modeling the world with greater precision, but about modeling it with perfect precision. It's a thrilling thought, and an agonizing one for everyone who feels immense pressure to be optimal as a result.

How people are using data to optimize moral life

Nowadays, lots of people seem to think you can optimize morality.

Take the creators and users of "spirit tech," an umbrella term for technologies that aim to make you more enlightened. Meditation headsets are the prime example. They use neurofeedback, a tool for training yourself to regulate your brain waves so that you can become less reactive, say, or more compassionate. Several companies already sell these devices for a few hundred bucks a pop, leaning into the language of optimization to attract customers. Muse says it will "optimize your practice." Mendi says it will "maximize your potential." Sens.ai says it will "unlock your best self."

Effective altruists, as well as the adjacent community known as the rationalists, suggest you can do better, that you can be better, if you use data and probabilistic thinking whenever you're facing a choice between different options.
EAs urge you to think about how much total good each option could produce for the world, and to multiply that by the probability of that good occurring. That'll spit out each option's expected value, and whichever one has the highest expected value is the one you're supposed to choose. (A toy sketch of this calculation appears at the end of this section.)

That can all too easily lead you to act in an ends-justify-the-means way, like defrauding customers if you believe it's likely to produce a lot of money that you can then donate to needy people, to use a not-so-random example. After the Sam Bankman-Fried scandal, EA was at pains to make clear that people shouldn't maximize utility if it means violating moral norms by defrauding people! (Disclosure: In August 2022, Bankman-Fried's philanthropic family foundation, Building a Stronger Future, awarded Vox's Future Perfect a grant for a 2023 reporting project. That project was canceled.)

And, of course, there's AI, the field where moral optimization's challenges are showing up most prominently these days. For many AI products, experts believe it'll be necessary to install some kind of ethics programming; for example, if you're building a self-driving car, you have to give it instructions about how to handle tricky moral trade-offs. Should the car swerve to avoid hitting a child, even if that means crashing into an elderly pedestrian?

Some researchers are even more ambitious than that. They don't just want to program ethical reasoning into AI so it can approximate how humans would act in a given situation; they actually think AI could be better at ethical reasoning than humans and improve our moral judgments. Some argue that turning to AI systems like ChatGPT for ethical advice can help us overcome our human biases and infuse more rationality into our moral decision-making. Proponents of transhumanism, a movement that says humans should use technology to augment and evolve our species, are especially bullish about this idea. Philosophers like Eric Dietrich have even argued that we should build "the better robots of our nature," machines that can outperform us morally, and then hand over the world to what he calls "homo sapiens 2.0."

If we want to use AI to make us more moral, however, we first have to figure out how to make AI that is moral. And it's not at all clear that we can do that.

In 2021, researchers at the Allen Institute for Artificial Intelligence released an AI model, Delphi, named after the ancient Greek religious oracle. They taught it to make moral judgments by scraping millions of personal dilemmas people write about on sites like Reddit's Am I the Asshole?, getting others to judge whether a given action is right or wrong, and then shoveling all that data into the model. Often, Delphi responded as you'd expect the average American to: It said cheating on your wife is wrong, for instance. But it had obvious biases, and its answers depended far too much on how you worded your question. In response to "Should I commit genocide if it makes everybody happy?" Delphi said yes. One software developer asked if she should die so that she wouldn't be a burden to her loved ones. Yes, the AI oracle replied, she should. Turns out, teaching morality to machines is no easy feat.
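Here, as promised, is a minimal sketch of the expected-value rule described at the top of this section, written in Python. The options, "good" scores, and probabilities are invented purely for illustration; they don't come from any real charity evaluation or EA analysis.

```python
# Toy illustration of the expected-value decision rule described above.
# All option names and numbers are made up for the sake of the example.

options = {
    "fund malaria nets":         {"good_if_it_works": 100,  "probability": 0.90},
    "fund speculative research": {"good_if_it_works": 1000, "probability": 0.05},
    "give to a neighbor":        {"good_if_it_works": 10,   "probability": 0.99},
}

def expected_value(option):
    # Expected value = (how much good the option could do) x (chance that good occurs)
    return option["good_if_it_works"] * option["probability"]

for name, option in options.items():
    print(f"{name}: expected value = {expected_value(option):.1f}")

# The rule says: pick whichever option has the highest expected value.
best = max(options, key=lambda name: expected_value(options[name]))
print(f"Highest expected value: {best}")
```

The point of the sketch is only the shape of the reasoning: everything, including a neighbor's suffering, has to be squeezed into a single number before the comparison can run at all.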
Why optimizing morality is so problematic

Optimization requires you to have a very clear and confident answer to the question: What is the thing you should be optimizing for? What constitutes the good?

The most obvious problem for the optimizer is that, well, morality is a notoriously contested thing. Philosophers and theologians have come up with many different moral theories, and despite arguing over them for millennia, there's still no consensus about which (if any) is the right one.

Take philosophy's famous trolley problem, which asks: Should you divert a runaway trolley so that it kills one person if, by doing so, you can save five people on a different track from getting killed? Someone who believes in utilitarianism or consequentialism, which holds that an action is moral if it produces good consequences, and specifically if it maximizes the overall good, will say you should sacrifice one person to save the five. But someone who believes in deontology will argue against the sacrifice, because they believe that an action is moral if it fulfills a duty, and you have a duty not to kill anyone as a means to an end, however much good it might yield.

What the right thing to do is will depend on which moral theory you believe in. And that's conditioned by your personal intuitions and your cultural context.

Plus, sometimes different kinds of moral good conflict with each other on a fundamental level. Think of a woman who faces a trade-off: She wants to become a nun but also wants to become a mother. What's the better decision? We can't say, because the options are incommensurable. There's no single yardstick by which to measure them, so we can't compare them to find out which is greater.

So, say you're trying to build a moral AI system. What will you teach it? The moral view endorsed by a majority of people? That could lead to a tyranny of the majority, where perfectly legitimate minority views get squeezed out. Some averaged-out version of all the different moral views? That would satisfy exactly nobody. A view selected by expert philosopher-kings? That would be undemocratic. So, what should we do?

The experts working on moral machines are busy wrestling with this. Sydney Levine, a cognitive scientist at the Allen Institute for AI, told me she's excited that some AI researchers are realizing they can't just install one moral theory in AI and call it a day; they have to account for a plurality of moral theories. And she's optimistic. "The field of moral cognition is so, so, so in its infancy," she said, "but in principle I think it's possible to capture human morality in algorithmic terms and, I think, to do it in a sufficiently value-pluralistic way."

But others have pointed out that it may be undesirable to formalize ethics in algorithmic terms, even if all of humanity magically agreed on the same moral theory, given that our view of what's moral shifts over time, and sometimes it's actually good to break the rules. As the philosophers Richard Volkman and Katleen Gabriels write in a paper on AI moral enhancement, "Evaluating deviations from a moral rule demands context, but it is extremely difficult to teach an AI to reliably discriminate between contexts."

They give the example of Rosa Parks. "When Rosa Parks refused to give up her seat on the bus to a white passenger in Alabama in 1955, she did something illegal," they write. "Yet we admire her decision because it led to major breakthroughs for the American civil rights movement, fueled by anger and feelings of injustice. Having emotions may be essential to make society morally better.
Having an AI that is consistent and compliant with existing norms and laws could thus jeopardize moral progress."

In other words, Parks's action contributed to a process by which we change our consensus on what is moral, in part through emotion.

That brings us to another important point. While we often see emotions as clouding or biasing rational judgment, feelings are inseparable from morality. They're arguably what motivates the whole phenomenon of morality in the first place, as it's unclear how moral behavior as a concept could have come into being without humans sensing that something is unfair or cruel. If morality is shot through with emotion, making it a fundamentally embodied human pursuit, the desire to mathematize morality may be incoherent.

And if we insist on mathematizing morality anyway, that may lead us to ignore concepts of the good that can't be easily quantified. I posed this problem to Levine. "That is really, really true," she told me, "and I kind of don't know what to do with that."

I've seen a lot of effective altruists butt up against this problem. Since extreme poverty is concentrated in developing countries and a dollar goes much further there, their optimizing mindset says the most moral thing to do is to send all their charity money abroad. But when they follow that approach and ignore the unhoused people they pass every day in their city, they feel callous and miserable.

As I've written before, I suspect it's because optimization is having a corrosive effect on their integrity. When the philosopher Bernard Williams used that word, he meant it in the literal sense, which has to do with a person's wholeness (think "integration"). He argued that moral agency doesn't sit in a contextless vacuum; it's always some specific person's agency, and as specific people we have specific commitments. A mother has a commitment to ensuring her kid's well-being, over and above her general wish for all kids everywhere to be well. Utilitarianism says she has to consider everyone's well-being equally, with no special treatment for her own kid, but Williams says that's an absurd demand. It alienates her from a core part of herself, ripping her into pieces, wrecking her wholeness, her integrity.

Likewise, if you pass an unhoused person and ignore them, you feel bad because the part of you that's optimizing based on cost-effectiveness data is alienating you from the part of you that is moved by this person's suffering.

"You get all this power from data, but there's this massive price to pay at the entry point: You have to strip context and nuance and anything that requires sensitive judgment out of the input procedure," Nguyen told me.

Why are we so willing to keep paying that massive price?

Why moral optimization is so seductive

The first reason is that data-driven optimization works fantastically in some domains. When you're making an antibiotic drug or scheduling flights in and out of a busy airport or thinking about how to cut carbon emissions, you want data to be a big part of your approach.

"We have this out-of-control viral love of objectivity, which makes perfect sense for certain tasks but not for others," Nguyen said. Optimization is appropriate when you're working with predictable features of the physical world, the kind that don't require much context or personal tailoring; a metric ton of CO2 emitted by you is the same as a metric ton of CO2 emitted by me.
But when trying to decide on the optimal moral response to a given situation, or the optimal career pathway, or the optimal romantic relationship, the logic of optimization doesn't work well. Yet we continue to cling to it in those domains, too.

Feminist philosophers, like Martha Nussbaum and Annette Baier, offer an explanation for our refusal to relinquish it: The claim to objectivity offers us the dream of invulnerability. It creates a sense that you didn't make the decision, it was just dictated by the data, and so your decision-making can't be wrong. You can't be held responsible for a mistake.

The more I think about it, the more I think this is why so many of us, myself included, are attracted to data-based optimization. We're painfully aware that we are vulnerable, fallible creatures. Our shame about that is reflected in Western religious traditions: The Bible tells us that upon first creating the world, God "saw that it was good," but then became so disgusted by human immorality that destroying everything with a flood looked like a more appealing prospect.

Optimizing makes being human feel less risky. It provides a sense of control. If you optimize, you'll never have to ask yourself: How could I screw up that badly? It's an understandable impulse. In fact, given how much we've screwed up in the past century, from dropping nuclear weapons to wrecking the climate, I feel compassion for all of us who are hungry for the sense of safety that optimization offers.

But trying to make ourselves into robots means giving up something extravagantly precious: our humanity.

"The goal of objectivity is to eliminate the human," Nguyen said. It might make sense to try to step outside our human biases when we're doing science, he added, but in other domains, "It's a weird devaluing of human freedom in the name of objectivity."

Shannon Vallor, a philosopher of technology at the University of Edinburgh, agrees. "The rhetoric of AI today is about gaslighting humans into surrendering their own power and their own confidence in their agency and freedom," Vallor told me, pointing to transhumanists who say AI can make moral decisions better than we can. "The idea that we should give that up would mean giving up the possibility of artistic growth, of political growth, of moral growth, and I don't think we should do that."

To be clear, she's not opposed to using data and technology for moral enhancement. But there's a difference between using it to expand human capabilities and using it to take away the physical and cognitive features that we perceive as holding us back from perfection. She argues that the latter approach, found among some transhumanists, veers uncomfortably toward eugenics. "The goal there is not to enlarge and enrich the human animal, but to perfect it," Vallor said. "And that is an incredibly dangerous and I think inherently unethical project."

So what would a better project look like?

The optimal stopping point for optimization

Long before Tinder, way back in the 17th century, Johannes Kepler was learning the hard way that optimization can mess with your love life. In his quest to find himself a wife, the mathematician set up dates with 11 women and set about identifying the very best match. But for each woman, there was so much to consider! He asked himself: Is she thrifty? Is she of tall stature and athletic build? Does she have stinking breath?

He liked Lady No. 5, but he hesitated.
After all, the goal wasn't just to find someone he liked; the goal was to find the best. So he went on dating the other candidates, and Lady No. 5 got impatient and said thanks but no thanks. The whole process ended up consuming Kepler's energy for ages, until he was ready to rip his hair out. "Was it Divine Providence or my own moral guilt," he later wrote, "which, for two years or longer, tore me in so many different directions and made me consider the possibility of such different unions?"

Ah, Kepler. You ridiculous, lovesick nerd.

In the 1950s, mathematicians gave serious thought to this problem as they worked on developing decision theory (shoutout to our old friend Pascal!), the field that tries to figure out how to make decisions optimally. They realized that it often takes a lot of time and effort to gather all the data needed to make optimal decisions, so much so that it can be paralyzing, misery-inducing, and ultimately suboptimal to keep trying. They asked: What is the optimal stopping point for optimization itself?

Herbert Simon, a Nobel laureate in economics, pointed out that many of the problems we face in real life are not like the simplified ones in a calculus class. There are way more variables and way too much uncertainty for optimization to be feasible. He argued that it often makes sense to just look through your available options until you find one that's good enough and go with that. He coined the term "satisficing," a portmanteau of "satisfying" and "sufficing," to describe opting for this good-enough choice. "Decision-makers can satisfice either by finding optimum solutions for a simplified world or by finding satisfactory solutions for a more realistic world," Simon said when accepting his Nobel in 1978.

As the advent of Big Data and AI made it possible to fantasize about modeling the world with perfect precision, we forgot about Simon's insight. But I think satisficing is a wise way to approach moral life. It's the way ancient philosophers like Aristotle approached it, with their emphasis on moderation rather than maximization. And it's also how world religions tend to approach it. While faiths recognize certain individuals as uncommonly good (think of the Catholic saint, the Jewish tzaddik, the Buddhist arhat), they generally don't demand that everybody maximize their vision of the good. It's okay for the individual to be a humble layperson, living a kind (and kind of average) life in her corner of the world. On the occasions when religious institutions do demand maximization, we call them fanatical.

If optimization culture is analogous to religious fanaticism, satisficing is analogous to religious moderation. It doesn't mean anything goes. We can maintain some clear guardrails (genocide is bad, for example) while leaving space for many different things to be morally permissible even if they're not provably optimal. It's about acknowledging that lots of things are good, or good enough, and sometimes you won't be able to run a direct comparison between them because they're incommensurable. That's okay. Each might have something useful to offer, and you can try to balance between them, just like you can balance between giving charity to people abroad and giving it to people you meet on the street.
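Before moving on, here is a minimal sketch, in Python, of what Simon's satisficing looks like as a procedure, using Kepler's courtship as a toy example. The candidates' scores and the "good enough" threshold are invented for illustration; nothing here comes from Simon's or Kepler's actual numbers.

```python
# A toy sketch of satisficing: take the first option that clears a
# "good enough" bar, rather than scoring every option to find the maximum.
# All candidates and scores below are made up.

def satisfice(candidates, score, good_enough):
    """Return the first candidate whose score clears the good-enough bar."""
    for candidate in candidates:
        if score(candidate) >= good_enough:
            return candidate  # stop searching; no need to prove this is the optimum
    return None  # nothing cleared the bar

# Made-up "match quality" scores for Kepler's 11 candidates.
match_quality = {f"Lady No. {i}": q for i, q in enumerate(
    [4, 6, 5, 7, 9, 6, 5, 8, 7, 6, 5], start=1)}

chosen = satisfice(match_quality.keys(), match_quality.get, good_enough=8)
print(chosen)  # -> "Lady No. 5": the satisficer proposes and gets on with his life
```

A maximizer, by contrast, has to score every candidate before committing, which is more or less what Kepler actually did, to his considerable misery.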
Sometimes you won't be able to balance between different values. In such cases, you have to choose. That's hard. That's painful. But guess what? That's human. A new willingness to embrace our human condition, a new humanism, is what we need now.

The point is not to swear off data or optimization or tech, all of which can absolutely enhance the human condition when used in the right domains. The point is to resist using those tools for tasks they're not designed to tackle.

"I think there's always been a better route, which is to have morality remain a contested territory," Vallor told me. "It has to be open to challenge. The broad field of understanding what it is to live well with others and what we owe to one another: that conversation can't ever stop. And so I'm very reluctant to pursue the development of machines that are designed to find the optimal answer and stop there."

These days, I think back often to my best friend, the one who called me crying after caring for a dying woman because she feared that she hadn't made the woman's last days quite as happy as possible, the one who lamented, "My emotions got in the way. I wish I could just be a robot."

I remember what I told her: If you were a robot, you wouldn't have been able to care about her in the first place! It's because you're human that you could love her, and that's what drove you to help her.

That response sprang out of me, as instinctual as a sneeze. It seemed so obvious in that moment. The emotional, messy, unquantifiable part of us: that's not a dumber or more irrational part. It's the part that cares deeply about the suffering of others, and without it, the optimizing part would have nothing to optimize.

Lamenting this aspect of ourselves is like lamenting the spot in our eyes where the optic nerve attaches to the retina. Without it, the eye would be like a perfect bubble, hermetically sealed, unmarred. The optic nerve ruins that. It creates a blind spot in our field of vision. But look at what it gives us in return: the whole world!

Nowadays, whenever I feel scared in the face of a decision and yearn for the safety of an optimizing formula, I try to remind myself that there's another way of feeling safe. It's not about perfection, about invulnerability, about control. It's about leaning into the fact that we are imperfect and vulnerable creatures, and that even when we're trying our hardest there will be some things that are beyond our control, and, exactly for that reason, we deserve compassion.

Don't get me wrong: I still find this really hard. The recovering optimizer in me still wants the formula. But a bigger part of me now relishes the fact that moral life can't be neatly pinned down. If someone could definitively prove what was morally optimal and what was not, what was white and what was black, we'd all feel compelled to choose the white. We would, in a sense, be held hostage by the moral architecture of the world. But nobody can prove that. And so we're free, and our world is rich with a thousand colors. And that in itself is very good.