How do you live a good life?
People have been asking this question for thousands of years. That seeking has gathered force in recent decades for us well-off Westerners, as we have accumulated resources and comforts that would astonish our recent ancestors even as we have pushed away the wisdom traditions that historically guided most people to answers.
Over the past decade, the effective altruism (EA) movement stepped into that void. Its analytical framework based on utilitarian philosophy (Peter Singer’s work, most famously) offered well-educated young people a way to serve a purpose greater than themselves without the mythical noise of many traditions. And it offered a path for institutions and our culture more broadly to better channel astronomical wealth towards addressing the appalling inequity in the world.
That movement is under assault. With the fall and disgrace of Sam Bankman-Fried, a prominent EA torchbearer, there has been a steady barrage arguing that he was not an anomaly, that the whole movement is rotten at its core. The latest salvo, a piece by Leif Wenar in Wired, declares “The Deaths of Effective Altruism.”
I do not consider myself an effective altruist. But I admire some of the ways that the movement thinks about our social problems and how to improve philanthropy. While there are fair critiques of the movement (as there are of any social movement and applied ethical framework), the current assault risks throwing the proverbial baby out with the bathwater.
Perhaps effective altruism as it has been does need to die. But I dearly hope it will be reborn. As I explore below, the principles of effective altruism can continue to help correct the deeply sick status quo of our society and our philanthropy, but only if the movement can integrate a greater humility: an acceptance of the inevitable limits of reason and of the importance of emotion, intuition, and cooperation in finding better ways to live well together.
You Give to What?
One of the more interesting skirmishes in the EA wars that erupted after Bankman-Fried’s empire collapsed was this vigorous, and often hilarious, debate between two prominent Substackerati, Scott Alexander and Freddie DeBoer. You can read their thrusts and ripostes here, here, and here, or you can read the helpful summary that my favorite AI helper and potential eventual overlord, Claude, provided at the end of this piece. Come for the intellectual stimulation and stay for the one-line zingers (my personal favorite: “It’s not nut-picking if your entire project amounts to a machine for attracting nuts.”)
I am sympathetic to arguments made by both sides, but I largely arrive at the same conclusion as Alexander: effective altruism has its flaws and oddities, but it is a net positive force in our society. To arrive at that conclusion, I draw on more than twenty years of spending most of my waking hours obsessing about one of the foundational questions of EA—how do we translate limited philanthropic resources into the maximum good for humanity? In that time, I’ve seen and lived the good, the bad, and the ugly (most often the last) and arrived independently at two of the core premises of the EA movement:
Allocating philanthropic resources well is extremely hard; and
Most philanthropic resources are poorly spent today, driven more by marketing, personal relationships, rapid emotional reactions, and status than by evidence of impact.
A shocking proportion (roughly 26% or $125 billion annually) of American philanthropic giving is directed to colleges and universities. These are extremely wealthy institutions that cater primarily to the global elite and that regularly choose to spend hundreds of millions of tax-free dollars on slightly more comfortable dorm rooms or nicer sports facilities for their clients. It is no secret that college donations are intended partly to boost our personal brand and status by raising their rankings and facilitating the future admission of our children through legacy admissions. A $5,000 check to a college may buy a few bricks for the new wave pool for some of the most privileged children in the history of humanity to kayak in. In contrast, according to EA flag-bearer GiveWell, roughly $5,000 donated to a bed net program will help save a child from dying of malaria.
Note I say “roughly” and “help.” One of Wenar’s critiques of GiveWell is that it overstates the certainty of its claims and doesn’t consider some inefficiencies and negative externalities. Maybe it does. But we know that bed nets have driven a 50% reduction in global malaria deaths, saving the lives of hundreds of thousands of children every year. Even if GiveWell were being very conservative and doubled its estimate to $10,000, how would that compare to the wave pool donation? Wenar may be right that GiveWell could tighten its language, but the organization should be judged primarily on the comparative impact it has on the world.
If the EA movement did nothing other than convince one percent of Americans to donate to effective global health programs rather than to their alma mater, it would have done our country and the world a massive service. We are social creatures. We rely on each other to set norms and expectations for our behavior. For too long, Americans have set norms for each other that giving to rich colleges that perpetuate inequality and elite self-interest is a strong moral action. We have set norms that giving money based on slick pictures of smiling children we receive in the mail is a strong moral action, whether the organization receiving the money squanders it or not. Whatever its weaknesses, EA is trying to shift us to a different and better set of norms.
DeBoer flames EA for being so generic and bland that it is meaningless. Sure, who disagrees with EA’s core principle of spending money well to do maximum good? But then who disagrees with loving thy neighbor? Or the whole “thou shalt not kill” thing? Most moral precepts are broad by nature. EA is filling part of what Ayaan Hirsi Ali calls the “god-sized hole” left in many Western individuals and communities by the retreat of traditional religious institutions. The core practices of the EA movement almost exactly mirror those of the religions that people once relied on to answer these questions: the tithing (give 10% of your income), the trust in a higher authority to maximize your moral impact (GiveWell as church).
What spiritual authorities offered, and EA institutions now offer their flock, is the hard translation of broad precepts into practical action. It is very hard to know how to love my neighbor when he tried to get me evicted for being too noisy. Filled with rage, I turn to a guide (priest, rabbi, imam) steeped in the tensions inherent in the principle to help me respond in a way most aligned with my values. Analyzing the best recipients of my philanthropy is hard and time-consuming, so I turn to GiveWell to help me best live that generic value of spending money well. DeBoer and Alexander can debate the theory all they like; I will judge EA on its practice. By that standard, there is much to celebrate about how EA has practically guided people towards more and better giving over the past decade.
Fallible, Limited Things
And it is equally true that the practice of EA has important flaws. Wenar captures the principal reason public opinion has turned against EA over the past year: its tendency towards arrogance. The extreme examples DeBoer cites of EA community members advocating for the elimination of all carnivores or debating human extinction hundreds of years from now are not, for most people, that experientially different from groups of cardinals sitting in gilded rooms debating obscure theological claims about existence. Movements, secular or religious, spread because they offer masses of people an understanding of how to wrestle with hard choices, tensions, and truths of being human. They push people away when it feels as if an elite cabal is making claims about truths that are divorced from their intuitive sense of reality.
Let me pause to say that I am speaking about the community, the collective, not individuals. Some individual effective altruists are more arrogant, some are less. Communities and organizations always develop a distinctive culture. Arrogance is part of the culture at Goldman Sachs. It can also, in my experience, be part of the culture of the EA movement.
Arrogance is the shadow of confidence. Confidence is vital and generative. It enables us to project ourselves into the confusing, noisy world and make our lives and our communities better. But taken too far, confidence metastasizes into arrogance. I have come to see arrogance as one of the greatest obstacles to a life fully and well lived. I have slipped into it many times in my adult life and try to do all I can to contain its resurgence.
Arrogance is the enemy of understanding. Arrogance is the enemy of curiosity. Arrogance is the enemy of connection. Arrogance is, ultimately, the enemy of truth and right action, of finding and living the best answers to that foundational question EA asks.
The wisdom traditions that EA mimics, from Stoicism to the major religions to great literature, have much to say on arrogance. They remind us that all of us, no matter how brilliant and successful, are fallible and limited. They remind us that all our ideas and projects are fallible and limited. They remind us that the reasoning minds we cherish as modern humans, and all the amazing analytical tools those minds create, are fallible and limited. Surrendering to that limitation is the best antidote to arrogance, to its brash cousin hubris, and to the great fall that, as countless myths and tales caution us, is their inevitable outcome.
Sam Bankman-Fried’s extraordinary hubris and equally dizzying plummet back to earth are straight out of one of those myths. The ideas and practices of a movement should not be condemned for the flaws of one of its leaders. But as one commentator wryly noted soon after his downfall: “why should we trust you to predict where humanity will be in 200 years if you were not shorting FTX three months ago?” The future is fundamentally unknown. Anyone who tries to predict that future three years from now should always do so with a healthy dose of humility. Anyone trying to predict 200 years from now should do so with massive heaps of humility.
But the arrogance that I believe is the greatest risk to the EA community is more subtle. All ideas and principles, even the best, can be taken too far, ad absurdum. During my days as an anti-malaria warrior, my boss once received an angry letter from an animal rights activist denouncing us for killing mosquitoes with insecticide-treated bed nets. Thou shalt not kill—good. Thou shalt not kill most animals for food—debatable but arguably good. Thou shalt not kill mosquitoes killing young children—absurd.
In this way, Wenar and DeBoer have a point. In my experience, the EA movement can fail to recognize the fundamental limitation of our ability to quantify marginal differences in value. Everything has diminishing returns, including our ability to meaningfully assess the moral value of our actions. Determining that a bed net is a better target for philanthropy than a wealthy college is a slam dunk. Determining whether a bed net distribution program or a campaign to negotiate lower prices for vaccines against pneumonia, another massive child killer, is a better use of dollars is much, much harder. There will always be massive gaps in data. We will always have to make major assumptions based on human opinion and bias. Uncertainty is inevitable.
The only responsible answer to the question of whether the malaria or the pneumonia program is a better investment, or of what is the best way to prevent the extinction of humanity in 200 years, is: we don’t know. We are fallible, limited, human. We can make informed guesses, but we can’t know until we act and the world unfolds.
Diversified Action
But we must act. Humility and uncertainty should not be an excuse for inaction. If we follow Wenar’s concerns about the potential ripple effects of our attempts to help people to their logical conclusion, we should either do nothing (after all, everything causes harm in some way) or just give money to whatever sounds nice. That thinking is what has led to the appalling current distribution of giving in this country. In contrast, the cost-effectiveness analysis at the core of GiveWell’s philosophy has been the backbone of the global health movement that has saved tens of millions of lives over the past two decades.
We should prioritize how we use our finite resources by trying to quantify the costs and benefits of available programs. We should spend some societal resources researching long-term threats to humanity. But we should do so in a way that is grounded in humility, that recognizes our fundamental limitations.
One articulation of how to practically apply that principle comes from a pioneer of the EA movement, Holden Karnofsky, in this conversation with Ezra Klein. He describes how he took an approach of “worldview diversification” to the portfolio he managed at Open Philanthropy. That approach still anchors on a preferred framework for assessing funding opportunities, but it recognizes that the framework is fallible, so it allocates some resources using alternative ways of thinking and assessing. It follows the same principle as sound investing: no matter how convinced we are of our thesis on a set of stocks, we should always diversify into other equities and bonds, because the world can and will surprise us.
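To make the idea concrete, here is a toy sketch in Python of what splitting a giving budget across several worldviews might look like. The frameworks, weights, and dollar figures are my own inventions for illustration; they are not Karnofsky’s or Open Philanthropy’s actual categories or allocations.

```python
# Toy illustration of "worldview diversification": split a budget across
# several assessment frameworks instead of betting everything on one.
# The frameworks, weights, and budget below are invented for illustration;
# they are not Open Philanthropy's actual categories or numbers.

budget = 1_000_000  # total dollars to allocate

# A preferred framework gets the largest share, but fallibility is
# acknowledged by reserving real money for alternative worldviews.
worldview_weights = {
    "global_health_cost_effectiveness": 0.60,
    "long_term_risk_reduction": 0.25,
    "animal_welfare": 0.15,
}

def allocate(total: float, weights: dict[str, float]) -> dict[str, float]:
    """Split the total in proportion to each worldview's weight."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return {view: total * weight for view, weight in weights.items()}

if __name__ == "__main__":
    for view, dollars in allocate(budget, worldview_weights).items():
        print(f"{view}: ${dollars:,.0f}")
```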
What would it look like for each of us, as individuals and institutions, to embrace that humility and truly diversify our worldviews? That is a question diehard EA believers and skeptics alike should ask more often.
No Cure for Being Human
There is one other important way that the EA community can slip into arrogance. Underlying EA methods is often a subtle assumption that human emotion and trust, that great currency that has driven society for thousands of years, can be removed from important decisions. With enough analysis and debate, the invisible argument runs, we can make perfectly rational decisions protected from emotional and relational messiness within ourselves, our organizations, and the organizations we are considering funding.
But rational analysis is only ever one part of our decisions, individually and collectively. As everyone from Nobel laureate Daniel Kahneman to psychologists to Buddhist teachers tells us, there is always more going on within our minds than we know. As every great wisdom tradition tells us, we are never fully in control. And the more we try to deny and suppress parts of ourselves, the more they will manifest in surprising and often damaging ways in our lives (science and medicine are littered with rationalists causing great harm after getting swept away by their egos and neglected shadow sides, most recently with eminent researchers undermining Covid vaccines or promoting unproven, useless treatments).
I have witnessed effective altruists make funding decisions primarily—though never exclusively—based on quantitative analysis. And I have witnessed effective altruists make funding decisions based primarily on personal relationships, instinct, and emotion while calling it analytical rationalism. There is nothing wrong with the latter form of decision making—it is, to some degree, the unavoidable reality of all human affairs—but the community and its work will be better off if it honestly and humbly recognizes all the influences on its decisions beyond cold calculation.
Ezra Klein summarizes a core thesis of Meghan O’Gieblyn’s recent book on generative AI, a technology that is, in a way, a major source of both fear and envy for many rationalists, as this: “when we describe our minds using terms borrowed from computers, we begin to see our minds mirrored in computers, and we cease to value the parts of our minds that differ from computers.”
As much as we modern, hyper-rationalist humans may wish otherwise, we will never be anything resembling a computer. We will always be messy, emotional, intuitive, fallible creatures, capable of beautiful feats and terrible mistakes throughout our brief lives. Rather than constantly reach for a perfect rationality that will elude us, we can embrace that reality, humbly deploying reason alongside intuition, trust, and, yes, even emotion to shape our lives and world. We can, as Richard Rohr urges, pray for one good humiliation each day, recognize that our decisions are imperfect, and try to make better ones the next day.
Will all of this significantly improve the impact of always too-limited philanthropic resources? Will the principles of effective altruism (perhaps under a different brand) spread beyond their wonky niches to unseat the dominant elite college/smiling child approach to giving in America? From my narrow little perch, I hope and believe the answer to both could be yes. But, to practice what I preach, all I can honestly answer is: I don’t know.
Our Friend Claude Summarizes the Alexander–DeBoer Debate
Scott Alexander defends the effective altruism (EA) movement, arguing that despite its flaws and controversies, it has achieved significant positive impact like saving hundreds of thousands of lives through global health initiatives, improving animal welfare standards, and advancing AI safety research. He portrays EA as a rare movement that genuinely cares about important causes neglected by most.
Freddie deBoer, on the other hand, criticizes EA as more of a branding exercise and social scene than a coherent ethical philosophy. He argues that EA's core ideas around doing good effectively are banal beliefs shared by anyone trying to act charitably. DeBoer contends that EA's distinctiveness comes from embracing controversial and abstract beliefs like longtermism and reducing animal suffering through extreme measures, which he finds questionable. He suggests keeping EA's practical charitable work while discarding the philosophical baggage that enables grift and attracts unsavory figures like Sam Bankman-Fried.
Alexander responds that even if EA's central tenet of doing good seems obvious, few people rigorously apply it through analysis, cause prioritization, and tangible actions like donating significantly. He portrays EA as providing valuable social structures and commitment mechanisms to turn good intentions into impact. The debate centers on whether EA's philosophical underpinnings are vital for its successes or unnecessary baggage prone to going awry.