Effective Altruism at University
Craig Shearer - 18 March 2024
With the encouragement of Mark and Bronwyn from the NZ Skeptics committee, I attended a meeting about Effective Altruism at Rationalist House in Auckland on the evening of 13th March.
The meeting was a little different from my expectations. I guess I was expecting a formal presentation about Effective Altruism, but this meeting was not that.
The meeting was run by the EA Club from the University of Auckland. The attendees seemed to be a mixture of students, and some older people, presumably from the Rationalists. I saw a few familiar faces, but most of the people there were new to me, and a lot younger.
Before the meeting started, I got into conversation with a couple of the younger members of the audience, mentioning that I was from NZ Skeptics. This was met with blank stares - they didn't know of the Skeptics or what we stand for. I guess we're a fairly small organisation, and getting traction in people's minds is quite difficult.
Effective Altruism, usually referred to as EA, is a global movement that, according to their website, is about “doing good better”. I’ve been aware of the term since I attended an NZ Skeptics conference in Queenstown in 2016, where Catherine Low gave a talk about it. Indeed, our friend Susan Gerbic, who also attended the conference as a speaker, did a nice writeup of it in Skeptical Inquirer.
At that conference, Catherine's talk explained the basics of EA and why it's important to analyse how effectively the money given to charitable organisations is used. The talk included a game in which the audience was broken into teams and we voted on how to allocate money to various charities, both before and after hearing about EA. At the end, money from EA advocates was apportioned to the chosen charities.
This is all well and good, and at the conference I was pleased to discover that one of the charities that I donate money to - the Fred Hollows Foundation, which is focussed on providing sight-restoring operations for people in the Pacific Islands - rated very well. They can save the sight of a person for the cost of about $25. I continue to contribute monthly to this worthy cause.
The work of the Fred Hollows Foundation - training doctors and nurses
Of course, it is super-important to check out the charities that you’re donating money to. Historically, many charitable organisations have spent quite a high proportion of their donations on ineffective work, or on administration and empire building.
There are also out-and-out scam organisations which unfortunately rake in large amounts of money, mostly to line the pockets of their founders and organisers. Many of these have names similar to reputable, well-run organisations, trading on their good names to dupe people into donating.
I’m hoping we have very few of those in New Zealand, but I was reminded of this aspect of giving when I read that Lara Trump, one of former president Donald Trump’s daughters-in-law, was linked with a dog rescue charity that has spent nearly $2M on hosting events at Trump’s Mar-a-Lago property.
The landing page of the Effective Altruism organisation’s website lists some of the core ideas of EA:
- It’s critical to pay attention to how many people are affected by a particular intervention. When we do this, it seems that some ways of doing good are over 100 times more effective than others.
- We should focus on problems that are important, neglected, and tractable.
- We can do a lot of good with our careers.
- New technologies might threaten to wipe out life on earth, and reducing this risk might be a key priority.
- Some sentient beings are ignored because they don’t look like us or are far away.
In recent years, EA appears to have been hijacked by some very weird ideas, and some of those have echoes in the list above.
EA advocates have now become associated with the concepts of Existential Risk and Longtermism. This Vox article has a good overview of what Longtermism is all about; it talks about these concepts becoming mainstream in the EA community.
One of EA’s founding principles is that all lives are equally valuable - well, human lives, at least. The lives of people suffering poverty in places we’ve never visited are of equal value to the lives of our family, friends, and neighbours. As such, it makes sense to donate money that alleviates suffering no matter where it’s happening. But some researchers in the EA field have now concluded that the best way to help the most people is to concentrate on work that promotes the future of humanity and has the best outcome for future people yet to be born! That, too, is rooted in the principle that all lives have equal value. As the Vox article puts it:
“After all, if all lives are equally valuable no matter where they are, that can also extend to when they are.”
Thus, EA is now concentrating on work that ensures humanity’s long-term survival.
Reportedly, such ideas are big in Silicon Valley. Elon Musk is keen on SpaceX sending people to Mars to provide a second home for humanity and ensure the continuation of human consciousness, even if Earth is destroyed or becomes uninhabitable. (In my opinion, and that of many others, this is a pipe dream, but billionaires like to have their hobby horses to spend money on!) Musk is also big on the idea that artificial intelligence run amok could destroy humanity. AI Governance is therefore another big idea in the EA community.
Coming back to the bullet-point list of ideas from the EA website, those Longtermism and Existential Risk ideas are expressed there:
“It’s critical to pay attention to how many people are affected by a particular intervention.”
“New technologies might threaten to wipe out life on earth, and reducing this risk might be a key priority.”
There’s also the point that “we can do a lot of good with our careers”. The EA website links to a site called 80000hours.org - alluding to the fact that, on average, our careers are about 80,000 hours long. They have a list of global problems, ranked by the impact they see of an additional person working on them, and right at the top of the list is “Risks from artificial intelligence”.
So, has this thinking infected the EA club at the University of Auckland? It would seem so. The discussion at the evening I attended talked about how students could leverage EA as part of their careers, and it seemed to be coming from the perspective of EA projects looking good on your CV, rather than doing good for its own sake. Still, having been a student myself many decades ago, I guess I can have some sympathy for this view at that particular time of life.
There was also mention of some students having been funded to attend seminars and conferences overseas that focused on existential risk and AI governance.
One intriguing and, for me, slightly unsettling aspect of the evening was an interaction I had with another attendee.
At the start of the session, the organiser ran an exercise that I think was intended as an ice-breaker. This involved talking to another person about our values and our earliest memory. Then we swapped around and found another person to talk to, which paired me with an older guy who, it turned out, was rather cynical about whether giving money to people was a good idea.
The ice-breaker didn’t seem to have much actual relevance to the topic, but then the presentation and discussion started.
After the meeting, it was time for some snacks and social interaction, and at that point I ended up in a discussion with the cynical guy I mentioned above, after overhearing him complaining to other attendees that AI (of the LLM ChatGPT variety) was “too woke”.
I joined the conversation, asked him what he meant by “woke”, and stated that I consider myself to be woke. I don’t see the term as pejorative.
This led on to a rather ugly discussion about what it meant to be “woke”, and he revealed all sorts of prejudiced and, in my opinion, misplaced ideas about people. I don’t want to “straw man” his arguments; it was only a conversation, so I have no record of what was actually said. But some of his points left me speechless. I don’t think I’m that good at expressing myself in confrontational situations, and am more prone to l’esprit de l’escalier - replaying the conversation in my head afterwards and coming up with perfect responses.
Anyway, his arguments related to DEI initiatives, whether there was such a thing as privilege (according to him, there wasn’t), and the cause of Maori and Pacific Island peoples’ worse life outcomes. According to him, it’s all down to culture: people don’t do well in life because their culture doesn’t encourage it, health outcomes are worse because “they” don’t prioritise eating well and exercising, and people from his culture (white, middle-aged, and English) shouldn’t be silenced because of this (they’re not!). He also made the observation that people didn’t like him talking about this, and that it was “seen as racist”. Hmmm… I’d agree with that observation!
Anyway, my thought on all this is that organisations such as our own, and the Rationalists, often seem to give cover to people who feel their thinking is uber-rational and based on empirical, indisputable facts - which frequently leads to simplistic and overconfident ideas about how the world works. As skeptics, we need to do better: dig deeper into the research, and be especially prepared to consider and appropriately weigh ideas that conflict with our own beliefs. And, of course, we should give much more weight to experts who’ve actually studied the problems properly.
My feelings about EA as a result of the meeting haven’t changed much. I don’t think I’ll be rushing to change how I allocate the money I give to worthy causes, to prioritise preventing the end of humanity or sending people to Mars!