Welcome to the EA Conference Reading List!

This list is meant to help introduce you to a variety of key ideas in effective altruism before you attend a major event in person. You don't have to read through it all; feel free to look at whatever seems most interesting.

If you have any questions about this material, or aren't sure whether to apply to an upcoming conference, we recommend reaching out to the organizer(s) of your local group. You can find contact information for EA groups on EA Hub.

Readings we strongly recommend

If you’ve read any of these before, no need to read them again. But if you haven’t, we strongly recommend them as a way to get a solid foundation before the event.

CEA's introduction to effective altruism quickly outlines basic concepts and prominent cause areas within the movement.

The effectivealtruism.org "Impact" page shows some of the ways in which EA-aligned people and projects have changed the world. (This is a good page to skim through, but you don't need to follow any particular link unless it sounds interesting.)

The 80,000 Hours "Key Ideas" page covers other important concepts and explains how they can help you think about your career.

Effective Altruism is a Question, not an Ideology (Helen Toner)

This classic post showcases the importance of independent thought to EA work, pointing out that our best ideas about how to help others are always subject to change if we find better ideas. At an EA conference, it's good to pay close attention to what you hear from experts — but also to remember that each talk represents just one of many perspectives on how best to do good.

You have more than one goal, and that's fine (Julia Wise)

Sometimes, people who discover effective altruism try to apply this mindset to many areas of their life, leading to difficult trade-offs between (for example) helping others and caring for themselves. This essay explores how we can pursue a variety of goals without feeling pressured to maximize the social impact of every action.

Optional readings you might enjoy

These are a few examples of interesting writing about important concepts. There are dozens of others we could have recommended, too. If you want to read more, check out the EA Forum!

Career choice

Any other sections of the 80,000 Hours "Key Ideas" page

Some promising career ideas beyond 80,000 Hours' priority paths (Arden Koehler)

Something of an addendum to the Key Ideas page, this list from an 80,000 Hours staffer covers ideas that the organization hasn't researched as closely — but which they believe could still be worth pursuing for the right person.

Why and how to start a for-profit company serving emerging markets (Ben Kuhn)

Information security careers for global catastrophic risk reduction (Claire Zabel and Luke Muehlhauser)

Outside of 80,000 Hours, many other people have investigated career options they thought could be highly impactful. The above are two examples of such investigations, though there are many others we could have chosen; we don't mean to specifically endorse these career options.

For more posts along these lines, check out the EA Forum's "Career Choice" tag.

Inspiration: what motivates effective altruism

500 million, and not a single one more (Jai Dhyani)

One of history's deadliest killers was trapped and destroyed through a remarkable feat of global cooperation. This is the story of that killer, and the heroes who put it down for good.

No one is a statistic (Julia Wise)

Some critics argue that EA tends to favor "statistics" while ignoring the nuanced stories of individuals in need. This post shows how someone involved in EA might respond to such claims — by pointing out that "statistics" represent real people whose lives matter, even if we don't know their names.

The world is much better; the world is awful; the world can be much better (Max Roser)

This post uses global health statistics to showcase three ideas which, collectively, motivate many people to take altruistic action:

  1. We look at the world, and see that it needs to be changed.
  2. We look at the past, and understand that change is possible because it has already happened.
  3. We imagine what a better future could look like — and start working to create it.

Cause areas and charities

As with the rest of this reading list, this section isn’t meant to be comprehensive. These are just a few examples of people discussing and comparing different ways to help others.

GiveWell's intervention reports

GiveWell examines interventions in global health and development, then uses rigorous evaluation criteria to find and recommend a tiny number of charities that work on those interventions.

Open Philanthropy's focus areas

Open Philanthropy aims to make high-impact grants across a much wider range of causes than GiveWell. This page covers their work on areas ranging from criminal justice reform to pandemic preparedness.

Animal Charity Evaluators' charity reviews

ACE aims to find and promote the most effective ways to help animals. These reviews cover the mission, plans, strengths, and weaknesses of each charity they recommend.

The case for taking AI seriously as a threat to humanity (Kelsey Piper)

There's no fire alarm for artificial general intelligence (Eliezer Yudkowsky)

Ben Garfinkel on scrutinizing classic AI risk arguments (podcast and transcript)

These three posts discuss artificial intelligence alignment — a research field that exists largely because of the EA movement, and that seeks to address what many see as among the most dangerous global catastrophic risks.

  • Piper outlines the basic case for why advanced AI could threaten human civilization.
  • Yudkowsky explains why the threat could creep up on us, arriving sooner than we might expect.
  • Garfinkel pushes back somewhat, arguing that "classic" arguments for AI risk haven't been subject to enough scrutiny or criticism, and that EA may be too focused on this area relative to others.

Reducing global catastrophic biological risks (Gregory Lewis)

Pandemics have been wreaking havoc on civilization for thousands of years. A sufficiently deadly pandemic could even reduce humanity's long-term potential (for example, by damaging our civilization such that recovery becomes very difficult). And as more people gain access to advanced biotechnology, it becomes more likely that some will try to develop new or modified pathogens (for use in war, terrorism, etc.).

The above profile from 80,000 Hours collects many important details about this area, and about how one might prepare for a career in biosecurity.

Growth and the case against randomista development (Hauke Hillebrandt and John Halstead)

One of the EA Forum's most popular posts of all time! Hillebrandt and Halstead argue that EA hasn't taken economic growth seriously enough as a cause area. They also claim that we focus too closely on certain types of projects for which it's easier to gather evidence — but at the cost of having less of an impact.

Key ideas in longtermism and cause prioritization

Ways in which cost-effectiveness estimates can be misleading (Saulius Šimčikas)

Cost-effectiveness estimates are a key tool for evaluating the expected impact of working on a cause, supporting a charity, etc. But while they are very valuable, they can also lead us astray if we aren’t careful. This post outlines things to watch out for when estimating cost-effectiveness.

Are we living at the most influential time in history? (Will MacAskill)

Whether we live at the most influential time in history (or at least an unusually influential time) has enormous implications for how we (as individuals, a movement, or a species) should use resources and set priorities.

A brief summary wouldn’t do this post justice; the comments alone could inspire several new philosophy papers.

The research agenda of the Global Priorities Institute (Hilary Greaves et al.)

"The central focus of GPI is what we call ‘global priorities research’: research into issues that arise in response to the question, ‘What should we do with a given amount of limited resources if our aim is to do the most good?’"

This agenda covers topics related to improving the long-term prospects of our civilization. It’s very dense (reader beware!), but it may be the single best resource for understanding longtermist thinking.

For a shorter and more general introduction to these ideas, try Toby Ord’s talk from EA Global: London 2019.