About this talk
In what sense does EA need to think about "doing good together"? I will disentangle two claims in this vicinity. The collectivist claim is that we must think irreducibly about groups as agents, rather than only about individual agents. Quite different is the coordination claim: that some of the best ways of doing good involve deliberate coordination between multiple agents. EA has often been criticised by collectivists for neglecting group agents. Indeed, it has often been suggested that this neglect makes it impossible in principle for EA to recognise some of the most important ways of doing good. I will argue that this is a mistake. We need coordination, but there is nothing fundamentally missing from a picture that recognises only individual agents.