About this talk
The development of strong AI could be pivotal for the future. This has led many within EA to think it important to try to ensure good outcomes. But we face large uncertainties about how and when AI will be developed.
This talk will explore this perspective, asking: (i) What portfolio of work should we be pursuing to take the low-hanging fruit across all plausible scenarios? (ii) What types of work not ostensibly focused on AI are particularly important, even if the argument for the importance of AI is correct?