
EA research topics

Minor update 14 July 2014, last major update May 2014

Recently, a few people have asked me about what kinds of research would be valuable to the EA community, especially from a long-run perspective. I’m posting some off-the-cuff thoughts on this below. It reflects the issues that I have some personal knowledge about, and doesn’t attempt to be comprehensive. For instance, there are lots of interesting questions about marketing effective altruism that I don’t say anything about here, and there are many more mainstream causes and questions that I haven’t listed here. It’s also more of a summary of what I think than an explanation of why I think it. Please treat it as food for thought rather than an attempt at an authoritative list of most important questions ever. A lot of it is focused on issues that I think might be reasonably tractable for someone who is likely to be reading this post.

Examples of EA-relevant research I know something about and would welcome

One large area of low-hanging fruit is overviews of promising career areas that are frequently discussed in EA circles, such as these. I can imagine an output that is a cross between the US BLS Occupational Outlook Handbook, the overviews that Cognito Mentoring has done, and the information that 80,000 Hours has put in the post linked above. I can imagine a research process, building on existing materials and interviews with experts in these areas, that could create accessible summaries of promising career areas that are significantly better for the target audience than anything currently available.

I’d like to see GiveWell-style shallow investigations of work in the following areas: promoting “evidence-based policy”; promoting the use of cost-effectiveness research in government decision-making; changing democratic institutions to give more weight to the voices of future generations; increasing the transparency of foundations; global catastrophic risks (GCRs) from bioengineering/synthetic biology; nanotechnology; emerging technology governance; and iterated embryo selection. Some of these might be clusters of causes rather than individual causes. I’d be particularly interested in specific proposals about what governments and other organizations could be doing to manage emerging technologies that might be GCR-relevant; I think we’re really light on those, especially for AI.

I’d welcome explorations of the following questions:

  1. When have people tried to think about the distant future? When has it paid off? When has it not paid off?
  2. In the last century, how has (social impact/money spent) been changing in the smartest major foundations? E.g., would the world be better or worse today if the Rockefeller Foundation had invested everything and given it to the Gates Foundation in 2000? It’s striking to me that extant discussions of giving now vs. giving later haven’t done more back-testing of this kind. (For a toy version of this comparison, see the first sketch after this list.)
  3. What events (i) happened long ago, (ii) could have happened differently, and (iii) would have changed the course of history if they had happened differently? (With an answer in hand, we could start thinking more about what made these events go well or badly, and what might have made them go better (without the aid of hindsight).)
  4. What literatures in economics are relevant to the long-run consequences of increasing the rate of innovation? What do these literatures say about this issue? (I’m imagining one project which would involve just identifying a few of the most relevant literatures, and other projects which would investigate them individually.)
  5. How bad does a catastrophe have to be before it is especially likely to change long-run outcomes for civilization? What would it take to cause a collapse of industrial and other social infrastructure? E.g., what would it take to wipe out electric grids for years or decades? And how hard would it be to recover from such a collapse? Is it helpful to treat questions like these in a bimodal way?
  6. What is the intellectual history of the idea that we should focus on systemic change rather than smaller issues? What historical evidence is there, and what arguments have been made?
  7. If you want to influence transformational developments in artificial intelligence or bioengineering, where should you be working today?
  8. What effects on society might be more important from a long-run perspective than from a more ordinary short-run perspective?
  9. Apart from building refuges, what could be done to make the world more likely to recover in the event of a global catastrophe? (Here I mean to consider global catastrophes in general, rather than responses to specific catastrophes; e.g., developing vaccines that could be deployed rapidly to decrease damage from an extreme pandemic would not count.)
  10. Is whole brain emulation going to happen eventually unless something weird happens? Here I’d be looking for a review of informed opinion—probably involving on-the-record conversations with especially informed people, including those most likely to be skeptical—rather than detailed arguments alone.
  11. Is Drexlerian molecular manufacturing going to happen eventually unless something weird happens? Here I’d be looking for a review of informed opinion—probably involving on-the-record conversations with especially informed people, including those most likely to be skeptical—rather than detailed arguments alone.
  12. What do currently known approaches to decision-making under moral uncertainty imply about the case for the overwhelming importance of shaping the far future? (For a toy version of one such approach, see the second sketch after this list.)
  13. What do we know about tail risk from climate change? I'd be interested in seeing someone summarize the literature around Weitzman and interview a few of the main players there, with an emphasis on potential damage from which humanity may never recover.
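As a toy illustration of the back-testing in question 2, here is a minimal sketch comparing giving an endowment away immediately with investing it and giving later. Every number in it (the 5% real return, the 3% annual decline in impact per dollar, the 80-year horizon, the $100M endowment) is a made-up placeholder, not a historical estimate; a real back-test would replace these with data on actual foundation returns and actual changes in the cost-effectiveness of the best available opportunities.

```python
# Toy back-test of give-now vs. give-later (question 2).
# All parameters below are illustrative assumptions, not historical data.

def future_value(principal, annual_return, years):
    """Value of an invested endowment after compounding."""
    return principal * (1 + annual_return) ** years

def impact(dollars, cost_effectiveness):
    """Units of good done: dollars spent times impact per dollar."""
    return dollars * cost_effectiveness

# Hypothetical scenario: $100M in 1920, a 5%/yr real return, and
# cost-effectiveness (impact per dollar) declining 3%/yr as the
# easiest opportunities get used up.
principal = 100e6
years = 80                       # 1920 -> 2000
real_return = 0.05               # assumed real investment return
ce_1920 = 1.0                    # normalize impact per dollar in 1920
ce_decay = 0.03                  # assumed annual decline in cost-effectiveness
ce_2000 = ce_1920 * (1 - ce_decay) ** years

give_now = impact(principal, ce_1920)
give_later = impact(future_value(principal, real_return, years), ce_2000)

print(f"give in 1920:           {give_now:.3g} units of impact")
print(f"invest, give in 2000:   {give_later:.3g} units of impact")
# Under these assumptions giving later wins (roughly 4x), because 5%
# growth outpaces a 3% decline in cost-effectiveness. The interesting
# empirical question is which effect actually dominated historically.
```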
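And as a toy illustration for question 12: one prominent approach in that literature is maximizing expected choiceworthiness, i.e., weighting each moral theory’s verdict by your credence in that theory. The sketch below uses entirely made-up credences and choiceworthiness numbers, and it assumes the theories can be put on a common cardinal scale, which is itself a strong and contested assumption. It shows how such approaches can favor far-future-focused actions even at low credence in the relevant theory.

```python
# Toy expected-choiceworthiness calculation (question 12).
# All credences and choiceworthiness values are made up for illustration.

# Hypothetical credences in two moral theories.
credences = {
    "far_future_matters_overwhelmingly": 0.1,
    "only_present_generation_matters": 0.9,
}

# Choiceworthiness of two actions under each theory, on a common
# cardinal scale (a strong assumption doing most of the work here).
choiceworthiness = {
    "reduce_extinction_risk": {
        "far_future_matters_overwhelmingly": 1_000_000,
        "only_present_generation_matters": 1,
    },
    "fund_proven_health_charity": {
        "far_future_matters_overwhelmingly": 10,
        "only_present_generation_matters": 100,
    },
}

def expected_choiceworthiness(action):
    """Sum over theories of credence times choiceworthiness."""
    return sum(
        credences[theory] * value
        for theory, value in choiceworthiness[action].items()
    )

for action in choiceworthiness:
    print(f"{action}: {expected_choiceworthiness(action):,.1f}")
# Even a 10% credence in the far-future theory dominates here because
# of the huge stakes it assigns, which is exactly why the
# intertheoretic-comparison step deserves scrutiny.
```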

If you decide to do a detailed investigation of any of these issues, I’d be interested to hear about it because I’m likely to take some of them up myself.

General features of research I do and don’t favor now

I think most highly abstract philosophical research is unlikely to justify making different decisions. For example, I am skeptical of the “EA upside” of most philosophical work on decision theory, anthropics, normative ethics, disagreement, epistemology, the Fermi paradox, and animal consciousness—despite the fact that I’ve done a decent amount of work in the first few categories. If someone were going to do work in these areas, I’d probably be most interested in seeing a very thorough review of the Fermi paradox, and second most interested in a detailed critique of arguments for the overwhelming importance of the very long-term future.

I’m also skeptical of developing frameworks for making comparisons across causes right now. Rather than, e.g., trying to come up with some way of trading off IQ increases per person against GDP per capita increases, I would favor learning more about how we could increase IQ and how we could increase GDP per capita. There are some exceptions to this; e.g., I can see how someone could make a detailed argument that, from a long-run perspective, human interests are much more instrumentally important than animal interests. But, for the most part, I think it makes more sense to get information about promising causes now and do this kind of analysis later. Likewise, rather than developing frameworks for choosing between career areas, I’d like to see people just gather information about career paths that look particularly promising at the moment.

Other things being equal, I strongly prefer research that involves less guesswork. This is less because I’m on board with the stuff Holden Karnofsky has said about expected value calculations—though I agree with much of it—and more because I believe we’re in the early days of effective altruism research, and most of our work will be valuable in service of future work. It is therefore important that we do our research in a way that makes it possible for others to build on it later. So far, my experience has been that it’s really hard to build on guesswork. I have much less objection to analysis that involves guesswork if I can be confident that the parts of the analysis that involve guesswork factor in the opinions of the people who are most likely to be informed on the issues.
