Minor update 14 July 2014; last major update May 2014.

Recently, a few people have asked me about what kinds of research would be valuable to the EA community, especially from a long-run perspective. I'm posting some off-the-cuff thoughts on this below. It reflects the issues that I have some personal knowledge about, and doesn't attempt to be comprehensive. For instance, there are lots of interesting questions about marketing effective altruism that I don't say anything about here, and there are many more mainstream causes and questions that I haven't listed. It's also more of a summary of what I think than an explanation of why I think it. Please treat it as food for thought rather than an attempt at an authoritative list of the most important questions ever. A lot of it is focused on issues that I think might be reasonably tractable for someone who is likely to be reading this post.

Examples of EA-relevant research I know something about and would welcome

One large area of low-hanging fruit is overviews of promising career areas that are frequently discussed in EA circles, such as these. I can imagine an output that is a cross between the US BLS Occupational Outlook Handbook, the overviews that Cognito Mentoring has done, and the information that 80,000 Hours has put in the post linked above. I can imagine a research process that builds on existing materials and interviews with experts in these areas, and which could create accessible summaries of promising career areas that are significantly better for the target audience than anything currently available.
I'd like to see GiveWell-style shallow investigations of work in the following areas:

- promoting "evidence-based policy"
- promoting the use of cost-effectiveness research in government decision-making
- changing democratic institutions to give more weight to the voices of future generations
- increasing transparency of foundations
- global catastrophic risks (GCRs) from bioengineering/synthetic biology
- nanotechnology
- emerging technology governance
- iterated embryo selection

Some of these might be clusters of causes rather than individual causes. I'd be particularly interested in specific proposals about what governments and other organizations could be doing to manage emerging technologies that might be GCR-relevant; I think we're really light on those, especially for AI. I'd welcome explorations of the following questions:
If you decide to do a detailed investigation of any of these issues, I'd be interested to hear about it, because I'm likely to take some of them up myself.

General features of research I do and don't favor now

I think most highly abstract philosophical research is unlikely to justify making different decisions. For example, I am skeptical of the "EA upside" of most philosophical work on decision theory, anthropics, normative ethics, disagreement, epistemology, the Fermi paradox, and animal consciousness—despite the fact that I've done a decent amount of work in the first few categories. If someone were going to do work in these areas, I'd probably be most interested in seeing a very thorough review of the Fermi paradox, and second most interested in a detailed critique of arguments for the overwhelming importance of the very long-term future. I'm also skeptical of developing frameworks for making comparisons across causes right now. Rather than, e.g., trying to come up with some way of trading off IQ increases per person against GDP per capita increases, I would favor learning more about how we could increase IQ and how we could increase GDP per capita. There are some exceptions to this; e.g., I can see how someone could make a detailed argument that, from a long-run perspective, human interests are much more instrumentally important than animal interests. But, for the most part, I think it makes more sense to gather information about promising causes now and do this kind of analysis later. Likewise, rather than developing frameworks for choosing between career areas, I'd like to see people just gather information about career paths that look particularly promising at the moment. Other things being equal, I strongly prefer research that involves less guesswork.
This is less because I'm on board with the stuff Holden Karnofsky has said about expected value calculations—though I agree with much of it—and more because I believe we're in the early days of effective altruism research, and most of our work will be valuable in service of future work. It is therefore important that we do our research in a way that makes it possible for others to build on it later. So far, my experience has been that it's really hard to build on guesswork. I have much less objection to analysis that involves guesswork if I can be confident that the parts of the analysis that involve guesswork factor in the opinions of the people who are most likely to be informed on the issues.