I often hear people concerned about existential risks from emerging technology say things like, “I wouldn’t want to work in AI, synthetic biology, or whole brain emulation because it would advance risky technologies, and it would be better if we had more time to prepare before they arrive.” I'm unsure whether advancing the pace of progress in these fields is actually negative, but I would reject this argument even if I believed it was.

In fact, I am generally excited about the idea of people concerned about existential risks from emerging technology working in these fields, and I would feel this way even if I were more worried about speeding up their progress. Without arguing for the claim here, my intuition is that any negative effects from speeding up technological development in these areas are likely to be small compared with the positive effects of putting people in place who might be able to influence the technical and social context in which these technologies develop. For example, I believe the effect of Stuart Russell being concerned about existential risks from AI is much more important than any negative effect he has had by speeding up progress in AI.

For people with a strong skill set in one of these areas, I think working in the field is an option that deserves closer investigation. I would very much welcome someone investigating these issues and sharing their conclusions.