AI Safety Support

The Mission

The ultimate mission is to help the world. More specifically, this initiative aims to reduce existential risk from advanced AI by providing support for anyone trying to work on this problem. Most specifically, we focus on helping early-career and aspiring AI Safety researchers, because we think supporting this group is a neglected problem.

Current Projects

We are a small and flexible organization, which means our plans sometimes change quickly, so don't treat this list as definitive. But at some point in time, these were more or less our plans and ongoing projects. (Last updated 2020-10-05)

  • AI Safety Career Bottlenecks Survey (Paused)

  • Planning and running events (Active)

  • Better understand the concept of research collaborations: how can this concept be broken down, and what are people really looking for when they say they want collaborators? (Future project)

  • Find AI Safety friendly PhD programs and supervisors. "AI Safety friendly" means open to supervising AI Safety research, even if they are not currently doing this type of research themselves. (Active)

  • Setting up a non-profit, so that we can get fiscal sponsorship, receive donations, and get paid. Most projects are on hold until we have sorted this out. (Active)

  • Mentorship program (Active)

Team

We are a new initiative with a growing team. Contact us if you want to join.

JJ Hepburn

Contact: You can reach me through email, Twitter, or Calendly

Currently in my first year of a Master of Computing (Advanced) with an Artificial Intelligence specialisation at The Australian National University, I am working towards a research career in technical AI Safety. I am passionate about working on AI Safety and enabling others to do so as well. In 2019 I was a participant in the third AI Safety Camp, where I built some great relationships. Following this I helped organise the fourth AI Safety Camp, which was a challenging but rewarding experience. This has inspired me to continue working to help others contribute to the field.

Linda Linsefors

Contact: You can reach me through email, Messenger, or Calendly

I learned about Effective Altruism in 2015, around the time I was finishing my physics PhD and looking for what to do next. I was soon fascinated by the AI Safety problem. However, I found that getting started on my own was hard, so in November 2017 I found some like-minded friends, and over the next few months we developed the concept of AI Safety Camp and ran the first camp. During the summer of 2018, I was a research intern at MIRI. I spent most of 2019 at the EA Hotel, where I explored various career alternatives and got some much-needed time and help for personal growth. During the summer of 2019 I started organising again, and this has been my main focus ever since. I expect to get back to research eventually, but currently I believe I can do more good in this role.