Applying for Grad School

Q&A Panel

Looking for help applying to graduate studies in AI Safety?

Join our event to hear from current and recent PhD students as they share their experiences navigating the application process and AI Safety research in academia.

We’re happy to be joined by panellists from MIT, Berkeley, Oxford, and the Complutense University of Madrid. You can submit questions ahead of time and vote up questions others have submitted using the following link:

sli.do/gradsafety

The final 30 minutes of the event will consist of icebreakers so you can debrief with other attendees and connect with fellow applicants! Additionally, after you RSVP, we’ll invite current applicants to sign up for our Matching Program. We’ll pair you with two other applicants so you can form connections and get extra feedback on your application materials.

The event will be recorded and made available afterwards on this page. If you're unable to attend, you can still submit questions!

Please RSVP to the event so we can keep you informed. If you have any queries about the event itself, feel free to email Frances at frances@aisafetysupport.org.

Sunday, 7 November, 6pm UTC

Panellists

Rachel Freedman

Berkeley PhD candidate

Rachel is a PhD student at the University of California, Berkeley, where she is supervised by Stuart Russell and is a PhD affiliate at the Center for Human-Compatible Artificial Intelligence. Her research concerns reinforcement learning, reward modeling, and model misspecification. Before Berkeley, Rachel completed her undergraduate degree in Artificial Intelligence Systems at Duke University, studying computer science, neuroscience, and philosophy at Duke, UNC Chapel Hill, and, as a visiting student, Oxford University. Feel free to read more about Rachel at her website.

Pablo Moreno

Complutense University of Madrid PhD

Pablo completed his BSc in Physics at the University of Extremadura in Spain and was then accepted into the MSc in Mathematical Physics at Oxford, where he discovered Effective Altruism about six months before finishing his master's. He then decided to move back to Spain to complete a PhD in quantum computing because:

  • It seemed like a smart career move: this was 2018, a few months before Google would demonstrate “quantum supremacy”, so it was a good time to enter the field.

  • Quantum computing seemed like the closest thing in physics to AI Safety. He thinks this was probably right, but feels it has not turned out to be very useful anyway.

  • His girlfriend is in Spain, and he enjoys her company.

In Madrid, Pablo joined the local EA community and started trying to do things that would put him in a better position to work in AI Safety. In particular, he attended the 3rd AI Safety Camp (on JJ’s team!), EAG London 2019, an AI Safety conference in Prague, and the AISRP. He also worked on something at the intersection of quantum computing and adversarial examples; although he got a publication out of it, he notes he is not particularly proud of it. Finally, this past summer Pablo did an internship with José Hernández-Orallo, who, to the best of Pablo's knowledge, is the only professor in Spain who does some safety work. His current objective is to find a postdoc in something related to AI Safety.

Lewis Hammond

Oxford PhD candidate

Lewis Hammond is based at the University of Oxford, where he is a DPhil candidate in Computer Science and a DPhil Affiliate at the Future of Humanity Institute. His research concerns safety, control, and incentives in multi-agent systems and spans game theory, formal methods, and machine learning. He is primarily motivated by the problem of ensuring that AI and other powerful technologies are developed and governed safely and responsibly, so as to benefit the whole of humanity, including its future generations. Before coming to Oxford, he obtained a BSc (Hons) in Mathematics and Philosophy from the University of Warwick, and an MSc in Artificial Intelligence from the University of Edinburgh. Feel free to read more about Lewis at his website.

Stephen Casper

MIT PhD candidate

Stephen “Cas” Casper is a first-year PhD student at MIT in Computer Science (EECS), supervised by Dylan Hadfield-Menell. He previously completed his undergraduate degree at Harvard and has worked with the Kreiman Lab at Harvard and the Center for Human-Compatible AI. His main research interests include interpretability, adversaries, robust reinforcement learning, and decision theory. Feel free to read more about him at his website.