Applying for Grad School
Questions on Applying to Grad School
(Note: The following questions and answers are from the transcript of our Q&A panel, held on November 7, 2021. They have been edited for brevity. A recording of the event and info on the panelists can be found in the following sections.)
How do you choose supervisors, if your university doesn't have anyone directly interested in AI safety/AI alignment?
How hard is it to get a useful job in AI safety after graduating from a related master's programme?
Instead of working on AI safety, could pursuing an ML or AI PhD with the intention of earning-to-give be effective?
Given how competitive admissions are, is it worth trying to get into a top programme? If you don't get in, is it worth waiting and reapplying?
For undergrad research, would you prioritize working on problems tied to AI safety, or working with good advisors on topics you can make the most progress on?
When approaching supervisors, do you recommend openly stating an interest in AI safety/alignment? What if they’ve never stated an interest in the topic?
What criteria for choosing a school did you not think of, but should have?
If I don't have publications, what could I have/do instead to improve my application?
How do you self-study math and other topics well enough to do productive work at CHAI? And what, specifically, is CHAI working on?
How can you pitch a non-standard background in your application?
Rachel [Note: you can review Rachel’s educational background in her biography below]
How much AI safety work is philosophical vs. mathematical vs. computational? (Would it make sense to specialize in philosophy, e.g. decision theory?)
Berkeley PhD candidate
Rachel is a PhD student at the University of California, Berkeley, where she is supervised by Stuart Russell, and a PhD affiliate at the Center for Human-Compatible Artificial Intelligence. Her research concerns reinforcement learning, reward modeling, and model misspecification. Before Berkeley, Rachel completed her undergraduate degree in Artificial Intelligence Systems at Duke University, studying computer science, neuroscience, and philosophy at Duke, UNC Chapel Hill, and (as a visiting student) Oxford University. Feel free to read more about Rachel at her website.
Complutense University of Madrid PhD
Pablo completed his BSc in Physics at the University of Extremadura in Spain and was then accepted into the MSc in Mathematical Physics at Oxford. There, about six months before finishing his master's, he discovered Effective Altruism. He then decided to move back to Spain to complete a PhD in quantum computing because:
It seemed like a smart career move. This was 2018, a few months before Google would demonstrate "quantum supremacy", so it looked like a good time to enter the field.
It seemed like the closest thing in physics to AI safety. In hindsight he thinks this was probably right, but that it was not very useful anyway.
His girlfriend lives in Spain, and he wanted to be with her.
In Madrid, Pablo joined the local EA community and started doing things that would put him in a better position to work in AI safety. In particular, he attended the 3rd AI Safety Camp (on JJ’s team!), EAG London 2019, an AI safety conference in Prague, and the AISRP. Pablo also worked on something at the intersection of quantum computing and adversarial examples; although he got a publication out of it, he notes he is not particularly proud of it. Finally, this past summer Pablo did an internship with José Hernández-Orallo who, to the best of his knowledge, is the only professor in Spain doing safety work. Pablo's current objective is to get a postdoc in something related to AI safety.
Oxford PhD candidate
Lewis Hammond is based at the University of Oxford where he is a DPhil candidate in Computer Science and a DPhil Affiliate at the Future of Humanity Institute. His research concerns safety, control, and incentives in multi-agent systems and spans game theory, formal methods, and machine learning. He is primarily motivated by the problem of ensuring that AI and other powerful technologies are developed and governed safely and responsibly, so as to benefit the whole of humanity, including its future generations. Before coming to Oxford he obtained a BSc (Hons) in Mathematics and Philosophy from the University of Warwick, and an MSc in Artificial Intelligence from the University of Edinburgh. Feel free to read more about Lewis at his website.
MIT PhD candidate
Stephen “Cas” Casper is a first-year PhD student in Computer Science (EECS) at MIT, supervised by Dylan Hadfield-Menell. He previously completed his undergraduate degree at Harvard and has worked with the Harvard Kreiman Lab and the Center for Human-Compatible AI. His main research interests include interpretability, adversaries, robust reinforcement learning, and decision theory. Feel free to read more about him at his website.