Applying for Grad School

Q&A Panel

Questions on Applying to Grad School

(Note: The following questions and answers are from the transcript of our Q&A panel, held on November 7, 2021. They have been edited for brevity. A recording of the event and info on the panelists can be found in the following sections.)

How do you choose supervisors, if your university doesn't have anyone directly interested in AI safety/AI alignment?

Pablo

I work at a university where no one really knows about AI safety. Even so, there are a few things I think are very important when choosing a supervisor. Most importantly, find a good supervisor, which is something I didn't find so obvious when I was applying. A good supervisor can make a huge difference in avoiding burnout and having adequate support. I would also look especially for supervisors who are open-minded about new topics or would give you some freedom to work on different things, including going to AI safety camps, conferences, and similar events that put you in touch with other AI safety researchers. I recommend asking a potential supervisor's current students how happy they are, how much freedom they have, and generally gauging their satisfaction. Additionally, if there is little opportunity to work directly on AI safety during your graduate studies, you can choose a supervisor who will help you build skills for working on AI safety in the future (such as reinforcement learning or large language models).


Stephen

I would say that numerous schools have supervisors doing AI-safety-relevant research, even if they don't consider themselves AI safety researchers in the way effective altruists do. At most schools, the people working on deep networks, AI policy, or algorithms and justice are trying to make those systems work better for us, which is broadly in line with AI safety research. The only difference between what I might call an explicit AI safety paper and other papers is some of the problem framing and what is being emphasised. As a result, I think finding a supervisor who explicitly works on AI safety can be over-prioritised relative to finding a good advisor. Something that is relatively under-emphasised is the importance of the other people you may be working with in the lab. Even if no advisor there is explicitly focused on AI safety, if they cover a broad range of topics and their students are interesting, relevant people you can collaborate with, that's a great reason to consider that programme.


Lewis

Implicit in this question seems to be an assumption that part of the point of doing a PhD is to do useful AI safety research. That is a model that many people aim for, and it can be done. However, I also know a number of people who are strongly interested in working on AI safety in the future and are doing AI PhDs, but are not directly doing safety work in them. They are simply trying to do good, impactful, perhaps safety-adjacent research and to skill up as much as possible, so that they are in a strong position to pursue safety research later on.

How hard is it to get a useful job in AI safety after graduating from a related master's programme?

Lewis

There certainly are jobs out there, such as research engineer or machine learning engineer positions at larger companies, that don't require PhDs. If you want to work as a research scientist you might be expected to have a PhD, but DeepMind, for example, regularly recruits people with master's degrees, and sometimes even bachelor's degrees. I know people who have joined their safety team with those backgrounds and are doing important, impactful work, although it can be harder than with a PhD. Overall, if you're competent, and especially if you can code and build things well, people are more than happy to hire you out of a master's for certain positions.

Instead of working on AI Safety, could pursuing an ML or AI PhD with the intention of earning-to-give be effective?

Pablo

My intuition is that working directly on AI safety is more important than earning to give, and that seems to be the general intuition within the community. For example, Open Philanthropy has lots of money it is willing to give away to people who work on safety, so presumably it considers that work quite important.

Given how competitive admissions are, is it worth trying to get into a top programme? If you don't get in, is it worth waiting and reapplying?

Rachel

I think it depends a lot on what your goals are. Stephen and Lewis have already mentioned that you don't necessarily need to be at a top AI safety place in order to do useful AI safety work. When I was applying, I really wanted to get into one of the top US programmes, such as Berkeley or Stanford, because I was aware of people doing safety-relevant research there. But even in just the past few years, there has been a proliferation of people interested in safety-relevant topics, and a growing acceptance of doing AI safety work in labs that aren't explicitly safety-oriented. So I think what you should be aiming for is a lab and research group that will work really well for you: one where you have good collaborators, a good advisor, and can work on topics relevant to your interests. That does not necessarily mean going to one of the top four programmes, so it's worth the effort to get into a top programme only if all the labs you want to work at are there. That was the case for me a few years ago, but I don't know whether it would be now.


With respect to waiting a year and reapplying, it depends on what you would do with that year. In general, if you don't get admitted to PhD programmes and want to spend a year strengthening your application, you need to be doing research. If you have publications in the pipeline that will come out in the next year, or there's a relevant research position you could take in that time, waiting is likely worthwhile. If you would spend the year doing something unrelated to research, it probably won't make your application stronger, so it's probably not worth waiting. I should say, however, that I was an extremely rare exception to this. If I hadn't got in anywhere, my plan was to do a technical master's and then reapply, which I wouldn't recommend to most people. I thought this could make sense in my case because I had a research background but not a technical one: I had publications, but I hadn't taken a linear algebra course, and I thought some universities might be concerned by that.


Stephen

I think Rachel answered that as well as I ever could with respect to waiting a year. I want to add that there are two reasons one might not want to focus on getting into a few particular programmes. Firstly, AI safety is still somewhat of a Wild West, which can't be said of most other academic disciplines. For example, you can meaningfully shape people's thinking on research directions just by posting something really interesting on LessWrong; that happens multiple times a year. In that sense, you don't necessarily need to be surrounded by a particular group of people or in a particular programme to make useful progress. Secondly, long-distance collaboration is common and useful, and probably easier now after the pandemic. Fun fact: I have a co-author whom I've never met, and we've worked on two papers together. I don't know how tall he is, but we have a really good relationship and frequently talk about research topics. You can find great collaborators in the community without being at a top programme or in a big AI safety lab.


Pablo

I'll also say that when I applied for a master's, I didn't have a good sense of how strong my application was. For example, I applied to Cambridge, Oxford, and a few other places around Europe, and I got into them despite coming from a little-known university in Spain, which was something of a surprise. That should tell you how difficult it is to assess yourself. So I would recommend trying for top programmes, because you have very little to lose.

For undergrad research, would you prioritize working on problems tied to AI safety, or working with good advisors on topics you can make the most progress on?

Stephen

If you're forced to make a trade-off like this, I think choosing a really good advisor and making some really good progress is the more useful option. For building career capital and experience, I would generally err on the side of the good advisor; however, that doesn't mean you can't also do AI-safety-related work on the side. You can do meaningful things independently, such as posting on LessWrong or writing a survey paper, which is less true in other fields. AI safety is still the kind of field where a blog post might be relevant enough to belong on a CV.


Lewis

I think you should fairly strongly prioritise working with good, well-regarded advisors on good projects over working on something AI-safety-relevant. If you're doing undergrad research, your goal is likely to do work that gets you into a competitive master's or PhD later. In that case, it just makes sense to work with the best advisor you can find.

When approaching supervisors, do you recommend openly stating an interest in AI safety/alignment? What if they’ve never stated an interest in the topic?

Rachel

Some professors don't want to be approached by students until those students have been accepted by the university, so be mindful of that; you can typically check whether this is the case on a professor's website. That said, when you approach a supervisor or any potential research collaborator, you should look at their recent work and try to pitch your interests in relation to, or as an extension of, that work.


Pablo

I fully support what Rachel said. I think the most important thing is to show potential supervisors that you understand what they are working on. When reading their papers, try to answer the question: why is this paper important? And if you can then make some connection between what they're doing and what you're interested in, that's excellent.


Stephen

When it comes to most AI-safety-relevant topics, you can frequently frame them as solving near-term problems. When talking about your research interests, I recommend being just as good at talking about self-driving cars or debugging biased models as you are at talking about AGI.

What criteria for choosing a school did you not think of, but should have?

Lewis

Something I didn't think about much is the composition of the lab you might be joining and who else is there beyond the advisor. This is important, since a lot of your time will be spent with other students, postdocs, and so on. In some labs, if the professor is particularly busy or high-profile, you might end up being supervised predominantly by the more senior people in the lab (i.e. postdocs). Chatting with them and getting a sense of who they are and of the flavour of the lab is something I think I should have done.


Rachel

I didn't realise that my school's funding would influence my experience, although I don't think it would have swayed my decision. When I was applying, I just looked at the specific programmes or labs and their resources, but there are situations where the resources of the university as a whole come into play. I happen to be in one of the most underfunded public university departments in the US on a per-student level (UC Berkeley), and this has come up quite a lot in the past couple of years. Sometimes wild things happen: for a while we simply didn't have enough desks for everyone, and for most of this semester there hasn't been usable Wi-Fi. COVID has definitely worsened the situation, but it is worth noting that better-funded schools like Stanford have provided better resources to their students through this crisis. Even beforehand, there was a notable difference in resources between public and private universities, at least in the US.


Lewis

One thing to consider is that PhDs are typically a lot longer in the US than in the UK, for instance. When I was applying, the idea of a shorter PhD appealed to me quite a bit; however, a shorter PhD can be good or bad depending on how you want to use it. I am now about halfway through mine and I think it is far too short. I'm trying to drag it out as much as possible, but that is very hard to do in the UK, as you generally need a concrete reason and can't simply extend your research.

If I don't have publications, what could I have/do instead to improve my application?

Pablo

The admissions committee wants to know that you're able to do research; that's why they focus so much on publications. If you don't have any, the next best thing you can do is demonstrate your knowledge of the topic and show why you're capable of doing research.


Stephen

One good option is having a research job or experience doing research-like work, even if it didn't lead to a publication. Aside from a job, there are non-paper deliverables from research, such as a really good blog post that shows a strong understanding of what you were doing. A third option is connecting with a professor at a school you're interested in and pitching yourself and your ideas (for example, maybe they're EA-aligned or sympathetic and would be interested in your LessWrong posts). Having such a professor vouch for you will definitely make an application to that school stronger. Finally, just because you don't currently have a clear opportunity to publish doesn't mean you can't write something on your own in the next year. Even if you don't have access to compute, you can still write a survey paper and put it on arXiv. That's something people would pay attention to.

How do you self-study math and other things, such that you can do productive work at CHAI? And what is CHAI actually doing specifically?

Rachel

So first, self-studying math and other things. There are lots of great resources online; I used MIT OpenCourseWare pretty heavily, and Berkeley also has a number of courses with lectures on YouTube, such as a deep reinforcement learning course. It's not too difficult to find the resources to study these things; what's really difficult is finding the time and mental energy. Taking a break between finishing my undergraduate degree and applying to PhDs helped give me extra time for self-study.

What is CHAI doing? Our research agenda is a little amorphous. Since PhD students are very independent, we have a lot of freedom to set our own agendas and motivations, so CHAI is, in a sense, a collection of half-formed PhD student research agendas. That said, the most popular topics are probably related to reward modelling, learning from human feedback, and learning from human interaction, broadly defined to include things like Cooperative Inverse Reinforcement Learning (CIRL), assistance games, and inverse reinforcement learning. There is also at least one person working on interpretability of neural networks and at least one person focusing on game theory. There are also researchers associated with CHAI who are not PhD students; I can't speak directly to their work.


Stephen

I'll share what I consider a recipe for doing a deep dive into a subfield of the AI safety literature. If you ignore everything else I say, this is likely to be the most valuable part. A deep dive has three really important components: first, choosing a subfield of the academic literature you're interested in; second, reading 40 or 50 papers (or more) and taking notes on all of them; and third, producing some sort of final deliverable. You can choose any topic you're interested in as long as it's not too broad or too narrow, for example safe exploration in reinforcement learning, adversarial attacks on networks, interpretability of neural networks, or bias in neural systems. Once you've taken notes on 40 or 50 papers, you have an understanding comparable to that of people doing active research in the area. Following up with a synthesis project will solidify this, whether it's a research proposal, blog post, survey paper, application, or repository. I can attest to having done a couple of deep dives, and I would trade any class I've ever taken for one of them.

How can you pitch a non-standard background on your application?

Rachel [Note: you can review Rachel’s educational background in her biography below]

In my statement of purpose, I really emphasised the research I had done and also a reading group. Regarding independent study, I was very explicit about how everything I had researched or learned related to what I proposed to do in my PhD. Because most of my undergraduate coursework was not related to my PhD, I ignored it and focused on my independent study, and I did the same in interviews when asked about my unusual background. Additionally, my letters of recommendation were from technical researchers I had worked with, and I asked them to emphasise my technical knowledge. Your letters of recommendation are a great place to include evidence of whatever your application doesn't otherwise show. For example, if you've submitted a research paper to a conference but won't hear back before the application deadline, ask one of your letter writers to mention that you've submitted it. In my case, my thesis advisor was aware of the independent study I had done, so I asked him to include that in his reference letter. My general advice is to think explicitly about what you want to be doing in a research programme and then explain how everything you've done has prepared you for it.


Most importantly, when proposing research topics in my statement of purpose, I showed more than a superficial understanding, which is what admissions committees care about. They don't care whether you got that understanding from a class or from independent study; they just care that you have technical understanding, can frame research topics, and can understand research that's already been done. That should make up a decent part of your statement of purpose.

How much AI safety work is philosophical vs. mathematical vs. computational? (Would it make sense to specialize in philosophy, e.g. decision theory?)

Pablo

I would strongly favour mathematical and computational degrees. I think philosophy is great for highlighting the importance of AI safety, but solving concrete problems requires you to be somewhat technical.


Lewis

Having done three degrees, in philosophy, mathematics, and now computer science, I want to echo what Pablo said. Although a philosophy background can be helpful for addressing interesting questions in AI safety, it isn't clearly relevant to specific AI safety problems. If your goal is to be a technical AI safety researcher, going so far as to specialise in philosophy is probably less helpful than being fairly well versed in the field. Philosophy is also arguably easier to self-study than technical topics, which is another factor. Additionally, a solid background in maths is quite helpful, as maths is difficult to learn as you go. I feel very glad to have started with maths before moving into computation rather than the other way around.

Event Recording

Panellists

Rachel Freedman

Berkeley PhD candidate

Rachel is a PhD student at the University of California, Berkeley, where she is supervised by Stuart Russell, and a PhD affiliate at the Center for Human-Compatible Artificial Intelligence. Her research concerns reinforcement learning, reward modelling, and model misspecification. Before Berkeley, Rachel completed her undergraduate degree in Artificial Intelligence Systems at Duke University, studying computer science, neuroscience, and philosophy at Duke, UNC Chapel Hill, and, as a visiting student, Oxford University. Feel free to read more about Rachel at her website.

Pablo Moreno

Complutense University of Madrid PhD

Email: pabloamo@ucm.es

Pablo completed his BSc in Physics at a university in Spain (University of Extremadura) and was then accepted into the MSc in Mathematical Physics at Oxford, where he discovered Effective Altruism about six months before finishing his master's. He then decided to move back to Spain to complete a PhD in quantum computing because:

  • It seemed like an intelligent career move. This was 2018, a few months before Google would demonstrate "quantum supremacy", so it was a good time to do it.

  • It seemed like the closest thing in physics to AI safety. That was probably right, but he feels it was not really very useful anyway.

  • His girlfriend is in Spain, and he enjoys her company.

In Madrid, Pablo joined the local EA community and started trying to do things that would put him in a better position to work on AI safety. In particular, he attended the 3rd AI Safety Camp (on JJ's team!), EAG London 2019, an AI safety conference in Prague, and the AISRP. Pablo also tried to do something at the intersection of quantum computing and adversarial examples; although he got a publication out of it, he notes he is not particularly proud of it. Finally, this past summer Pablo did an internship with José Hernández-Orallo, who, to the best of Pablo's knowledge, is the only professor in Spain doing some safety work. Pablo's current objective is to get a postdoc in something related to AI safety.

Lewis Hammond

Oxford PhD candidate

Email: lewis.hammond@cs.ox.ac.uk

Lewis Hammond is based at the University of Oxford where he is a DPhil candidate in Computer Science and a DPhil Affiliate at the Future of Humanity Institute. His research concerns safety, control, and incentives in multi-agent systems and spans game theory, formal methods, and machine learning. He is primarily motivated by the problem of ensuring that AI and other powerful technologies are developed and governed safely and responsibly, so as to benefit the whole of humanity, including its future generations. Before coming to Oxford he obtained a BSc (Hons) in Mathematics and Philosophy from the University of Warwick, and an MSc in Artificial Intelligence from the University of Edinburgh. Feel free to read more about Lewis at his website.

Stephen Casper

MIT PhD candidate

Email: scasper@mit.edu

Stephen "Cas" Casper is a first-year PhD student at MIT in Computer Science (EECS), supervised by Dylan Hadfield-Menell. He previously completed his undergraduate degree at Harvard and has worked with the Harvard Kreiman Lab and the Center for Human-Compatible AI. His main research interests include interpretability, adversaries, robust reinforcement learning, and decision theory. Feel free to read more about him at his website.