Online and/or international

Robert Miles is the world's best/only AI Safety YouTuber.

The Reading Group meets weekly, usually Thursdays at 18:45 UTC.

AI Safety Camp connects you with interesting collaborators worldwide. Together you discuss and decide on a concrete research proposal, prepare online as a team, and try your hand at research during a 9-day intensive retreat.

Local AI Safety/Alignment groups

MIT AI Alignment Reading Group

An interdisciplinary AI Alignment reading group at MIT (past agendas). Contact Xuan at xuan [at] mit [dot] edu if you would like to join.

Stanford AI Safety organizes reading groups, speakers, and discussions around a variety of aspects of AI Safety, both technical and non-technical.

Oxford AI Safety Reading Group

We meet once a week to discuss the Alignment Newsletter and once a fortnight to read a specific recent AI safety paper. The group is aimed primarily at postgrads and early-career researchers, though undergrads are also welcome. If you're interested, you can join our Facebook group and/or email Lewis to be added to the Google Calendar event.


CEEALAR (formerly the EA Hotel) is an Effective Altruist community hub in the North West of England. We host people working on promising charitable projects, be they research, remote charity work, self-study, starting new EA-aligned charities, or similar.

Our friends are other people and groups working towards the same goal as AI Safety Support: empowering people who want to do AI Safety research.

If you know someone or some group we should be friends with, please let us know.