Fostering the next generation of AI safety experts

Impact Academy’s Global AI Safety Fellowship is a 3-6 month fully-funded research program for exceptional STEM talent to work with the world’s leading AI safety organisations on advancing the safe and beneficial development of AI.

Applications are now open!

Global AI Safety Fellowship 2025

We will select Fellows from around the world for a 3-6 month placement with our partners. After an initial application, we invite candidates to participate in a rigorous and purpose-driven selection process that lasts 4 to 6 weeks. We aim to select a cohort of technically skilled individuals with a knack for working on complex problems.

Some of our partner organisations are the Center for Human Compatible AI (CHAI) at UC Berkeley, Conjecture, FAR.AI, and the UK AI Safety Institute (AISI). 

Fellows will receive comprehensive financial support that covers their living expenses and research costs, along with dedicated resources for building foundational knowledge in AI safety, regular mentorship, 1:1 coaching calls with our team, and facilitation for in-person work with our partner organisations.

  • Candidates who successfully complete our selection phase will receive full-time placement offers from our partner orgs.

    Fellows will work as colleagues with researchers from the respective organisations, though the name and scope of their roles may vary.

    Although the start date of the fellowship will be mutually decided by the candidate and the placement org, we expect Fellows to begin applying for visas from February 2025 onwards.

    Fellows who perform well will have reliable opportunities to continue working full-time with their placement orgs.

  • We expect candidates to be available for full-time work for at least 3 months starting February or March 2025.

    We welcome applications from candidates around the globe, including those from regions traditionally underrepresented in AI safety research.

    As of now, we cannot accept applicants who will be under 18 years of age in January 2025.

  • We expect our ideal candidates to have:

    • Demonstrated knowledge of Machine Learning and Deep Learning (e.g. a first-author ML research paper).

    • Demonstrated programming experience (e.g. >1 year of software engineering experience at a leading tech firm).

    • Scholastic excellence or other notable achievements, such as 99th-percentile performance in standardised STEM tests, Math or Informatics Olympiads, or competitive exams for graduate study.

    • An interest in pursuing research to reduce the risks from advanced AI systems.

    Even if you feel you don’t possess some of these qualifications, we encourage you to apply!

  • Fellows who receive offers from our partner organisations will get to work with them in person in the US, the UK, or Canada.

    For select Fellows constrained by logistical issues such as visa delays, we may be able to arrange work from a shared office space in South Asia.

  • We offer comprehensive support to give Fellows an optimised learning and research environment:

    • Expert-led technical assessments and interview preparation

    • Access to a global network of leading AI safety researchers

    • Fully-funded AI Safety BootCamp with personalised mentoring

    • Competitive compensation package tailored to your location

Why AI Safety

AI might be the most transformative technology of all time. To make it go well for humanity, we must seriously consider the types of risks advanced AI systems of the future might pose. Fortunately, there is a growing ecosystem of professionals and institutions dedicated to researching and solving these problems: the field of AI safety.

AI safety focuses on developing technology and governance interventions to prevent both short-term and long-term harm caused by advanced AI systems. To learn more, check out this list of resources.

We believe we can support talented people around the world, who might otherwise not have had the opportunity, to play an important role in advancing research and other work in the field through their careers.

How to Apply

Phase 1: Applications & Technical Assessment

  • We know you might have a lot on your plate, so the first application asks only for your basic information, CV, and background in ML, programming and research.

    We may optionally follow up to learn more about your motivation to join the Fellowship, career plans, and research interests.

  • Apply your math and coding skills to solve multiple problems within a set time limit. This will be a general Python programming proficiency test.

  • This test will measure your ability to make progress on small open-ended research questions requiring ML skills.

  • Get invited to interview with our technical Research Management staff to assess your knowledge of Machine Learning and Deep Learning.

    This may be accompanied by a further conversation with our team to understand your level of engagement with AI safety and alignment research.

Phase 1.5: Optional BootCamp for Candidates New to AI Safety

  • Proficient in ML and programming but new to AI safety? For candidates who are new to the field but have excelled in Phase 1, we will offer a part-time (4 weeks) or full-time (2 weeks) remote BootCamp.

    The BootCamp will be a paid opportunity for candidates to upskill before they interview with our partner organisations.

    Participants will go through an expert-curated curriculum that can be customised to their research interests.

  • Undertake up to five 1:1 coaching sessions with our technical Research Manager or experts already engaged in AI safety research.

    These calls can help you clarify doubts about the curriculum, get career guidance, and understand the field better.

  • Participate in a test to evaluate your knowledge of the foundations of AI safety and research areas you want to work on.

    Note that this is for candidates who have yet to explore areas of interest in the field. Candidates already engaged substantially in AI safety work may be directly eligible for Phase 2.

Phase 2: Assessment with Placement Labs and Organisations

  • The process in this phase will depend heavily on your performance in Phase 1 (and 1.5).

    It will take into account the specialised research and personnel requirements of the placement orgs as well as your overall strengths and interests. It is likely that this will play out differently for different candidates.

  • Shortlisted candidates will be invited for 1-2 interviews with Research Leads at our partner organisations. Interviews will typically revolve around your technical proficiency and fit with the organisation’s research mission.

    Candidates could be eligible for this directly after Phase 1 or after having completed the funded AI Safety BootCamp.

    Some orgs might ask you to complete an additional written test.

  • Depending on how they assess candidates’ capabilities, partner organisations may invite you to complete a technical test before or after the interviews.

Research Directions

Depending on their interests and the placement orgs’ projects, Fellows may work on a range of research directions in AI safety and alignment. Below, we outline some of these research agendas as an indication of the kind of work involved.

Adversarial Robustness

Adversarial Robustness aims to develop machine learning models that maintain their performance and reliability even when faced with intentionally misleading or manipulated inputs. See FAR.AI’s work on adversarial training of Go AIs.
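
To give a concrete flavour of the area, here is a minimal, generic sketch of the fast gradient sign method (FGSM), one classic way of crafting adversarial inputs for an image classifier. It is purely illustrative and not drawn from FAR.AI’s Go work; `model`, `x`, and `label` are assumed to be a standard PyTorch classifier, input batch, and label batch.

```python
# Minimal FGSM sketch (illustrative only): nudge an input in the direction
# that most increases the classifier's loss, within a small epsilon budget.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `x`."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # step along the sign of the gradient
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

# Adversarial training then mixes such perturbed examples back into the
# training loop so the model stays accurate on them as well.
```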

Cognitive Emulation

Cognitive Emulation (CogEm) is primarily the research agenda of Conjecture, which aims to build AI systems that emulate human reasoning and are scalable, auditable and controllable. Through this approach, systems could be sufficiently understood and bounded to ensure they do not undergo sudden, dramatic shifts in behaviour.

Model Evaluations

Model Evaluations produce empirical evidence on a model's capabilities and behavioural tendencies, allowing stakeholders to make important decisions about training or deploying the model. For examples, see DeepMind’s evaluations for dangerous capabilities or Sam Brown’s AutoEnhance evaluation proposal.
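
As a rough illustration of what an evaluation can look like in code (and not any partner organisation's actual tooling), a capability evaluation often reduces to running a model over a task set and scoring its outputs. In the sketch below, `query_model` is a hypothetical stand-in for whatever API the evaluated model exposes.

```python
# Toy evaluation harness (illustrative only): run a model over task prompts
# and report the fraction of responses that pass each task's grading rule.
def evaluate(query_model, tasks):
    """`tasks` is a list of (prompt, grader) pairs; `grader` returns True
    when the response demonstrates the capability being probed."""
    results = [grader(query_model(prompt)) for prompt, grader in tasks]
    return sum(results) / len(results)

# Example: a trivial arithmetic probe.
tasks = [("What is 17 * 23?", lambda response: "391" in response)]
# score = evaluate(my_model_api, tasks)  # `my_model_api` is a placeholder callable
```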

Scalable Oversight

Scalable Oversight refers to a set of approaches that help humans effectively monitor, evaluate, and control increasingly complex AI systems. Approaches include constitutional AI, AI safety via debate, iterated distillation and amplification, and reward modelling. To learn more, check out Anthropic’s Constitutional AI or OpenAI’s AI Safety via Debate.
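
To make one of these approaches concrete, reward modelling typically trains a scorer so that human-preferred responses receive higher rewards than rejected ones. The snippet below is a minimal sketch of the standard pairwise (Bradley-Terry style) preference loss over pre-computed response embeddings; it is a generic illustration, not any particular lab's implementation.

```python
# Minimal reward-modelling sketch (illustrative only): learn to score the
# human-preferred response above the rejected one for each comparison.
import torch
import torch.nn as nn
import torch.nn.functional as F

reward_model = nn.Linear(768, 1)  # toy scorer over 768-d response embeddings
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

def preference_loss(preferred_emb, rejected_emb):
    r_preferred = reward_model(preferred_emb)
    r_rejected = reward_model(rejected_emb)
    # Maximise the log-probability that the preferred response wins.
    return -F.logsigmoid(r_preferred - r_rejected).mean()

# Each training step consumes a batch of human preference comparisons:
# loss = preference_loss(pref_batch, rej_batch); loss.backward(); optimizer.step()
```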

Mechanistic Interpretability

Mechanistic Interpretability is an area of interpretability concerned with reverse-engineering trained models into human-understandable algorithms. See, for example, recent work from FAR.AI on investigating planning in an RNN model playing Sokoban.
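
As a small illustration of the kind of tooling involved (not FAR.AI's actual code), mechanistic interpretability work often begins by recording a model's intermediate activations so they can be probed or ablated. PyTorch forward hooks, as in the toy sketch below, are one common way to do this.

```python
# Capturing intermediate activations with forward hooks (illustrative only).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))  # toy model
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # store this layer's output
    return hook

model[1].register_forward_hook(save_activation("relu_out"))

_ = model(torch.randn(8, 16))
print(activations["relu_out"].shape)  # torch.Size([8, 32])
```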

Value Learning

Value learning (also called preference inference) is a proposed approach to incorporating human values into an AI system. Because human values are difficult to specify directly, current approaches aim to learn them through human feedback and interaction. Read MIRI’s Learning What to Value and CHAI’s work on Cooperative Inverse Reinforcement Learning.
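
To make the idea concrete, one toy formulation infers which candidate reward function best explains a human's observed choices, assuming the human picks higher-reward options with higher probability (a Boltzmann-rational model). The sketch below is purely illustrative and not CHAI's or MIRI's implementation; the options and reward values are made up.

```python
# Toy value-learning sketch (illustrative only): pick the candidate reward
# function under which the human's observed choices are most likely,
# assuming Boltzmann-rational behaviour.
import math

options = ["clean_kitchen", "watch_tv", "break_vase"]
candidate_rewards = {
    "tidy":    {"clean_kitchen": 2.0,  "watch_tv": 0.5, "break_vase": -3.0},
    "chaotic": {"clean_kitchen": -1.0, "watch_tv": 0.5, "break_vase": 2.0},
}
observed_choices = ["clean_kitchen", "clean_kitchen", "watch_tv"]

def log_likelihood(reward, choices, beta=1.0):
    total = 0.0
    for choice in choices:
        z = sum(math.exp(beta * reward[o]) for o in options)  # normaliser
        total += beta * reward[choice] - math.log(z)
    return total

best = max(candidate_rewards,
           key=lambda name: log_likelihood(candidate_rewards[name], observed_choices))
print(best)  # "tidy" explains the observed behaviour best
```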

FAQs

Can I participate in the Fellowship part-time?

We are only looking for candidates who can commit to full-time participation for this iteration of the Fellowship.

I know someone who would be a good fit. Can I refer them?

Yes! We're offering $2,000 to anyone who refers a candidate not already in our database whom we end up selecting for the Fellowship. This applies to external referrers we are not already collaborating with. Please use this form to refer potential candidates.

What if I am new to AI safety?

We’re looking for candidates with strong technical qualifications and research aptitude who are interested in working on ways to reduce the risks of advanced AI. For candidates who excel in our Phase 1 assessment, we will offer a paid opportunity to upskill in the foundations of AI safety and alignment research through a guided curriculum. Even if you have little prior knowledge of AI safety, we encourage you to apply. We are happy to work with you to improve your understanding of the field.

What will I work on if selected?

Our partner organisations work on a portfolio of several experimental and established research agendas for safer and more aligned advanced AI systems. The exact research agendas will depend on the selection and matching process, but Fellows would broadly be working in areas like adversarial robustness, mechanistic interpretability, scalable oversight, cognitive emulation, and control problems, to name a few.

What outcomes can I target from this program if selected?

You will work on impactful research projects with some of the most talented researchers in the field, at top AI safety labs and research institutions. Fellows who perform well will have reliable opportunities to continue working full-time on their projects. As a Fellow, you would:

  • Publish papers at top ML conferences such as ICML, ICLR, NeurIPS, and CVPR.

  • Develop empirical ML research experience on Large Language Models.

  • Participate in a niche research community to collaborate on challenging problems.

  • Advance existing AI safety strategies and research agendas or develop new ones.

Want to apply but have more questions? Write to us!

If you think you may be a good fit for the program but would like to clarify some doubts, please email us at aisafety@impactacademy.org. We are happy to answer queries and potentially get on a quick call to understand you better.

About Us

Impact Academy is a startup that runs cutting-edge fellowships to enable global talent to use their careers to contribute to the safe and beneficial development of AI. Our focus is to provide opportunities for students and professionals to explore challenging ideas in the field and spearhead new strategies for AI safety (AIS). We also offer mentorship for career development and support for job placements.

Since 2023, we have launched, incubated or co-organised several programs in AI safety, technical and alignment research, and governance.

Learn more about us here.

Get in touch

Excited to learn more or looking for a way to get involved? Apply to collaborate as a Talent Identification Advisor, or use the form below to contact us about anything else!