Enabling the AI safety experts of 2025 and beyond
Impact Academy’s Global AI Safety Fellowship is a fully funded research program of up to 6 months that enables exceptional STEM talent to work with the world’s leading AI safety organisations on advancing the safe and beneficial development of AI.
Priority deadline: 15 December 2024
All applications received by this date will be given due consideration.
Final deadline: 31 December 2024
Applications submitted after the ‘priority deadline’ but before the final deadline will be evaluated subject to the number of spots left.
Global AI Safety Fellowship 2025
We will select Fellows from around the world to pursue AI safety research for up to 6 months (or longer by mutual agreement). After an initial application, candidates will be invited to participate in a rigorous and purpose-driven selection process that lasts 4 to 6 weeks. We aim to select a cohort of 10-20 technically skilled candidates with a knack for working on complex problems. This number may increase or decrease depending on the strength of our applicant pool.
Our Fellows will work with collaborators from partner organisations like the Center for Human Compatible AI (CHAI) at UC Berkeley, Conjecture, FAR.AI, and the UK AI Safety Institute (AISI). We are excited to support Fellows who will transition to full-time AI safety research after the program.
Fellows will receive comprehensive financial support that covers their living expenses and research costs, along with dedicated resources for building foundational knowledge in AI safety, regular mentorship, 1:1 coaching calls with our team, and facilitation for in-person work with our partner orgs.
-
Location: In-person is preferred, but hybrid or remote may be possible.
Deadline: Our Priority Deadline is 15 December 2024. The Final Deadline is 31 December 2024.
Start Date: Flexible
Duration: The Fellowship duration may be up to 6 months or longer, depending on what is mutually agreed upon by the candidate and the placement org.
Compensation: Up to USD 30,000 for 6 months. The exact amount will vary depending on the candidate’s location and the Fellowship duration.
Eligibility: We accept applicants from all over the world.
Selection Process: All applicants are required to go through the two-phase selection process (8-12 hours in total). The first phase will be managed by the Impact Academy team, and the second phase will be conducted by our partner orgs. The final decision on an applicant’s candidacy will lie with the placement org.
-
Candidates will receive full-time placement offers from our partner orgs only if they complete all phases of the selection process. Participating in Phase 1 or 1.5 does not guarantee placements with the partner orgs. To learn more, please see the Application Process Overview.
Selected Fellows will work with researchers from the respective organisations, though the name and scope of their roles may vary. Depending on what is most feasible and decided upon mutually, Fellows may:
Apply for a visa for in-person work at the office of the placement org.
Work remotely at a co-working space facilitated by us at one of the global AI safety hubs.
Work remotely from the location they expect to be based in for the duration of the Fellowship.
Although the start date of the Fellowship will be mutually decided by the candidate and the placement org, we expect candidates to begin receiving offers from March 2025 onwards.
-
We expect candidates to be available for full-time work for at least 3 months starting February or March 2025.
We welcome applications from candidates around the globe, including those from regions traditionally underrepresented in AI safety research.
As of now, we cannot accept applicants who will be under 18 years of age in January 2025.
-
We expect our ideal candidates to have:
Demonstrated knowledge of Machine Learning and Deep Learning (e.g. a first-author ML research paper).
Demonstrated programming experience (e.g. >1 year of software engineering experience at a leading tech firm).
Scholastic excellence or other notable achievements, such as 99th-percentile performance in standardised STEM tests, Math or Informatics Olympiads, or competitive exams for graduate study.
An interest in pursuing research to reduce the risks from advanced AI systems.
Even if you feel you don’t possess some of these qualifications, we encourage you to apply!
We are looking for candidates excited to transition to AI safety research full-time after the Fellowship.
-
Fellows who receive offers from our partner organisations may get to work with them in person in the US or the UK.
For select Fellows constrained by logistical issues such as visa delays, we can facilitate a shared office space in one of the global AI safety hubs.
Although we prefer that Fellows work in person or at a shared office space, we may be open to hybrid or remote work, depending on the feasibility.
-
We offer comprehensive support to give Fellows an optimised learning and research environment:
Expert-led technical assessments and interview preparation.
Access to a global network of leading AI safety researchers.
An optional, fully-funded AI Safety BootCamp with personalized mentoring.
A competitive compensation package tailored to your location: up to USD 30,000 for a 6-month duration to cover your salary and cost of living.
Operational support with visa applications, if needed.
-
Fellows who demonstrate strong performance during the program will have reliable opportunities to continue working full-time with the placement orgs, e.g. FAR.AI or the UK AISI.
Fellows selected to work with the UK AISI may even receive a full-time contract for up to 24 months at the outset.
Work With Researchers From: CHAI (UC Berkeley), Conjecture, FAR.AI, the UK AISI, and other partner organisations.
Application Process Overview
Phase 1: Applications & Technical Assessment
-
We know you might have a lot on your plate, so the first application asks only for your basic information, CV, and background in ML, programming and research.
We may optionally follow up to learn more about your motivation to join the Fellowship, career plans, and research interests.
-
Apply your math and coding skills to solve multiple problems in a given amount of time. This will be a general Python programming proficiency test.
-
This test will measure your ability to make progress on small open-ended research questions requiring ML skills.
-
Get invited to interview with our technical Research Management staff to assess your knowledge of Machine Learning and Deep Learning.
This may be accompanied by another conversation with our team to understand your level of engagement with AI safety and alignment research.
Phase 1.5: Optional BootCamp for Candidates New to AI Safety
-
Proficient in ML and programming but new to AI safety? For candidates who are new to the field but have excelled in Phase 1, we will offer a part-time (4 weeks) or full-time (2 weeks) remote BootCamp.
The BootCamp will be a paid opportunity for candidates to upskill before they interview with our partner organisations.
Participants will go through an expert-curated curriculum that can be customised to their research interests.
-
Undertake up to five 1:1 coaching sessions with our technical Research Manager or experts already engaged in AI safety research.
These calls can help you clarify doubts about the curriculum, get career guidance, and understand the field better.
-
Participate in a test to evaluate your knowledge of the foundations of AI safety and research areas you want to work on.
Note that this is for candidates who have yet to explore areas of interest in the field. Candidates already engaged substantially in AI safety work may be directly eligible for Phase 2.
Phase 2: Assessment with Placement Labs and Organisations
-
The process in this phase will depend heavily on your performance in Phase 1 (and 1.5).
It will take into account the specialised research and personnel requirements of the placement orgs as well as your overall strengths and interests. It is likely that this will play out differently for different candidates.
-
Shortlisted candidates will be invited for 1-2 interviews with Research Leads at our partner organisations. Interviews will typically revolve around your technical proficiency and fit with the organisation’s research mission.
Candidates could be eligible for this directly after Phase 1 or after having completed the funded AI Safety BootCamp.
Some orgs might ask you to complete an additional written test.
-
Depending on how they assess candidates’ capabilities, partner organisations may invite you to a technical test before or after the interviews.
Expected Timeline
Why AI Safety
AI might be the most transformative technology of all time. To make it go well for humanity, we must seriously consider the types of risks advanced AI systems of the future might pose. Fortunately, there is a growing ecosystem of professionals and institutions dedicated to researching and solving these problems: AI safety.
AI safety focuses on developing technology and governance interventions to prevent both short-term and long-term harm caused by advanced AI systems. To learn more, check out this list of resources.
We believe we can support global talent who might otherwise not have had the opportunity to play an important role in advancing research and other work in the field through their careers.
Research Directions
Depending on their interests and the placement orgs’ projects, Fellows may get to work on a range of research directions in AI safety and alignment. Below, we outline some of these research agendas.
Adversarial Robustness
Adversarial Robustness is aimed at developing machine learning models that can maintain their performance and reliability even when faced with intentionally misleading or manipulated inputs. See FAR.AI’s work on adversarial training of Go AIs.
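As a rough illustration of the underlying idea (not of FAR.AI’s Go work specifically), the hypothetical PyTorch sketch below performs one adversarial-training step: it perturbs each input with the fast gradient sign method (FGSM) and then updates the model on the perturbed batch. The model, data, and epsilon value are placeholders.

```python
# Minimal sketch of one FGSM adversarial-training step (illustrative only).
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, epsilon=0.03):
    """Train on adversarially perturbed inputs instead of clean ones."""
    # 1. Craft adversarial examples with the fast gradient sign method.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()  # one-step perturbation
        x_adv = x_adv.clamp(0.0, 1.0)                # stay in a valid input range

    # 2. Update the model on the perturbed batch.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```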
Cognitive Emulation
Cognitive Emulation (CogEm) is primarily the research agenda of Conjecture, wherein the goal is to build AI systems that emulate human reasoning and are scalable, auditable, and controllable. Through this approach, the systems could be sufficiently understood and bounded to ensure they do not suddenly and dramatically shift their behaviour.
Model Evaluations
Model Evaluations are about producing empirical evidence on a model's capabilities and behavioural tendencies, which allows stakeholders to make important decisions about training or deploying the model. For examples, see DeepMind’s evaluations for dangerous capabilities, or Sam Brown’s AutoEnhance evaluation proposal.
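To give a minimal sense of what an evaluation harness can look like (a toy sketch under our own assumptions, not the methodology of either example above), the code below scores a black-box model, treated as a plain callable, against tasks with programmatic checkers.

```python
# Toy evaluation harness: score a black-box model on checkable tasks.
# All names here are hypothetical; real evaluations are far more involved.
from typing import Callable, List, Tuple

def run_eval(model: Callable[[str], str],
             tasks: List[Tuple[str, Callable[[str], bool]]]) -> float:
    """Return the fraction of tasks whose checker accepts the model's output."""
    passed = 0
    for prompt, checker in tasks:
        output = model(prompt)
        if checker(output):
            passed += 1
    return passed / len(tasks)

if __name__ == "__main__":
    # Stand-in "model" and a single arithmetic task, purely for illustration.
    dummy_model = lambda prompt: "4"
    tasks = [("What is 2 + 2?", lambda out: out.strip() == "4")]
    print(f"pass rate: {run_eval(dummy_model, tasks):.2f}")  # pass rate: 1.00
```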
Scalable Oversight
Scalable Oversight refers to a set of approaches to help humans effectively monitor, evaluate, and control increasingly complex AI systems. Approaches include constitutional AI, AI safety via debate, iterated distillation and amplification, and reward modelling. To learn more, check out Anthropic’s Constitutional AI, or OpenAI’s AI Safety via Debate.
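Reward modelling, one of the approaches listed above, can be sketched in a few lines: a reward model is trained so that the response a human preferred scores higher than the rejected one, via a Bradley-Terry style loss. The PyTorch code below is a hypothetical toy in which responses are assumed to be already encoded as feature vectors.

```python
# Toy Bradley-Terry preference loss for reward modelling (illustrative only).
import torch
import torch.nn.functional as F

def preference_loss(reward_model, preferred, rejected):
    """Encourage reward_model to score the human-preferred response higher.

    `preferred` and `rejected` are batches of already-encoded responses;
    `reward_model` returns one scalar reward per item.
    """
    r_pref = reward_model(preferred).squeeze(-1)
    r_rej = reward_model(rejected).squeeze(-1)
    # -log sigmoid(r_pref - r_rej): minimised when preferred outscores rejected.
    return -F.logsigmoid(r_pref - r_rej).mean()

# Hypothetical usage with a tiny linear reward model over 16-d features.
reward_model = torch.nn.Linear(16, 1)
preferred, rejected = torch.randn(8, 16), torch.randn(8, 16)
loss = preference_loss(reward_model, preferred, rejected)
loss.backward()
```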
Mechanistic Interpretability
Mechanistic Interpretability is an area of interpretability concerned with reverse-engineering trained models into human-understandable algorithms. See, for example, recent work from FAR.AI on investigating planning in an RNN model playing Sokoban.
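A common first step in this kind of work is simply recording what individual layers compute. The hypothetical PyTorch sketch below uses forward hooks to capture the intermediate activations of a small network for later analysis; real interpretability pipelines build far more on top of this.

```python
# Capture intermediate activations with forward hooks (illustrative only).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a hook on every layer so each forward pass records its output.
for idx, layer in enumerate(model):
    layer.register_forward_hook(save_activation(f"layer_{idx}"))

_ = model(torch.randn(4, 10))
for name, act in activations.items():
    print(name, tuple(act.shape))  # e.g. layer_0 (4, 32) ... layer_2 (4, 2)
```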
Value Learning
Value learning (also called preference inference) is a proposed method for incorporating human values in an AI system. Human values are difficult to specify. Current approaches work on learning human values through human feedback and interaction. Read MIRI’s Learning What to Value and CHAI’s work on Cooperative Inverse Reinforcement Learning.
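As a toy picture of preference inference (a hypothetical sketch, not MIRI’s or CHAI’s method), the code below recovers a hidden linear “value” function over option features from simulated pairwise human choices, using the logistic choice model that underlies much preference-learning work.

```python
# Toy preference inference: recover a hidden linear "value" over features
# from pairwise choices (illustrative only; not a real value-learning method).
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])  # hidden human values over 3 features

# Simulate choices between random option pairs under a logistic choice model.
A, B = rng.normal(size=(500, 3)), rng.normal(size=(500, 3))
p_choose_A = 1 / (1 + np.exp(-(A - B) @ true_w))
chose_A = (rng.random(500) < p_choose_A).astype(float)

# Fit w by gradient ascent on the logistic (Bradley-Terry) log-likelihood.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(A - B) @ w))
    grad = (A - B).T @ (chose_A - p) / len(A)
    w += 0.5 * grad

print("true values:", true_w, "recovered:", w.round(2))
```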
FAQs
Can I participate in the Fellowship part-time?
We are only looking for candidates who can commit to full-time participation for this iteration of the Fellowship.
I know someone who would be a good fit. Can I refer them?
Yes! We're offering $2,000 to anyone who refers a successful candidate not already in our database who we end up selecting for the Fellowship. This applies to external referrers who we are not already collaborating with. Please use this form to refer potential candidates.
What if I am new to AI safety?
We’re looking for candidates with strong technical qualifications and research aptitude who are interested in working on ways to reduce the risks of advanced AI. For candidates who excel in our Phase 1 assessment, we will offer a paid opportunity to upskill in the foundations of AI safety and alignment research through a guided curriculum. Even if you have little prior knowledge of the field, we encourage you to apply; we are happy to work with you to improve your understanding.
What will I work on if selected?
Our partner organisations work on a portfolio of several experimental and established research agendas for safer and more aligned advanced AI systems. The exact research agendas will depend on the selection and matching process, but Fellows would broadly be working in areas like adversarial robustness, mechanistic interpretability, scalable oversight, cognitive emulation, and control problems, to name some.
What can I gain from this program?
You will work on impactful research projects with some of the most talented researchers in the field, at top AI safety labs and research institutions. Fellows who perform well will have reliable opportunities to continue working full-time on their projects. As a Fellow, you would:
Publish papers in top ML conferences like ICML, ICLR, NeurIPS, CVPR, etc.
Develop empirical ML research experience on Large Language Models.
Participate in a niche research community to collaborate on challenging problems.
Advance existing AI safety strategies and research agendas or develop new ones.
Want to apply but have more questions? Write to us!
If you think you may be a good fit for the program but would like to clarify some doubts, please email us at aisafety@impactacademy.org. We are happy to answer queries and potentially get on a quick call to understand you better.
About Us
Impact Academy is a startup that runs cutting-edge fellowships to enable global talent to use their careers to contribute to the safe and beneficial development of AI. Our focus is to provide opportunities for students and professionals to explore challenging ideas in the field and spearhead new strategies for AI safety (AIS). We also offer mentorship for career development and support for job placements.
Since 2023, we have launched, incubated or co-organised several programs in AI safety, technical and alignment research, and governance. These include:
Versions 1 and 2 of Future Academy, a multi-track program in impactful careers
A pilot AI Safety Careers Fellowship at IIT Delhi
An online AI Safety Careers Course
A fully-funded Summer Research Fellowship
The ACM India Summer School on Safe & Responsible AI
Miscellaneous student chapters and reading groups at top tech institutions in India
Learn more about us here.
Get in touch
Excited to learn more or looking for a way to get involved? Apply to collaborate as a Talent Identification Advisor, or use the form below to contact us about anything else!