Applications deadline: We are reviewing applications on a rolling basis. It might take a few weeks until you hear from us.

ABOUT APOLLO RESEARCH

The capabilities of current AI systems are evolving at a rapid pace. While these advancements offer tremendous opportunities, they also present significant risks, such as the potential for deliberate misuse or the deployment of sophisticated yet misaligned models. At Apollo Research, our primary concern is deceptive alignment, a phenomenon where a model appears to be aligned but is, in fact, misaligned and capable of evading human oversight. Our approach focuses on behavioral model evaluations, which we then use to audit real-world models. We also combine black-box approaches with applied interpretability. In our evaluations, we focus on LM agents, i.e. LLMs with agentic scaffolding similar to AIDE or SWE-agent. We also study model organisms in controlled environments (see our security policies), e.g. to better understand capabilities related to scheming.

At Apollo, we aim for a culture that emphasizes truth-seeking, being goal-oriented, giving and receiving constructive feedback, and being friendly and helpful. If you're interested in more details about what it's like to work at Apollo, you can find more information here.

ABOUT THE TEAM

The current evals team consists of Mikita Balesni, Jérémy Scheurer, Alex Meinke, Rusheb Shah, Bronson Schoen, and Axel Højmark. Marius Hobbhahn manages and advises the evals team, though team members lead individual projects. You will mostly work with the evals team, but you will likely sometimes interact with the interpretability team, e.g. for white-box evaluations, and with the governance team to translate technical knowledge into concrete recommendations. You can find our full team here.

ABOUT THE ROLE

We're looking for research scientists, research engineers, and software engineers who are excited to work on these and similar projects. We intend to hire people with a broad range of experience and encourage applications even if you don't yet have experience in any of our current team efforts. We welcome applicants of all ethnicities, genders, sexes, ages, abilities, religions, and sexual orientations, regardless of pregnancy or maternity, marital status, or gender reassignment.

EVALS TEAM WORK

The evals team focuses on the following efforts:
- Conceptual work on safety cases for scheming, for example, our work on evaluation-based safety cases for scheming.
- Building evaluations for scheming-related properties, such as situational awareness or deceptive reasoning.
- Conducting evaluations on frontier models and publishing the results either to the general public or to a target audience such as AI developers or governments, for example, our work in OpenAI's o1-preview system card.
- Creating model organisms and demonstrations of behavior related to deceptive alignment, e.g. exploring the influence of goal-directedness on scheming.
- Applied interpretability work that directly informs our evaluations, e.g. Detecting Strategic Deception Using Linear Probes.
- Designing and evaluating AI control protocols. We have not started these efforts yet but intend to work on them starting Q2 2025.
- Building a high-quality software stack to support all of these efforts. We have recently switched to Inspect as our primary evals framework.

CANDIDATE CHARACTERISTICS

We look for the following characteristics in strong candidates. For all skills, we don't require a formal background or industry experience, and we welcome self-taught candidates.
- Large Language Model (LLM) steering: The core skill of our evals research scientist role is steering LLMs. This can take many different forms, such as:
  - Prompting: eliciting specific behavior through careful word choice.
  - LM agents & scaffolding: chaining inputs and outputs from various models in a structured way, making them more goal-directed and agentic.
  - Fluent LLM usage: with increasing capabilities, we can use LLMs to speed up all parts of our pipeline. We welcome candidates who have integrated LLMs into their workflow.
  - Supervised fine-tuning: creating datasets and then fine-tuning models to improve a specific capability or to study aspects of learning/generalization.
  - RL(HF/AIF): using other models, programmatic reward functions, or custom reward models as a source of feedback for fine-tuning an existing LLM.
- Software engineering: Model evaluators benefit from a solid foundation in software engineering. This can include developing APIs (ideally around LLMs or eval tasks), data science, system design, data engineering, and front-end development.
- Generalist: Most evals tasks require a wide range of skills, ranging from LLM fine-tuning to developing front-end labeling interfaces. Therefore, we're seeking individuals with diverse skill sets, a readiness to acquire new skills rapidly, and a strong focus on results.
- Empirical research experience: We're looking for candidates with prior empirical research experience. This includes designing and executing experiments as well as writing up and communicating the findings. Ideally, the research involved working with LLMs. This experience can come from academia, industry, or independent research.
- Scientific mindset: We think it is easy to overinterpret evals results and, thus, that a core skill of a good evals engineer or scientist is keeping track of potential alternative explanations for findings. Ideally, any candidate should be able to propose and test these alternative hypotheses in new experiments.
- Values: We're looking for team members who thrive in a collaborative environment and are results-oriented. You can find out more about our culture here.

Additionally, "nice to have" skills include experience related to AI control and cybersecurity. Depending on your preferred role, we will weigh these characteristics differently, e.g. software engineers don't need research experience but must have strong software engineering skills.

LOGISTICS

- Start Date: Target of 2-3 months after the first interview.
- Time Allocation: Full-time.
- Location: The office is in London, and the building is shared with the London Initiative for Safe AI (LISA) offices. This is an in-person role. In rare situations, we may consider partially remote arrangements on a case-by-case basis.
- Work Visas: We can sponsor UK visas.

BENEFITS

- Salary: a competitive UK-based salary.
- Flexible work hours and schedule.
- Unlimited vacation.
- Unlimited sick leave.
- Lunch, dinner, and snacks are provided for all employees on workdays.
- Paid work trips, including staff retreats, business trips, and relevant conferences.
- A yearly $1,000 (USD) professional development budget.

We want to emphasize that people who feel they don't fulfill all of these characteristics but think they would be a good fit for the position nonetheless are strongly encouraged to apply. We believe that excellent candidates can come from a variety of backgrounds and are excited to give you opportunities to shine.

Equality Statement: Apollo Research is an Equal Opportunity Employer.
We value diversity and are committed to providing equal opportunities to all, regardless of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, or sexual orientation.

How to apply: Please complete the application form with your CV. A cover letter is optional. Please also feel free to share links to relevant work samples.

About the interview process: Our multi-stage process includes a screening interview, a take-home test (approx. 2 hours), 3 technical interviews, and a final interview with Marius (CEO). The technical interviews will be closely related to tasks the candidate would do on the job. There are no leetcode-style general coding interviews. If you want to prepare for the interviews, we suggest working on hands-on LLM evals projects (e.g. as suggested in our starter guide), such as building LM agent evaluations in Inspect (see the minimal sketch below).

Applications deadline: We are reviewing applications on a rolling basis. It might take a few weeks until you hear from us.

This role is supported by AI Futures Grants, a UK Government program designed to help the next generation of AI leaders meet the costs of relocating to the UK. AI Futures Grants provide financial support to reimburse relocation costs such as work visa fees, the immigration health surcharge, and travel/subsistence expenses. Successful candidates for this role may be able to get up to £10,000 to meet associated relocation costs, subject to terms and conditions.
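To make the suggested preparation concrete, here is a minimal sketch of an Inspect eval task. It is illustrative only: the task name, the sample, and the model name in the usage note are our own examples rather than anything from this posting, and it assumes the open-source inspect_ai package is installed (`pip install inspect-ai`).

```python
# Minimal illustrative Inspect eval task (assumes the open-source inspect_ai package).
# The task name, sample, and target below are hypothetical examples.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate

@task
def hello_eval():
    return Task(
        dataset=[Sample(input="Reply with exactly the word 'Hello'.", target="Hello")],
        solver=generate(),   # a single model call, no agentic scaffolding
        scorer=includes(),   # checks whether the target string appears in the output
    )
```

You could then run it with something like `inspect eval hello_eval.py --model openai/gpt-4o` (the model name is just an example) and browse the resulting transcripts with `inspect view`.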
Applications deadline: We review applications on a rolling basis and encourage early submissions.

ABOUT APOLLO RESEARCH

The capabilities of current AI systems are evolving at a rapid pace. While these advancements offer tremendous opportunities, they also present significant risks, such as the potential for deliberate misuse or the deployment of sophisticated yet misaligned models. At Apollo Research, our primary concern is deceptive alignment, a phenomenon where a model appears to be aligned but is, in fact, misaligned and capable of evading human oversight. Our approach focuses on behavioral model evaluations, which we then use to audit real-world models. We also combine black-box approaches with applied interpretability. In our evaluations, we focus on LM agents, i.e. LLMs with agentic scaffolding similar to AIDE or SWE-agent. We also study model organisms in controlled environments (see our security policies), e.g. to better understand capabilities related to scheming.

At Apollo, we aim for a culture that emphasizes truth-seeking, being goal-oriented, giving and receiving constructive feedback, and being friendly and helpful. If you're interested in more details about what it's like to work at Apollo, you can find more information here.

THE OPPORTUNITY

We're seeking a Software Engineer who will enhance our capability to evaluate Large Language Models (LLMs) by building critical tools and libraries for our evals team. Your work will directly impact our mission to make AI systems safer and more aligned.

What You'll Accomplish in Your First Year
1. Accelerate our frontier LLM evaluations research by leading the design and implementation of software libraries and tools that underpin our end-to-end research workflows.
2. Ensure the reliability of our experimental results by building tools that identify subtle changes in LLM behavior and maintain integrity across our research.
3. Shape the vision for our internal software platform, leading key decisions about how researchers will run workloads, interact with data, analyze results, and share insights.
4. Increase team productivity by providing design guidance, debugging, and technical support to unblock researchers and enable them to focus on their core research.
5. Build expertise working with state-of-the-art (SOTA) AI systems and tackling the unique challenges of building software around them.

Key Responsibilities
- Rapidly prototype and iterate on internal tools and libraries for building and running frontier language model evaluations.
- Lead the development of major features from ideation to implementation.
- Collaboratively define and shape the software roadmap and priorities.
- Establish and advocate for good software design practices and codebase health.
- Establish design patterns for new types of evaluations.
- Build LLM agents that automate our internal software development and research.
- Work closely with researchers to understand the challenges they face.
- Assist researchers with implementation and debugging of research code.
- Communicate clearly about technical decisions and tradeoffs.

Job Requirements

You must have experience writing production-quality Python code. We are looking for strong generalist software engineers with a track record of taking ownership. Candidates may demonstrate these skills in different ways. For example, you might have one or more of the following:
- Led the development of a successful software tool or product over an extended period (e.g. 1 year or more)
- Started and built the tech stack for a company
- Worked your way up in a large organisation, repeatedly gaining more responsibility and influencing a large part of the codebase
- Authored and/or maintained a popular open-source tool or library
- 5+ years of professional software engineering experience

The following experience would be a bonus:
- Experience working with LLM agents or LLM evaluations
- Infosecurity / cybersecurity experience
- Experience working with AWS
- Interest in AI safety

We want to emphasize that people who feel they don't fulfill all of these characteristics but think they would be a good fit for the position nonetheless are strongly encouraged to apply. We believe that excellent candidates can come from a variety of backgrounds and are excited to give you opportunities to shine.

Representative projects
- Implement an internal job orchestration tool which allows researchers to run evals on remote machines.
- Build out an eval runs database which stores all historical results in a queryable format.
- Implement LLM agents to automate internal software engineering and research tasks.
- Design and implement research tools for loading, viewing, and interacting with transcripts from eval runs.
- Establish internal patterns and conventions for building new types of evaluations within the Inspect framework.
- Optimize the CI pipeline to reduce execution time and eliminate flaky tests.

ABOUT THE TEAM

The current evals team consists of Mikita Balesni, Jérémy Scheurer, Alex Meinke, Rusheb Shah, Bronson Schoen, Andrei Matveiakin, Felix Hofstätter, and Axel Højmark. Marius Hobbhahn manages and advises the team, though team members lead individual projects. You would work closely with Rusheb and Andrei, who are the full-time software engineers on the evals team, but you would also interact a lot with everyone else. You can find our full team here.

EVALS TEAM WORK

We have recently switched to Inspect as our primary evals framework. If you want to prepare for the SWE role, we recommend playing around with Inspect. The evals team focuses on the following efforts:
- Conceptual work on safety cases for scheming, for example, our work on evaluation-based safety cases for scheming.
- Building evaluations for scheming-related properties, such as situational awareness or deceptive reasoning.
- Conducting evaluations on frontier models and publishing the results either to the general public or to a target audience such as AI developers or governments, for example, our work in OpenAI's o1-preview system card.
- Creating model organisms and demonstrations of behavior related to deceptive alignment, e.g. exploring the influence of goal-directedness on scheming.
- Designing and evaluating AI control protocols. We have not started these efforts yet but intend to work on them starting Q2 2025.

LOGISTICS

- Start Date: Target of 2-3 months after the first interview.
- Time Allocation: Full-time.
- Location: The office is in London, and the building is shared with the London Initiative for Safe AI (LISA) offices. This is an in-person role. In rare situations, we may consider partially remote arrangements on a case-by-case basis.
- Work Visas: We can sponsor UK visas.

BENEFITS

- Salary: a competitive UK-based salary.
- Flexible work hours and schedule.
- Unlimited vacation.
- Unlimited sick leave.
- Lunch, dinner, and snacks are provided for all employees on workdays.
- Paid work trips, including staff retreats, business trips, and relevant conferences.
- A yearly $1,000 (USD) professional development budget.
Equality Statement: Apollo Research is an Equal Opportunity Employer. We value diversity and are committed to providing equal opportunities to all, regardless of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, or sexual orientation.

How to apply: Please complete the application form with your CV. A cover letter is optional. Please also feel free to share links to relevant work samples.

About the interview process: Our multi-stage process includes a screening interview, a take-home test (approx. 2 hours), 3 technical interviews, and a final interview with Marius (CEO). The technical interviews will be closely related to tasks the candidate would do on the job. There are no leetcode-style general coding interviews. If you want to prepare for the interviews, we suggest working on hands-on LLM evals projects (e.g. as suggested in our starter guide), such as building LM agent evaluations in Inspect (see the sketch below).

Applications deadline: We review applications on a rolling basis and encourage early submissions.

This role is supported by AI Futures Grants, a UK Government program designed to help the next generation of AI leaders meet the costs of relocating to the UK. AI Futures Grants provide financial support to reimburse relocation costs such as work visa fees, the immigration health surcharge, and travel/subsistence expenses. Successful candidates for this role may be able to get up to £10,000 to meet associated relocation costs, subject to terms and conditions.

Thank you very much for applying to Apollo Research.
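As a concrete starting point for playing around with Inspect, here is a minimal sketch of an agentic, tool-using eval task. Everything in it is illustrative: the task name, sample, target, and sandbox choice are our own assumptions, and it relies only on the open-source inspect_ai package (its basic_agent solver and bash tool), not on any internal Apollo tooling.

```python
# Illustrative sketch of an agentic Inspect eval (assumes `pip install inspect-ai`
# and a local Docker installation for the sandbox). Task and sample are hypothetical.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import basic_agent, system_message
from inspect_ai.tool import bash

@task
def find_the_flag():
    return Task(
        dataset=[
            Sample(
                input="A file named flag.txt is hidden somewhere under /tmp. "
                      "Find it and print its contents.",
                target="FLAG{example}",  # placeholder target for the includes() scorer
                # hypothetical setup script that plants the flag inside the sandbox
                setup="mkdir -p /tmp/secret && echo 'FLAG{example}' > /tmp/secret/flag.txt",
            )
        ],
        solver=basic_agent(
            init=system_message("You can run shell commands with the bash tool."),
            tools=[bash(timeout=60)],  # the agent explores the sandbox via bash
        ),
        scorer=includes(),
        sandbox="docker",  # run the agent's commands in an isolated container
    )
```

It should be runnable with something like `inspect eval find_the_flag.py --model <provider/model>`, after which the agent's transcripts can be browsed with `inspect view`.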