The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally. We're here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.

About the Team

Security Engineering at the AI Security Institute (AISI) exists to help our researchers move fast, safely. We are founding the Security Engineering team in a largely greenfield cloud environment, and we treat security as a measurable, researcher-centric product: secure-by-design platforms, automated governance, and intelligence-led detection that protect our people, partners, models, and data. We work shoulder to shoulder with research units and core technology teams, and we optimise for enablement over gatekeeping, proportionate controls, low ego, and high ownership.

What you might work on

- Help design and ship paved roads and secure defaults across our platform so researchers can build quickly and safely
- Build provenance and integrity into the software supply chain (signing, attestation, artefact verification, reproducibility)
- Support strengthened identity, segmentation, secrets, and key management to create a defensible foundation for evaluations at scale
- Develop automated, evidence-driven assurance mapped to relevant standards, reducing audit toil and improving signal
- Create detections and response playbooks tailored to model evaluations and research workflows, and run exercises to validate them
- Threat model new evaluation pipelines with research and core technology teams, fixing classes of issues at the platform layer
- Assess third-party services and hardware/software supply chains; introduce lightweight controls that raise the bar
- Contribute to open standards and open source, and share lessons with the broader community where appropriate

If you want to build security that accelerates frontier-scale AI safety research, and see your work land in production quickly, this is a good place to do it.

Role Summary

Own and operationalise AISI's governance, risk, and compliance (GRC) engineering practice. This role sits at the intersection of security engineering, assurance, and policy, turning paper-based requirements into actionable, testable, and automatable controls. You will lead the technical response to GovAssure and other regulatory requirements, and ensure compliance is continuous and evidence-driven. You will also extend GRC disciplines to frontier AI systems, integrating model lifecycle artefacts, evaluations, and release gates into the control and evidence pipeline.

Responsibilities

- Translate regulatory frameworks (e.g. GovAssure, CAF) into programmatic controls and technical artefacts
- Build and maintain a continuous control validation and evidence pipeline
- Develop and own a capability-based risk management approach aligned to AISI's delivery model
- Maintain the AISI risk register and the risk acceptance/exception handling process
- Act as the key interface for DSIT governance, policy, and assurance stakeholders
- Work cross-functionally to ensure risk and compliance are embedded into AISI delivery lifecycles
- Extend controls and evidence to the frontier AI model lifecycle
- Integrate AI safety evidence (e.g. model/dataset documentation, evaluations, red-team results, release gates) into automated compliance workflows
- Define and implement controls for model weights handling, compute governance, third-party model/API usage, and model misuse/abuse monitoring
- Support readiness for AI governance standards and regulations (e.g. NIST AI RMF, ISO/IEC 42001, ISO/IEC 23894; EU AI Act exposure where relevant)

Profile requirements

- Staff or Principal-level engineer or technical GRC specialist
- Experience in compliance-as-code, control validation, or regulated cloud environments
- Familiar with YAML, GitOps, structured artefacts, and automated policy checks
- Equally confident in engineering meetings and policy/government forums
- Practical understanding of frontier AI system risks and artefacts (e.g. model evaluations, red-teaming, model/dataset documentation, release gating, weights handling), sufficient to translate AI policy into controls and machine-checkable evidence
- Desirable: familiarity with MLOps tooling (e.g. experiment tracking, model registries) and with integrating ML artefacts into CI/CD or evidence pipelines
- Translating policy into technical controls
- Designing controls as code or machine-checkable evidence (see the sketch below)
- Familiarity with frameworks (GovAssure, CAF, NIST) and AI governance standards (NIST AI RMF, ISO/IEC 42001, ISO/IEC 23894)
- Experience building risk management workflows, including for AI-specific risks (model misuse, capability escalation, data/weights security)
- Stakeholder engagement with governance teams and AI/ML engineering teams
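To make the "controls as code" and "machine-checkable evidence" items above concrete, here is a minimal, hypothetical sketch, not AISI's actual tooling or framework mappings: a control defined in YAML and mapped to framework references, evaluated in Python against a mock cloud inventory, and emitting a timestamped evidence record. The control ID, framework references, file contents and check logic are illustrative assumptions only.

```python
# Hedged sketch of a machine-checkable control and its evidence record.
# Control IDs, framework mappings and the inventory are illustrative assumptions.
import datetime
import json

import yaml  # PyYAML

CONTROL_DEFINITION = """
id: CTRL-ACCESS-001
framework_refs: ["CAF B2.a", "GovAssure access-control profile"]
description: Storage buckets holding model artefacts must block public access.
check:
  type: inventory_query
  expect: no_public_buckets
"""


def run_check(control: dict, inventory: list[dict]) -> dict:
    """Evaluate the control against a (mock) cloud inventory and return evidence."""
    public = [b["name"] for b in inventory if b.get("public_access", False)]
    return {
        "control_id": control["id"],
        "framework_refs": control["framework_refs"],
        "result": "pass" if not public else "fail",
        "failing_resources": public,
        "checked_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }


if __name__ == "__main__":
    control = yaml.safe_load(CONTROL_DEFINITION)
    # In practice the inventory would come from the cloud provider's APIs;
    # here it is a hard-coded stand-in for illustration.
    inventory = [
        {"name": "model-weights-prod", "public_access": False},
        {"name": "eval-results-scratch", "public_access": True},
    ]
    print(json.dumps(run_check(control, inventory), indent=2))
```

In a real pipeline, checks like this would typically run on a schedule or in CI, pull inventory from the cloud provider's APIs, and store the resulting evidence records so that audits can replay them rather than relying on manually collected screenshots.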
What We Offer

Impact you couldn't have anywhere else
- Incredibly talented, mission-driven and supportive colleagues.
- Direct influence on how frontier AI is governed and deployed globally.
- Work with the Prime Minister's AI Advisor and leading AI companies.
- Opportunity to shape the first and best-resourced public-interest research team focused on AI security.

Resources & access
- Pre-release access to multiple frontier models and ample compute.
- Extensive operational support so you can focus on research and ship quickly.
- Work with experts across national security, policy, AI research and adjacent sciences.
- If you're talented and driven, you'll own important problems early.
- 5 days off for learning and development, annual learning and development stipends, and funding for conferences and external collaborations.
- Freedom to pursue research bets without product pressure.
- Opportunities to publish and collaborate externally.

Life & family
- Modern central London office (cafes, food court, gym) or option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford or Bristol.
- Hybrid working, flexibility for occasional remote work abroad, and stipends for work-from-home equipment.
- At least 25 days' annual leave, 8 public holidays, extra team-wide breaks and 3 days off for volunteering.
- Generous paid parental leave (36 weeks of UK statutory leave shared between parents + 3 extra paid weeks + option for additional unpaid time).
- On top of your salary, we contribute 28.97% of your base salary to your pension.
- Discounts and benefits for cycling to work, donations and retail/gyms.

Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 (base plus technical allowance), with the 28.97% employer pension contribution and other benefits on top. This role sits outside the DDaT pay framework because its scope requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures.

Additional Information

Internal Fraud Database
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations. In such instances, civil servants are banned from further employment in the civil service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is upheld and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.

We are committed to providing equal opportunities and promoting diversity and inclusion for all applicants.
Why Faculty?

We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we've worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here. We don't chase hype cycles. We innovate, build and deploy responsible AI which moves the needle - and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence. Our business, and reputation, is growing fast and we're always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch-defining technology; join a company where you'll be empowered to envision its most powerful applications, and to make them happen.

About the team

Our Government and Public Services business unit is committed to leveraging AI for the benefit of individual citizens and the public good. From our work informing strategic government decisions, to optimising our NHS, through to protecting children from harmful online content - we know that AI offers opportunities to drive improvements at every level of Government, and we are proud to lead on some of the most impactful work happening in the sector. Because of the nature of the work we do with our Government clients, you may need to be eligible for UK Security Clearance (SC) and willing to work on site with these customers from time to time.

About the role

As a Senior Manager within our AI Safety team, you'll lead delivery across our AI Safety portfolio, focusing on frontier model evaluations and crucial crossover work with our customers in the Government & Public Services space. This is a strategic role, working alongside peers to shape the commercial delivery of our AI Safety projects in the UK. You'll serve as the primary link between clients and our dedicated data scientists, translating cutting-edge AI safety research into actionable change, and turning model red teaming and safeguard testing with labs like OpenAI and Anthropic into strategic, impactful solutions.

What you'll be doing:
- Overseeing, and providing thought leadership on, the delivery of novel and complex AI safety evaluations and red teaming projects for clients.
- Forming strong, trusting relationships with customers, internal safety data scientists, and technical partners, including frontier labs.
- Developing and executing compelling proposals to grow our AI safety and governance work across a wide range of sectors.
- Advising clients on AI safety strategy and technical implementation, acting as a trusted partner and consultant.
- Mentoring and developing team members, aligning their responsibilities with the fast-growing and important AI safety domain.
- Supporting wider delivery work as a Senior Manager when needed by the business, to ensure maximum strategic flexibility and commercial impact.

Who we're looking for:
- You bring proven experience in, or a passion for, applied AI safety, possibly from labs, academia, or evaluation/red teaming roles.
- You understand the commercial consulting delivery model, allowing you to focus immediately on account growth and project oversight.
- You can effectively bridge the gap between highly technical AI safety research and strategic business challenges, communicating complex ideas clearly.
- You are excited by the opportunity to join a globally leading team in the fast-growing and vital AI safety ecosystem.
- You possess the flexibility to support broader Senior Manager work and embrace an entrepreneurial approach to a highly visible portfolio.
- You thrive in ambiguous settings and demonstrate a structured approach to problem solving and delivering high-quality, high-stakes projects.

The Interview Process
- Talent Team Screen (30 minutes)
- Introduction to the team (60 minutes)
- Case Study Interview (60 minutes)
- Culture and Leadership Interview (60 minutes)

Our Recruitment Ethos

We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We're united by a deep intellectual curiosity and desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.

Some of our standout benefits:
- Unlimited Annual Leave Policy
- Private healthcare and dental
- Enhanced parental leave
- Family-Friendly Flexibility & Flexible Working
- Sanctus Coaching
- Hybrid Working (2 days in our Old Street office, London)

If you don't feel you meet all the requirements, but are excited by the role and know you bring some key strengths, please do apply or reach out to our Talent Acquisition team for a confidential chat. Please know we are open to conversations about part-time roles or condensed hours.