The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally. We're here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.

## About the Team

Security Engineering at the AI Security Institute (AISI) exists to help our researchers move fast, safely. We are founding the Security Engineering team in a largely greenfield cloud environment, and we treat security as a measurable, researcher-centric product: secure-by-design platforms, automated governance, and intelligence-led detection that protect our people, partners, models, and data. We work shoulder to shoulder with research units and core technology teams, and we optimise for enablement over gatekeeping, proportionate controls, low ego, and high ownership.

## What you might work on

- Help design and ship paved roads and secure defaults across our platform so researchers can build quickly and safely
- Build provenance and integrity into the software supply chain (signing, attestation, artefact verification, reproducibility)
- Strengthen identity, segmentation, secrets, and key management to create a defensible foundation for evaluations at scale
- Develop automated, evidence-driven assurance mapped to relevant standards, reducing audit toil and improving signal
- Create detections and response playbooks tailored to model evaluations and research workflows, and run exercises to validate them
- Threat-model new evaluation pipelines with research and core technology teams, fixing classes of issues at the platform layer
- Assess third-party services and hardware/software supply chains, and introduce lightweight controls that raise the bar
- Contribute to open standards and open source, and share lessons with the broader community where appropriate

If you want to build security that accelerates frontier-scale AI safety research, and see your work land in production quickly, this is a good place to do it.

## Role Summary

Own and operationalise AISI's governance, risk, and compliance (GRC) engineering practice. This role sits at the intersection of security engineering, assurance, and policy, turning paper-based requirements into actionable, testable, and automatable controls. You will lead the technical response to GovAssure and other regulatory requirements, and ensure compliance is continuous and evidence-driven. You will also extend GRC disciplines to frontier AI systems, integrating model lifecycle artefacts, evaluations, and release gates into the control and evidence pipeline.

## Responsibilities

- Translate regulatory frameworks (e.g. GovAssure, CAF) into programmatic controls and technical artefacts
- Build and maintain a continuous control validation and evidence pipeline (see the sketch after this list)
- Develop and own a capability-based risk management approach aligned to AISI's delivery model
- Maintain the AISI risk register and the risk acceptance/exception handling process
- Act as the key interface for DSIT governance, policy, and assurance stakeholders
- Work cross-functionally to ensure risk and compliance are embedded into AISI delivery lifecycles
- Extend controls and evidence to the frontier AI model lifecycle
- Integrate AI safety evidence (e.g. model/dataset documentation, evaluations, red-team results, release gates) into automated compliance workflows
- Define and implement controls for model weights handling, compute governance, third-party model/API usage, and model misuse/abuse monitoring
- Support readiness for AI governance standards and regulations (e.g. NIST AI RMF, ISO/IEC 42001, ISO/IEC 23894; EU AI Act exposure where relevant)
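To make the control validation and evidence pipeline concrete, here is a minimal sketch of the kind of check such a pipeline might run on a schedule. It is illustrative only: the `controls.yaml` layout, control IDs, evidence paths, and freshness thresholds are assumptions invented for this example, not an AISI specification.

```python
"""Illustrative continuous-control-validation check (not AISI's actual pipeline).

Assumes a hypothetical controls.yaml mapping control IDs to evidence
artefacts, e.g.:

controls:
  - id: CAF-B2.a
    description: MFA enforced for all privileged accounts
    evidence: evidence/idp_mfa_report.json
    max_age_days: 7
"""
import sys
import time
from pathlib import Path

import yaml  # PyYAML


def validate(controls_file: str = "controls.yaml") -> int:
    """Return the number of controls whose evidence is missing or stale."""
    spec = yaml.safe_load(Path(controls_file).read_text())
    failures = 0
    for control in spec["controls"]:
        evidence = Path(control["evidence"])
        max_age_seconds = control["max_age_days"] * 86400
        if not evidence.exists():
            print(f"FAIL {control['id']}: evidence missing ({evidence})")
            failures += 1
        elif time.time() - evidence.stat().st_mtime > max_age_seconds:
            print(f"FAIL {control['id']}: evidence stale ({evidence})")
            failures += 1
        else:
            print(f"PASS {control['id']}")
    return failures


if __name__ == "__main__":
    # A non-zero exit fails the CI job, so stale or missing evidence
    # blocks the pipeline rather than waiting for an audit to notice.
    sys.exit(1 if validate() else 0)
```

Run from CI, a check like this turns audit preparation from a periodic scramble into a continuously green (or red) signal, which is what "evidence-driven" compliance means in practice.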
## Profile requirements

- Staff- or Principal-level engineer or technical GRC specialist
- Experience in compliance-as-code, control validation, or regulated cloud environments
- Familiar with YAML, GitOps, structured artefacts, and automated policy checks
- Equally confident in engineering meetings and policy/government forums
- Practical understanding of frontier AI system risks and artefacts (e.g. model evaluations, red-teaming, model/dataset documentation, release gating, weights handling), sufficient to translate AI policy into controls and machine-checkable evidence
- Desirable: familiarity with MLOps tooling (e.g. experiment tracking, model registries) and integrating ML artefacts into CI/CD or evidence pipelines
- Translating policy into technical controls
- Designing controls as code or machine-checkable evidence (a release-gate sketch follows this list)
- Familiarity with frameworks (GovAssure, CAF, NIST) and AI governance standards (NIST AI RMF, ISO/IEC 42001, ISO/IEC 23894)
- Experience building risk management workflows, including for AI-specific risks (model misuse, capability escalation, data/weights security)
- Stakeholder engagement with governance teams and AI/ML engineering teams
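As a second illustration, "controls as code" applied to the model lifecycle might look like a release gate that blocks deployment unless required evaluation evidence is present and passing. The `eval_results.yaml` schema and evaluation names below are hypothetical, chosen only to show the shape of a machine-checkable release gate.

```python
"""Illustrative model release gate (hypothetical schema, not an AISI artefact).

Assumes an eval_results.yaml of the form:

model: example-model-v3
evaluations:
  - name: dangerous-capability-suite
    passed: true
  - name: misuse-red-team
    passed: true
required:
  - dangerous-capability-suite
  - misuse-red-team
"""
from pathlib import Path

import yaml  # PyYAML


def release_allowed(results_file: str = "eval_results.yaml") -> bool:
    """Allow release only if every required evaluation has passed."""
    doc = yaml.safe_load(Path(results_file).read_text())
    passed = {e["name"] for e in doc["evaluations"] if e.get("passed")}
    missing = [name for name in doc["required"] if name not in passed]
    for name in missing:
        print(f"BLOCK {doc['model']}: required evaluation not passed: {name}")
    return not missing


if __name__ == "__main__":
    # Exit code gates the deployment step in CI/CD.
    raise SystemExit(0 if release_allowed() else 1)
```

Wiring a gate like this into the deployment pipeline makes "release gating" an enforced control rather than a checklist item.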
## What We Offer

Impact you couldn't have anywhere else:

- Incredibly talented, mission-driven and supportive colleagues.
- Direct influence on how frontier AI is governed and deployed globally.
- Work with the Prime Minister's AI Advisor and leading AI companies.
- Opportunity to shape the first and best-resourced public-interest research team focused on AI security.

Resources & access:

- Pre-release access to multiple frontier models and ample compute.
- Extensive operational support so you can focus on research and ship quickly.
- Work with experts across national security, policy, AI research and adjacent sciences.
- If you're talented and driven, you'll own important problems early.
- 5 days off for learning and development, annual stipends for learning and development, and funding for conferences and external collaborations.
- Freedom to pursue research bets without product pressure.
- Opportunities to publish and collaborate externally.

Life & family:

- Modern central London office (cafes, food court, gym) or the option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford or Bristol.
- Hybrid working, flexibility for occasional remote work abroad, and stipends for work-from-home equipment.
- At least 25 days' annual leave, 8 public holidays, extra team-wide breaks and 3 days off for volunteering.
- Generous paid parental leave (36 weeks of UK statutory leave shared between parents + 3 extra paid weeks + option for additional unpaid time).
- On top of your salary, we contribute 28.97% of your base salary to your pension.
- Discounts and benefits for cycling to work, donations and retail/gyms.

Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 (base plus technical allowance), with the 28.97% employer pension contribution and other benefits on top. This role sits outside the DDaT pay framework, as its scope requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures.

## Additional Information

### Internal Fraud Database

The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations; the civil servants concerned are banned from further employment in the civil service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.

We are committed to providing equal opportunities and promoting diversity and inclusion for all applicants.
# Head of Operations

Build the systems that enable BlueDot to scale from 8 to 25+ staff so we can build the workforce that protects humanity.

We're a nonprofit startup building the workforce that protects humanity. Since 2022, we've trained over 6,000 people in AI safety and biosecurity. Our alumni work at Anthropic, DeepMind, the UK AI Safety Institute, and dozens of other organisations shaping how transformative technologies affect the world. We're a small team based in London, expanding to San Francisco in Q1 2026. We raised $25M in 2025.

## Why this role matters

We're about to grow fast. We've doubled from 4 to 8 people over the past 4 months. Over the next 12 months, we'll scale from 8 to 25+ people, set up a new office in San Francisco, and launch an entrepreneurs-in-residence program hosting 10-20 founders at any given time starting mid-2026.

This role exists to build the operational infrastructure for BlueDot's next phase of growth. You'll own the systems, processes and vendor relationships that make the organisation work, and you'll create an environment where our team and visiting entrepreneurs can do exceptional work.

## What you'll do

**Fix problems before anyone notices them.** Spot friction that's slowing people down and eliminate it. Know when a problem needs a scrappy quick fix versus a durable new system, and solve it accordingly.

**Scale the team from 8 to 25+.** Build recruiting and onboarding systems that let us 3x the team without losing quality or speed. Own people operations: contracts, payroll coordination, benefits, and the admin infrastructure that keeps a growing team running. You'll manage external contractors/agencies for most of this. Manage relationships with external partners, e.g. accountants and lawyers.

**Build and run the SF office.** Set up our San Francisco headquarters from scratch: find and negotiate the lease, design the layout, source furniture and equipment, and create an environment where our team and visiting entrepreneurs can do exceptional work. Own ongoing office operations and improvements, so that we build a space that people actively want to work from.

**Support our entrepreneurs-in-residence program.** Starting in mid-2026, we'll host 10-20 entrepreneurs at any given time, building new AI safety and biosecurity organisations. You'll own the operational side: coordinating with immigration lawyers on visas, sourcing accommodation, and making sure their working environment enables them to succeed.

You'll report to BlueDot's CEO.

## About you

We're looking for someone sharp and high-agency who's energised to help scale a mission-driven startup. You might be a great fit if you:

- **Have scaled operations at a fast-growth startup.** You've seen what breaks when a company triples in size, and you've built the systems to prevent it. You know the difference between "process that accelerates us" and "bureaucracy that slows things down."
- **Bias toward action.** You don't wait for someone to tell you what's broken. Your instinct is to fix things immediately, not write a document and schedule a meeting.
- **Build only what's needed.** You're allergic to unnecessary infrastructure. When you build systems, they're lean and they solve real problems. You delete things that aren't working.
- **Can hold complexity.** You can track many moving pieces without dropping any (visas, leases, contractors, staff onboarding, compliance), and you have systems for keeping everything in check.
- **Manage external partners effectively.** You won't do bookkeeping or legal work yourself, but you'll make sure accountants and lawyers deliver what we need, when we need it.
- **Care about the mission.** You understand why AI safety and pandemic preparedness matter, and you're keen to learn more. You know that BlueDot's operational work is essential for helping us protect humanity.

We encourage speculative applications - most strong candidates won't meet all these criteria!

## What we offer

- **Massive impact:** You'll build the operational infrastructure for an organisation that's training and placing people who steer the development of AI.
- **San Francisco-based role:** This is an in-person role in our new SF office starting in Q1 2026. US visa sponsorship available.
- **Freedom and autonomy:** Our expense policy is "act in BlueDot's best interest", unlimited PTO, minimal bureaucracy.
- **Room to grow:** As BlueDot scales, so does this role. You'll likely build and lead a small ops team within 12 months.
- **Compensation:** $150-225K salary depending on experience, 10% employer 401(k) contribution (no match required), and comprehensive health insurance.

## Apply today

Applying takes 20-30 minutes, and we encourage you to apply as soon as possible. We're evaluating candidates on a rolling basis and want to make an offer quickly.

## Application process

1. Initial application
2. 3-hour work test (paid)
3. 50-minute interview
4. In-person work trial in San Francisco

If you have any questions about the role, email us.