AI Security Institute
Research Engineer/Scientist (Mitigations) - Chem Bio
London, UK

About the AI Security Institute
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally. We're here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.

About the Chem Bio team
AISI's Chem Bio (CB) team conducts research to assess evolving AI capabilities related to science R&D and CB misuse, and the effectiveness of technical safeguards that might mitigate risks arising from those capabilities. The goal of our research is to inform critical decisions on security, opportunities, policy and risk mitigation made by governments and AI developers. We're a close-knit, unusually interdisciplinary team - made up of machine learning researchers and engineers, software engineers, virologists and bacteriologists, behavioural research scientists, biosecurity experts, long-standing CB policy specialists and talented generalists - who work closely with other technical and policy teams across government. The team is currently led by Sophie Rose.

This role also involves collaborating closely with AISI's Safeguards team, who evaluate the protections on current frontier AI systems and research what measures could better secure them in the future. The Safeguards team is currently led by Xander Davies and advised by Geoffrey Irving and Yarin Gal.

Role Responsibilities
- Lead ambitious research projects to understand the feasibility and effectiveness of potential technical safeguards for AI systems' CB capabilities
- Partner with frontier AI developers and the Safeguards team to rigorously assess and strengthen existing technical mitigations designed to reduce misuse of models' CB capabilities (e.g. strengthening biological and chemical classifiers - see our recent collaborations with Anthropic and OpenAI)
- Design, build and run evaluations that stress-test CB safeguards; analyse results and deliver clear, actionable findings
- Critically review developers' CB capability assessments, safeguards safety cases and related policies to raise the bar on safety
- Translate findings into practical guidance that informs developer practices and decisions

Example questions you might tackle
- How effective is pre-training data filtering at reducing harmful CB capabilities while preserving benign performance? What scope of filtering works best, and how does this extend to open-weight models? (A hedged sketch of one such filtering pass follows this list.)
- What would an effective differential or structured access regime look like for advanced CB-related AI system capabilities?
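To make the first example question above concrete, here is a minimal sketch of what a pre-training data filtering pass can look like: a cheap lexical screen followed by a classifier screen over a text corpus. It is an illustration only; the term list, threshold and `hazard_classifier` callable are placeholder assumptions, not AISI's or any developer's actual filtering pipeline.

```python
# Minimal, illustrative sketch of a two-stage pre-training data filter.
# Assumes documents arrive as dicts with a "text" field and that the caller
# supplies a hazard classifier; the term list below is a placeholder, not a
# real hazard taxonomy.
from typing import Callable, Iterable, Iterator

# Placeholder screening terms; a real filter would use a curated, expert-reviewed list.
FLAG_TERMS = {"select agent", "aerosolisation", "precursor synthesis"}

def filter_pretraining_corpus(
    documents: Iterable[dict],
    hazard_classifier: Callable[[str], float],  # returns an estimated P(hazardous)
    threshold: float = 0.5,
) -> Iterator[dict]:
    """Yield only documents that pass both a lexical screen and a classifier screen."""
    for doc in documents:
        text = doc.get("text", "")
        lowered = text.lower()
        # Stage 1: cheap keyword screen to cut classifier cost.
        if any(term in lowered for term in FLAG_TERMS):
            continue
        # Stage 2: model-based screen for content the keyword list misses.
        if hazard_classifier(text) >= threshold:
            continue
        yield doc
```

The open research questions sit exactly in this design space: what such a screen should target, how aggressive it can be before benign scientific performance degrades, and whether filtering at this stage meaningfully constrains open-weight models once they are released.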
Requirements
We are looking for the following skills, experience and attitudes, but a successful candidate will not necessarily need to meet all of these criteria. We can be flexible in shaping the role and salary to your background, expertise and level of experience.
- Broad knowledge of frontier AI development, safety and governance: training/fine-tuning pipelines, evaluations and safeguards, developers' frontier safety frameworks, and technical mitigations for AI-CB risk
- Hands-on experience building or working deeply with general-purpose AI systems and their safety/safeguards stacks
- Experience writing production-level Python code that is scalable, robust and easy to maintain, ideally in a team
- Knowledge of scaffolding, prompting, fine-tuning and/or evaluating large language models
- Knowledge of math, statistics and machine learning sufficient to read and critique AI research
- Demonstrated research taste and execution: originate high-leverage ideas, drive them independently, and ship impactful technical or governance products
- Bias to action and ownership; quickly learn unfamiliar domains and prioritise policy-relevant technical work over purely academic novelty
- High agency and adaptability; communicate clearly and collaborate effectively across disciplines while operating autonomously in a fast-paced, evolving environment
- Familiarity with relevant datasets, benchmarks, or evaluation methodologies for CB risks from AI

Please note that this role requires Security Clearance (SC), which requires at least 2 years of UK residency, and a willingness to undergo Developed Vetting (DV) if required.

Other core requirements
- Spend at least 9 days per fortnight working with us
- Work from our office in London (Whitehall) at least 3 days/week
- Be UK based

What We Offer

Impact you couldn't have anywhere else
- Incredibly talented, mission-driven and supportive colleagues
- Direct influence on how frontier AI is governed and deployed globally
- Work with the Prime Minister's AI Advisor and leading AI companies
- Opportunity to shape the first and best-resourced public interest research team focused on AI security

Resources & access
- Pre-release access to multiple frontier models and ample compute
- Extensive operational support so you can focus on research and ship quickly
- Work with experts across national security, policy, AI research and adjacent sciences
- Own important problems early if you're talented and driven
- 5 days off for learning and development, annual stipends for learning and development, and funding for conferences and external collaborations
- Freedom to pursue research bets without product pressure
- Opportunities to publish and collaborate externally

Life & family
- Modern central London office (cafes, food court, gym) or option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford or Bristol
- Hybrid working, flexibility for occasional remote work abroad, and stipends for work-from-home equipment
- At least 25 days' annual leave, 8 public holidays, extra team-wide breaks and 3 days off for volunteering
- Generous paid parental leave (36 weeks of UK statutory leave shared between parents + 3 extra paid weeks + option for additional unpaid time)
- On top of your salary, we contribute 28.97% of your base salary to your pension
- Discounts and benefits for cycling to work, donations and retail/gyms

Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 (base plus technical allowance), with 28.97% employer pension and other benefits on top. This role sits outside of the DDaT pay framework, given that its scope requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures.

Selection process
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
The interview process may vary from candidate to candidate, but you should expect a typical process to include some technical proficiency tests, discussions with a cross-section of our team at AISI (including non-technical staff), and conversations with your team lead. The process will culminate in a conversation with members of the senior team here at AISI.
- Initial interview
- Technical take-home test
- Second interview and review of take-home test
- Third interview
- Final interview with members of the senior team
AI Security Institute
Research Scientist, Open Source Technical Safeguards
London, UK

About the AI Security Institute
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally. We're here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.

Societal Resilience
Societal Resilience is a multidisciplinary team that studies how advanced AI models can impact people and society. We research the prevalence and severity of high-impact societal risks caused by frontier AI deployment, and develop mitigations to address these risks. Core research topics include the use of AI for assisting with criminal activities or for malicious social engineering, preventing critical overreliance on insufficiently robust systems, undermining trust in information, and jeopardising psychological wellbeing. We are interested in both immediate and medium-term risks.

Why this team matters
One emerging risk area we are concerned with is the use of open-weight models to drive harms such as child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII) generation. AISI has previously published research on methods for making open-weight models more robust against malicious tampering. In this role, you'll join a strongly collaborative technical research team to help design and develop technical safeguards for open-weight models that will reduce the risks of CSAM, NCII and other harms. We do not expect this role to handle this kind of content directly.

About the role
This is a research scientist position focused on developing technical safeguards against tampering with open-weight models. The role will focus on mitigating AI-generated CSAM and NCII by targeting the real-world supply chain driving harm: open-weight models, adaptation artifacts (LoRAs, guides), and downstream distribution infrastructure (hosting platforms, app stores, operating systems). Our approach prioritises downstream mitigations and actors beyond frontier model developers. This role will build technical tools, protocols and evidence that platforms and OS/app ecosystems can adopt. This work belongs inside UK government because effective mitigation requires cross-agency coordination (Home Office, DSIT, Ofcom), engagement with regulated platforms under the Online Safety Act, and credible evidence to inform policy trade-offs across innovation, competition and child protection.

This role will synthesise threat intelligence on how AI-generated CSAM and NCII are developed, create scalable screening methodologies that platforms can realistically run (a minimal illustrative sketch follows below), and publish best-practice protocols with NGOs to raise the floor across the ecosystem. You'll work closely with engineers and domain experts across AISI, as well as external research collaborators at the Home Office, the Internet Watch Foundation and Ofcom. Researchers on this team have substantial freedom to shape independent research agendas, lead collaborations, and initiate projects that push the frontier of what evaluations can reveal.
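As one hypothetical illustration of a screening check a platform could realistically run at upload time, the sketch below inspects only the header of a .safetensors file (tensor names and shapes) to flag files that look like LoRA adapters for human or downstream review, without loading weights or touching any generated content. The naming heuristics and function names are assumptions for illustration, not a description of an existing AISI or platform tool.

```python
# Hypothetical metadata-only triage check for uploaded .safetensors files.
# The safetensors format is an 8-byte little-endian length prefix followed by a
# JSON header of tensor names/dtypes/shapes, so no weights need to be loaded.
import json
import struct
from pathlib import Path

# Tensor-name substrings commonly used by LoRA training tools (illustrative, not exhaustive).
LORA_KEY_HINTS = ("lora_up", "lora_down", "lora_A", "lora_B", "hada_w1", "lokr_w1")

def read_safetensors_header(path: Path) -> dict:
    """Return the JSON header of a .safetensors file (tensor names, dtypes, shapes)."""
    with path.open("rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]  # 8-byte little-endian length prefix
        return json.loads(f.read(header_len).decode("utf-8"))

def looks_like_lora(path: Path) -> bool:
    """Cheap triage: does any tensor name match a known LoRA naming pattern?"""
    header = read_safetensors_header(path)
    tensor_names = (k for k in header if k != "__metadata__")
    return any(hint in name for name in tensor_names for hint in LORA_KEY_HINTS)
```

Metadata-only checks like this are deliberately cheap to run at platform scale; the research questions are about how far such signals can be pushed (for example, pairing them with weight-level fingerprints) before adversaries can trivially rename their way around them.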
Example Projects
- Publish a Problem Book framing the technical challenges and research directions for preventing CSAM/NCII misuse across model and hosting layers
- Develop threat models for how AI-generated CSAM and NCII are created and shared
- Design and pilot scalable, automated screening methodologies platforms can run pre-publication on uploads (topic-general prototypes that avoid exposure to illegal content)
- Develop approaches for identifying and tracking known or novel CSAM LoRAs to enable platform blocking at upload
- Co-develop best-practice protocols with NGOs (e.g., Thorn/IWF) for hosting, app store and OS enforcement

This is an individual contributor role with no line management responsibilities. You will report into a senior Research Scientist overseeing our team's misuse workstream.

Impact
Your work will raise safety standards across hosting and distribution layers, reduce the availability of CSAM/NCII-generating artifacts (e.g., LoRAs) on major platforms, inform industry protocols and possibly standards, and provide actionable evidence for government decisions. Crucially, we do not expect this role to handle NCII or CSAM material.

Role Requirements
We're flexible on the exact profile and expect successful candidates will meet many (but not necessarily all) of the criteria below. Depending on experience, we will consider candidates at either the RS or Senior RS level.
- At least 3 years of relevant experience in applied ML, trust & safety tooling, content moderation, security engineering, or adjacent technical fields; we also welcome strong earlier-career applicants (2-3 years) with demonstrated impact in open source technical work
- Deep familiarity with open-weight image/video models (diffusion, LoRA), model hosting ecosystems (e.g., Hugging Face, GitHub), and the limitations of pre-deployment safeguards
- Strong methodological rigour and creativity; able to design automated, scalable evaluations and detection methods that generalise and avoid reliance on illegal content
- Strong Python and ML stack (PyTorch/JAX), data engineering, and systems skills; experience building pipelines and tooling that run at platform scale
- Knowledge of fingerprinting and detection approaches (e.g., perceptual hashing, embedding-based similarity, behavioural signatures), and their privacy and robustness trade-offs (see the illustrative sketch after the Preferred list below)
- Excellent writing and communication for technical and policy audiences; ability to translate evidence into practical governance guidance
- High agency, ethical judgement, and safe working practices for sensitive topics
- Commitment to work from our London office in Whitehall for parts of the week, with flexibility for remote work; we're looking for full-time commitment but are open to part-time arrangements

Preferred
- Experience collaborating with hosting platforms, app stores, OS vendors, or regulators (e.g., Ofcom) on safety-by-design initiatives
- Familiarity with Online Safety Act requirements and platform trust & safety operations; prior work with NGOs such as IWF, Thorn, or STOPNCII.org
- Expertise in diffusion models and adaptation techniques (LoRA), model evaluation, and secure tooling for sensitive domains
- Experience with privacy-preserving computation, metadata-poor detection, and standardisation efforts (RFCs, protocols)
- Open source contributions (tools, libraries) and evidence of leading cross-sector technical projects
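To ground the fingerprinting requirement above, here is a minimal sketch of two of the named approaches: perceptual hashing (robust to small edits such as resizing or re-encoding) and embedding-based similarity (robust to larger transformations). It uses the widely available imagehash, Pillow and NumPy libraries; the reference sets, the `embed` function and the thresholds are illustrative assumptions, not recommended operating points.

```python
# Illustrative sketch of two fingerprint-matching checks against a set of
# known reference fingerprints. Thresholds and the embedding function are
# placeholders chosen for clarity, not tuned values.
from typing import Callable, Sequence

import imagehash
import numpy as np
from PIL import Image

def phash_match(candidate_path: str,
                reference_hashes: Sequence[imagehash.ImageHash],
                max_hamming: int = 8) -> bool:
    """Flag if the candidate image is within a small Hamming distance of any known hash."""
    candidate = imagehash.phash(Image.open(candidate_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(candidate - ref <= max_hamming for ref in reference_hashes)

def embedding_match(candidate_path: str,
                    embed: Callable[[Image.Image], np.ndarray],
                    reference_embeddings: np.ndarray,  # shape (n_refs, dim), L2-normalised
                    min_cosine: float = 0.9) -> bool:
    """Flag if the candidate's embedding is close (cosine similarity) to any reference."""
    vec = embed(Image.open(candidate_path))
    vec = vec / np.linalg.norm(vec)
    return bool((reference_embeddings @ vec).max() >= min_cosine)
```

The trade-offs flagged in the requirement show up directly here: lower thresholds catch more variants but raise false-positive rates, and embedding-based matching typically costs more compute and demands more careful handling of reference data than hashing.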
Example backgrounds
- Senior trust & safety engineer who built automated content integrity pipelines for a large platform; strong open-source track record; experience with model hosting ecosystems
- Applied ML researcher with a PhD/postdoc in computer vision or ML safety; hands-on with diffusion/LoRA; led evaluations and published tooling used by industry
- Security/data engineer with 3+ years building scalable detection systems; experience in fingerprinting, hashing and privacy-preserving methods; collaborated with regulators/NGOs

What we offer

Impact you couldn't have anywhere else
- Incredibly talented, mission-driven and supportive colleagues
- Direct influence on how frontier AI is governed and deployed globally
- Work with the Prime Minister's AI Advisor and leading AI companies
- Opportunity to shape the first and best-resourced public interest research team focused on AI security

Resources & access
- Pre-release access to multiple frontier models and ample compute
- Extensive operational support so you can focus on research and ship quickly
- Work with experts across national security, policy, AI research and adjacent sciences
- If you're talented and driven, you'll own important problems early
- 5 development days per year, an annual L&D budget, and travel support for conferences and external collaborations
- Freedom to pursue research bets without product pressure
- Opportunities to publish and collaborate externally

Life & family
- Modern central London office (cafes, food court, gym) or option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford or Bristol
- Hybrid working with opportunities for occasional remote work abroad
- At least 25 days' annual leave, 8 public holidays, and extra team-wide breaks
- Generous paid parental leave (36 weeks of UK statutory leave shared between parents + 3 extra paid weeks + option for additional unpaid time)
- Plus: 27% government-funded pension contribution on top of salary, work-from-home equipment and dental insurance

Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 (base plus technical allowance), with 27% employer pension and other benefits on top (details in the "What we offer" section on our careers page). This role sits outside of the DDaT pay framework, given that its scope requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures.

Salary ranges
Level 3 - Total package of £65,000 - £75,000, inclusive of a base salary of £35,720 plus an additional technical talent allowance of between £29,280 and £39,280.