Sorry, that job is no longer available. Here are some results that may be similar to the job you were looking for.

16 jobs found

Current search: python aws terraform
Founding AI Engineer (Staff)
Tracer Cloud Inc
About the job
Do you get excited by tackling engineering challenges that others deem impossible? Do you want to build AI agents that investigate alerts in production workflows? Do you like working in a team with only the absolute best of the best? If your answers are yes, then you should keep reading.

About the Role
Tracer is building agentic alert investigation for production data pipelines. Teams already have alerts. Tracer investigates pipeline incidents before they page your team, filtering noise, correlating evidence across your stack, and producing an evidence-based RCA for the issues that actually matter. We're hiring a Founding Lead Engineer in London to own core architecture and ship an agent that produces grounded RCAs (and fix suggestions) for a set of high-value alerts. Humans stay in control of production decisions.

Tech Stack
• Python + LangGraph (for multi-agentic alert investigation)
• Rust (because we like systems that are fast and correct)
• ClickHouse (high-volume event + investigation history at scale)
• AWS + Terraform (infrastructure that builds itself)
• Next.js + TypeScript (because front-end should be sexy too)

Key Responsibilities
• Architect and build the core alert, investigation and root cause analysis (RCA) pipeline in Python
• Design and implement key systems, including: alert ingestion + normalization; context enrichment + correlation; problem framing outputs; hypothesis orchestration engine; investigation execution runtime; investigation artifacts + reporting
• Drive core architecture decisions and ensure the system is observable, auditable, and reliable from day one
• Partner with founders to ship a small set of high-value alert types that work extremely well, then expand coverage deliberately
• Build customer-ready integrations across the pipeline stack
• Educate and guide future engineers, setting a high bar for technical quality, speed, and pragmatism

What We Are Looking For
• 5+ years (ideally 10+) of professional software engineering experience
• Proven track record of shipping real products at high velocity
• Strong backend and distributed systems foundations, ideally with experience in data platforms, production pipeline stacks and incident/observability tooling
• Experience working at an early-stage startup, with bonus points for having joined earlier
• High ownership and sharp product instincts: you build what matters and cut what doesn't

Compensation
• Salary: £80,000 - £125,000
• Equity: determined on a case-by-case basis depending on skill and experience level (0.3%-1%)
• Visa sponsorship: yes
• Location: London

Recruitment Process
• Introductory Call (15-30 mins): call with our hiring manager to discuss your background, motivations, and learn more about Tracer
• Role Fit Interview (45 mins): meet with your manager or a similar-level team member to review your working style, skills, and fit for the role
• Take-home & Competency Deep Dive (1 hour): complete a practical exercise (e.g., case study, presentation, or technical problem solving) to explore the role's responsibilities and expectations
• On-site meetup (half day): on-site interviews and team lunch at our headquarters to ask any questions and experience our office and culture firsthand
• Offer: final decision and offer
Feb 06, 2026
Full time
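The Tracer listing above describes an investigation pipeline built from distinct stages: alert ingestion and normalisation, context enrichment and correlation, hypothesis generation, investigation execution, and RCA reporting. The sketch below is a deliberately minimal, hypothetical Python illustration of how such stages could be chained; none of the class or function names come from Tracer's actual codebase.

```python
# Illustrative only: a toy alert-investigation pipeline shaped like the stages the
# Tracer ad lists. All names here are hypothetical, not Tracer's code.
from dataclasses import dataclass, field


@dataclass
class Alert:
    source: str                 # e.g. the monitoring tool that fired the alert
    message: str
    labels: dict = field(default_factory=dict)


@dataclass
class Investigation:
    alert: Alert
    context: dict = field(default_factory=dict)
    hypotheses: list = field(default_factory=list)
    rca: str = ""


def normalise(raw: dict) -> Alert:
    """Map a raw webhook payload into a common Alert shape."""
    return Alert(source=raw.get("source", "unknown"),
                 message=raw.get("message", ""),
                 labels=raw.get("labels", {}))


def enrich(inv: Investigation) -> Investigation:
    """Attach correlated evidence (logs, lineage, recent deploys) to the investigation."""
    inv.context["recent_deploys"] = []  # would be fetched from real systems
    return inv


def hypothesise(inv: Investigation) -> Investigation:
    """Frame the problem and propose candidate root causes to check."""
    if "timeout" in inv.alert.message.lower():
        inv.hypotheses.append("Upstream dependency latency")
    inv.hypotheses.append("Recent schema or config change")
    return inv


def investigate(inv: Investigation) -> Investigation:
    """Run checks for each hypothesis and write an evidence-based RCA summary."""
    inv.rca = f"Checked {len(inv.hypotheses)} hypotheses for: {inv.alert.message}"
    return inv


if __name__ == "__main__":
    raw_alert = {"source": "airflow", "message": "Task timeout in orders_daily",
                 "labels": {"severity": "high"}}
    result = investigate(hypothesise(enrich(Investigation(alert=normalise(raw_alert)))))
    print(result.rca)
```

In a real agentic system each stage would call out to external evidence sources (logs, lineage, deploy history) and an LLM-driven planner rather than the stub logic shown here.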
DevOps Engineer - HM Land Registry - HEO
Government Digital & Data Plymouth, Devon
Location
Plymouth, South West England, PL6 5WS

About the job

Job summary
It is an exciting time for HM Land Registry (HMLR) as we continue on a major transformation programme. HMLR's ambition is to become the world's leading land registry for speed, simplicity and an open approach to data. We are looking for two DevOps Engineers to join our Transformation & Technology team to help us to achieve this. The DevOps function is responsible for working with our IT software development teams to support the delivery of the platform and infrastructure for our new microservice applications as part of the digital transformation of our internal and external services. Engineers are responsible for configuration, administration and support of the infrastructure using a DevOps methodology across multiple cloud infrastructures, ensuring they are secure, performant and supportable whilst following an agile approach to incremental delivery.

Job description
As a DevOps Engineer for HM Land Registry, you will provide technical engineering capability for the Web Operations team, responsible for supporting production systems and working with Agile development teams to deliver new services in a highly available and supportable configuration using DevOps processes and tooling. You will engage with other DevOps Engineers and Senior DevOps Engineers, as well as coach and support Junior DevOps Engineers. The role holder will take forward technical consolidation and/or improvement activities, providing guidance and leadership to technicians within the IT Operations Practice and wider, whilst also working across DDaT to support and deliver solutions in line with the Technology and Business Strategies. A minimum of 32 hours per week is essential for these roles. Please note that the roles may require travel, requiring an overnight stay. This role does require occasional planned out-of-hours working in order to deal with IT changes and maintenance, and may include participation in an on-call rota. HMLR expects everyone to spend at least 60% of their working time in the office. For more information about the role, please see the attached candidate pack.

Person specification
To meet the requirements of this role, you will hold a qualification in Information Technology or a related area (degree level or equivalent) and/or experience in an IT field. You will have experience of supporting Linux operating systems, containers and containerised workloads, e.g. orchestration services such as OpenShift and Kubernetes. You will be experienced with at least one programming language such as Ruby, Java, Python, JavaScript or Go. In addition, you will have knowledge and experience of DevOps working practices: continuous integration, the use of version control (Git and GitLab CI), and configuration-as-code across cloud and on-premise environments using Terraform and version control systems. You will be used to working in an agile environment building, deploying, supporting, and operating cloud applications (in particular AWS).
Feb 05, 2026
Full time
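The HM Land Registry listing above asks for experience supporting containerised workloads on orchestrators such as Kubernetes or OpenShift, plus at least one scripting language like Python. Purely as an illustrative sketch (not HMLR tooling), the snippet below uses the official `kubernetes` Python client, assuming it is installed and a kubeconfig is available, to flag pods that are not healthy.

```python
# Illustrative sketch only: list pods that are not Running or Succeeded.
# Assumes the `kubernetes` Python client is installed and a kubeconfig is configured.
from kubernetes import client, config


def report_unhealthy_pods() -> None:
    """Print pods that are not in the Running or Succeeded phase."""
    config.load_kube_config()   # or config.load_incluster_config() inside a cluster
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")


if __name__ == "__main__":
    report_unhealthy_pods()
```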
Lead Data Engineer - Department for Transport - G7
Government Digital & Data
Location
Birmingham, Hastings, Leeds, Swansea

About the job

Job summary
Can you lead secure, production-grade data pipelines on GCP while balancing live operations and innovation? Do you enjoy mentoring engineers and translating complex data engineering concepts for diverse stakeholders? If so, we'd love to hear from you! In recent years DfT's digital and data teams have implemented a range of advanced data services, making use of the latest cloud technologies to deliver the services and platforms that our users need, with excellent customer satisfaction rates. We are proud of our ability to develop and grow as a team, and we look forward to you sharing that sense of pride! At DfT, we recognise that everyone has different needs and aspirations. We have created an inclusive and welcoming working environment so you can feel comfortable to be yourself at work. We'll help you to reach your full potential, offering rewarding opportunities alongside access to the latest training and technologies.

Joining our department comes with many benefits, including:
• Employer pension contribution of 28.97% of your salary. Read more about Civil Service Pensions here
• 25 days annual leave, increasing by 1 day each year of service (up to a maximum of 30 days annual leave), plus 8 bank holidays and a privilege day for the King's birthday
• Flexible working options where we encourage a great work-life balance. Read more in the Benefits section below!

Find out more about what it's like working at DfTc: Department for Transport Central - Department for Transport Careers

Job description
Working as part of a talented and collaborative team, you will:
• Lead the build and operation of DfT's production-grade data pipelines and platforms, ensuring reliability and security across our Google Cloud Platform environment.
• Own and manage live data services, triaging and resolving issues at pace to maintain high-quality data delivery for analysts, policy teams and external commitments.
• Drive innovation within data engineering, identifying opportunities to modernise tooling, adopt emerging GCP capabilities and introduce new approaches that improve efficiency and data quality.
• Plan delivery across legacy migration, operational support and new development, ensuring that resources are allocated effectively and that risks, dependencies and priorities are well managed.
• Work closely with technical and non-technical stakeholders, translating technical concepts, shaping data-related decisions, and responding to business need.
• Line manage and develop engineers at varying levels, providing technical guidance, coaching and oversight, and fostering a culture of continuous improvement, collaboration and knowledge sharing.
• Drive adoption of Infrastructure as Code (IaC), establishing repeatable patterns for environments, access, and data services.
• Lead the development of our metadata catalogue, curating business and technical metadata so users can effectively discover and use data.

In return, we can offer you: access to new and emerging technologies, varied projects developed in a cloud-first environment, support and investment to further your training and development, flexible and hybrid working supporting a healthy work-life balance, and an industry-leading pension and employee benefits package. For further information on the role, please read the role profile. Please note that the role profile is for information purposes only - whilst all elements are relevant to the role, they may not all be assessed during the recruitment process. This job advert will detail exactly what will be assessed during the recruitment process.

About Us
At the heart of data innovation and evolution in DfT, you will join a talented, experienced data engineering team imagining and shaping the delivery of the next wave of data services. The team is embedded within the wider data directorate, and works alongside analysts, data scientists, architects and other engineers to deliver some of the most impactful data projects within DfT. You will support and shape various areas within the business, which delivers an innovative transport policy agenda. As DfT is a cloud-only enterprise, you will develop the latest cloud solutions meeting complex digital, identity and data needs. This role will give you the opportunity to share your experience and further develop your skills every day as you work on new and exciting projects with advanced technologies. We provide a supportive and constructive learning environment where your career growth is important.

Person specification
You will be an experienced data engineer with deep technical foundations and expertise in both Python and SQL. You will also be highly proficient in Google Cloud Platform, or an expert user of AWS or Azure with a willingness to apply your skills to a new cloud platform. You combine hands-on engineering excellence with the ability to communicate complex ideas simply, engaging effectively with a wide range of technical and non-technical stakeholders. You are comfortable balancing the demands of operating reliable, production-grade data services with delivering innovation: shaping new approaches, modernising legacy systems, and driving improvements in data quality and tooling. Alongside this, you bring thoughtful planning and change management skills, helping the organisation evolve its data capabilities while ensuring continuity, stability, and high-quality outcomes across DfT.

You will need to demonstrate the following experience:
• Enterprise-scale delivery of robust, maintainable data pipelines in Google Cloud Platform, including building reusable components and optimising performance and cost on cloud data platforms while meeting security, privacy and governance controls.
• Expert use of data engineering tools, including relevant languages (e.g. Python/SQL), IaC tools (e.g. Terraform), GCP or equivalent cloud tooling (e.g. BigQuery, Cloud Functions), CI/CD (e.g. GitHub Actions), logging and testing.
• Setting and leading engineering standards, ensuring high-quality coding practices, maintainable solutions, and consistent technical approaches across the team.
• Leading Agile delivery of data engineering work, managing and operating live services at pace to maintain continuity and high-quality data delivery into DfT, while also driving innovation by developing new data engineering approaches and patterns.
• Stakeholder leadership and technical translation, partnering with architecture and data teams to align designs with strategy and standards, and managing change and innovation within DfT's data landscape.
Feb 05, 2026
Full time
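The DfT listing above centres on production-grade pipelines in Google Cloud Platform with tooling such as BigQuery. The following is a minimal sketch of the kind of data-quality check such a pipeline might run, assuming the `google-cloud-bigquery` client library and authenticated credentials; the dataset, table and column names are invented for the example.

```python
# Illustrative sketch only: a simple data-quality check against BigQuery.
# Assumes `google-cloud-bigquery` is installed and authenticated; names are made up.
from google.cloud import bigquery


def count_null_keys(table: str = "analytics.journeys") -> int:
    """Return how many rows in the table are missing their primary key."""
    client = bigquery.Client()
    sql = f"SELECT COUNT(*) AS bad_rows FROM `{table}` WHERE journey_id IS NULL"
    rows = client.query(sql).result()   # blocks until the query finishes
    return next(iter(rows)).bad_rows


if __name__ == "__main__":
    print(f"Rows with a NULL journey_id: {count_null_keys()}")
```

In practice a check like this would run inside the pipeline's orchestration (and fail the run or raise an alert when the count is non-zero) rather than as a standalone script.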
Senior Platform Engineer - Department for Business and Trade - G7
Government Digital & Data
Location
Belfast, Birmingham, Cardiff, Darlington, Edinburgh, London, Salford

About the job

Job summary
If you would like to find out more about the role, the Platform Engineering team and what it's like to work at DBT, we are holding a Hiring Manager Q&A session for this role where you can virtually 'meet the team' on Tuesday 17th February at 12:30pm. Please click here to book your spot.

About us
The Department for Business and Trade (DBT) has a clear mission - to grow the economy. Our role is to help businesses invest, grow and export to create jobs and opportunities right across the country. We do this in three ways. Firstly, we help to build a strong, competitive business environment, where consumers are protected and companies rewarded for treating their employees properly. Secondly, we open international markets and ensure resilient supply chains. This can be through Free Trade Agreements, trade facilitation and multilateral agreements. Finally, we work in partnership with businesses every day, providing advice, finance and deal-making support to those looking to start up, invest, export and grow. The Digital, Data and Technology (DDaT) directorate develops and operates tools and services to support us in this mission.

Job description
We've successfully completed the migration of DBT services from GOV.UK PaaS to our new developer platform in AWS. Now, we're entering the next phase: evolving this platform into a full Platform-as-a-Service (PaaS) offering. Are you ready to help shape the future of digital delivery at DBT? We're looking for Platform Engineers to help us build the most performant, secure, and feature-rich hosting environment possible, one that puts developer experience front and centre. This is your chance to be part of something transformative, where your work will directly impact how digital services are built and run across government.

Main responsibilities
As a Senior Platform Engineer, you will work to give development teams the tools for their job, including application performance monitoring, exception, log and metrics aggregation, dashboards, and declarative CI/CD (continuous integration/continuous delivery) pipelines. You'll evangelise service-level indicators, objectives, and error budgets to product teams, and negotiate them. You'll help build and scale our global product platform and participate in an on-call rota, for which you will receive an additional allowance. Specific projects the team are working on include rolling out an observability tool to enhance system monitoring and incident response, streamlining deployment processes to reduce downtime and speed up feature delivery, and developing a CLI tool to automate tasks and boost developer productivity.

You will be using:
• Amazon Web Services
• Azure
• AWS CodePipeline and AWS CodeBuild
• Terraform & AWS Copilot (CloudFormation)
• Docker, Elastic Container Service (ECS) and Elastic Container Registry (ECR)
• Elasticsearch/OpenSearch
• Python and the Django framework
• PostgreSQL as a service (Amazon RDS)
• Sentry
• Redis/ElastiCache

Person specification
It is essential that you have:
• Cloud experience with either Amazon Web Services, Azure or Google Cloud
• Ability to build code-defined, reliable, and well-tested infrastructure on top of cloud computing systems (e.g. Terraform, AWS Copilot, CloudFormation, Pulumi)
• Experience and fluency in one or more programming languages (e.g., writing clean and effective code)
• Knowledge of Linux/Unix fundamentals and TCP/IP networking
• Ability to see user impact in infrastructure and platform changes, including a drive to improve the developer experience at every turn
• In-depth experience of designing solutions to complex technical problems independently

It is desirable that you have:
• Experience in designing and implementing Docker images through containerisation
• Experience in prototyping through reuse of existing open-source components
Feb 05, 2026
Full time
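The DBT listing above mentions evangelising service-level indicators, objectives and error budgets to product teams. As a small worked example (the SLO figure and window are illustrative, not DBT targets), the arithmetic behind an error budget looks like this:

```python
# Worked example only: error-budget arithmetic for an availability SLO.
# The 99.9% target and 30-day window are illustrative assumptions.
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed unavailability for a given SLO over the window."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes


def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means the budget is blown)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget


if __name__ == "__main__":
    # A 99.9% SLO over 30 days allows roughly 43.2 minutes of downtime.
    print(f"Budget: {error_budget_minutes(0.999):.1f} min")
    print(f"Remaining after 12 min of downtime: {budget_remaining(0.999, 12):.0%}")
```

The negotiation the ad refers to is essentially agreeing the SLO value (and therefore the budget) with product teams, then using the remaining budget to decide when to slow feature delivery in favour of reliability work.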
Senior DevOps Engineer Belfast, Northern Ireland, United Kingdom
TRG Screen
Belfast, Northern Ireland, United Kingdom

Join TRG Screen: Building World-Class Teams. One Expert at a Time.
Are you ready to be part of a dynamic team at the forefront of subscription spend management innovation? At TRG Screen, we're not just redefining how organizations manage their subscription expenses - we're shaping the future of the industry. With cutting-edge solutions and a commitment to excellence, we empower businesses around the globe to optimize their subscription investments and drive sustainable growth. Join us in our mission to revolutionize subscription management and make a meaningful impact on the way businesses access and utilize critical information. At TRG Screen, your talent and ambition will find a home, where opportunities for growth and advancement abound.

About TRG Screen
TRG Screen is the leading provider of market data and subscription management technology and automation solutions, tailored to the unique needs of financial institutions and legal firms. Our integrated suite of solutions includes market data and subscription spend management, usage management, compliance reporting, and comprehensive managed services, which hundreds of clients worldwide use to remove cumbersome and inaccurate manual processes and gain control over market data and subscription costs at scale. For more than 25 years, TRG Screen has enabled businesses that rely on market data to monitor and strategically manage spending and usage of data and information services, including market data, research, software licenses, consulting and other necessary corporate expenses. TRG Screen solutions give decision-makers full transparency into subscription spend and usage, enabling them to proactively manage subscription costs at scale, conduct more informed vendor negotiations, improve governance, and avoid unnecessary spending on these mission-critical business services. TRG Screen is headquartered in New York City, with offices in Europe and Asia, as well as a 24x7 client support center in Bangalore, India. TRG Screen is a portfolio company of Vista Equity Partners, one of the world's largest and most respected private equity firms.

The Role
TRG Screen are searching for a Senior DevOps Engineer, who will be responsible for driving the automation, tooling, and infrastructure that powers our platform's deployment, monitoring, and operations. Sitting within our CloudOps team, you'll partner with engineering teams to ensure reliability, compliance, and security while mentoring team members and leading critical infrastructure initiatives.

Responsibilities
1. Infrastructure & Platform
• Design and implement scalable, cloud-native infrastructure using Infrastructure as Code (Terraform, CloudFormation)
• Lead architectural decisions for platform reliability, security, and performance
• Own critical infrastructure components and drive standards across the organization
• Manage containerized applications running in Docker
• Be able to troubleshoot application components such as connectivity to RabbitMQ or remote SFTP servers
2. CI/CD & Automation
• Build and optimize CI/CD pipelines for maximum deployment velocity and safety
• Implement GitOps practices and automate operational workflows
• Develop custom tooling, dashboards, and scripts to enhance team productivity
3. Security & Compliance
• Integrate security guardrails early in the development lifecycle (DevSecOps)
• Maintain system hardening, patching, and compliance requirements
• Develop and validate disaster recovery and fault tolerance strategies
• Create detailed technical documentation, such as runbooks, for complex deployments
• Mentor junior engineers and promote engineering best practices
• Partner with architects and security teams on platform evolution
• Create clear documentation for operational procedures and architecture decisions

Qualifications and Experience
• 6+ years with Infrastructure as Code tools (Terraform, Ansible, Pulumi)
• 3+ years with container orchestration (Kubernetes, EKS, etc.)
• Deep understanding of cloud platforms (AWS or Azure) and cloud-native patterns
• Proven track record building and maintaining CI/CD pipelines (GitHub Actions, GitLab CI, Azure DevOps, Jenkins)
• Experience with configuration management tools such as Chef/Puppet
• Strong proficiency in scripting/programming (Python, Go, or similar)
• Experience with observability platforms (Datadog, New Relic, Prometheus/Grafana)
• Knowledge of microservices architecture and service mesh technologies
• Understanding of security best practices and compliance frameworks
• Comfortable with asynchronous collaboration tools (Slack, Teams)
• Agile mindset with focus on iterative delivery
• Ability to evaluate and adopt new technologies strategically

Nice to Have
• Experience with platform engineering and internal developer platforms
• Knowledge of GitOps tools (ArgoCD, Flux)
• Familiarity with policy-as-code (OPA, Kyverno)
• Experience with FinOps and cloud cost optimization
• Contributions to open-source DevOps projects
• Familiarity with Docker image pipelines and artifact repositories

Salary Range
Join TRG Screen and unlock your potential in an environment where innovation thrives, opportunities abound, and your contributions make a difference. We are an equal opportunities employer. We recognise and value the power of diversity in our workplace and are committed to being an employer of choice for everyone. We welcome and encourage applicants from all backgrounds. All applications for employment are considered strictly on the basis of merit. At TRG Screen, we understand that diverse and inclusive teams are not just beneficial, they are essential to our success. We recognize that embracing diverse perspectives, backgrounds, and experiences fosters innovation, enhances problem-solving capabilities, and drives better business outcomes. By cultivating a culture of inclusion where every voice is heard and valued, we empower our world-class teams to thrive, excel, and drive positive change. We are proud of our diverse workforce and are dedicated to creating a safe and welcoming environment for all employees. People from various ethnicities, ages, genders, and abilities are encouraged to apply.
Feb 03, 2026
Full time
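The TRG Screen listing above expects the engineer to troubleshoot application connectivity, for example to RabbitMQ or remote SFTP servers. A first-pass check is often simple TCP reachability; the sketch below uses only the Python standard library, and the hostnames are placeholders rather than real TRG endpoints.

```python
# Illustrative sketch only: plain TCP reachability checks for the kinds of endpoints
# the ad mentions. Hostnames are placeholders; ports are the common defaults.
import socket

ENDPOINTS = {
    "rabbitmq": ("mq.example.internal", 5672),   # default AMQP port
    "sftp": ("sftp.example.internal", 22),       # SSH/SFTP port
}


def check(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for name, (host, port) in ENDPOINTS.items():
        status = "reachable" if check(host, port) else "UNREACHABLE"
        print(f"{name} ({host}:{port}): {status}")
```

A deeper check would then authenticate at the application layer (for example with an AMQP or SFTP client), but plain socket reachability already separates network problems from application-level ones.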
DevOps Engineer - Azure DevOps + Some AWS
EXPERIS Anslow, Staffordshire
Senior Azure DevOps Engineer - Permanent - Azure DevOps + Some AWS
Staffordshire - Derbyshire - East Midlands or London, Paddington
Hybrid role - 1 day per week on site
£75,000 p.a. + pension, health care & excellent benefits

With over 200,000 clients across the UK and Europe, this established health and welfare organisation is seeking an experienced Senior DevOps Engineer to assist with consistent growth and client expansion. As the Senior DevOps Engineer, you will be responsible for designing, implementing, and maintaining cloud infrastructure across Azure platforms and some AWS platforms. You will play a key role in enabling continuous integration and delivery, ensuring system reliability, and embedding security best practices. In addition to hands-on technical work, you will actively contribute to the growth and capability of the wider team by sharing knowledge.

Key Deliverables:
• Azure DevOps & CI/CD: strong understanding of DevOps principles and hands-on experience with CI/CD tools like Azure DevOps, Azure tooling, GitHub Actions, or Jenkins; Microsoft Certified: DevOps Engineer Expert (AZ-400)
• Team leading, workload delegation, project resource planning
• Project scoping, initiation & project support
• Design, deploy, and manage scalable and secure infrastructure in Azure DevOps and Azure DevOps tooling
• Build and maintain CI/CD pipelines using tools such as Azure DevOps
• Implement and manage monitoring, alerting, and logging systems (e.g., Datadog, LogicMonitor, SolarWinds)
• Automate infrastructure provisioning and configuration using Infrastructure as Code (IaC) tools such as Terraform
• Ensure compliance with security policies and manage access controls (IAM, PIM, RBAC)
• Respond to incidents and participate in root cause analysis and post-mortems
• Create and maintain documentation and runbooks
• Collaborate with other teams to align DevOps practices with project goals
• Scripting & Automation: proficiency in scripting languages such as PowerShell, Bash, or Python to automate infrastructure and operational tasks
• Infrastructure as Code (IaC): experience with tools like Terraform, Bicep, or ARM templates for managing infrastructure declaratively; HashiCorp Certified: Terraform Associate
• Monitoring & Observability: familiarity with monitoring tools such as Azure Monitor, AWS CloudWatch, Prometheus, or Grafana
• Containerisation with AKS/EKS: design and deployment with AWS CloudFormation or ARM templates
• Security & Compliance: solid grasp of cloud security best practices, identity and access management, and compliance frameworks
• Collaboration & Mentorship: excellent communication skills with a passion for mentoring, documentation, and enabling others through knowledge sharing

Technical Requirements:
• Cloud Platform Expertise: proven experience with AWS and Azure cloud platforms
• DevOps & CI/CD: strong understanding of DevOps principles and hands-on experience with CI/CD tools like Azure DevOps, GitHub Actions, or Jenkins
• Containerisation with AKS/EKS: design and deployment with AWS CloudFormation or ARM templates
• Scripting & Automation: proficiency in scripting languages such as PowerShell, Bash, or Python to automate infrastructure and operational tasks
• Infrastructure as Code (IaC): experience with tools like Terraform, Bicep, or ARM templates for managing infrastructure declaratively
• Monitoring & Observability: familiarity with monitoring tools such as Azure Monitor, AWS CloudWatch, Prometheus, or Grafana

Highly Desirable Certifications:
• Microsoft Certified: Azure Administrator Associate (AZ-104)
• Microsoft Certified: Azure Solutions Architect Expert (AZ-305)
• Microsoft Certified: DevOps Engineer Expert (AZ-400)
• HashiCorp Certified: Terraform Associate
• AWS Certified Solutions Architect Associate or Professional

Call Experis IT today on (phone number removed)
Feb 02, 2026
Full time
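The Experis listing above asks for scripting in PowerShell, Bash or Python to automate infrastructure and operational tasks. As one hedged illustration on the AWS side, the snippet below uses boto3 (assuming it is installed and credentials are configured) to list EC2 instances missing a required tag; the tag key is a made-up convention, and pagination is omitted for brevity.

```python
# Illustrative sketch only: find EC2 instances missing a required tag.
# Assumes boto3 is installed and AWS credentials are configured; the tag key is
# a hypothetical convention, and result pagination is not handled here.
import boto3

REQUIRED_TAG = "CostCentre"


def untagged_instances(required_tag: str = REQUIRED_TAG) -> list[str]:
    """Return IDs of EC2 instances that do not carry the required tag."""
    ec2 = boto3.client("ec2")
    missing = []
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            if required_tag not in tags:
                missing.append(instance["InstanceId"])
    return missing


if __name__ == "__main__":
    print("Instances missing the tag:", untagged_instances())
```

The same style of check translates naturally to PowerShell against Azure Resource Graph for the Azure side of the stack the ad describes.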
Full-Stack Software Developer
Greenpixie City Of Westminster, London
Greenpixie, a Climate-Tech Company, working on our cutting-edge software products.

Location: Hybrid (2+ days per week in our Central London office)
Employment Type: Full-Time
Salary Range: £50 - 70k p.a. dependent on experience. Share options available.

The ideal candidate will have strong hands-on experience with modern web frameworks (ReactJS, Python, AWS) & APIs, as well as a working knowledge of back-end Python. AI development tooling such as Claude Code and Cursor is encouraged. Our technical tests will test candidates both with and without AI support.

Who are Greenpixie?
Greenpixie Ltd is an exciting, innovative tech start-up dedicated to measuring and reducing the environmental impact of cloud computing. Our mission is to develop cutting-edge methodologies and tools to accurately quantify cloud emissions, driving sustainability in public cloud usage.

What are we looking for?
We are seeking a dedicated and passionate Full-Stack Software Developer with 3+ years of experience to join our growing team. The ideal candidate will have a solid background in JavaScript & Python development, experience building web applications with React, and a core understanding of APIs and modern infrastructure tooling. You will play a key role in building the applications and tools that surface our cloud sustainability insights to customers, contributing to our mission of making cloud computing more environmentally responsible. This role requires a product-minded developer who can own features end-to-end and will work directly with other areas of the business to build amazing things.

What will you be doing?
• Building and maintaining APIs that power our products.
• Developing full-stack applications using React and Next.js, with a Python back-end.
• Building AI-driven product features with Claude Code, and continuously refining the prompts, patterns, and team workflows around it.
• Using frontend frameworks such as shadcn and Tailwind UI to build features without the need for high-fidelity designs.
• Working with our data team to surface insights in usable interfaces.
• Improving deployment pipelines and infrastructure (AWS, Terraform).
• Shipping iteratively and responding to customer feedback quickly.
• Collaborating directly with the platform and data teams to ensure seamless integration.
• Building a comprehensive understanding of our existing products and identifying areas for enhancement.
• Where required, helping with DevOps tasks and infrastructure automation.
• Documenting implementation processes clearly and concisely.
• Contributing to product discussions, bringing innovative solutions to complex challenges.
• Working as part of a tight-knit start-up team to make our vision of sustainability across the cloud a reality.

What skills are we looking for?
• 3+ years of experience in software development with a focus on building across the full stack of products.
• Designing features as they are built, extrapolating from existing or rough designs using UI frameworks such as shadcn and Tailwind UI.
• Experience building web applications with React and Next.js.
• Strong understanding of APIs (REST, GraphQL) and how to build and consume them, using Python with FastAPI, Flask or a similar framework.
• Experience using Claude Code and/or Cursor to build features.
• Working knowledge of AWS or experience working with other major cloud platforms.
• Ability to work independently and manage multiple projects concurrently.
• Understanding of cloud infrastructure, deployment best practices & the ability to ship full-stack features quickly and iteratively.
• Strong problem-solving skills and clear communication.
• Nice to have: experience with infrastructure-as-code and CI pipelines.
• Nice to have: experience with data pipelines or working with large datasets.
• Nice to have: passion for sustainability and reducing the environmental impact of cloud computing.

We are a fast-growing start-up with a lot of moving parts - while we'll support you along the way, we want someone who is confident to work independently, as well as collaboratively.

What We Offer?
• Flexible working hours with hybrid office space in Central London.
• The opportunity to contribute to a meaningful project with a real-world impact.
• Working with a fun, passionate team in a company that is growing quickly.
• Exposure to cutting-edge cloud technologies and large-scale data systems.
• Opportunities to attend global conferences in the tech and sustainability sectors (even speaking opportunities if that floats your boat!)
• Build up an extra day of holiday for every year you spend with us.
• 2 weeks global remote working allowance.
• Competitive salary package.
• Share options scheme.

How to Apply:
If you are passionate about building great products, software development and sustainability, we would love to hear from you. Please send your details to .

The pay range for this role is: 50,000 - 70,000 GBP per year (London)
Feb 01, 2026
Full time
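As a purely illustrative aside on the stack named in the listing above (Python back-end APIs consumed by a React/Next.js front-end), here is a minimal FastAPI sketch of that kind of endpoint. The route, the EmissionsSummary model, and the stubbed lookup are assumptions made for the example, not Greenpixie code.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class EmissionsSummary(BaseModel):
    account_id: str
    period: str
    co2e_kg: float

def get_emissions_for_account(account_id: str, period: str) -> EmissionsSummary:
    # Stubbed lookup; a real service would query the underlying data platform.
    return EmissionsSummary(account_id=account_id, period=period, co2e_kg=1234.5)

@app.get("/accounts/{account_id}/emissions", response_model=EmissionsSummary)
def read_emissions(account_id: str, period: str = "last_30_days") -> EmissionsSummary:
    # Expose a single account's cloud-emissions summary as JSON.
    return get_emissions_for_account(account_id, period)

Run locally with uvicorn (e.g. uvicorn app:app --reload) and the endpoint serves JSON that a React client can fetch.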
Akkodis
DevOps Engineer
Akkodis Newcastle upon Tyne, Tyne and Wear
DevOps Engineer. Akkodis are currently working in partnership with a leading service provider to recruit an experienced DevOps Engineer to join their leading cloud services team. Please note this is a hybrid role where you will be required to attend the office 2 days a week. The Role: As a DevOps Engineer you will be responsible for designing, building, and maintaining the infrastructure that powers our clients' cutting-edge platforms. In this role, you will be instrumental in automating the development pipeline and ensuring the reliability, scalability, and security of services within a telecommunications and managed service provider (MSP) environment. The Responsibilities: CI/CD Pipeline Management: Design, implement, and manage continuous integration and continuous delivery (CI/CD) pipelines for all platforms, enabling rapid and reliable software releases. Infrastructure as Code (IaC): Develop and maintain cloud and on-premise infrastructure using IaC principles with tools like Terraform and Ansible. Containerization & Orchestration: Manage and scale containerized applications, ensuring high availability and efficient resource utilization in a multi-tenant environment. Automation & Scripting: Automate manual processes related to deployment, monitoring, and operations using scripting languages such as Python, Bash, or Go. Monitoring & Logging: Implement and manage robust monitoring, logging, and alerting solutions (e.g., Prometheus, Grafana, ELK Stack) to proactively identify and resolve system issues. Collaboration: Work closely with software developers, network engineers, and product managers to troubleshoot issues and optimize performance. Security: Integrate security best practices (DevSecOps) into the development lifecycle, including vulnerability scanning, static code analysis, and compliance checks. The Requirements: Hands-on experience in a DevOps, SRE, or similar role. Strong proficiency with at least one major cloud provider (AWS, Azure, or GCP). In-depth knowledge of container orchestration. Demonstrable experience with CI/CD tools like Jenkins, GitHub Actions, or Azure DevOps. Expertise in using tools like Terraform or Ansible. Proficiency in a scripting language such as Python or Bash. Solid understanding of networking principles (TCP/IP, DNS, HTTP/S, firewalls). If you are looking for an exciting new challenge to play a pivotal part in a market-leading organisation, please apply now. Modis International Ltd acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers in the UK. Modis Europe Ltd provide a variety of international solutions that connect clients to the best talent in the world. For all positions based in Switzerland, Modis Europe Ltd works with its licensed Swiss partner Accurity GmbH to ensure that candidate applications are handled in accordance with Swiss law. Both Modis International Ltd and Modis Europe Ltd are Equal Opportunities Employers. By applying for this role your details will be submitted to Modis International Ltd and/or Modis Europe Ltd. Our Candidate Privacy Information Statement, which explains how we will use your information, is available on the Modis website.
Jan 31, 2026
Full time
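To illustrate the "Automation & Scripting" responsibility in the listing above, here is a minimal Python/boto3 sketch that audits running EC2 instances for a required tag. The tag key and AWS region are assumptions chosen only for the example; nothing here is specific to this employer.

import boto3

REQUIRED_TAG = "Environment"  # assumed tagging policy, for illustration only

def instances_missing_tag(region: str = "eu-west-2") -> list[str]:
    """Return the IDs of running EC2 instances that lack the required tag."""
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {tag["Key"] for tag in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    print(instances_missing_tag())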
EXPERIS
SC Cleared Senior AWS Cloud Engineer
EXPERIS
Job Title: SC Cleared Senior Cloud Engineer (AWS). Location: Malvern, 2 days per week on-site. Duration: 6 months with possible extension. Rate: 660 - 700, based on experience, via an approved umbrella company. Must be willing and eligible to go through the SC Clearance process. Our client, a reputable organisation, is hiring a Senior Cloud Engineer to join their dynamic Delivery Team. This is an exciting opportunity to work on cloud infrastructure deployment, development, and management, supporting the lifecycle of cloud services through automation, security, and scalability. What you'll be doing: Support deployment, testing, and implementation of cloud-based infrastructures and services. Drive continuous improvement of operational patterns and practices. Collaborate across teams to develop robust, scalable software solutions. Support bid and pre-sales activities, including architecture, design, and costing. Ensure compliance with change management and security protocols. Maintain documentation, monitor performance, and troubleshoot escalated issues. Foster an inclusive, collaborative environment that promotes learning and innovation. What you'll bring: Extensive experience working with AWS cloud services such as VPCs, Subnets, IAM, Lambda, S3, RDS, and CloudWatch. Proven expertise in Infrastructure as Code using Terraform and CLI management. Strong scripting skills in Python or Shell. Experience leading technical teams and managing project delivery. Knowledge of Agile methodologies, DevOps practices, and systems security. Excellent communication skills, with the ability to present solutions confidently to stakeholders. Relevant certifications (AWS, ITIL, etc.) are desirable. A background in Linux administration and familiarity with container orchestration tools like Kubernetes is a plus. This role offers a fantastic chance to contribute to innovative cloud projects within a supportive environment. If you're ready to make an impact and meet the criteria, we encourage you to apply today!
Jan 30, 2026
Contractor
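As a concrete, purely illustrative example of the CloudWatch and scripting work the listing above describes, the sketch below uses boto3 to create an error alarm on a Lambda function. The function name, alarm name, region and thresholds are assumptions made for the example.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-2")

# Alarm on any invocation errors for a hypothetical Lambda function.
cloudwatch.put_metric_alarm(
    AlarmName="ingest-handler-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "ingest-handler"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
)

In practice an alarm like this would usually be declared in Terraform rather than created imperatively; the boto3 call simply makes the moving parts visible.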
Amtis professional Ltd
DevOps Engineer
Amtis professional Ltd Burton-on-Trent, Staffordshire
DevOps Engineer - Remote - 1 Day P/W in Derby - £50,000 - £55,000 + Benefits. AWS, Azure, CI/CD, Terraform, Git, Python, ARM. Role Overview: We are seeking a skilled DevOps Engineer to design, implement and maintain robust cloud infrastructure solutions across AWS and Azure platforms - click apply for full job details.
Jan 30, 2026
Full time
SF Recruitment
Senior Software Engineer
SF Recruitment
Senior Software Engineer with solid Python, Node.js, TypeScript and Terraform experience, ideally gained in a product-led, scale-up environment, is sought by a high-growth B2B generative AI scale-up based in central London. Working at the bleeding edge of generative AI, this Senior Software Engineer will play a key role in greenfield innovation, utilising the latest technology to design and implement new solutions within the existing platform. This role would suit an Engineer with a solid product engineering background and a real passion for generative AI solution development who is looking to join a multi-award-winning scale-up at the forefront of market innovation. In return this Senior Software Engineer can expect excellent autonomy with a clear-cut progression pathway within this innovative, knowledge-share-driven culture. This Senior Software Engineer near London should have most of the following key skills: - Strong full stack JavaScript engineering skills - Node.js, Next.js, React, TypeScript etc - Strong product development using Python - Cloud infrastructure provisioning - Terraform, Ansible, Kubernetes etc - Strong general cloud skills - Azure, AWS, GCP - Experience building cloud based products, ideally in a scale-up environment - A delivery focused, mission driven personality - An interest in AI/automation - PyTorch, TensorFlow, llama.cpp, Keras etc This Senior Software Engineer near London will receive: - Starting base salary of up to £110,000 - Bonus scheme - Long term hybrid working (2 days a week on-site in central London) - Flexible working hours - Excellent progression opportunities - Personal development scheme - 25 days holiday - Private pension - Fast paced, autonomous culture with extensive growth potential - Regular remuneration reviews So if you are a Senior Software Engineer who loves the idea of joining this product-led, high-growth AI scale-up at an exciting phase of their development, please apply now to be considered and for more info. Senior Software Engineer London (hybrid) Node.js, Terraform, Ansible, AI, Microservices, CI/CD, Docker, Kubernetes, AWS, Next.js, React, Python, Generative AI
Jan 30, 2026
Full time
WK Tech Expert and Consultancy
Junior Cloud Engineer
WK Tech Expert and Consultancy Luton, Bedfordshire
This Junior Cloud Engineer role represents a typical entry-level opportunity available to candidates starting a career in cloud infrastructure and DevOps. The position is suited to individuals with foundational cloud knowledge who are looking to transition into a junior engineering role within a professional environment. Candidates are expected to have a core technical understanding and a willingness to continue developing their skills. Optional learning and career-support pathways are available for individuals who wish to strengthen their readiness for graduate cloud roles. Role Responsibilities: Support the day-to-day operation of AWS-based cloud infrastructure. Assist with provisioning and managing infrastructure using Infrastructure as Code tools. Contribute to configuration management and automation tasks. Support CI/CD pipelines and deployment processes. Monitor systems and assist with troubleshooting incidents. Work collaboratively with development teams on cloud deployments. Maintain technical documentation and operational runbooks. Essential Skills: Foundational understanding of AWS services (EC2, S3, IAM, VPC, Lambda). Awareness of Infrastructure as Code concepts (Terraform preferred). Basic Linux/Unix command-line experience. Familiarity with Git and version control workflows. Introductory scripting knowledge (Bash and/or Python). Desirable Skills: Exposure to Docker and containerisation. Awareness of CI/CD tools (GitHub Actions, GitLab CI, Jenkins, etc.). Basic understanding of Kubernetes concepts. Familiarity with monitoring and logging tools. Development & Support: Some candidates choose to undertake additional learning to improve their readiness for graduate cloud engineering roles. Structured learning pathways, technical mentoring, and certification preparation resources are available to support ongoing development. Any additional training undertaken is optional and independently funded by the individual. What's Offered: Competitive graduate-level salary. 26 days annual leave + bank holidays. Flexible working arrangements. Pension scheme. Supportive team environment with mentoring. Access to optional learning and certification pathways.
Jan 30, 2026
Full time
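For readers gauging what the "introductory scripting knowledge" in the listing above might look like in practice, here is a deliberately small Python/boto3 sketch that lists the S3 buckets visible to the active AWS credentials. It is a generic illustration and not specific to this employer.

import boto3

# Print every S3 bucket the current AWS credentials can see.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"], bucket["CreationDate"].date())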
Akkodis
DevOps Engineer
Akkodis Nelson, Lancashire
DevOps Engineer. Akkodis are currently working in partnership with a leading service provider to recruit an experienced DevOps Engineer to join their leading cloud services team. Please note this is a hybrid role where you will be required to attend the office 2 days a week. The Role: As a DevOps Engineer you will be responsible for designing, building, and maintaining the infrastructure that powers our clients' cutting-edge platforms. In this role, you will be instrumental in automating the development pipeline and ensuring the reliability, scalability, and security of services within a telecommunications and managed service provider (MSP) environment. The Responsibilities: CI/CD Pipeline Management: Design, implement, and manage continuous integration and continuous delivery (CI/CD) pipelines for all platforms, enabling rapid and reliable software releases. Infrastructure as Code (IaC): Develop and maintain cloud and on-premise infrastructure using IaC principles with tools like Terraform and Ansible. Containerization & Orchestration: Manage and scale containerized applications, ensuring high availability and efficient resource utilization in a multi-tenant environment. Automation & Scripting: Automate manual processes related to deployment, monitoring, and operations using scripting languages such as Python, Bash, or Go. Monitoring & Logging: Implement and manage robust monitoring, logging, and alerting solutions (e.g., Prometheus, Grafana, ELK Stack) to proactively identify and resolve system issues. Collaboration: Work closely with software developers, network engineers, and product managers to troubleshoot issues and optimize performance. Security: Integrate security best practices (DevSecOps) into the development lifecycle, including vulnerability scanning, static code analysis, and compliance checks. The Requirements: Hands-on experience in a DevOps, SRE, or similar role. Strong proficiency with at least one major cloud provider (AWS, Azure, or GCP). In-depth knowledge of container orchestration. Demonstrable experience with CI/CD tools like Jenkins, GitHub Actions, or Azure DevOps. Expertise in using tools like Terraform or Ansible. Proficiency in a scripting language such as Python or Bash. Solid understanding of networking principles (TCP/IP, DNS, HTTP/S, firewalls). If you are looking for an exciting new challenge to play a pivotal part in a market-leading organisation, please apply now. Modis International Ltd acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers in the UK. Modis Europe Ltd provide a variety of international solutions that connect clients to the best talent in the world. For all positions based in Switzerland, Modis Europe Ltd works with its licensed Swiss partner Accurity GmbH to ensure that candidate applications are handled in accordance with Swiss law. Both Modis International Ltd and Modis Europe Ltd are Equal Opportunities Employers. By applying for this role your details will be submitted to Modis International Ltd and/or Modis Europe Ltd. Our Candidate Privacy Information Statement, which explains how we will use your information, is available on the Modis website.
Jan 30, 2026
Full time
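To illustrate the "Monitoring & Logging" responsibility in the listing above, the sketch below uses the prometheus_client Python library to expose a toy metric that a Prometheus server could scrape. The metric name, the measurement, and the port are assumptions chosen only for the example.

import random
import time

from prometheus_client import Gauge, start_http_server

# A toy gauge that Prometheus could scrape from http://localhost:8000/metrics
queue_depth = Gauge("example_queue_depth", "Pretend depth of a work queue")

if __name__ == "__main__":
    start_http_server(8000)  # serve /metrics on port 8000
    while True:
        queue_depth.set(random.randint(0, 50))  # stand-in for a real measurement
        time.sleep(5)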
CBSbutler Holdings Limited trading as CBSbutler
DevSecOps Engineer
CBSbutler Holdings Limited trading as CBSbutler Romsey, Hampshire
DevSecOps Engineer x2. +Permanent opportunity +Hybrid working 2/3 days a week onsite in Romsey + 80,000 - 100,000 +SC cleared roles. Skills: +MOD +Terraform +Ansible +GitLab +SC clearance - sole British nationals only due to the nature of the project. We are seeking a DevSecOps Engineer to join our Defence Information Advantage team, helping to drive best practice across secure software delivery, deployment automation, and live system operations. Working in the defence domain presents unique challenges across DevSecOps, MLOps and secure cloud adoption. You will use modern cloud and automation technologies to accelerate deployments while applying SRE principles to improve system resilience and uptime. Key Responsibilities: Build and maintain CI/CD pipelines and deployment automation. Support and operate live systems, resolving incidents and issues. Coach teams in DevSecOps best practice. Work closely with developers, product owners, security architects and QA. Contribute to agile ceremonies (Scrum, Kanban or SAFe). Participate in code reviews and secure-by-design delivery. Skills & Experience: Degree in a STEM subject or equivalent practical experience. Cloud experience (AWS essential; Azure/GCP desirable). Strong DevSecOps tooling knowledge (Git, GitLab CI/CD, Terraform, Ansible). Containerisation and orchestration (Docker, Kubernetes; GPU containers desirable). Cyber security practices (vulnerability management, IAM, secure networking). Experience with microservices, APIs, streaming platforms (Kafka/MQTT). Scripting or automation using Python, Rust or similar. Observability/SRE tools such as Prometheus, Grafana or Elastic. You'll be proactive, curious, and an effective communicator with a strong commitment to continuous improvement. Previous defence-sector DevSecOps experience is a bonus. If you'd like to discuss the DevSecOps Engineer role in more detail, please send your updated CV to (url removed) and I will get in touch.
Jan 23, 2026
Full time
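As a small, hypothetical illustration of the streaming-platform and Python scripting items in the listing above, this sketch publishes a deployment event to a Kafka topic using the kafka-python library. The broker address, topic name and event payload are assumptions made for the example.

import json

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda value: json.dumps(value).encode("utf-8"),
)

# Publish a simple deployment event to an assumed topic.
producer.send("platform-events", {"event": "deployment", "status": "succeeded"})
producer.flush()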
VERTECH GROUP (UK) LTD
DevOps Engineer
VERTECH GROUP (UK) LTD
DevOps Engineer. Location: 3 days from home / 2 days in London. Salary: Circa 75K - 85K + Benefits. DevOps Engineer required by pioneering, fast-growing Top Tech Company! This is a challenging, hands-on role where you'll design, build and maintain infrastructure on Google Cloud Platform (GCP), set up robust monitoring and automation, and develop CI/CD pipelines from an early stage. You'll have the autonomy to shape DevOps processes and make a genuine impact on how systems are scaled and secured across the business. Essential: Proven experience with GCP. Docker or Kubernetes for containerisation and orchestration. Solid understanding of CI/CD principles and tools (Google Cloud Build, Jenkins, or GitHub Actions). Proficiency in Python or Bash scripting. Hands-on experience of Terraform. Strong communication skills with the ability to explain technical decisions clearly. Nice-to-have: Multi-cloud exposure (AWS/Azure). Experience with monitoring tools such as Prometheus, Datadog etc. This is a tremendous opportunity offering plenty of scope for career progression in a friendly, innovative environment where you'll be able to shape modern DevOps practices and enjoy a healthy work/life balance! Apply now for FULL details!
Jan 09, 2026
Full time
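To ground the Terraform and Python scripting requirements in the listing above, here is a minimal, hypothetical sketch of the kind of automation glue such a role implies: a Python wrapper that runs terraform plan and reports whether changes are pending. The working directory is an assumption for the example.

import subprocess
import sys

def plan_has_changes(workdir: str = "infra/") -> bool:
    """Run 'terraform plan' and report whether it detected pending changes."""
    # -detailed-exitcode: exit 0 = no changes, 2 = changes present, 1 = error.
    result = subprocess.run(
        ["terraform", "plan", "-input=false", "-detailed-exitcode"],
        cwd=workdir,
    )
    if result.returncode == 1:
        sys.exit("terraform plan failed")
    return result.returncode == 2

if __name__ == "__main__":
    print("changes pending" if plan_has_changes() else "infrastructure up to date")

A wrapper like this is the sort of thing a CI job might call to decide whether an apply step is needed.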
ARM
AWS Cloud Engineer
ARM
AWS Cloud Engineer. 6-month contract - Inside IR35 - up to 480 per day. London based - hybrid working - 3 days office based. Responsibilities: Responsible for technical delivery of managed services across the customer account base, working as part of a team providing a Shared Managed Service. The following is a list of expected responsibilities: To manage and support a customer's AWS and data platform. To be technically hands-on. Provide incident and problem management on the AWS IaaS and PaaS platform. Monitoring and observability of system and platform performance. Collaboration with development and build teams on application and platform deployments and changes. Involvement in the resolution of incidents and problems in an efficient and timely manner. Actively monitor an AWS platform and components for technical issues. Implement and improve on the existing monitoring and observability solution. To be involved in the resolution of technical incident tickets. Assist in the root cause analysis of incidents. Assist with improving efficiency and processes within the team. Examining traces and logs. Escalate incidents and problems to the appropriate teams. Working with third-party suppliers and AWS to jointly resolve incidents. Experience and Skills Requirements: Essential: Technical troubleshooting and problem solving. AWS management of large-scale IaaS/PaaS solutions. Monitoring and troubleshooting servers, networks, and applications. Cloud networking and security fundamentals. Collaboration and communication skills. Highly adaptable to changes in a technical environment. Desirable: Experience using monitoring and observability toolsets inc. Splunk, Datadog. Experience using GitHub Actions. Experience using AWS RDS/SQL-based solutions. Experience using containerization in AWS. Working data warehouse knowledge; Redshift and Snowflake preferred. Working with IaC - Terraform and CloudFormation. Working understanding of scripting languages including Python and Shell. Experience working with streaming technologies inc. Kafka, Apache Flink. Experience working with ETL environments. Experience working with the Confluent Cloud platform. Disclaimer: This vacancy is being advertised by either Advanced Resource Managers Limited, Advanced Resource Managers IT Limited or Advanced Resource Managers Engineering Limited ("ARM"). ARM is a specialist talent acquisition and management consultancy. We provide technical contingency recruitment and a portfolio of more complex resource solutions. Our specialist recruitment divisions cover the entire technical arena, including some of the most economically and strategically important industries in the UK and the world today. We will never send your CV without your permission. Where the role is marked as Outside IR35 in the advertisement, this is subject to receipt of a final Status Determination Statement from the end client and may be subject to change.
Nov 05, 2025
Contractor
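As an illustrative, client-agnostic example of the "examining traces and logs" responsibility in the listing above, the sketch below pulls recent ERROR lines from a CloudWatch Logs group with boto3. The log group name, region and time window are assumptions made for the example.

from datetime import datetime, timedelta, timezone

import boto3

logs = boto3.client("logs", region_name="eu-west-2")
start = datetime.now(timezone.utc) - timedelta(hours=1)

# Fetch up to 50 ERROR events from the last hour of an assumed log group.
response = logs.filter_log_events(
    logGroupName="/aws/application/example-service",
    startTime=int(start.timestamp() * 1000),
    filterPattern="ERROR",
    limit=50,
)
for event in response["events"]:
    print(event["timestamp"], event["message"].rstrip())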
