A hugely successful, rapidly growing fintech company is looking for a Back End/DevOps Engineer to join an established, small development team, working across their user-friendly platforms on new integrations and products. With a 4.8/5 Trustpilot score, a string of industry awards and recent investment, they're ready to grow by another 70% next year. They need an experienced PHP/Laravel developer with some DevOps experience in AWS or similar, who has managed and configured Docker environments for production.

What experience you'll need
- Experience building and shipping production apps using PHP and Laravel APIs.
- DevOps experience in AWS or similar.
- Experience configuring and managing Docker environments for production and CI/CD, using tools like GitHub Actions or similar.
- Able to optimise performance and enable teams to ship with confidence.
- Ability to thrive in a fast-paced startup environment where your work has visible impact.
- Any experience or knowledge of cloud security principles, or of Flare, New Relic or similar observability tooling, would be great.

What you'll be doing
You'll be part of a team of 6-7 developers maintaining and enhancing existing software products, writing clean, scalable Laravel code to improve functionality and performance. In addition to back-end development, they'll look for you to help design and scale secure, robust infrastructure to support their full stack of APIs and web applications. Further duties will include building DevOps workflows that support CI/CD across multiple environments, working with Docker and deployment tooling to ensure system resilience. You will ensure system observability through logging, metrics and alerts, and help to write, improve and maintain automated tests (PHPUnit). Your work will assist people who are often unfairly overlooked for financial help, so you should feel really good about what you're building.
What you'll get in return for your talents
A competitive salary of up to £65K, plus a generous holiday allowance, company share scheme, fun team events and social gatherings, an enhanced pension plan, a MacBook Pro, comprehensive training and development opportunities, and more. You'll be part of a growing team working on challenging projects, with a chance to have a real impact, not just on the evolution of the products but in helping thousands of people.

What's next?
You'll be an integral part of their team, working closely online with other talented people. If you're up to the challenge, can work well remotely and have the technical capabilities, please get your CV over to me ASAP for consideration.
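The Docker, CI/CD and GitHub Actions experience this role asks for often boils down to workflows along these lines. This is a minimal sketch only; the PHP version, action versions and test entry point are assumptions for illustration, not details taken from the listing:

```yaml
# Hypothetical GitHub Actions workflow for a Laravel API:
# run the PHPUnit suite on every push and pull request.
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Community action for installing a pinned PHP toolchain
      - uses: shivammathur/setup-php@v2
        with:
          php-version: '8.3'
      - run: composer install --no-interaction --prefer-dist
      - run: cp .env.example .env && php artisan key:generate
      # Laravel's test runner wraps PHPUnit
      - run: php artisan test
```

A production pipeline would typically add a step that builds and pushes the Docker image once the tests pass.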
Jan 08, 2026
Full time
Overview
As a DevOps Engineer at Payter, you will play a crucial role in the company's growth by delivering key software solutions. Joining a small, close-knit team, you will engage in software delivery, collaborating closely with domain owners to deliver high-quality, clean, testable code in line with standards, strategies, and best practices.

At Payter, we are innovators, pioneers, and leaders in the dynamic realm of unattended/self-service contactless and cashless payment technology, across a wide range of markets such as Electric Vehicle Charging, Transportation, Retail, Hospitality, Vending, Charity, Parking, and beyond. The adaptable Payter platform accommodates a diverse range of payment technologies (NFC, EMV, Apple Pay, Google Pay, etc.), international banking processes, closed-loop payment and loyalty schemes, and telemetry. Through continuous innovation and in-house development, we redefine how vendors connect with their customers, empowering them to boost revenue, enhance user experiences, and access real-time sales and performance data. We support a broad range of technologies, including Contact & Contactless EMV, Mifare, WiFi, 5G, Bluetooth, Touch Screens and more. Our state-of-the-art products have an extremely long service life, are of high quality, compliant with multiple international standards, boast great design, are user-friendly for all, multifunctional, and easy to integrate.
Examples of successful collaboration include:
- EV Charging: Fastned, Shell, BP, Ionity, Alfen, EVBOX
- Cashless Charity Donations: Hartstichting, WWF, Save the Children, Royal British Legion
- Food & Drink Vending: Coca Cola, Lavazza, Starbucks, Jacobs Douwe Egberts, Costa, Heineken, Maas International, Franke, WMF, Wurlitzer, Selecta
- Hospitality & Public Locations: Compass Group, Sodexo, Albron, TU Delft, TU Eindhoven
- Gaming & Entertainment: Pinball, Slot Machines, Gaming Arcades, Efteling
- Petrol Station Services (Laundry, Car Wash, Kiosks, Toilets): Shell, BP, Exxon
- Special Products: Photo Booths, Dog Wash Stations

Responsibilities
- Lead the design and implementation of scalable, secure cloud infrastructure for our new platform, while maintaining and enhancing existing production systems alongside the current DevOps team
- Architect and implement comprehensive Infrastructure as Code solutions using Terraform, establishing reusable modules and advanced deployment patterns across multiple environments
- Design, build, and optimise complex CI/CD pipelines using Bitbucket Pipelines, incorporating advanced deployment strategies including canary releases and automated rollback mechanisms
- Implement sophisticated monitoring, logging, and alerting systems using GCP-native tools, with emphasis on proactive anomaly detection, predictive failure analysis, and SLO/SLI management
- Drive automation initiatives across the organisation, eliminating manual processes and establishing GitOps workflows and Everything-as-Code practices for consistent infrastructure management
- Lead security implementation ensuring PCI-DSS, PCI-PIN, and PCI-P2PE compliance, including infrastructure hardening, access controls, and integrated security scanning within deployment pipelines
- Work with existing DevOps engineers and collaborate with Software Engineering teams to optimise deployment processes and foster a culture of continuous improvement
- Establish and maintain disaster recovery procedures, backup strategies, and high-availability architectures to ensure business continuity and system resilience

Qualifications
- 5+ years of senior DevOps/Site Reliability Engineering experience, with demonstrated expertise in leading complex, enterprise-scale cloud infrastructure projects and in team mentorship
- Expert-level proficiency with Google Cloud Platform, including advanced services (GKE, Cloud Run, VPC networking, IAM, Cloud Armor, API Gateways) and cost-optimisation strategies
- Deep expertise in Bitbucket and Bitbucket Pipelines, including complex workflow orchestration, parallel execution strategies, advanced branching models, and pipeline optimisation techniques
- Advanced Terraform experience, including module development, remote state management, workspace strategies, and enterprise-scale infrastructure provisioning patterns
- Proven experience with Kubernetes administration, including cluster management, advanced networking, service mesh implementation, resource optimisation, and troubleshooting distributed systems
- Demonstrated ability to design and implement highly available, fault-tolerant systems, with experience in event-driven architectures, microservices, and distributed-system challenges
- Advanced knowledge of database administration (SQL/NoSQL), caching technologies, and data pipeline optimisation in cloud-native environments
- Strong expertise in security practices, compliance frameworks (PCI-DSS, PCI-PIN, PCI-P2PE), infrastructure hardening, and security monitoring; hands-on experience in network security and access management would be advantageous

Technical Skills
- Cloud Platform: Expert-level Google Cloud Platform (GCP), including advanced service integration, networking, and cost management
- Version Control & CI/CD: Advanced Bitbucket and Bitbucket Pipelines expertise, with complex workflow design and optimisation capabilities
- Infrastructure as Code: Deep Terraform proficiency, including enterprise module development, state management, and advanced provisioning patterns
- Container Orchestration: Advanced Kubernetes (GKE) administration, including service mesh, advanced networking, and performance tuning
- Monitoring & Observability: Google Cloud Logging, Cloud Monitoring, distributed tracing, and advanced alerting with SLI/SLO implementation
- Security & Compliance: PCI-DSS, PCI-PIN, and PCI-P2PE standards implementation, infrastructure security hardening, and automated security scanning integration
- Database & Caching: Advanced experience with cloud databases (AlloyDB, Cloud SQL, Firestore, BigQuery), Memcached, and data pipeline optimisation
- Networking & Load Balancing: Expert knowledge of GCP VPC design, Cloud Load Balancer configuration, firewall management, and hybrid cloud connectivity
- Software Development Practices: Advanced Git workflows, automated testing integration, code review processes, and an understanding of software architecture patterns

Soft Skills
- Excellent technical communication skills, demonstrated by experience in bringing the team along on the journey and promoting DevOps best practices
- Advanced problem-solving and analytical thinking capabilities, with proven expertise in diagnosing complex distributed-system issues and implementing comprehensive solutions
- Outstanding communication skills, capable of clearly articulating technical challenges and solutions to diverse audiences, including executives, product managers, and development teams
- A strong commitment to quality and continuous improvement, coupled with a passion for delivering enterprise-grade, maintainable solutions that inspire pride within teams

What we have to offer
- Competitive compensation, including a discretionary bonus based on business results
- Great benefits, like 25 leave days plus extra monthly "wellbeing days", a travel allowance, and an attractive pension plan
- Flexible working options within the Netherlands (Rotterdam office or hybrid/remote) or fully remote in the UK; we are not hiring outside these regions at this time
- Thriving in a close-knit environment that values flexibility, work-life balance, and mental well-being

Join Payter and become part of an international scale-up, shaping the future in a booming market where you can have real impact and growth opportunities.
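The Bitbucket Pipelines plus Terraform combination in the responsibilities above is commonly wired up along these lines. The step names, Terraform version and manual production gate are illustrative assumptions rather than Payter's actual configuration:

```yaml
# Hypothetical bitbucket-pipelines.yml: plan on every push to main,
# apply to production only after a manual approval.
pipelines:
  branches:
    main:
      - step:
          name: terraform plan
          image: hashicorp/terraform:1.8
          script:
            - terraform init -input=false
            - terraform plan -input=false -out=tfplan
          artifacts:
            - tfplan
      - step:
          name: terraform apply
          trigger: manual        # human gate before production changes
          deployment: production # tracked in Bitbucket's Deployments view
          image: hashicorp/terraform:1.8
          script:
            - terraform apply -input=false tfplan
```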
Jan 01, 2026
Full time
At Aztec, our goal is to add privacy to Ethereum. In the current public blockchain paradigm, users and entities unknowingly broadcast data in public, compromising privacy and security to get trustlessness. Not only are unencrypted blockchains inherently privacy-exposing, they require significant redundancy to compute and verify the legitimacy of the chain for it to be trustworthy. Implementing scalable encryption in a public blockchain paradigm requires cutting-edge math and engineering. Thankfully, our team of scientists and engineers invented Plonk, the industry-standard zkSNARK, and Noir, the universal language of zero knowledge. Now, we're building a first-of-its-kind Layer 2 with private smart contracts. This requires new cryptographic primitives, a zero-knowledge DSL for writing contracts, a privacy-friendly execution environment, a carefully designed set of circuits that prove the validity of the chain to L1, a decentralized block-building and proving mechanism, and a top-tier user and developer experience. And it's now time to bring it to market. We've raised $125 million from industry-leading investors including a16z crypto, Paradigm, Variant, Consensys, and a_capital, and we're growing quickly.

Role Focus:
We're looking for a DevOps Engineering Lead who thrives in a fast-paced environment and is excited by the prospect of growing a team with a mandate to 10x our current development velocity while preserving quality and security.
Please note: the role is based out of either London, UK or New York City, US.

Key Responsibilities:
- Own the internal platforms critical to our ability to develop, test, deploy, and monitor our code
- Design and implement IaC, CI/CD pipelines, and automation to minimize lead time across the stack
- Develop and enforce best practices for observability, monitoring, alerting, and incident response to minimize MTTR
- Architect and implement scalable, secure, and cost-efficient cloud infrastructure (GCP/AWS)
- Lead technical design discussions, post-mortems, and architecture reviews as they pertain to the systems described above
- Identify and proactively address performance bottlenecks, cost inefficiencies, and system vulnerabilities in those systems
- Keep a cool head when on-call
- Do all the other good stuff that pragmatic, seasoned engineers do

Desired Skills and Experience:
- 7+ years of relevant industry experience
- Strong expertise in cloud platforms, bare metal, and distributed systems
- Proven experience building, scaling, and maintaining production infrastructure
- Hands-on proficiency with Infrastructure as Code (Terraform, etc.), container orchestration (Kubernetes), and CI/CD pipelines
- Experience with observability stacks (Prometheus, Grafana)
- Strong background in automation, scripting, and tooling (Bash; you will be writing lots of code in this role)
- Excellent ability to diagnose and resolve complex system, scaling, or performance issues
- Self-starter mindset with the ability to balance technical depth, team leadership, and strategic impact
- Experience in Web3 or high-growth startups, a plus
- Excellent written and verbal communication (we're serious)
- Comfortable working autonomously and asynchronously within a distributed team
- Located in, or able to work within, GMT to EST time zones
Our Stack:
- TypeScript and C++
- LMDB for persistence
- Discv5 and libp2p
- Ethereum as L1
- Terraform
- OpenTelemetry
- Grafana
- GCP
- AWS
- K8s

What We Offer:
- Flexible, remote-first culture with HQ in London
- Competitive salary + equity/token options
- 25 days annual leave + bank holidays
- Health, dental, and retirement benefits (based on location)
- Regular offsites for team collaboration and bonding
- Conference and learning budget for continual professional development
- A chance to work on truly cutting-edge zero-knowledge infrastructure with some of the best minds in the field

Aztec Labs is an equal opportunity employer, and we value creativity, diversity, and intellectual curiosity. If you're passionate about leveraging your creative talents to make a real-world impact, and if you want to be part of a team that's shaping the future of digital privacy, then we would love to hear from you.
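The observability and MTTR responsibilities above typically translate into alert rules of roughly this shape in the Prometheus/Grafana stack the role mentions. The metric name and thresholds here are placeholders, not Aztec's real SLOs:

```yaml
# Hypothetical Prometheus alerting rule: page when the 5xx error
# ratio stays above 1% for ten minutes.
groups:
  - name: slo-alerts
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.01
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "5xx error ratio above 1% for 10 minutes"
```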
Jan 01, 2026
Full time
Stay in the loop: follow us on Twitter for product updates, engineering deep dives, and a closer look at how we're building the future of blockspace.

Location: Remote (Europe-friendly time zones preferred)
Type: Full-Time
Compensation: Competitive Salary + Token Allocation

Infrastructure & DevOps Engineer at Raiku
As an Infrastructure & DevOps Engineer at Raiku, you will lead critical engineering efforts that drive the performance and scalability of next-generation distributed systems. You'll join a highly proficient team of core engineers, all of whom have contributed to foundational and novel infrastructure innovations. We believe in hiring exceptional individuals who are deeply motivated by complex core infrastructure challenges and guided by a coherent system design philosophy that moves our industry forward. Expect frequent group discussions on architecture, design specs, and code reviews. We ship quality code quickly and often, and we are deeply committed to that.

About Raiku
Raiku is re-engineering blockchain infrastructure from first principles to make global digital markets as precise and dependable as physical systems. Built on Solana, our platform delivers deterministic execution, guaranteed inclusion, and low-latency performance, solving the foundational failures that cause transactions to miss, trades to revert, and systems to collapse under pressure. By placing high-performance compute close to where transactions happen and coordinating execution through our advanced scheduling engine, Raiku empowers developers to build scalable, high-performance applications, and gives institutions the reliability and control they demand. We believe financial infrastructure should behave like physics: fast, reliable, and predictable, every time, without exception.

About the Team
Our entire platform is built in Rust, connecting Solana's L1 with advanced L2 extensions and enabling deeply complex network interactions.
Your primary focus in this role will involve Rust systems programming, infrastructure architecture (e.g., Kubernetes), and operational scripting across languages such as Rust, Go, and TypeScript. Full-stack understanding, from bare metal to application interfaces, is critical to success here.

What You'll Bring
- 4+ years of experience in infrastructure engineering, preferably within high-standard fintech, data, cloud, or DevOps environments
- Senior-level experience as an SRE or Infrastructure Engineer with a strong Kubernetes background
- Deep proficiency with Infrastructure as Code tools (Terraform/Terragrunt) and infrastructure automation (Helm, GitOps)
- Familiarity with monitoring and alerting using Prometheus and PromQL
- Experience with infrastructure and data security practices, including KMS and HashiCorp Vault
- Ability to ship high-quality, opinionated architectural choices and uphold software best practices
- Strong communication skills in English (both written and spoken)

Our Stack
- Infrastructure: AWS/GCP + bare metal, Kubernetes, Terraform/Terragrunt, Prometheus/Thanos, Helm, HashiCorp Vault, FluxCD
- Languages: Rust, Golang, TypeScript, PostgreSQL
- Smart Contract Development: Anchor, Solana Program Library (SPL)

Preferred Qualifications
- Experience operating validators on distributed networks
- Active engagement with the blockchain security research community, through open-source contributions, publications, or speaking at conferences

Benefits
- Competitive compensation packages based on ongoing market analysis, including tokens
- Remote-first culture with flexible hours; self-initiative and independence are highly valued
- Work with highly talented engineers passionate about building the future of the internet
- A collaborative and dynamic work environment that rewards individual ownership and impact
- Opportunities for rapid professional growth within one of the most technically ambitious projects in web3
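The Helm plus FluxCD GitOps workflow in Raiku's stack generally centres on declarations like the following, which Flux reconciles into the cluster. The chart, namespace and repository names are invented for illustration, and the exact apiVersion depends on the Flux release in use:

```yaml
# Hypothetical Flux HelmRelease: Flux watches this manifest in Git
# and keeps the named chart installed and up to date in the cluster.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: scheduler
  namespace: platform
spec:
  interval: 5m          # how often Flux re-checks for drift
  chart:
    spec:
      chart: scheduler
      version: "1.x"    # allow automatic patch/minor upgrades
      sourceRef:
        kind: HelmRepository
        name: internal-charts
        namespace: flux-system
  values:
    replicaCount: 3
```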
Jan 01, 2026
Full time
Cloud Integration Engineer
Locations: Home - UK - England
Time type: Full time
Posted on: Posted Today
Job requisition id: R

Howden is a global insurance group with employee ownership at its heart. Together, we have pushed the boundaries of insurance. We are united by a shared passion and no-limits mindset, and our strength lies in our ability to collaborate as a powerful international team of 23,000 employees spanning over 56 countries. People join Howden for many different reasons, but they stay for the same one: our culture. It's what sets us apart, and the reason our employees have been turning down headhunters for years. Whatever your priorities - work/life balance, career progression, sustainability, volunteering - you'll find like-minded people driving change at Howden.

Cloud Integration & Automation Engineer
Location: Remote/Hybrid/Witney
Type: Fixed-Term (12 Months), Full-time
Team: Cloud Integration & Automation

About the Role
You'll work hands-on with Azure Integration Services, automation tools, and AI-powered solutions to streamline business data transit, drive operational efficiency, and enable smarter business outcomes.

What You'll Do
- Design, build, and support cloud integrations using Azure services (API Management, Logic Apps, Service Bus, Event Grid, Data Factory, and Azure Functions) to connect business systems and automate data flows.
- Automate infrastructure and deployments using Infrastructure as Code (Bicep), ensuring repeatable, secure, and scalable environments.
- Implement and maintain observability: set up dashboards, alerts, and logging (App Insights, Log Analytics, KQL) so we always know how our integrations are performing.
- Enable secure, private connectivity for business data using Azure networking features (Private Link, Private DNS, Managed VNets).
- Drive business efficiency by automating manual processes, integrating new SaaS/data sources, and leveraging AI and Power Platform tools where they add value.
- Contribute to solution design: work with architects and business analysts to scope, estimate, and document integration solutions that meet business needs.
- Champion best practices in security, cost management, and documentation, ensuring our solutions are robust, efficient, and well understood.

What We're Looking For
You'll be a great fit if you have:
- Experience building and supporting integrations on Azure (API Management, Logic Apps, Service Bus, Event Grid, Data Factory, Azure Functions).
- Hands-on skills with Infrastructure as Code (ideally Bicep, but ARM or Terraform also valued).
- A track record of automating deployments and promoting solutions through multiple environments (dev/test/prod) using CI/CD pipelines.
- Familiarity with monitoring and troubleshooting integrations using Azure's observability tools (App Insights, Log Analytics, KQL).
- An understanding of secure networking in Azure (Private Endpoints, Private DNS, VNets).
- A focus on business outcomes, always looking for ways to automate, streamline, and improve data flows and processes.
- Clear, concise communication skills: able to document solutions, estimate work, and explain technical concepts to non-technical colleagues.
- An interest in (or experience with) AI-powered automation, Power Platform, or Copilot Studio is a plus.

- Impact: Your work will directly improve business efficiency, data quality, and decision-making.
- Growth: We'll support your learning in AI, automation, and advanced integration patterns.
- Collaboration: Work with a friendly, expert team that values knowledge sharing and continuous improvement.
- Innovation: Be part of a team that's always looking for smarter, faster, and more reliable ways to improve business efficiency.
How to Apply
Send us your CV and a short note describing a business integration or automation you've delivered, what problem it solved, and how you made it robust and efficient.

A career that you define. At Howden, we value diversity - there is no one Howden type. Instead, we're looking for individuals who share the same values as us:
- Our successes have all come from someone brave enough to try something new.
- We support each other in the small everyday moments and the bigger challenges.
- We are determined to make a positive difference at work and beyond.

Reasonable adjustments
We're committed to providing reasonable accommodations at Howden to ensure that our positions align well with your needs. Besides the usual adjustments such as software, IT, and office setups, we can also accommodate other changes such as flexible hours or hybrid working. If you're excited by this role but have some doubts about whether it's the right fit for you, send us your application - if your profile fits the role's criteria, we will be in touch to help get you set up with any reasonable adjustments you may require. Not all positions can accommodate changes to working hours or locations. Reach out to your Recruitment Partner if you want to know more.

Contract type: Fixed Term Contract (Fixed Term)
Jan 01, 2026
Full time
About Provectus
At Provectus, we architect enterprise-grade AI and data solutions that transform how organizations leverage their most valuable asset - data. We combine deep technical expertise with strategic product thinking to deliver scalable, production-ready AI systems and modern data platforms.

Who We're Looking For
We're seeking a Senior Technical Product Manager with strong engineering acumen and product leadership experience to drive sophisticated AI and data platform initiatives. You bring the technical depth to engage in architecture discussions, evaluate trade-offs, and make informed decisions about complex system designs, while maintaining focus on business value and user outcomes. You're a technical translator who bridges the gap between possibility and practicality, helping clients navigate the rapidly evolving AI landscape with confidence. Your background allows you to assess technical feasibility, identify risks early, and guide engineering teams toward optimal solutions.

What You Will Do
Architect Product Strategy for Technical Platforms:
- Define product strategy for AI platforms, data infrastructure, and enterprise-scale data migration initiatives.
- Lead technical product discovery, evaluating emerging technologies (GenAI, agentic AI, vector databases, streaming architectures) and assessing fit for client use cases.
- Design solution architectures in collaboration with data architects and engineers, making build-vs-buy decisions and technology stack selections.
- Develop technical roadmaps balancing innovation, scalability, security, and time to value.

Drive AI/ML Product Development:
- Own the end-to-end product lifecycle for GenAI applications leveraging LLMs, RAG architectures, agentic frameworks, and multimodal AI systems.
- Translate business requirements into technical specifications, API contracts, data schemas, and system integration patterns.
- Guide model selection, evaluation criteria, and deployment strategies for ML models in production environments.
- Champion MLOps practices including model versioning, monitoring, performance tracking, and continuous improvement loops.

Manage Complex Data Platform Initiatives:
- Lead product planning for data lake/lakehouse implementations, warehouse modernizations, and cloud data platform migrations.
- Define data product requirements including ingestion pipelines, transformation logic, data quality rules, governance policies, and access patterns.
- Oversee integration of multiple data domains, ensuring interoperability, data lineage, and metadata management.
- Partner with data engineering teams on performance optimization, cost management, and scalability planning.

Execute Through Agile Delivery:
- Facilitate Agile ceremonies and maintain well-groomed backlogs with properly sized, technically detailed features and epic-level stories.
- Work closely with engineering teams to decompose complex features into incremental releases with clear technical dependencies.
- Define sprint goals aligned with quarterly objectives and long-term product vision.
- Balance technical debt management with feature delivery, advocating for enablers and architectural improvements.

Enable Technical Decision Making:
- Conduct technical due diligence, proofs of concept, and spike solutions to validate approaches before full investment.
- Analyze trade-offs between competing technical solutions, considering performance, cost, maintainability, and developer experience.
- Document technical decisions, architectural decision records (ADRs), and design patterns for knowledge sharing.
- Communicate technical strategies and recommendations to executive stakeholders with clarity and conviction.

What You Bring
Required Qualifications:
- Bachelor's degree in a technology- or business-related field (Master's preferred).
- 5-7+ years of experience in technical product management, solutions architecture, or software engineering.
- 5+ years in product management roles with demonstrated end-to-end product ownership.
- 3-5+ years of experience with AI/ML products, generative AI, or data platform development.
- 3-5+ years working in Agile/Scrum environments with a strong command of Agile methodologies and ceremonies.
- Deep understanding of cloud architectures (AWS, Azure, GCP) and modern data stack technologies.

Technical Expertise:
- AI/GenAI: LLM integration, prompt engineering, RAG architectures, fine-tuning, agentic AI frameworks (LangChain, LlamaIndex, AutoGen).
- Data Engineering: ETL/ELT patterns, data modeling, Snowflake, Databricks, dbt, Airflow, Kafka/streaming architectures.
- Cloud Platforms: AWS (SageMaker, Bedrock, Glue), Azure (OpenAI Service, Synapse), GCP (Vertex AI, BigQuery).
- MLOps: Model deployment, monitoring, versioning, CI/CD for ML, feature stores, experiment tracking.
- Data Migration: Assessment methodologies, migration patterns, data validation, cutover strategies.
- Development Practices: API design, microservices, containerization (Docker, Kubernetes), CI/CD pipelines.

Core Competencies:
- Solution design and technical architecture capabilities.
- Requirements translation from business needs to technical specifications.
- Strong analytical thinking and problem solving in complex technical domains.
- Exceptional stakeholder management across technical and non-technical audiences.
- Clear technical communication: documenting complex systems and presenting architectural decisions.
- Risk identification, dependency mapping, and mitigation planning.

Preferred Qualifications:
- Prior software development or data engineering experience (3+ years).
- Background in consulting or professional services, delivering client solutions.
- Certifications: AWS Solutions Architect, Azure Data Engineer, GCP Professional Data Engineer, Certified Scrum Product Owner.

Personal Attributes:
- Insatiable curiosity about emerging technologies and a hands-on experimentation mindset.
- Close attention to detail with a quality focus and commitment to technical excellence.
- Collaborative team player who thrives in cross-functional environments.
- Adaptable and comfortable navigating ambiguity in fast-paced consulting contexts.
- Passion for mentoring engineers and elevating technical practices.

Why Join Us
- Lead top-tier engineering teams building cutting-edge agentic AI systems and enterprise AI platforms.
- Shape how enterprises adopt AI - from strategy to architecture to delivery.
- Grow within a team building modern AI delivery practices, tools, and frameworks.
- Remote-friendly culture with strong engineering, data, and consulting partnerships.
Jan 01, 2026
Full time
Senior Cloud Engineer (AWS)
Location: Central London - Hybrid - circa 3 days onsite in Central London

Cloudscaler are based in Central London, with customers across the UK. Travel to our customer sites is required; the frequency will vary from customer to customer, up to 3 days per week onsite. The customer these vacancies are signposted for requires an onsite presence of 1 day every 2 weeks in Bath.

Salary: £70,000 - £95,000
Eligibility: UK-based

Security Clearance
You must be eligible for SC clearance (you don't need active clearance - we'll sponsor it). Willingness to obtain DV clearance in the future is a bonus.

About the Role
We're looking for an experienced Senior AWS Cloud Engineer to lead the build and operation of secure, enterprise-scale cloud platforms for central government and private enterprise clients. You'll work closely with senior stakeholders on complex cloud transformation projects, applying AWS best practices, security-first thinking, and modern platform engineering principles.

What You'll Be Doing
- Designing and building AWS Landing Zones and multi-tenant cloud environments
- Creating infrastructure using Terraform (IaC)
- Implementing secure, scalable solutions in regulated environments
- Developing and maintaining CI/CD pipelines
- Applying SRE principles to ensure reliability, resilience, and operational excellence
- Collaborating with internal teams and external clients to drive cloud success

What We're Looking For
We're looking for someone who brings hands-on technical expertise and strategic thinking, with experience in:
- Enterprise-scale AWS platforms
- AWS Landing Zone implementation
- Infrastructure as Code (Terraform)
- Secure cloud architecture, environments, and operations
- Working with highly regulated central government or defence clients
- Communicating technical concepts to a range of stakeholders

Why Join Us
We're cloud specialists, AWS experts, and trusted advisors to government and enterprise clients.
We combine deep technical knowledge with a collaborative, people-first culture. What you can expect:
- Discretionary bonus scheme
- 25 days' annual leave + 5 additional days for training/exams or volunteering
- Travel and accommodation expensed where eligible, in line with our expenses policy
- Life Assurance
- Long Term Disability cover
- Employee Assist Programme for employee advice and support (including legal and counselling helpline)
- Health, mental health, wellbeing, financial, and legal support
- 24/7 GP access
- Public holiday opt-out scheme, giving you the option to work on public holidays and the flexibility to enjoy your time off when it suits you
- Individual training and development plans, with online training and exam costs covered
- Recruitment referral scheme - referral bonus if you introduce us to someone we then hire
- Customer referral scheme - referral bonus if you introduce us to a new customer
- Cycle to Work Scheme
- Dog-friendly office

Interview Process
We keep things simple and transparent:
- Screening call - with our Talent Acquisition team
- First interview (30 mins) - with some of our engineering colleagues (remote)
- Technical interview (60 mins) - with some of our engineering colleagues (remote)
- Final interview (60 mins) - with our leadership team (in person)

Apply Now
If you're ready to build impactful, secure, and scalable AWS solutions, we'd love to hear from you. Apply today or reach out with any questions.

Cloudscaler are proud to be an equal opportunity employer, committed to equal opportunities regardless of gender identity, sexual orientation, race, ancestry, age, marital status, disability, parental status, religion, or medical history. If you require reasonable adjustments during the recruitment process or within the workplace, please let us know when you speak to our Talent Acquisition team, or contact us at the earliest opportunity.
Jan 01, 2026
Full time