Job Description
We are looking for a Senior Software Engineer to join our Group Technology team in Milton Keynes. In this role, you will help develop, support, and enhance business systems and applications using .NET technologies and SQL Server. We offer a hybrid working arrangement with one or two days per week in our Milton Keynes office.

Key Responsibilities:
- Stay up to date with best practices in software development and deployment
- Follow coding best practices in line with development coding standards
- Use operational data to improve the stability and performance of applications
- Maintain documentation and release notes
- Maintain awareness of application security considerations
- Identify dependencies across the organisation, work with teams to resolve them before they become an issue, and put preventative measures in place to avoid repeat occurrences
- Handle risk, change, and uncertainty effectively across the organisation
- Work alone or alongside other Software Engineers on projects where necessary
- Create secure, high-performing n-tier applications, applying best practices when developing database applications with SQL
- Contribute to improving the overall processes and methodologies followed by the wider team
- Design and develop commercial/enterprise web applications
- Ensure application performance, quality, and responsiveness
- Work with all teams to recommend solutions in accordance with accepted testing frameworks

Experience and Skills Required:
- Modern web application development architectures and frameworks such as React
- Web application experience using C#, ASP.NET, and MVC
- Excellent SQL Server skills
- Experience with Scrum/Agile methodologies and working in that environment
- Knowledge of WebForms
- Familiarity with REST APIs and SOAP services
- Experience developing and maintaining multiple connected software solutions
- Skilled in software testing methodologies, including TDD
- Ability to select and use the most appropriate tools, technologies, and languages for the job
- Team-oriented, with a willingness to work in a collaborative environment
- Strong object-oriented design and development skills

Highly Desirable:
- Experience with Terraform and Azure
- Experience with concurrent programming techniques, parallelism, and threading
- Experience working with distributed systems and microservice architectures
- Experience with high-scalability projects involving cloud-based infrastructure design and implementation
- Microsoft certification

Connells Group UK is an equal opportunities employer and positively encourages applications from suitably qualified and eligible candidates regardless of sex, race, disability, age, sexual orientation, transgender status, religion or belief, marital status, or pregnancy and maternity. Ref: CF00745
Mar 19, 2026
Full time
Linux System Engineer (Take a step into DevOps) - Bracknell / Hybrid
Halian is currently recruiting for a number of Linux System Engineers on a permanent basis on behalf of an innovative, industry-leading SaaS organisation.
Keywords: Linux, Ubuntu, Red Hat, Cloud, Terraform, Ansible, Systems Engineer, MedTech, Bash, AWS, Azure, DevOps
Are you looking to further your experience with Cloud? Click apply for full job details.
Mar 19, 2026
Full time
Job Role: Data Engineer - Join Our Fintech Revolution!
Location: London, UK
Job Type: Full-time, in-office
Reports To: Chief Technology Officer
Salary: Competitive

About Us
We are an innovative fintech organisation committed to reshaping the future of homeownership by providing cutting-edge mortgage and insurance products. Our mission is to empower underserved borrower segments in the UK mortgage market. We pride ourselves on fostering a culture of excellence, collaboration, and support, enabling our team members to thrive!

Job Purpose
Are you a data enthusiast ready to take on an exciting challenge? As a Data Engineer, you will design, build, and operate our internal data platform, ensuring data from third-party systems is accurate, structured, and ready for insightful analysis. You will play a crucial role in managing data pipelines and ensuring high-quality data flows that meet our business needs.

Key Responsibilities
Data Platform & Engineering
- Build and maintain data ingestion pipelines using Azure Data Lake (ADLS Gen2) and Microsoft Fabric.
- Integrate third-party platforms and implement data transformations.
- Develop datasets for Power BI and support management information reporting.
- Contribute to data architecture discussions, aligning with best practices.

Data Quality & Governance
- Implement automated data quality checks and maintain clear documentation.
- Ensure consistent application of data definitions and business rules across teams.
- Support auditability through traceable data processing steps.

Delivery & Collaboration
- Collaborate with external partners and internal teams to meet reporting needs.
- Work closely with Information Security to ensure compliant data handling.
- Participate in agile sprints, contributing to technical planning.

Operational Ownership
- Monitor data pipeline health, performance, and reliability.
- Troubleshoot data issues swiftly, communicating effectively with stakeholders.
- Drive continuous improvement of the data platform's resilience and performance.

Key Requirements
Qualifications
- Degree in Computer Science, Cyber Security, Information Technology, or a related field, or equivalent professional experience.

Experience & Skills
Essential
- Hands-on experience as a Data Engineer in a modern cloud environment.
- Strong expertise in Azure data services (ADLS, Azure Data Factory, Microsoft Fabric).
- Proficiency in SQL and data modelling.
- Experience with API integration and SFTP data feeds.
- Excellent communication skills for engaging non-technical stakeholders.

Desirable
- Background in financial services or fintech.
- Familiarity with Power BI dataset modelling.
- Knowledge of DevOps/CI/CD practices for data engineering.

Personal Attributes
- Detail-oriented and committed to data quality.
- Analytical and pragmatic problem-solving approach.
- Ability to balance speed and quality in delivery.
- Collaborative mindset with a passion for cross-functional teamwork.

What We Offer
- Competitive Salary: Attractive compensation package.
- Professional Development: Opportunities for continuous learning and career advancement.
- Generous Annual Leave: 25 days plus statutory days, increasing by one day after five years of service, up to 30 days.

Are you ready to make an impact in the world of fintech? Join us on our journey to innovate and empower! Apply today to become a vital part of our dynamic team!

Adecco is a disability-confident employer. It is important to us that we run an inclusive and accessible recruitment process to support candidates of all backgrounds and abilities. Adecco is committed to building a supportive environment for you to explore the next steps in your career. If you require reasonable adjustments at any stage, please let us know and we will be happy to support you. Adecco acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers.
The Adecco Group UK & Ireland is an Equal Opportunities Employer. By applying for this role, your details will be submitted to Adecco. Our Candidate Privacy Information Statement, explaining how we will use your information, is available on our website.
Mar 19, 2026
Full time
Overview
Oliver James has partnered with a leading global data and analytics organisation to support the hire of three Data Analysts on an initial 6-month contract. This is an opportunity to join a well-established financial services environment where data sits at the heart of decision-making. The team is undergoing transition and reshaping its onshore capability, so you'll play a key role in delivering and supporting cloud-based data solutions within a regulated setting. If you have strong data analysis experience and exposure to cloud platforms, particularly within financial services, this role offers immediate impact and meaningful project work.

Key Responsibilities
- Analyse and interpret complex datasets to support business and regulatory requirements
- Work with cloud-based data platforms (Azure preferred)
- Translate business requirements into clear data outputs and insights
- Support data quality, validation and governance processes
- Collaborate with cross-functional stakeholders across technology and business teams
- Contribute to reporting, dashboards and data-driven decision support
- Ensure compliance within a regulated financial services environment

Essential Skills
- Proven experience as a Data Analyst in a commercial environment
- Strong exposure to cloud-based data platforms (Azure desirable)
- Experience working within financial services
- Strong SQL and data manipulation skills
- Ability to engage confidently with business stakeholders
- Experience handling large, complex datasets
- Strong understanding of data governance and quality principles

Desirable Skills
- Experience within credit referencing or credit risk environments
- Broader financial services background (banking, lending, fintech, etc.)
- Exposure to modern data engineering or analytics tooling
- Experience working in regulated environments
- Knowledge of reporting or BI tools
- Prior experience in fast-paced transformation or transition programmes

If you're immediately available (or becoming available shortly) and open to a Leeds-based hybrid contract, please apply with your updated CV.
Mar 19, 2026
Full time
CMA CGM

ABOUT US
Led by Rodolphe Saadé, the CMA CGM Group, a global leader in shipping and logistics, serves more than 420 ports around the world on five continents. With its subsidiary CEVA Logistics, a world leader in logistics, and its air freight division CMA CGM AIR CARGO, the CMA CGM Group is continually innovating to offer its customers a complete and increasingly efficient range of new shipping, land, air and logistics solutions. Committed to the energy transition in shipping, and a pioneer in the use of alternative fuels, the CMA CGM Group has set a target to become Net Zero Carbon by 2050. Through the CMA CGM Foundation, the Group acts in humanitarian crises that require an emergency response by mobilizing the Group's shipping and logistics expertise to bring humanitarian supplies around the world. Present in 160 countries through its network of more than 400 offices and 750 warehouses, the Group employs more than 155,000 people worldwide, including 4,000 in Marseille, where its head office is located.

YOUR ROLE
Proactively configure and administer the UK VMware server infrastructure. Ensure all server hardware and software issues are efficiently managed through to resolution. Prioritise and escalate faults accordingly, and ensure that SLAs and KPIs are adhered to. Provide L2/L3 support and technical guidance to the UK IT support team. The company takes a balanced approach to cloud computing, so this role will also involve cloud work; this is a great opportunity for someone with AWS Cloud skills or a desire to progress their IT skills into cloud computing. The UK IT team is also part of a larger Europe Regional IT team which helps govern and assist other countries in the region. This role may assist across the Europe region from time to time, so UK and Europe travel may be required periodically.

RESPONSIBILITIES
- Build and maintain servers in a virtual environment
- Provide 2nd and 3rd line support to ensure smooth IT operations across multiple client environments
- Support and maintain the cloud environment for application servers
- Proactively monitor and administer the server infrastructure and virtual environment
- Manage and support the Active Directory environment for the UK & Ireland
- Collaborate with clients and internal teams to plan and implement system upgrades, migrations, and new technologies
- Document technical processes, best practices, and IT procedures to ensure efficiency and knowledge sharing
- Provide technical guidance and support to IT projects
- Support and maintain the DR/backup infrastructure, including daily checks and DR test processes
- Adhere to corporate IT security standards

KEY PERFORMANCE INDICATORS
- Provide a working production environment with the emphasis on minimal downtime
- Produce proper and thorough documentation
- Conform to Head Office and local blueprints and standards
- Provide timely and effective technical support of servers, infrastructure, hardware and software
- Provide a secure and reliable electronic environment
- Ensure redundancy and resilience for the server infrastructure environment
- Ensure the backup infrastructure is working, reliable and effective

Additional Information
This role will require occasional travel within the UK and possibly Europe.

PROFILE AND REQUIRED SKILLS
- Microsoft Windows Server 2019/2022
- VMware environments
- NAS storage environments
- Knowledge of cloud computing (AWS/Azure)
- Active Directory/DNS/DHCP etc.
- Knowledge of the Veeam backup solution - desirable
- Ability to work effectively both as part of a team and independently

PRACTICAL AND TECHNICAL KNOWLEDGE
- Microsoft Windows operating systems
- Active Directory/DNS/DHCP etc.
- Virtual environments (VMware and Hyper-V)
- AWS cloud computing advantageous
- Knowledge of backup/snapshot technologies
- Aptitude for troubleshooting
- Proactively drives innovation and stays ahead of emerging technologies

QUALIFICATIONS
- Educated to degree level/IT qualification or equivalent; server support experience in a networked IT environment accepted
- Microsoft and/or VMware certification - desirable
- AWS certification - desirable

WHAT DO WE OFFER?
Not only do we offer a competitive salary, we also offer a generous benefits package including:
- 25 days annual leave (plus public holidays), increasing with length of service, plus an additional day over the Christmas period and the opportunity to buy/sell annual leave
- Discretionary annual bonus
- Enhanced pension scheme up to 15% total contribution
- Life assurance x4
- Private healthcare (BUPA), BUPA Dental Plan and Healthcare Cash Plan, including an Employee Assistance Programme
- Hybrid working
- Cycle to work scheme/season ticket loans
- Enhanced policies including Maternity & Paternity
- Employee recognition awards

CMA CGM Group is proud to define itself as a family business built on strong human values: Excellence, Exemplarity, Imagination, Boldness. CMA CGM respects, supports and values diversity in all forms. We seek to avoid discrimination and are committed to equal opportunities for all our employees. Our long-held inclusive policy improves performance, creates growth opportunities for all, aligns with our customers' values and enhances employee engagement. Join us and discover a world of opportunities!
Mar 19, 2026
Full time
A technology firm is seeking a Principal Site Reliability Engineer to build and maintain large-scale distributed systems. Candidates should have strong proficiency in programming languages such as Go, Java, or Python, and extensive experience with cloud infrastructure such as AWS, Azure, or GCP. The role is contract-based and offers competitive benefits, which will be disclosed during the interview process. This position is located in Wokingham, United Kingdom.
Mar 19, 2026
Full time
Data Engineer
London, £550 per day, Outside IR35, Hybrid

This is an exciting opportunity to play a key role in a major data modernisation programme, focused on migrating a large SQL Server estate into a cloud-native Azure Databricks environment. You will be central to transforming legacy reporting and data logic into scalable, modernised pipelines and models, helping the business unlock faster, more reliable insights.

The Company
They are a well-established organisation undergoing a significant transformation of their data landscape. With a strong commitment to modern BI practices and cloud engineering, they are investing in next-generation technology to improve analytics capabilities across the business. You will join a collaborative environment where engineering excellence, trusted data, and high-quality reporting are core priorities.

The Role and Deliverables
- Lead the migration of SQL Server stored procedures, functions, views, and legacy reporting logic into Azure Databricks.
- Reengineer and optimise SQL workloads for Databricks using Databricks SQL, dbt, and PySpark.
- Support the uplift of SSRS and Tableau reporting so that all outputs are powered by Databricks-based datasets.
- Validate migrated datasets and reporting outputs, ensuring high levels of accuracy and performance.
- Document pipelines, models, and migration processes for long-term maintainability.
- Collaborate with BI, data warehouse, and project teams to ensure smooth delivery across the programme.

Your Skills and Experience
- Strong experience working with Azure Databricks, including SQL development, data modelling, and PySpark.
- Proven capability in SQL Server, including complex T-SQL logic, stored procedures, and performance optimisation.
- Hands-on experience with dbt for modular, testable data model development.
- Solid understanding of legacy BI environments, particularly SSRS.
- Knowledge of Tableau and how to optimise dashboards against cloud-based data sources.
- Ability to work collaboratively within a BI, data warehouse, or reporting team during large-scale migrations.

How to Apply
If this project aligns with your experience, please apply with your most recent CV.
Mar 19, 2026
Contractor
Senior Data Scientist
London, hybrid (1 to 3 days per week). Competitive salary between £70,000 and £80,000 plus bonus and benefits.

This is an exciting opportunity to build a data science capability from the ground up within a well-funded, high-impact organisation. You will join at a time when they are expanding their AI, automation, and analytics function, taking full ownership of predictive modelling projects that directly influence commercial decisions and operational efficiency.

The Company
They are a specialist organisation operating at the intersection of healthcare, supply chain, and strategic procurement. Their work ensures reliable access to essential products for millions of end users, using advanced analytics and supplier intelligence to secure cost-efficient, high-quality supply. With strong investment behind technology and AI, they are now evolving towards a more sophisticated, data-driven operating model.

The Role
- Lead the development of predictive ML models to optimise pricing, bidding strategies, and market behaviour.
- Build data-driven workflows that improve operational processes and automate manual tasks.
- Contribute to early-stage AI initiatives, including conversational interfaces and intelligent assistants.
- Shape project plans, define requirements, and communicate insights to senior stakeholders.
- Deliver end-to-end modelling, from scoping and feature design to deployment and iteration.
- Act as the most senior data science practitioner, setting foundations for how the function will scale.

Your Skills and Experience
- Strong commercial experience in machine learning, predictive modelling, and delivering production-ready solutions.
- Proficiency in Python and experience working with cloud environments, ideally Azure.
- Ability to work autonomously, make pragmatic technical decisions, and drive business outcomes.
- Comfortable collaborating with stakeholders across commercial, operations, and technology.
- Broad skill set across supervised learning, workflow automation, and hands-on engineering.
- STEM academic background with strong analytical foundations.

What They Offer
- Competitive salary plus bonus and full benefits.
- Hybrid working with a minimum of one office day per week.
- The chance to build a new data science capability in a growing team.
- High visibility with opportunities to shape strategy, tooling, and delivery standards.
- Future headcount growth, including adjacent roles such as ML Ops.

How to Apply
If this opportunity sounds like the right next step, please apply with your CV or email me at for more information.
Mar 19, 2026
Full time
All our office locations considered: Newbury, Reading, London (satellite), Liverpool or Glasgow; OR Croatia (Šibenik).

The Team
We're Intuita - a fast-growing consultancy that's making waves in both the consultancy and technology space! Now, as part of the wider FSP Consulting group, we continue with our ambitious growth plans for this year and beyond. We are looking for talented individuals to complement the team of experts we already have working across our business, becoming a pivotal part of our journey to not just meet, but continuously exceed, our client expectations!

The role:
We're excited to expand our team and bring on an Informatica Contractor to support our data integration projects. As part of our consultancy team, you will assist clients in optimising their data processes and ensuring smooth data flows across their platforms. Your work will be dynamic and will involve collaborating with various stakeholders to understand and address their data integration needs. You will use your expertise in Informatica to develop, implement, and maintain ETL processes that help meet our clients' objectives.

Your typical week will include:
- Designing, developing, and maintaining complex ETL processes using Informatica PowerCenter / IICS.
- Collaborating with business analysts to gather requirements and translate them into technical specifications.
- Optimising ETL performance, diagnosing issues, and ensuring high data quality.
- Identifying and addressing technical issues and challenges related to data integration.
- Participating in design discussions, providing insights for improvements and best practices.
- Documenting processes and solutions for future reference and knowledge sharing.

A bit about you
We're looking for an Informatica Contractor who possesses a unique blend of technical skills and problem-solving abilities. Here's what you need to bring to the table:
- Proven experience as an Informatica ETL specialist, ideally 5+ years.
- Solid understanding of data warehousing concepts and ETL processes.
- Proficiency in SQL and database management systems.
- Experience with data modelling and data integration strategies.
- Ability to work under tight deadlines and manage multiple projects.
- Excellent communication skills to interact with both technical and non-technical stakeholders.
- Detail-oriented with a focus on delivering high-quality work.

Desirable Skills
- Experience with Snowflake (data modelling, performance tuning, ELT patterns).
- Exposure to cloud platforms such as Azure, AWS, or GCP.
- Familiarity with Informatica Cloud (IICS) and modern data integration tools.
- Knowledge of Python, Databricks, or broader data engineering technologies.

Salary - it's important, we know! The salary will depend on your level of experience. Contractors can be considered; please let us know your day rate range when applying.

- (Really) flexible and hybrid working: most companies say they offer flexible working, but you've never experienced flexible working like it is at Intuita. We offer hybrid working as standard, plus flexible hours and part-time roles to fit your lifestyle. We also organise regular social events at each office to maintain our close-knit feel.
- Care for your health and wellbeing: we genuinely care about the wellbeing of our team. We offer comprehensive company-paid medical insurance, plus free therapy and mental health support.
- Incredible training and learning opportunities: our team is full of talented individuals who are genuine experts in what they do. You'll get to work alongside them and learn from the best, as well as boosting your skills and knowledge with our knowledge-sharing sessions, mentoring and company-paid certifications.
- Freedom and empowerment: we allow our consultants to actually be consultants, not just bodies. You're given the responsibility and accountability to really own problems and are encouraged to explore new directions and opportunities. There are no glass ceilings here and we don't have salary or promotion review dates - we reward people as and when we see great work!
- A supportive, friendly team: we work hard but enjoy working hard together. We're a diverse and inclusive team who enjoy silly Slack conversations and regular social events; our relatively flat structure means that everyone has an equal voice.

If you like the sound of Intuita, apply to join us today! Once you have submitted your application, we will be in touch. Please be aware that timing can vary depending on the volume of applications we receive for each role, and in some cases we may start to review applications before the closing date. If you require any support with your application, please contact
Mar 19, 2026
Full time
Hungry for a challenge? That's good, because at Just Eat (JET) we believe everything is possible - or, as we say, everything is on the table. We are a leading global online food delivery marketplace. Our tech ecosystem connects millions of active customers with hundreds of thousands of connected partners in countries across the globe. Our mission? To empower every food moment around the world, whether it's through customer service, coding or couriers.

The Opportunity
We're looking for a highly analytical and technically proficient Product Analyst to join our team. You will play a crucial role in driving the success of our Logistics product improvements and advancing our logistics-wide experimentation efforts. The ideal candidate has a strong background in data analysis, a passion for data-driven product improvement, and a proven ability to translate complex data into actionable product insights. This role requires hands-on expertise with various data tools and a solid understanding of data pipelines. Please note: this is a 12-month Fixed Term Contract.

These are some of the key ingredients to the role:
- Product Performance Analysis: Support analytics; define, monitor, and report on key product metrics to assess product health and the impact of releases.
- Experimentation: Design, set up, and analyze product experiments across various features and product surfaces. Develop hypotheses, determine appropriate statistical methodologies, and clearly communicate the results and recommendations to product managers and stakeholders.
- Data Manipulation & Reporting: Write, optimize, and execute complex SQL queries to extract and manipulate data from large, disparate databases. Utilize Python (e.g., pandas, NumPy, statistical libraries) for advanced data cleaning, statistical modeling, and analysis as needed. Develop and maintain dynamic dashboards and reports using data visualization tools like Looker and Tableau to monitor business and product performance in real time.
- Data Infrastructure & ETL: Demonstrate hands-on experience and comfort with understanding, troubleshooting, and occasionally contributing to ETL pipelines to ensure data quality, consistency, and availability for analysis. Collaborate with Backend and Data Engineering teams to define the necessary tracking and logging for new features.
- Strategic Insight: Partner closely with Product Managers, Engineers, Data Scientists and UX/UI Designers to influence the product roadmap based on quantitative data and analytical insights. Present findings, insights, and recommendations clearly.

What will you bring to the table?
- Strong experience as a Product Analyst or in data analytics.
- Expert-level proficiency in SQL for querying, aggregating, and analyzing large datasets.
- Hands-on experience with Python for data analysis, including statistical packages, is preferred.
- Proven background in experimentation - not just A/B testing but also other techniques - including hypothesis formulation, sample size calculation, statistical significance, and interpretation of results.
- Familiarity with cloud data platforms (e.g., AWS, GCP, Azure) and distributed processing frameworks.
- Proficiency in creating and managing dashboards, reports, and data models using Looker and/or Tableau.
- Familiarity and comfort with ETL processes and data warehousing concepts (e.g., how data flows, where to access reliable data).
- Excellent communication skills with the ability to clearly articulate complex analysis and trade-offs.

At JET, this is on the menu: Our teams forge connections internally and work with some of the best-known brands on the planet, giving us truly international impact in a dynamic environment. Fun, fast-paced and supportive, the JET culture is about movement, growth and celebrating every aspect of our JETers. Thanks to them we stay one step ahead of the competition.

Inclusion, Diversity & Belonging
No matter who you are, what you look like, who you love, or where you are from, you can find your place at Just Eat. We're committed to creating an inclusive culture, encouraging diversity of people and thinking, in which all employees feel they truly belong and can bring their most colourful selves to work every day.

What else is cooking? Want to know more about our JETers, culture or company? Have a look at our career site, where you can find people's stories, blogs, podcasts and more JET morsels. Are you ready to take your seat? Apply now!
Mar 19, 2026
Full time
Title: Senior Platform Engineer. Salary: Up to £85,000 DOE, plus bonus! Location: Bristol (2-3 days on site). iO Associates is exclusively working with a highly innovative, tech-driven organisation building next-generation, real-world products, and they're looking for a Senior Platform Engineer to help scale and secure their cloud platform. Click apply for full job details.
Mar 19, 2026
Full time
Performance Test Engineer - Public Sector
JMeter, stress testing, load testing, troubleshooting, cloud infrastructure, Dynatrace, performance testing

We have an exciting opportunity for an experienced and established Performance Test Engineer to join a well-known Public Sector department on a very interesting digital programme. For this role you must have:
- Exceptional knowledge of JMeter
- Extensive experience in performance testing
- Troubleshooting experience
- Experience of performance testing cloud infrastructure (e.g. Azure)
- Monitoring experience with Dynatrace, including its dashboard capability
- Strong stakeholder engagement skills

If this sounds like a role you would succeed in, please do not hesitate to get in touch. You can contact me direct at
Mar 19, 2026
Contractor
We have a current opportunity for a Head of Azure Platform Security on a permanent basis. The position will be based in London. For further information about this position, please apply.

Requirements
- Hands-on Azure cloud security architecture and implementation - Defender for Cloud, Policy-as-Code, RBAC, PIM, private endpoints, and secure landing zone design; AWS security experience also considered.
- Network security engineering: firewall policy design and lifecycle management, micro-segmentation, NSG/UDR/NVA architecture, hub-spoke topology, and perimeter defence for hybrid environments.
- WAF design, deployment, and operational tuning - Cloudflare, Azure Application Gateway, or equivalent; custom rule authoring and false-positive management at production scale.
- Network flow log analysis and intrusion detection engineering - building detection logic for lateral movement, beaconing, anomalous egress, and C2 patterns.
- SIEM engineering: detection rule authoring (KQL, SPL, or equivalent), log pipeline design, alert correlation, triage workflow - you write the rules, not just read the dashboard.
- Endpoint and desktop security: EDR deployment and tuning (Defender for Endpoint, CrowdStrike), Intune/Jamf device management, privileged access workstations, JIT/JEA models.
- API and application security: threat modelling (STRIDE/PASTA), OAuth 2.0/OIDC implementation review, secrets management (Key Vault, HashiCorp Vault), and secure SDLC integration.
- PKI, certificate lifecycle automation, identity federation, and SSO across hybrid cloud and on-premises environments.
- Security automation and IaC: Python, PowerShell, Terraform, Bicep, or Sentinel analytics rules - you codify controls, you do not document them.
- MITRE ATT&CK coverage mapping; threat hunting, adversary emulation, and proactive gap analysis against realistic TTPs.
- Cloud infrastructure - Azure preferred, AWS considered; IAM, managed services, automated and auditable deployment pipelines, secrets management.

Nice to Have
- Financial services, trading, or capital markets - operational security in a regulated, high-availability, zero-downtime-tolerance environment.
- Zero-trust architecture: BeyondCorp, Zscaler, or equivalent; conditional access policy design and implementation.
- DDoS mitigation, BGP security, and network resilience engineering for latency-sensitive financial infrastructure.
- ISO 27001, SOC 2, DORA, or equivalent - hands-on implementation, not just audit participation.
- Red team, adversarial simulation, or penetration testing programme design - experience on both sides of the exercise.

What We're Looking For
You are a builder first and a security engineer second - meaning you solve security problems by engineering better systems, not by writing longer policies. You find the gap before an attacker does because you have thought about how you would exploit the environment yourself. A security incident is not just a technical failure - it is a business one. You bring hands-on capability, genuine innovation, and the rigour to make this organisation measurably more secure every quarter.
To find out more about Huxley, please visit (url removed) Huxley, a trading division of SThree Partnership LLP is acting as an Employment Business in relation to this vacancy Registered office 8 Bishopsgate, London, EC2N 4BQ, United Kingdom Partnership Number OC(phone number removed) England and Wales
Mar 18, 2026
Full time
Job Title: Data Science Analyst
Location: Peterborough (hybrid working - 1 day in office)

We have an exciting opportunity at Markerstudy Group for a Data Science Analyst. You will be responsible for providing data science and analytics solutions to support our strategic roadmaps and customer propositions. Working with a variety of teams and stakeholders, you will need strong communication skills to help the business adopt and embed your findings. Our Group Data Science team is commercially focused and driven by creating real value from data. We are a growing team of around 15 data science professionals, working across every part of the commercial business to help identify, build, and scale data-driven opportunities. Sitting within the Group Data Science function, this role works closely with a wide range of internal and external stakeholders, delivering data products, insights, and analytical services across pricing, partnerships, IT, insurers, customer insight, digital, marketing, and contact-centre teams.

This is a great opportunity to accelerate your career in Data Science. We'll provide you with all the relevant technical training around our data assets and technology stack; in return, we ask that you are naturally inquisitive, passionate about problem solving and data, and view it as a vocation. You'll fit right into our team environment, where you'll learn and develop with like-minded peers. As part of your Data Science career, you will be expected to develop and understand a wide range of modern statistical, machine learning and data science methods. This knowledge will be applied to a wide range of business problems, adding demonstrable commercial value to the wider Markerstudy Group.

Key Roles and Responsibilities
Drive commercial benefit and solve business problems using data
Build strong, collaborative relationships with stakeholders across Markerstudy Group
Explore large structured/unstructured data from a variety of sources
Explore, understand and visualise data using leading tools and technology
Maintain our Data Products, Frameworks and Tools
Understand end-to-end Data Science / Data Product lifecycles
Work with other Data Scientists and analytics professionals on projects

What you can expect to be working on:
Within the first 3 months you will gain knowledge of our data assets by creating actionable business insight from our data warehouse to build a strong foundation. Expect to be hands-on using tools like Python and SQL, and working with large datasets within our Azure cloud platforms. By the end of your first year, you will be competent in Python programming, our tools and frameworks, and working on many of our machine learning projects. You will have started to create a network of stakeholders. By month 24 you will have had the opportunity to work on a wide variety of data products and understand their commercial applications, e.g. fraud, claims, debt, digital personalisation. You will be skilled in Python (including real-time coding) and SQL. Throughout, you will receive ongoing personal development with senior members of the team to advance your skills and help guide your future career progression.

Key Skills, Experience and Knowledge:
Passionate and curious about data science and data; love solving problems
Strong communication skills, with the ability to "story-tell" to stakeholders and customers, adapting for audiences of varying technical abilities
Strong numerical skills and a solid understanding of mathematical concepts and principles
Resilient; can work independently to deliver projects
Proactively share insights and results, and identify risks, with the rest of the team
Proficient at communicating results concisely, both verbally and in writing
Experience using an analytical tool/language (Python, R or equivalent) or SQL
Hands-on experience of data analysis and communicating findings
Hands-on experience with cloud platforms and tools, e.g. Azure, Azure Databricks, Azure Data Factory
Experience of using collaboration tools such as JIRA and Confluence
Experience of using version control software, e.g. Git
Experience of running and deploying Azure DevOps pipelines would be advantageous

Behaviours:
Works collaboratively and contributes positively as part of a team
Self-motivated with a drive to learn, develop and show ownership
Logical thinker with a professional and positive attitude
Passion to innovate and improve processes
Values differences and people from all walks of life, both colleagues and customers
Mar 18, 2026
Full time
Principal Data Engineer - Leeds (Hybrid)

Are you a data leader who wants to shape the future of a modern cloud data platform? We're looking for a Principal Data Engineer to drive technical direction, uplift engineering standards, and deliver high-quality Azure data solutions that make a real impact.

The Role
Build and deliver secure, scalable data pipelines using Azure (ADF, Databricks, Lakehouse).
Lead engineering standards and mentor senior engineers.
Turn complex requirements into automated, efficient data solutions.
Influence platform strategy, remove blockers, and ensure strong governance.
Optimise cloud costs through FinOps practices.
Identify opportunities to integrate AI/ML into data workflows.

What You'll Bring
Extensive experience building data engineering solutions on Azure.
Strong hands-on skills in ADF, Databricks, Python, T-SQL and PySpark.
Proven leadership and mentoring experience.
Solid understanding of data governance and security tooling (Purview / Unity Catalog).
Bonus: experience with CI/CD, automation, containerisation or Azure certifications.

Why Apply?
This is an exciting opportunity to shape a strategic cloud data platform, influence engineering direction, and work with modern technologies in a forward-thinking environment that encourages innovation and continuous improvement, all with the support of a hybrid working model and great benefits.

Salary: up to £82,000
Interested? Get in touch to learn more.
Mar 18, 2026
Full time
About Echobox:
We are a fast-growing, research-driven company building an artificial intelligence that helps online publishers overcome the challenges they face every day. Using novel AI, we are revolutionising the publishing industry and have a track record of building things that others have ruled out as impossible. Leading names from around the world rely on our product every day, including The Times, Le Monde, The Guardian, Vogue and many more.

About the Role:
You will work closely with our Product team and Data Scientists to define and execute on the future path for our products.

Key Responsibilities:
Work closely with senior engineers to design, build, and maintain back-end services and systems that support our AI-powered products, all whilst meeting launch deadlines.
Assist in the development and optimization of scalable, high-performance systems to support large volumes of data and machine learning models.
Write clean, efficient, and maintainable code while following best practices and coding standards.
Contribute to the deployment and integration of back-end services, ensuring that they work seamlessly with front-end and data science systems.
Help improve the architecture and functionality of our back-end systems, focusing on performance, reliability, and scalability.
Assist with debugging and troubleshooting technical issues, providing solutions to enhance system performance.
Collaborate with cross-functional teams to identify system needs and contribute to continuous improvements.
Continuously learn and apply new technologies and techniques to improve back-end infrastructure and processes.

Requirements:
A degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
3-5 years of experience in back-end development or a related field, with a focus on server-side technologies.
Strong knowledge of programming languages such as Python, Java, or Node.js, and experience with back-end frameworks.
Familiarity with databases (SQL and NoSQL) and experience in building and optimizing data storage solutions.
Understanding of RESTful APIs, microservices architecture, and best practices for back-end development.
Familiarity with version control systems such as Git.
Experience working with cloud platforms (AWS, GCP, or Azure) and deploying applications in a cloud environment.
A passion for solving problems with technology and working to deliver efficient, scalable solutions.
A proactive, results-driven mindset with the ability to work independently and learn quickly.

Preferred Requirements:
Experience in a fast-paced SaaS or tech environment, focusing on building and scaling back-end systems.
Understanding of security best practices for back-end development.
Knowledge of machine learning concepts and the ability to collaborate with data scientists to optimize AI models.
Strong communication skills and the ability to work effectively within cross-functional teams.

Benefits:
Our employees enjoy free breakfast every day, and coffee, drinks and snacks all day, every day. Every Monday, Wednesday and Friday, we order food for our weekly team lunches, where everyone gets together for an hour of fun. We have regular team events (dinner, bowling, karting, poker nights, board games etc.) for our team to get to know each other outside of work. Professionally, we host in-house conferences and an annual summer camp for all our global employees, who are flown to and hosted in London. We ensure that all our employees also get pension contributions, the latest tech, generous annual leave and an amazing office with a balcony overlooking Notting Hill.
Mar 18, 2026
Full time
Job Title: .NET Technical Specialist
Location: Doncaster (hybrid working)
Salary: £65,000 - £70,000 per annum, depending on experience
Job Type: Full Time, Permanent

The Role:
DB Cargo UK is currently recruiting for a .NET Technical Specialist to support the modernisation of our in-house application estate. This role will play a key part in driving the adoption of modern Microsoft technologies while ensuring our systems are secure, scalable and aligned with our long-term technology strategy. Working as part of our IT team, you will help design and deliver modern .NET solutions that support DB Cargo's operational systems and digital platforms. You will also act as a subject matter expert, supporting development teams, influencing architectural decisions and helping shape the future of our applications as we continue to evolve towards cloud-native technologies and modern DevSecOps practices. This role will suit someone who enjoys combining hands-on engineering with technical leadership, working collaboratively to deliver reliable and secure systems that support the business. This role is based at our Doncaster Head Office and offers hybrid working, with a salary of £65,000 - £70,000 depending on experience.

What will you be doing?
Designing and developing enterprise-grade .NET applications using modern frameworks including C#, ASP.NET Core and Web APIs.
Supporting the modernisation of legacy .NET applications, upgrading systems to supported frameworks and improving performance and reliability.
Contributing to the design of cloud-native solutions hosted in Microsoft Azure.
Supporting the implementation of API-first and event-driven architectures where appropriate.
Developing Infrastructure as Code deployments using Bicep to ensure consistent and repeatable environments.
Working with CI/CD pipelines in Azure DevOps to automate builds, testing and secure deployments.
Supporting monitoring, telemetry and observability through Azure Monitor and Application Insights.
Acting as a technical reviewer, ensuring solutions align with DB Cargo's architecture, security policies and development standards.
Working closely with the Technical Lead and Enterprise Architect to support long-term platform strategy and application roadmaps.
Mentoring developers and sharing knowledge across the engineering team.
Supporting the responsible adoption of AI-assisted engineering tools to improve development efficiency and quality.

What are we looking for?
Strong experience developing applications using .NET technologies such as C#, ASP.NET Core and Entity Framework.
Experience working with Azure cloud services such as App Services, Azure Functions, Azure SQL, API Management, Service Bus and Key Vault.
Experience modernising or migrating legacy .NET applications to newer frameworks.
Experience working with CI/CD pipelines and DevOps practices, ideally within Azure DevOps.
Knowledge of secure development practices and DevSecOps principles.
Experience implementing monitoring and observability using tools such as Azure Monitor or Application Insights.
Experience working collaboratively with architects, engineers and product teams to deliver technical solutions.
Strong problem-solving skills and the ability to mentor or support other developers.
BSc (Hons) in Computer Science or equivalent experience.

What matters to you?
Here at DB Cargo we offer a range of benefits as part of your employment. These include:
A salary of between £65,000 - £70,000 depending on experience, based on a 37-hour contract per week
25 days annual leave plus bank holidays
Hybrid working between our locations and your home; this is mutually agreeable between the business and employee
Bonus scheme - non-contractual, dependent on business and personal performance
Defined Contribution pension scheme with generous employer contribution - up to 10% employer contribution
Free on-site parking
EV charging at selected sites
Health Cash Plan available
Cycle to Work scheme
Manager-led recognition programme for employees who live our values
Access to DB Learning World
Annual pay reviews
Dedicated continuous professional development: depending on your role we have specialist training programmes, apprenticeships, development plans, courses and qualifications we can support you through
Access to our employee benefits portal, where you can take advantage of discounts for a variety of shops and services as well as accessing our wellbeing content
We take the health and wellbeing of all employees seriously and provide access to an Employee Assistance Programme

Please click APPLY to send your CV for this role.
Candidates with experience of: .NET Developer, C# Developer, VB.NET Developer, SQL Developer, .NET Technical Specialist, Junior Developer, Mid-Level Developer, Software Engineer, Database Developer, and Backend Developer may be suitable for this role.
Mar 18, 2026
Full time
Job Title: .NET Technical Specialist
Location: Doncaster (hybrid working)
Salary: £65,000 - £70,000 per annum, depending on experience
Job Type: Full Time, Permanent

The Role: DB Cargo UK is currently recruiting for a .NET Technical Specialist to support the modernisation of our in-house application estate. This role will play a key part in driving the adoption of modern Microsoft technologies while ensuring our systems are secure, scalable and aligned with our long-term technology strategy. Working as part of our IT team, you will help design and deliver modern .NET solutions that support DB Cargo's operational systems and digital platforms. You will also act as a subject matter expert, supporting development teams, influencing architectural decisions and helping shape the future of our applications as we continue to evolve towards cloud-native technologies and modern DevSecOps practices. This role will suit someone who enjoys combining hands-on engineering with technical leadership, working collaboratively to deliver reliable and secure systems that support the business. This role is based at our Doncaster Head Office and offers hybrid working, with a salary of £65,000 - £70,000 depending on experience.

What will you be doing?
Designing and developing enterprise-grade .NET applications using modern frameworks including C#, ASP.NET Core and Web APIs.
Supporting the modernisation of legacy .NET applications, upgrading systems to supported frameworks and improving performance and reliability.
Contributing to the design of cloud-native solutions hosted in Microsoft Azure.
Supporting the implementation of API-first and event-driven architectures where appropriate.
Developing Infrastructure as Code deployments using Bicep to ensure consistent and repeatable environments.
Working with CI/CD pipelines in Azure DevOps to automate builds, testing and secure deployments.
Supporting monitoring, telemetry and observability through Azure Monitor and Application Insights.
Acting as a technical reviewer, ensuring solutions align with DB Cargo's architecture, security policies and development standards.
Working closely with the Technical Lead and Enterprise Architect to support long-term platform strategy and application roadmaps.
Mentoring developers and sharing knowledge across the engineering team.
Supporting the responsible adoption of AI-assisted engineering tools to improve development efficiency and quality.

What are we looking for?
Strong experience developing applications using .NET technologies such as C#, ASP.NET Core and Entity Framework.
Experience working with Azure cloud services such as App Services, Azure Functions, Azure SQL, API Management, Service Bus and Key Vault.
Experience modernising or migrating legacy .NET applications to newer frameworks.
Experience working with CI/CD pipelines and DevOps practices, ideally within Azure DevOps.
Knowledge of secure development practices and DevSecOps principles.
Experience implementing monitoring and observability using tools such as Azure Monitor or Application Insights.
Experience working collaboratively with architects, engineers and product teams to deliver technical solutions.
Strong problem-solving skills and the ability to mentor or support other developers.
BSc (Hons) in Computer Science or equivalent experience.

What matters to you?
Here at DB Cargo we offer a range of benefits as part of your employment. These include:
A salary of between £65,000 - £70,000 depending on experience, based on a 37-hour-per-week contract.
25 days annual leave plus bank holidays.
Hybrid working between our locations and your home, mutually agreeable between the business and employee.
Bonus scheme - non-contractual, dependent on business and personal performance.
Defined Contribution pension scheme with generous employer contribution - up to 10% employer contribution.
Free on-site parking.
EV charging at selected sites.
Health Cash Plan available.
Cycle to Work scheme.
Manager-led recognition programme for employees who live our values.
Access to DB Learning World.
Annual pay reviews.
We are dedicated to your continuous professional development. Depending on your role, we have specialist training programmes, apprenticeships, development plans, courses and qualifications we can support you through. You will have access to our employee benefits portal, where you can take advantage of discounts at a variety of shops and services as well as our wellbeing content. We take the health and wellbeing of all employees seriously and provide access to an Employee Assistance Programme.
Please click APPLY to send your CV for this role. Candidates with experience of: .NET Developer, C# Developer, VB.NET Developer, SQL Developer, .NET Technical Specialist, Junior Developer, Mid-Level Developer, Software Engineer, Database Developer, and Backend Developer may be suitable for this role.
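The CI/CD and Infrastructure as Code duties described in this role could be illustrated with a minimal Azure DevOps pipeline sketch along the lines below. This is an assumption-laden example, not part of the advert: the service connection name, resource group and Bicep file path are all hypothetical placeholders.

```yaml
# Minimal azure-pipelines.yml sketch: build and test a .NET solution, then
# deploy Bicep-defined infrastructure for repeatable environments.
# AzureServiceConnection, rg-app-dev and infra/main.bicep are illustrative
# assumptions, not values taken from the job advert.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

stages:
- stage: Build
  jobs:
  - job: BuildAndTest
    steps:
    - task: UseDotNet@2
      inputs:
        packageType: sdk
        version: 8.x
    - script: dotnet build --configuration Release
      displayName: Build solution
    - script: dotnet test --configuration Release --no-build
      displayName: Run unit tests

- stage: Deploy
  dependsOn: Build
  jobs:
  - job: DeployInfra
    steps:
    # Deploy the environment declared in Bicep via the Azure CLI
    - task: AzureCLI@2
      inputs:
        azureSubscription: AzureServiceConnection  # hypothetical service connection
        scriptType: bash
        scriptLocation: inlineScript
        inlineScript: |
          az deployment group create \
            --resource-group rg-app-dev \
            --template-file infra/main.bicep
```

Gating the Deploy stage on Build means infrastructure changes only ship once the solution compiles and its tests pass, which is the usual shape of a secure-deployment pipeline of this kind.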
Mar 18, 2026
Full time
Principal Site Reliability Engineer (IT)
Wokingham, United Kingdom
Posted 7 months ago
Tech Stack: GCP, Go, Azure, Reliability, Amazon AWS, Java, Python
Responsibilities: Build and maintain large-scale distributed systems. Strong proficiency in programming languages such as Golang, Java, or Python. Extensive experience with cloud infrastructure providers (AWS, Azure, or GCP). Deep knowledge of container orchestration platforms.
Compensation: Competitive
Role Type: Contract
Visa Sponsorship: Not provided
Benefits & Perks: Competitive benefits and perks may be disclosed during the interview process.
Mar 18, 2026
Contractor
Platform Engineer - Active SC, Databricks, Trivy, Azure DevOps
Up to £510 per day - Inside IR35
Remote, 6 months
My client is an instantly recognisable consultancy that urgently requires a Platform Engineer with active SC Clearance for an end client within the public sector.
Key Requirements:
Proven commercial experience working as a Platform / DevOps Engineer within the public sector.
Active SC Clearance.
Strong commercial experience with Terraform for IaC, and with Databricks.
Proven track record configuring and managing Azure DevOps CI/CD pipelines.
Deep understanding of Azure cloud services and components.
Practical experience with Docker containerisation.
Knowledge of security scanning tooling (Trivy or similar).
Scripting proficiency in Bash (Python is desirable).
Solid understanding of Git-based version control, specifically within Azure DevOps.
Nice to have: Immediate availability.
Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers which can be found at (url removed)
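The Docker and Trivy requirements in this advert could be combined in an Azure DevOps pipeline fragment like the sketch below: build an image, then fail the run on serious findings. This is an illustrative assumption; the image name and tag scheme are hypothetical placeholders.

```yaml
# Illustrative Azure DevOps pipeline fragment: build a Docker image, then
# scan it with Trivy and fail on HIGH/CRITICAL vulnerabilities.
# "myapp" is a hypothetical image name, not from the advert.
steps:
- script: docker build -t myapp:$(Build.BuildId) .
  displayName: Build container image
- script: |
    # Install Trivy from the official install script, then scan the image;
    # --exit-code 1 makes the step (and so the pipeline) fail on findings
    curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b ./bin
    ./bin/trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:$(Build.BuildId)
  displayName: Trivy vulnerability scan
```

Scanning the freshly built image inside the pipeline, rather than after deployment, is the usual way such security scanning is wired into CI/CD.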
Mar 18, 2026
Full time
Data Platform Engineer
Location: Milton Keynes (Hybrid - 1-2 days onsite)
Salary: Up to £50,000

The Role
We are working with a large UK enterprise organisation investing heavily in its cloud data capabilities. They are seeking a Data Platform Engineer to support and evolve their Microsoft Fabric-based data platform. This role sits within a central Group Technology function and will focus on the ongoing design, optimisation, and operational management of a modern cloud data platform built on Microsoft technologies. You will work across platform configuration, capacity planning, reliability, and observability, ensuring the data environment remains secure, scalable, and high-performing. This is a hands-on engineering role combining platform improvement with operational stability and incident ownership.

Key Responsibilities
Support the design and continuous improvement of an enterprise Microsoft Fabric data platform
Manage and optimise Fabric workloads including lakehouse, warehouse, and associated services
Administer and maintain SQL Server environments supporting the wider data ecosystem
Monitor platform performance, capacity utilisation, and cost efficiency
Conduct root cause analysis across platform incidents and implement preventative improvements
Contribute to Agile-based delivery of platform enhancements
Implement and support Platform as Code / Infrastructure as Code approaches
Enhance technical and financial observability across cloud data services
Collaborate with internal engineering teams and third-party partners to ensure platform resilience
Participate in operational support, including occasional out-of-hours coverage

Essential Experience
Proven experience in a Data Platform, Cloud Data Engineering, or Data Infrastructure role
Hands-on experience with Microsoft Fabric (including platform configuration and workload management)
Strong experience managing and optimising SQL Server environments
Experience implementing and supporting cloud-based data platforms in Microsoft Azure
Experience working within SLA-driven environments with structured incident, change, and problem management
Understanding of capacity planning, performance tuning, and operational monitoring
Ability to work across both legacy and modern cloud-based data environments

Desirable Experience
Experience with GitHub, GitHub Actions, Terraform, and Platform/Infrastructure as Code
Exposure to cloud security best practices including RBAC, identity management, and Zero Trust principles
Experience with alerting, monitoring, and observability tooling
Cloud cost monitoring and optimisation experience
STEM degree or equivalent practical experience

Candidate Profile
This role is well suited to a mid-level engineer with strong Microsoft data platform exposure who is looking to deepen their expertise in Microsoft Fabric within an enterprise environment. You will enjoy balancing platform engineering with operational ownership and continuous improvement.
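The GitHub Actions and Terraform experience mentioned in this advert could take a shape like the following workflow sketch, which runs a Terraform plan on pull requests that touch infrastructure code. The workflow name, trigger path and working directory are hypothetical assumptions, not details from the advert.

```yaml
# Illustrative GitHub Actions workflow: run terraform plan on pull requests
# that change files under infra/. Names and paths are assumptions.
name: terraform-plan
on:
  pull_request:
    paths: ['infra/**']

jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - uses: hashicorp/setup-terraform@v3
    - name: Init and plan
      working-directory: infra
      run: |
        terraform init -input=false
        terraform plan -input=false
```

Running a plan (rather than an apply) on pull requests lets reviewers see the proposed infrastructure changes before anything is provisioned, which is the usual Platform-as-Code review pattern this kind of role refers to.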