• Home
  • Search Jobs
  • Register CV
  • Post a Job
  • Employer Pricing
  • Contact Us
Sorry, that job is no longer available. Here are some results that may be similar to the job you were looking for.

793 jobs found

Current Search: azure data engineer
System Engineer
InfinityQuest Ltd, Sheffield, Yorkshire
Experience within an enterprise-scale organisation, including hands-on experience of complex data centre environments and work in a similar role (i.e. DevOps Engineer, Cloud Engineer, Security Engineer), is mandatory. Expert-level knowledge of one or more leading cloud platforms, including Microsoft Azure, Amazon Web Services, Google Cloud Platform and Alibaba Cloud. Click apply for full job details.
Jan 09, 2026
Contractor
Platform Engineer
Finshore Partners
We're hiring a hands-on Platform Engineer to join a fast-moving investment firm, supporting and enhancing their cloud-based data and infrastructure platforms. Reporting to the Head of Technology, you'll work across Microsoft Azure, Microsoft Fabric, and Microsoft 365. This role blends data engineering, cloud operations, and infrastructure management, working closely with internal teams across the business. Click apply for full job details.
Jan 09, 2026
Full time
Senior Data Science Engineer
Hippo Digital Limited, Leeds, Yorkshire
About The Role: Hippo is recruiting for a Senior Data Science Engineer to join our growing consultancy. This role sits within our wider Data Team - you'll be part of a like-minded, talented and passionate group of Data Science Engineers, Engineers, Analysts & Architects who are delivering awesome things for our clients. We are looking for someone who is inquisitive, excels at solving complex data-centric problems, is ready to explore and visualise data, and brings their experience and knowledge of the importance of utilising data to aid commercial decision making. Our solutions empower our customers to build and support secure, scalable, and well-engineered systems beyond traditional boundaries. We leverage deep data insights and continuous innovation to deliver awesome platforms that allow our customers to understand and get the most from their data and digital services. Our Senior Data Science Engineers play a key role in this. We are looking for someone to bring their experience from a commercial environment to this role - you will be working as part of the wider Data Engineering Team and will be involved in elements of engineering that sit outside of traditional Data Science. Please note, we are looking for candidates who are looking for growth at this level (senior); therefore the advertised salary band is the lower end of our full banding for this level of position, allowing for progression in the role.
Requirements of the Role:
  • Experience in developing and expanding Data Science capabilities
  • Deliver business impact across all areas
  • Solid experience working with concepts relevant to data ethics and privacy
  • Understand product delivery from requirements to desirable business outcomes
  • Develop complex solutions using a range of data science techniques, whilst understanding any ethical considerations
  • Understand the role and benefits of data science within the organisation
  • Support capability building within the organisation
  • Collaborate with others to develop data science solutions and outputs supporting the organisation
  • Prepare and manipulate data, and perform complex analytics
  • Present and communicate effectively
Skills and experience that you need: We need people who are open to new technologies, quick to adapt, and quick to learn. If you don't have one of the following, please apply and we can discuss in more detail.
  • Strong experience in Machine Learning
  • Experience in at least one core coding language (Python, R, Java, etc.)
  • Experience in relevant Data Manipulation, Machine Learning and Statistical Analysis coding packages (e.g. in Python: NumPy, scikit-learn, Pandas, Matplotlib, etc.)
  • Strong skills in data exploration, cleansing, modelling and presentation
  • Strong experience in testing data models and Machine Learning models
  • Strong experience in data presentation and visualisation
Desirable Technical Experience:
  • At least one cloud provider (AWS, Azure or GCP)
  • Databases such as SQL / NoSQL
  • End-to-end data pipelines
  • Source Control and Version Control (e.g. Git)
What makes us great: As well as a competitive salary, which we're transparent about from the outset, you can also expect a range of benefits:
  • Contributory pension scheme (Hippo 6% with employee contributions of 2%)
  • 25 days holiday plus UK public holidays
  • Perkbox access for a wide range of discounts
  • Critical illness cover
  • Life assurance and death in service cover
  • Volunteer days
  • Cycle to work scheme for the avid cyclists
  • Salary sacrifice electric vehicles scheme
  • Season ticket loans
  • Financial and general wellbeing sessions
  • Flexible benefits scheme with options of: private health cover, private dental cover, additional company pension contributions, additional holidays (up to an extra 2 days), wellbeing contribution, charity contributions, tree planting
Diversity, Inclusion and Belonging at Hippo: At Hippo, we're dedicated to creating a diverse, equitable and inclusive workplace that works for everyone. We understand that having a diverse team unlocks our capacity for innovation, creativity and problem solving. Only by building a community of diverse perspectives, cultures and socio-economic backgrounds can we create an environment where all can contribute and thrive. We actively encourage applications from underrepresented groups, including women, ethnic minorities, LGBTQ+ people, neurodivergent people and people with disabilities. We are committed to providing an inclusive and accessible recruitment process that reflects our workplace culture. We are a registered Disability Confident Employer, Mindful Employer, Endometriosis Friendly Employer and a member of the Armed Forces Covenant. Hippo continually strives to remove barriers, provide accommodations and offer reasonable adjustments to ensure equity throughout our practices.
Hi, we're Hippo. At Hippo, we design with empathy and build for impact. We do this by combining data-informed evidence, human-centred design and software engineering.
We're a digital services partner that is genuinely invested in helping our clients thrive as modern organisations. Our delivery methodology is truly agile, from concept to reality, supporting innovation and continuous improvement to achieve your desired outcomes. We firmly believe that technology should serve humanity, not the other way around. We take a human-centred approach to everything we do because we understand that complex problems require a service design approach. This means understanding how users behave and ensuring our solutions work for them in the real world. Our combination of data, design, and engineering delivers bespoke digital services that make a positive and meaningful impact on organisations and society. We're confident in our abilities, authentic in our approach, and passionate about what we do. If you're looking for a digital services partner that can deliver real results, let us help you build for the future and make a lasting impact. Hippo locations: We are headquartered in Leeds and have offices across the UK in Glasgow, Manchester, Birmingham, London and Bristol. We're on the lookout for top talent nationwide, but you need to be located within reasonable travelling distance of one of our offices, which will be your contracted office location. Given the dynamic nature of a consulting business, you may be required to work on site at a Hippo office or at an in/out-of-town client location for a number of days per week (client dependent), and therefore candidates will need to be open/flexible to travel. Plus, we offer a generous relocation support package of up to £8k (please ask for terms and conditions) to help make your move a smooth one.
Jan 09, 2026
Full time
Product Manager II - FinOps
The Association of Technology, Management and Applied Engineering
Job description: The Onyx Research Data Tech organization represents a major investment by GSK R&D and Digital & Tech, designed to deliver a step-change in our ability to leverage data, knowledge, and prediction to find new medicines. We are a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data / metadata / knowledge platforms, and AI/ML and analysis platforms, all geared toward: building a next-generation data experience for GSK's scientists, engineers, and decision makers, increasing productivity and reducing time spent on "data mechanics"; providing best-in-class AI/ML and data analysis environments to accelerate our predictive capabilities and attract top-tier talent; and aggressively engineering our data at scale to unlock the value of our combined data assets and predictions in real time. Onyx Product Management is at the heart of our mission, ensuring that everything from our infrastructure, to platforms, to end-user-facing data assets and environments is designed to maximize our impact on R&D. The Product Management team partners with R&D stakeholders and Onyx leadership to develop a strategic roadmap for all customer-facing aspects of Onyx, including data assets, ontology, Knowledge Graph / semantic search, data / computing / analysis platforms, and data-powered applications. We are seeking an experienced Product Manager II who will be accountable for designing and delivering the roadmap for FinOps for all Onyx cloud platforms, products, and services. As a Product Manager II for FinOps Products, you will play a crucial role in optimizing our cloud spend and enhancing financial transparency across Onyx's platforms and services.
Working closely with senior product leaders, engineering, finance, R&D leaders, and cloud operations teams, you will contribute to the development and delivery of products and features that empower our engineers, developers, scientists, and finance stakeholders to manage, forecast, and optimize cloud costs effectively. This is an exciting opportunity for a product professional passionate about cloud economics and building solutions that drive financial accountability and efficiency at scale.
In this role you will:
  • Product Feature Ownership: Own the full product lifecycle for specific features or components within our FinOps product suite, from ideation and requirements gathering to launch, adoption, and iteration.
  • Cloud Cost Visibility: Drive the development of tools and dashboards that provide clear, accurate, and granular visibility into cloud spending across Onyx platforms and services, enabling teams to understand their consumption patterns.
  • Cost Optimization Enablement: Identify, define, and deliver capabilities that empower engineering teams to make cost-efficient choices, including recommendations for resource rightsizing, reserved instance/savings plan management, and identification of idle or underutilized resources.
  • Financial Governance Support: Assist in implementing and monitoring cloud financial governance policies and guardrails, including budget alerts, spend limits, and chargeback/showback mechanisms.
  • User Research & Requirements: Conduct in-depth user research with engineers, developers, data scientists, and finance teams to deeply understand their challenges and needs related to cloud cost management. Translate these insights into detailed product requirements and user stories.
  • Data Analysis & Reporting: Leverage cloud billing data and other financial inputs to analyze spending trends, identify cost anomalies, and support the creation of actionable financial reports and forecasts.
  • Agile Product Development: Actively participate in an agile development environment, collaborating daily with engineering, UX, and QA teams to ensure successful and timely delivery of high-quality product releases.
  • Cross-Functional Collaboration: Partner effectively with Cloud Platform Engineering, Data Platform Engineering, Finance, and R&D teams to ensure product features meet business needs, integrate seamlessly, and drive desired financial outcomes.
  • Documentation & Training: Create clear product documentation, user guides, and training materials to facilitate product adoption and ensure users can effectively leverage FinOps tools and insights.
Qualifications & Skills:
  • Bachelor's degree in a technical or scientific field, with a focus on computational science, Engineering, Finance, Business or a related discipline.
  • Experience in product management, cloud financial management (FinOps), or a related role such as a Cloud Engineer with a strong cost optimization focus.
  • Demonstrated understanding of cloud billing models, cost drivers, and service offerings across major cloud providers (e.g., AWS, GCP, Azure).
  • Experience with data analysis and reporting tools to extract insights from financial or operational data.
  • Familiarity with agile product development methodologies.
Preferred Qualifications & Skills:
  • Master's degree or MBA.
  • FinOps Certified Practitioner (FOCP) or equivalent certification.
  • Direct experience with FinOps platforms and tools (e.g., Cloudability, CloudHealth, or native cloud cost management tools).
  • Experience contributing to products that support large-scale, multi-cloud environments.
  • Understanding of enterprise financial processes, budgeting, forecasting, and cost allocation.
  • Strong communication and stakeholder management skills, with the ability to articulate technical and financial concepts to diverse audiences.
  • Prior experience in the life sciences or biopharma industry, understanding the unique compute and data needs of scientific research.
Closing Date for Applications: Tuesday 6th January 2026 (COB). Please note: as we approach the holiday season, our recruitment team and hiring managers will have limited availability between now and early January. We encourage you to apply and will review all applications; however, response times may be longer than usual, and interviews may be scheduled after the New Year. We appreciate your understanding and look forward to connecting soon! Please take a copy of the Job Description, as this will not be available post closure of the advert. When applying for this role, please use the 'cover letter' of the online application or your CV to describe how you meet the competencies for this role, as outlined in the job requirements above. The information that you have provided in your cover letter and CV will be used to assess your application. During the course of your application, you will be requested to complete voluntary information which will be used in monitoring the effectiveness of our equality and diversity policies. Your information will be treated as confidential and will not be used in any part of the selection process. If you require a reasonable adjustment to the application / selection process to enable you to demonstrate your ability to perform the job requirements, please contact . This will help us to understand any modifications we may need to make to support you throughout our selection process. Why GSK? Uniting science, technology and talent to get ahead of disease together. GSK is a global biopharma company with a purpose to unite science, technology and talent to get ahead of disease together. We aim to positively impact the health of 2.5 billion people by the end of the decade, as a successful, growing company where people can thrive. We get ahead of disease by preventing and treating it with innovation in specialty medicines and vaccines.
We focus on four therapeutic areas: respiratory, immunology and inflammation; oncology; HIV; and infectious diseases to impact health at scale. People and patients around the world count on the medicines and vaccines we make, so we're committed to creating an environment where our people can thrive and focus on what matters most. Our culture of being ambitious for patients, accountable for impact and doing the right thing is the foundation for how, together, we deliver for patients, shareholders and our people. GSK is an Equal Opportunity Employer. This ensures that all qualified applicants will receive equal consideration for employment without regard to race, color, religion, sex (including pregnancy, gender identity, and sexual orientation), parental status, national origin, age, disability, genetic information (including family medical history), military service or any basis prohibited under federal, state or local law. We believe in an agile working culture for all our roles. If flexibility is important to you, we encourage you to explore with our hiring team what the opportunities are. Should you require any adjustments to our process to assist you in demonstrating your strengths and capabilities, contact us on or . The helpline is available from 8.30 am to 12.00 noon, Monday to Friday; during bank holidays these times and days may vary. Important notice to employment businesses/agencies: GSK does not accept referrals from employment businesses and/or employment agencies in respect of the vacancies posted on this site. All employment businesses/agencies are required to contact GSK's commercial and general procurement/human resources department to obtain prior written authorization before referring any candidates to GSK. Click apply for full job details.
Jan 09, 2026
Full time
Lead Data Engineer (Azure)
Spectrum It Recruitment Limited Basingstoke, Hampshire
Lead Azure Data Engineer - Cloud Migration & Systems Integration I am recruiting for a rapidly growing, multi-site healthcare organisation in the middle of a major digital transformation. As their Data & Business Intelligence function continues to expand, they require a hands-on Lead Data Engineer to take ownership of the Azure migration and enterprise integration strategy click apply for full job details
Jan 09, 2026
Full time
Software Development Engineer II
Expedia, Inc. City, London
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.
Why Join Us?
To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.
Software Development Engineer
Introduction to team
Private Label Solutions (PLS) is the B2B arm of Expedia Group. We open up our supply and innovative technology to businesses looking to take on the world of travel. These businesses, sometimes referred to as our 'demand partners', include global financial institutions (e.g. AMEX), corporate managed travel, offline travel agents (e.g. Flight Centre), global travel suppliers (e.g. Delta) and many more.
In this role, you will:
As an engineer in our team, you'll have the opportunity to make a real impact by contributing to systems that operate at global scale.
You'll work on high-throughput, low-latency APIs where availability, performance, and resilience are critical - powering billions of travel transactions every day - in particular:
Write clean, maintainable, and well-tested code using Kotlin, Java, and TypeScript
Work across the full stack - primarily on backend services, APIs, and data flows, with the option to contribute to frontend web applications as needed
Join a collaborative Agile team involved in all phases of development - from ideation and design to deployment and production support
Take part in shaping technical direction through code reviews, mentorship, and architectural discussions
Help continuously improve our systems for scalability, performance, observability, and fault tolerance
Partner with product and business stakeholders to build solutions that solve real customer problems at scale
Experience and qualifications:
We're looking for curious, creative engineers who are passionate about building great products and eager to grow. You don't need to know everything on day one - if you're excited about the role and ready to learn, we'd love to hear from you.
Some experience or interest in the following areas will be helpful:
Programming with modern languages such as Java, Kotlin, JavaScript, or similar
Working with frontend frameworks like React, Vue, or Angular
Understanding of backend services, RESTful APIs, and how systems integrate
Exposure to cloud platforms like AWS, GCP, or Azure
Familiarity with SQL or NoSQL databases
Knowledge of computer science fundamentals (data structures, algorithms, system design)
Writing clean, maintainable code and an interest in CI/CD, testing, or DevOps practices
Strong communication and teamwork skills
A growth mindset - the desire to keep learning and improving, both personally and technically
As a Team, We Love To:
Build reliable, scalable systems that empower millions of travellers
Keep our codebase clean and architecture elegant
Learn from each other, share knowledge, and grow together
Celebrate wins, reflect on misses, and always aim higher
Use our travel perks to explore the world
What You'll Get
We'll take your career on a journey that's right for you, while recognising and rewarding your contributions.
Competitive salary with clear growth pathways
Opportunities to serve as a domain expert in cross-functional teams
Access to global tech conferences and workshops
Travel discounts to help you tick off your bucket list
A truly inclusive culture that values your background and ideas
Accommodation requests
If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named a Best Place to Work on Glassdoor in 2024 and to be recognized for our award-winning culture by organizations like Forbes, TIME, Disability:IN, and others.
Expedia Group's family of brands includes: Brand Expedia Expedia Partner Solutions, Vrbo , trivago , Orbitz , Travelocity , Hotwire , Wotif , ebookers , CheapTickets , Expedia Group Media Solutions, Expedia Local Expert and Expedia Cruises . 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: -50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group's Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you're confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain The official website to find and apply for job openings at Expedia Group is . Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.
Jan 09, 2026
Full time
System Engineer
N Consulting Limited City, Sheffield
The role requires an experienced systems engineer with strong technical leadership and collaboration skills. The ideal candidate will have significant experience in cloud platform management and infrastructure delivery pipelines (e.g. Azure, AWS, GCP; scripting in Bash, PowerShell, Python, Terraform, etc.).
In this role you will:
Act as a Technical SME, designing and developing innovative automated solutions to complex problems utilising the cloud environments.
Design and support custom-built applications within the HSBC Azure environment, ensuring secure, reliable and high-performing deployments.
Build and manage Azure infrastructure, including Virtual Machines, VM images, Virtual Networks (VNets), subnets, private endpoints and Azure Storage.
Develop and deploy Python functions within an Azure Functions App.
Develop Infrastructure-as-Code (IaC) such as ARM templates, Bicep, or Terraform.
Support CI/CD practices through deployment automation and version-controlled infrastructure in Azure DevOps.
Integrate monitoring, logging and diagnostics for custom applications using Azure Monitor, Application Insights and Log Analytics.
Integrate with AI-related Azure services such as OpenAI and contribute to integration strategies involving LLMs.
Ensure that custom-built applications are built and maintained in line with HSBC standards, governance and controls, ensuring compliance with SDLC & DEPL controls, AI Governance and legal & regulatory requirements.
Support and extend an existing architecture in close partnership with the principal architect and core development team.
Produce well-documented, maintainable infrastructure configurations and effectively communicate implementation details to engineers and stakeholders.
Work within an evolving technical landscape and contribute to the refinement and evolution of architecture and infrastructure decisions.
Utilise strong problem-solving skills, with the ability to investigate issues, troubleshoot deployment challenges, and propose scalable and secure solutions.
Promote a "self-critical" culture of continuous assessment and improvement, whereby weaknesses in the bank's control plane (people, process and technology) are brought to light and addressed in an effective and timely manner.
Support engagement of HSBC Global Businesses and Functions to drive a global uplift in cyber-security awareness and help to evangelise HSBC Cybersecurity efforts and successes.
To be successful in this role you should meet the following requirements:
Experience within an enterprise-scale organisation, including hands-on experience of complex data centre environments and work in a similar role (i.e. DevOps Engineer, Cloud Engineer, Security Engineer), is mandatory.
Expert-level knowledge of one or more leading cloud platforms, including Microsoft Azure, Amazon Web Services, Google Cloud Platform and Alibaba Cloud.
Expert-level knowledge and proven experience managing Azure App Services, Azure Virtual Machines, and Azure Storage solutions.
Hands-on experience in one or more programming or scripting languages (e.g. Python, PowerShell, Bash, Terraform).
Demonstrated experience of building and maintaining CI/CD pipelines to support efficient software delivery.
Jan 09, 2026
Full time
InvitISE Ltd
Senior Security Engineer (Defender, PurView, Sentinel)
InvitISE Ltd City, London
We're looking for a Senior Security Engineer for our client in the data sector, based in London, on an initial 3 to 6 month contract paying up to £500 per day Outside IR35. This role offers hybrid working with an expectation to attend the office 3 days per week. You will focus on hands-on remediation across Azure and endpoint environments, improving Defender for Cloud findings, closing vulnerabilities click apply for full job details
Jan 09, 2026
Contractor
Core Engineer - DeFi - London (F/M/D)
Flowdesk City, London
Overview
Flowdesk's mission is to build a global financial institution for digital assets, one designed from the ground up for market integrity and efficiency. To achieve this in a rapidly evolving market, we apply a disciplined, first-principles approach to everything we do. This approach is embedded in our core services, from institutional liquidity provision, trading solutions and OTC execution to our comprehensive treasury management offerings. This is how we cut through the noise and build robust and scalable systems across all our business lines. We seek individuals who are driven by this systematic approach. Joining Flowdesk means you will be a key contributor in building and scaling a more transparent and efficient financial markets infrastructure. Flowdesk is scaling fast, and behind every world-class trading operation is a core engineering team who provide the single source of truth and a scalable platform for business units to leverage. We're hiring a Core Engineer to join the team and help lead the design and delivery of a new internal platform.
Mission
Be a key player in the newly formed Core Engineering team at Flowdesk to build a world-class back-office system which will serve as the backbone for Flowdesk's systems. Work alongside a seasoned team with deep product, buy-side, and sell-side experience in both Crypto and TradFi.
Tasks
DeFi Integrations - Collect all necessary on-chain data and integrate with decentralized protocols to power an accurate, real-time back-office system. You will be responsible for ensuring management has a comprehensive, up-to-date view of all on-chain assets. This includes owning the full process from requirements analysis, task specification and implementation to post-production support.
Core Booking / Aggregation - Implement and enhance features in the core booking and aggregation engine (realised/unrealised P&L, balances, etc.), ensuring performance, reliability and correctness.
Reconciliation - Automate reconciliation of all trading and treasury activity across the firm, implementing data feeds from internal systems, banking, exchanges and custodians.
Reporting - Work closely with Finance and other internal users to gather and refine requirements, plan milestones, demo progress, and coordinate cutovers from manual/legacy workflows.
Data products and APIs - Expose well-versioned APIs and event streams for downstream consumers; maintain backward compatibility and schema evolution.
Ways of working
Follow established architecture and coding standards; participate in RFCs/design reviews and propose incremental improvements within existing patterns.
Write clean, well-documented code and meaningful tests.
Contribute to PR reviews; maintain up-to-date technical docs and diagrams.
Own components end-to-end from spec to production support.
Must Haves - Experience
Lifecycle knowledge of crypto asset classes: spot, perpetuals, futures, and options (including DeFi implementations).
Hands-on integration with DeFi protocols (DEX/AMM, lending, derivatives) such as Uniswap v3, Curve, Aave.
Proven track record delivering scalable, reliable systems in production.
Comfortable partnering with internal stakeholders across Trading, Ops, Compliance, and Engineering.
Must Haves - Technical
Strong OO background in one or more of: Go, C++, C#, Java or Rust.
Experience with TypeScript and Python.
API design (versioning, backwards compatibility, observability).
Postgres schema design and query optimization.
Experience with message queue / pub-sub systems.
Familiarity with cloud environments (GCP, AWS, Azure), modern CI/CD and containerization.
Nice-to-Haves - Experience
Built or maintained position-keeping / accounting systems (PnL, accounting methods, pricing, greeks).
Centralized reference data platforms (assets, networks, instruments), consistent symbology and instrument economics.
Trading, risk, or back-office systems, ideally in regulated environments.
Benefits
International environment (English is the main language)
Pension
100% health coverage
Team events and offsites
Recruitment Process
HR interview (30')
Technical interview - Hiring Manager (30')
Take-home assignment + live coding session (90')
Team Member Technical (45')
CTO (45')
Chat with the Head of People (30')
Jan 09, 2026
Full time
Senior Data Engineer, Azure
ARC IT Recruitment Ltd City, London
Senior Data Engineer, Azure London/hybrid Circa £90k-£100k + bonus + benefits Senior Data Engineer required by global banking organisation to help build and scale a modern data platform within a complex financial services environment. This role plays a key part in bringing together data from multiple regions into a single, well-governed and scalable platform built on Microsoft Fabric and Azure click apply for full job details
Jan 09, 2026
Full time
Senior Bioinformatics Developer
Genestack
At Genestack we are tackling the underlying computational and scientific challenges of bioinformatics in order to provide researchers with software tools that will streamline the discovery process and drive forward precision medicine, drug development, and bioinformatics research. We're looking for a Senior Bioinformatics Developer to lead delivery of robust omics data solutions across client projects and internal initiatives. This hybrid role spans pipeline development, scientific application delivery, and platform integration - with full ownership from scoping to deployment. You'll work closely with clients, product managers, and engineers to understand complex requirements and translate them into scalable, interoperable workflows. If you enjoy solving biological data challenges through a mix of analysis, automation, and architecture - this is your chance to make a difference in real-world R&D environments.
In this role, you will:
Lead end-to-end delivery of client and internal bioinformatics projects.
Design pipelines for omics data ingestion, harmonization, and QC.
Build and deploy scientific applications (e.g., dashboards, APIs, reports).
Develop reusable tools for data wrangling, integration, and visualization.
Integrate cloud/on-prem systems (e.g., S3, Nextflow, REST APIs).
Support pre-sales, onboarding, and trial delivery when needed.
Collaborate across product, engineering, and customer-facing teams.
We would like you to have:
Bachelor's or Master's degree in Bioinformatics, Computational Biology, or a related scientific/technical field.
5+ years of experience in delivering bioinformatics solutions in services or platform settings.
Strong knowledge of Python or R; experience with reproducible workflows and APIs.
Application development experience (e.g., Dash, Flask, Shiny).
Familiarity with cloud infrastructure, workflow tools (Nextflow), and data protocols.
Excellent communication skills in English and ability to operate across domains.
Ability to balance delivery work with internal tooling contributions. It would be nice for you to have: prior experience as a Team Lead on bioinformatics projects; experience with cloud deployment (AWS, GCP, Azure); JVM-based integration experience (e.g., Java, Kotlin). We offer you: international team of professionals; extended sick leave; onboarding and domain training for newcomers; flexible work schedule.
Jan 09, 2026
Full time
Hastings Direct
Data Platform Engineering Lead: Snowflake & Azure
Hastings Direct Bexhill-on-sea, Sussex
A digital-focused insurance provider in Bexhill-on-Sea is seeking a Head of Data Platform Engineering. The role involves managing the Snowflake and Azure enterprise data platform, ensuring data security and compliance, and leading teams for project delivery. Required skills include proficiency in ETL technologies, CI/CD, and excellent communication abilities. The position offers a competitive salary, flexible working options, and extensive benefits, contributing to the company's growth in the digital insurance market.
Jan 09, 2026
Full time
IO Associates
Senior Data Engineer
IO Associates
Senior Data Engineer Location - London Hybrid - 3 days in the office SC Clearance - ESSENTIAL Rate 550-600 per day Experience - 8+ years We are seeking a highly skilled and experienced SC Cleared Senior Data Engineer to join our client and play a key role in developing and maintaining their Azure Databricks platform for economic data click apply for full job details
Jan 08, 2026
Contractor
Azure Platform Engineer
Armstrong Talent Partners Hamilton, Lanarkshire
My client has a small team but has managed to deliver massive infrastructure projects to public and private sector clients through their state-of-the-art, Tier 3 data centre facilities. We are now looking to expand our team further and add an experienced Azure Platform Engineer to the team on an initial 12-month fixed-term contract click apply for full job details
Jan 08, 2026
Full time
Harvey Nash
Data Engineer
Harvey Nash Newcastle Upon Tyne, Tyne And Wear
Are you ready to be part of an exciting digital transformation using innovative technologies? This is an opportunity for a mid-level Data Engineer to play a key role in a major digital transformation. You will be shaping the future of data using innovative technologies like Azure Databricks and Fabric. You'll design and optimise modern data pipelines, work with diverse data sources, and collaborate click apply for full job details
Jan 08, 2026
Full time
Randstad Technologies Recruitment
Lead Data Specialist
Randstad Technologies Recruitment City, London
Lead - Data Specialist Contract: 6 months contract Location: London Hybrid 2 days in office Department: Data & Analytics Reports to: Lead, Metadata & Data Management About the Role We are hiring a Contractor Lead - Data Specialist to drive new data capabilities. This pivotal role involves building and operationalising reference data, data sourcing, data lineage, and integrating data management tooling to support a new enterprise data platform. You will lead the development of business-critical data management capabilities, ensuring data is trusted, well-defined, and meets organisational standards. This is a key opportunity to shape the data foundations for a major strategic transformation. Metadata & Lineage Drive the capture and management of metadata as part of change and operational deliverables. Maintain metadata repositories and process maps, resolving inconsistencies and escalating issues where needed. Collaborate with business analysts and solution designers to embed robust metadata practices. Enterprise Data Models & Data Sourcing Co-develop enterprise data models alongside data architects and specialists. Map workflows, data attributes, and design changes to data models. Support logical data modelling and contribute to master data management. Document and update models in Confluence and assist with reporting-related changes. Critical Data Management Support the definition and governance of critical data elements using tools such as Solidatus and Purview. Maintain logical lineage for critical data and lead stakeholder discussions to remediate breaks. Data Management Tools & Platforms Support requirements definition for internal/external reference data used in the new platform. Contribute to integrating data management tooling across Data and IT teams. Enhance metadata and lineage metrics through automation where possible (e.g., Azure). 
About You Essential Experience Significant hands-on experience as a Metadata Manager, Reference Data, or Data Sourcing Specialist. Proven track record in process re-engineering and design improvements. Expertise in technical design and user experience for data lineage. Familiarity with industry best practices (DAMA/EDMC desirable). Experience with metadata technologies and logical data modelling. Highly Desirable Experience in Investment/Commercial Bank, Asset/Fund Manager. Experience managing mixed (direct/indirect) teams. Strong facilitation skills (workshops, backlog management). Knowledge of architecture frameworks (e.g., TOGAF). Key Competencies Strong interpersonal and stakeholder management skills. Excellent process, control, and documentation capabilities. Tenacity and resilience in problem-solving. Ability to work effectively in remote or hybrid settings. Randstad Technologies is acting as an Employment Business in relation to this vacancy.
Jan 08, 2026
Full time
GlaxoSmithKline
Product Manager II - FinOps
GlaxoSmithKline
The Onyx Research Data Tech organization represents a major investment by GSK R&D and Digital & Tech, designed to deliver a step-change in our ability to leverage data, knowledge, and prediction to find new medicines. We are a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data / metadata / knowledge platforms, and AI/ML and analysis platforms, all geared toward: Building a next-generation data experience for GSK's scientists, engineers, and decision-makers, increasing productivity and reducing time spent on "data mechanics" Providing best-in-class AI/ML and data analysis environments to accelerate our predictive capabilities and attract top-tier talent Aggressively engineering our data at scale to unlock the value of our combined data assets and predictions in real-time Onyx Product Management is at the heart of our mission, ensuring that everything from our infrastructure, to platforms, to end-user facing data assets and environments is designed to maximize our impact on R&D. The Product Management team partners with R&D stakeholders and Onyx leadership to develop a strategic roadmap for all customer-facing aspects of Onyx, including data assets, ontology, Knowledge Graph / semantic search, data / computing / analysis platforms, and data-powered applications. We are seeking an experienced Product Manager II who will be accountable for designing and delivering the road map for FinOps for all Onyx cloud platforms, products, and services. As a Product Manager II for FinOps Products, you will play a crucial role in optimizing our cloud spend and enhancing financial transparency across Onyx's platforms and services. 
Working closely with senior product leaders, engineering, finance, R&D leaders, and cloud operations teams, you will contribute to the development and delivery of products and features that empower our engineers, developers, scientists, and finance stakeholders to manage, forecast, and optimize cloud costs effectively. This is an exciting opportunity for a product professional passionate about cloud economics and building solutions that drive financial accountability and efficiency at scale. In this role you will Product Feature Ownership: Own the full product lifecycle for specific features or components within our FinOps product suite, from ideation and requirements gathering to launch, adoption, and iteration. Cloud Cost Visibility: Drive the development of tools and dashboards that provide clear, accurate, and granular visibility into cloud spending across Onyx platforms and services, enabling teams to understand their consumption patterns. Cost Optimization Enablement: Identify, define, and deliver capabilities that empower engineering teams to make cost-efficient choices, including recommendations for resource rightsizing, reserved instance/savings plan management, and identification of idle or underutilized resources. Financial Governance Support: Assist in implementing and monitoring cloud financial governance policies and guardrails, including budget alerts, spend limits, and chargeback/showback mechanisms. User Research & Requirements: Conduct in-depth user research with engineers, developers, data scientists, and finance teams to deeply understand their challenges and needs related to cloud cost management. Translate these insights into detailed product requirements and user stories. Data Analysis & Reporting: Leverage cloud billing data and other financial inputs to analyze spending trends, identify cost anomalies, and support the creation of actionable financial reports and forecasts. 
Agile Product Development: Actively participate in an agile development environment, collaborating daily with engineering, UX, and QA teams to ensure successful and timely delivery of high-quality product releases. Cross-Functional Collaboration: Partner effectively with Cloud Platform Engineering, Data Platform Engineering, Finance, and R&D teams to ensure product features meet business needs, integrate seamlessly, and drive desired financial outcomes. Documentation & Training: Create clear product documentation, user guides, and training materials to facilitate product adoption and ensure users can effectively leverage FinOps tools and insights. Why you? Qualifications & Skills: We are looking for professionals with these required skills to achieve our goals: Bachelor's degree in a technical or scientific field, with a focus on computational science, Engineering, Finance, Business or related discipline Experience in product management, cloud financial management (FinOps), or a related role such as a Cloud Engineer with a strong cost optimization focus. Demonstrated understanding of cloud billing models, cost drivers, and service offerings across major cloud providers (e.g., AWS, GCP, Azure). Experience with data analysis and reporting tools to extract insights from financial or operational data. Familiarity with agile product development methodologies. Preferred Qualifications & Skills: If you have the following characteristics, it would be a plus: Master's degree or MBA. FinOps Certified Practitioner (FOCP) or equivalent certification. Direct experience with FinOps platforms and tools (e.g., Cloudability, CloudHealth, or native cloud cost management tools). Experience contributing to products that support large-scale, multi-cloud environments. Understanding of enterprise financial processes, budgeting, forecasting, and cost allocation. 
Strong communication and stakeholder management skills, with the ability to articulate technical and financial concepts to diverse audiences. Prior experience in the life sciences or biopharma industry, understanding the unique compute and data needs of scientific research. Closing Date for Applications: Tuesday 6th January 2026 (COB) Please note: As we approach the holiday season, our recruitment team and hiring managers will have limited availability between now and early January. We encourage you to apply and will review all applications; however, response times may be longer than usual, and interviews may be scheduled after the New Year. We appreciate your understanding and look forward to connecting soon! Please take a copy of the Job Description, as this will not be available post closure of the advert. When applying for this role, please use the 'cover letter' of the online application or your CV to describe how you meet the competencies for this role, as outlined in the job requirements above. The information that you have provided in your cover letter and CV will be used to assess your application. During the course of your application, you will be requested to complete voluntary information which will be used to monitor the effectiveness of our equality and diversity policies. Your information will be treated as confidential and will not be used in any part of the selection process. If you require a reasonable adjustment to the application / selection process to enable you to demonstrate your ability to perform the job requirements, please contact . This will help us to understand any modifications we may need to make to support you throughout our selection process. Why GSK? Uniting science, technology and talent to get ahead of disease together. GSK is a global biopharma company with a purpose to unite science, technology and talent to get ahead of disease together. 
We aim to positively impact the health of 2.5 billion people by the end of the decade, as a successful, growing company where people can thrive. We get ahead of disease by preventing and treating it with innovation in specialty medicines and vaccines. We focus on four therapeutic areas: respiratory, immunology and inflammation; oncology; HIV; and infectious diseases - to impact health at scale. People and patients around the world count on the medicines and vaccines we make, so we're committed to creating an environment where our people can thrive and focus on what matters most. Our culture of being ambitious for patients, accountable for impact and doing the right thing is the foundation for how, together, we deliver for patients, shareholders and our people. GSK is an Equal Opportunity Employer. This ensures that all qualified applicants will receive equal consideration for employment without regard to race, color, religion, sex (including pregnancy, gender identity, and sexual orientation), parental status, national origin, age, disability, genetic information (including family medical history), military service or any basis prohibited under federal, state or local law. We believe in an agile working culture for all our roles. If flexibility is important to you, we encourage you to explore with our hiring team what the opportunities are. Should you require any adjustments to our process to assist you in demonstrating your strengths and capabilities contact us on or . The helpline is available from 8.30am to 12.00 noon Monday to Friday, during bank holidays these times and days may vary. Please note should your enquiry not relate to adjustments, we will not be able to support you through these channels. However, we have created a UK Recruitment FAQ guide. Click the link and scroll to the Careers Section where you will find answers to multiple questions we receive . click apply for full job details
Jan 08, 2026
Full time
Computer Futures
Senior Dev SecOps Engineer
Computer Futures Bristol, Gloucestershire
What You'll Do Work in a cross-functional Agile team to design, develop, and deploy solutions. Build and maintain CI/CD pipelines and infrastructure as code. Collaborate with stakeholders to understand requirements and deliver secure, reliable systems. Automate workflows and improve deployment processes. Troubleshoot and resolve issues across development and production environments. Contribute to continuous improvement and share knowledge with the team. Stay up to date with emerging technologies and best practices in DevOps and security. What We're Looking For Experience in DevOps or DevSecOps environments. Strong skills in cloud platforms (AWS, GCP, or Azure) and infrastructure as code (Terraform, Ansible). Proficiency in CI/CD tools (GitHub Actions, Jenkins, CircleCI). Solid understanding of Linux systems and scripting (Bash, PowerShell, Python). Familiarity with security principles , SIEM/SOC tools, or incident response. Knowledge of networking fundamentals and APIs. Excellent problem-solving and communication skills. Nice to Have Experience with containerization (Docker, Kubernetes). Exposure to monitoring tools (Grafana, Datadog). Cloud certifications or security accreditations. Understanding of Agile methodologies. Interest in automation, security testing, or threat detection. To find out more about Computer Futures please visit (url removed) Computer Futures, a trading division of SThree Partnership LLP is acting as an Employment Business in relation to this vacancy Registered office 8 Bishopsgate, London, EC2N 4BQ, United Kingdom Partnership Number OC(phone number removed) England and Wales
Jan 07, 2026
Full time
CapGemini
AI & Data Science Manager / Senior Manager
CapGemini City, Manchester
At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow: informed and validated by science and data, superpowered by creativity and design, all underpinned by technology created with purpose.

In a world of globalisation and constant innovation, organisations are creating, consuming, and transforming unprecedented volumes of data. We work alongside our clients to extract and leverage key insights driven by our Data Science and Analytics expertise and capabilities. It's an exciting time to join our Data Science Team as we grow together to keep up with client demand and launch offerings to the market. In your role, you will partner with our clients to deliver outcomes through the application of cutting-edge data science methods.

YOUR ROLE
In this position you will:
  • Lead delivery of Agentic & Generative AI, Data Science, and Analytics projects, ensuring client expectations are met at every stage.
  • Inspire clients by demonstrating the transformative potential of Agentic & Gen AI and data science to unlock business value.
  • Design and implement scalable AI solutions in collaboration with architecture and platform teams.
  • Mentor and develop data science consultants, championing technical excellence and delivery standards.
  • Drive business growth by contributing to proposals, pitches, and strategic direction alongside leading client delivery.
As part of your role you will also have the opportunity to contribute to the business and your own personal growth, through activities in the following categories:
  • Business Development - leading/contributing to proposals, RFPs, bids, proposition development, client pitch contribution, client hosting at events.
  • Internal contribution - campaign development, internal think-tanks, whitepapers, practice development (operations, recruitment, team events & activities), offering development.
  • Learning & development - training to support your career development and the skills demand within the company, certifications etc.

YOUR PROFILE
We'd love to meet someone with:
  • Proven experience leading complex data science, Agentic & Generative AI, and analytics projects, delivering value across the ML lifecycle using strong foundations in statistical modelling, natural language processing, time-series analysis, spatial analytics, and mathematical modelling methodologies.
  • Experience managing the delivery of AI/Data Science projects, gained through roles in either a consulting firm or industry, leading end-to-end client engagements.
  • A growth mindset with strong collaboration, communication, and analytical skills, able to build and maintain stakeholder relationships and influence effectively within a matrixed consulting environment.
  • The ability to apply domain expertise and AI/ML innovation to solve client challenges, and to present clear, compelling insights to diverse audiences.
  • A proactive approach to business growth - identifying opportunities, contributing to proposals and pitches, fostering client trust, and supporting others' professional development within the organisation.
Working knowledge in one or more of the following areas:
  • Cloud data platforms such as Google Cloud, AWS, Azure, and Databricks.
  • Programming languages such as Python, R, or PySpark.
  • Agentic & Generative AI platforms such as Microsoft Copilot Studio, Adept AI, UiPath, OpenAI GPT-5 Agents, Orby AI, and Beam AI.
  • DevOps and MLOps principles for production AI deployments.

Data Science Consulting brings an inventive quantitative approach to our clients' biggest business and data challenges, unlocking tangible business value by delivering intelligent data products and solutions through rapid innovation leveraging AI. We strive to be acknowledged as innovative and industry-leading data science professionals, and we seek to achieve this by focusing on three areas of the data science lifecycle:
  • Exploring the art of the possible with AI by combining domain knowledge and AI expertise to identify opportunities across industries and functions where AI can deliver value, and by shaping AI/ML roadmaps and ideation using use cases aligned with data science and business strategies.
  • Accelerating impact with AI by enabling proof of value through prototypes and by translating complex AI concepts into practical solutions that democratise access and maximise business advantage for our clients.
  • Scaling AI from lab to live by defining and implementing responsible AI design principles throughout the AI journey and establishing sustainable, resilient, and scalable AI/ML Ops architectures and platforms for integrating AI products and solutions into business processes for real-time decision making.

To be successfully appointed to this role, it is a requirement to obtain Security Check (SC) clearance. To obtain SC clearance, the successful applicant must have resided continuously within the United Kingdom for the last 5 years, along with other criteria and requirements. Throughout the recruitment process, you will be asked questions about your security clearance eligibility such as, but not limited to, country of residence and nationality. Some posts are restricted to sole UK Nationals for security reasons; therefore you may be asked about your citizenship in the application process.

We offer an interview to applicants who declare they have a disability and meet the minimum essential criteria for the role.

We're also focused on using tech to have a positive social impact. So we're working to reduce our own carbon footprint and improve everyone's access to a digital world. It's something we're really serious about. In fact, we were even named as one of the world's most ethical companies by the Ethisphere Institute for the 10th year. When you join Capgemini, you'll join a team that does the right thing.

Whilst you will have London, Manchester or Glasgow as an office base location, you must be fully flexible in terms of assignment location, as these roles may involve periods of time away from home at short notice.

We offer a remuneration package which includes flexible benefits options for you to choose to suit your own personal circumstances, and a variable element dependent on grade and on company and personal performance.

Experience level: Experienced Professionals
Location: Glasgow, London, Manchester
Jan 07, 2026
Full time
Head Resourcing
Data Engineer
Head Resourcing
Mid-Level Data Engineer (Azure / Databricks)
NO VISA REQUIREMENTS
Location: Glasgow (3+ days)
Reports to: Head of IT

My client is undergoing a major transformation of their entire data landscape, migrating from legacy systems and manual reporting into a modern Azure + Databricks Lakehouse. They are building a secure, automated, enterprise-grade platform powered by Lakeflow Declarative Pipelines, Unity Catalog and Azure Data Factory. They are looking for a Mid-Level Data Engineer to help deliver high-quality pipelines and curated datasets used across Finance, Operations, Sales, Customer Care and Logistics.

What You'll Do
Lakehouse Engineering (Azure + Databricks)
  • Build and maintain scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark and Spark SQL.
  • Work within a Medallion architecture (Bronze → Silver → Gold) to deliver reliable, high-quality datasets.
  • Ingest data from multiple sources including ChargeBee, legacy operational files, SharePoint, SFTP, SQL, REST and GraphQL APIs using Azure Data Factory and metadata-driven patterns.
  • Apply data quality and validation rules using Lakeflow Declarative Pipelines expectations.
Curated Layers & Data Modelling
  • Develop clean and conforming Silver & Gold layers aligned to enterprise subject areas.
  • Contribute to dimensional modelling (star schemas), harmonisation logic, SCDs and business marts powering Power BI datasets.
  • Apply governance, lineage and permissioning through Unity Catalog.
Orchestration & Observability
  • Use Lakeflow Workflows and ADF to orchestrate and optimise ingestion, transformation and scheduled jobs.
  • Help implement monitoring, alerting, SLAs/SLIs and runbooks to support production reliability.
  • Assist in performance tuning and cost optimisation.
DevOps & Platform Engineering
  • Contribute to CI/CD pipelines in Azure DevOps to automate deployment of notebooks, Lakeflow Declarative Pipelines, SQL models and ADF assets.
  • Support secure deployment patterns using private endpoints, managed identities and Key Vault.
  • Participate in code reviews and help improve engineering practices.
Collaboration & Delivery
  • Work with BI and Analytics teams to deliver curated datasets that power dashboards across the business.
  • Contribute to architectural discussions and the ongoing data platform roadmap.

Tech You'll Use
  • Databricks: Lakeflow Declarative Pipelines, Lakeflow Workflows, Unity Catalog, Delta Lake
  • Azure: ADLS Gen2, Data Factory, Event Hubs (optional), Key Vault, private endpoints
  • Languages: PySpark, Spark SQL, Python, Git
  • DevOps: Azure DevOps Repos & Pipelines, CI/CD
  • Analytics: Power BI, Fabric

What We're Looking For
Experience
  • Commercial and proven data engineering experience.
  • Hands-on experience delivering solutions on Azure + Databricks.
  • Strong PySpark and Spark SQL skills within distributed compute environments.
  • Experience working in a Lakehouse/Medallion architecture with Delta Lake.
  • Understanding of dimensional modelling (Kimball), including SCD Type 1/2.
  • Exposure to operational concepts such as monitoring, retries, idempotency and backfills.
Mindset
  • Keen to grow within a modern Azure Data Platform environment.
  • Comfortable with Git, CI/CD and modern engineering workflows.
  • Able to communicate technical concepts clearly to non-technical stakeholders.
  • Quality-driven, collaborative and proactive.
Nice to Have
  • Databricks Certified Data Engineer Associate.
  • Experience with streaming ingestion (Auto Loader, event streams, watermarking).
  • Subscription/entitlement modelling (e.g., ChargeBee).
  • Unity Catalog advanced security (RLS, PII governance).
  • Terraform or Bicep for IaC.
  • Fabric Semantic Models or Direct Lake optimisation experience.

Why Join?
  • Opportunity to shape and build a modern enterprise Lakehouse platform.
  • Hands-on work with Azure, Databricks and leading-edge engineering practices.
  • Real progression opportunities within a growing data function.
  • Direct impact across multiple business domains.
Jan 07, 2026
Full time
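The Data Engineer listing above asks for an understanding of Kimball dimensional modelling, including SCD Type 1/2. As a rough illustration of what a Type 2 slowly changing dimension does (close the outgoing version of a row and append a new version when an attribute changes), here is a minimal plain-Python sketch. The function and field names are illustrative assumptions, not the client's code; on the advertised stack this logic would typically be expressed as a Delta Lake merge in PySpark rather than over in-memory lists.

```python
from dataclasses import dataclass, field, replace
from typing import Optional

@dataclass
class DimRow:
    """One version of a dimension member (hypothetical schema)."""
    key: str                        # business key, e.g. a customer id
    attrs: dict = field(default_factory=dict)
    valid_from: str = ""            # ISO date this version became active
    valid_to: Optional[str] = None  # None means "current version"

def scd2_apply(dim: list, updates: dict, as_of: str) -> list:
    """SCD Type 2: expire changed current rows and append new versions.

    Unchanged rows are kept as-is; brand-new keys get a first version.
    """
    out = list(dim)
    current = {r.key: r for r in dim if r.valid_to is None}
    for key, attrs in updates.items():
        cur = current.get(key)
        if cur is not None and cur.attrs == attrs:
            continue  # no attribute change: keep the current version
        if cur is not None:
            # Close the outgoing version instead of overwriting it (Type 2).
            out[out.index(cur)] = replace(cur, valid_to=as_of)
        out.append(DimRow(key=key, attrs=attrs, valid_from=as_of))
    return out

# Usage: one existing customer changes city, one new customer appears.
dim = [DimRow("c1", {"city": "Glasgow"}, "2024-01-01")]
dim = scd2_apply(dim, {"c1": {"city": "Edinburgh"}, "c2": {"city": "Leeds"}}, "2025-06-01")
# dim now holds three rows: the expired c1/Glasgow version plus current c1 and c2.
```

A Type 1 dimension, by contrast, would simply overwrite the old attribute values in place and retain no history.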
