• Home
  • Search Jobs
  • Register CV
  • Post a Job
  • Employer Pricing
  • Contact Us
  • Sign in
  • Sign up
Sorry, that job is no longer available. Here are some results that may be similar to the job you were looking for.

46 jobs found

Current Search: data engineer databricks and aws
Capgemini
AI & Data Science Manager / Senior Manager
Capgemini City, Manchester
Glasgow, London, Manchester

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.

In a world of globalisation and constant innovation, organisations are creating, consuming, and transforming unprecedented volumes of data. We work alongside our clients to extract and leverage key insights driven by our Data Science and Analytics expertise and capabilities. It's an exciting time to join our Data Science Team as we grow together to keep up with client demand and launch offerings to the market. In your role, you will partner with our clients to deliver outcomes through the application of cutting-edge data science methods.

YOUR ROLE

In this position you will play a key part in:
  • Leading delivery of Agentic & Generative AI, Data Science, and Analytics projects, ensuring client expectations are met at every stage.
  • Inspiring clients by demonstrating the transformative potential of Agentic & Gen AI and data science to unlock business value.
  • Designing and implementing scalable AI solutions in collaboration with architecture and platform teams.
  • Mentoring and developing data science consultants, championing technical excellence and delivery standards.
  • Driving business growth by contributing to proposals, pitches, and strategic direction alongside leading client delivery.

As part of your role you will also have the opportunity to contribute to the business and your own personal growth through activities in the following categories:
  • Business development - leading/contributing to proposals, RFPs, bids, proposition development, client pitch contribution, and client hosting at events.
  • Internal contribution - campaign development, internal think-tanks, whitepapers, practice development (operations, recruitment, team events and activities), and offering development.
  • Learning and development - training and certifications to support your career development and the skills demand within the company.

YOUR PROFILE

We'd love to meet someone with:
  • Proven experience leading complex data science, Agentic & Generative AI, and analytics projects, delivering value across the ML lifecycle using strong foundations in statistical modelling, natural language processing, time-series analysis, spatial analytics, and mathematical modelling methodologies.
  • Experience managing the delivery of AI/Data Science projects, gained through roles in either a consulting firm or industry, leading end-to-end client engagements.
  • A growth mindset with strong collaboration, communication, and analytical skills, able to build and maintain stakeholder relationships and influence effectively within a matrixed consulting environment.
  • The ability to apply domain expertise and AI/ML innovation to solve client challenges, and to present clear, compelling insights to diverse audiences.
  • A proactive approach to business growth - identifying opportunities, contributing to proposals and pitches, fostering client trust, and supporting others' professional development within the organisation.
  • Working knowledge in one or more of the following areas: cloud data platforms such as Google Cloud, AWS, Azure, and Databricks; programming languages such as Python, R, or PySpark; Agentic & Generative AI platforms such as Microsoft Copilot Studio, Adept AI, UiPath, OpenAI GPT-5 Agents, Orby AI, and Beam AI; DevOps and MLOps principles for production AI deployments.

Data Science Consulting brings an inventive quantitative approach to our clients' biggest business and data challenges, unlocking tangible business value by delivering intelligent data products and solutions through rapid innovation leveraging AI. We strive to be acknowledged as innovative, industry-leading data science professionals, and we seek to achieve this by focusing on three areas of the data science lifecycle:
  • Exploring the art of the possible with AI - combining domain knowledge and AI expertise to identify opportunities across industries and functions where AI can deliver value, and shaping AI/ML roadmaps and ideation using use cases aligned with data science and business strategies.
  • Accelerating impact with AI - enabling proof of value through prototypes and translating complex AI concepts into practical solutions that democratise access and maximise business advantage for our clients.
  • Scaling AI from lab to live - defining and implementing responsible AI design principles throughout the AI journey and establishing sustainable, resilient, and scalable AI/MLOps architectures and platforms for integrating AI products and solutions into business processes for real-time decision making.

To be appointed to this role, it is a requirement to obtain Security Check (SC) clearance. To obtain SC clearance, the successful applicant must have resided continuously within the United Kingdom for the last 5 years, along with other criteria and requirements. Throughout the recruitment process, you will be asked questions about your security clearance eligibility, such as, but not limited to, country of residence and nationality. Some posts are restricted to sole UK nationals for security reasons; therefore you may be asked about your citizenship in the application process.

We're also focused on using tech to have a positive social impact. So, we're working to reduce our own carbon footprint and improve everyone's access to a digital world. It's something we're really serious about. In fact, we were even named as one of the world's most ethical companies by the Ethisphere Institute for the 10th year. When you join Capgemini, you'll join a team that does the right thing.

Whilst you will have London, Manchester or Glasgow as an office base location, you must be fully flexible in terms of assignment location, as these roles may involve periods of time away from home at short notice. We offer a remuneration package which includes flexible benefits options for you to choose to suit your own personal circumstances, and a variable element dependent on grade and on company and personal performance.

Experience level: Experienced Professionals
Location: Glasgow, London, Manchester
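Of the methodologies this role names, time-series analysis is concrete enough to sketch. A minimal, self-contained Python example of a trailing simple moving average, one of the most basic smoothing steps in time-series work (illustrative only, not Capgemini tooling; the function name and demand figures are invented for the sketch):

```python
def moving_average(series, window):
    """Smooth a time series with a trailing simple moving average.

    Returns one averaged value per full window, so the result has
    len(series) - window + 1 points.
    """
    if window < 1 or window > len(series):
        raise ValueError("window must be between 1 and len(series)")
    return [
        sum(series[i:i + window]) / window
        for i in range(len(series) - window + 1)
    ]

# Hypothetical monthly demand figures, smoothed over a quarter.
demand = [10, 12, 11, 15, 14, 18]
print(moving_average(demand, 3))  # → [11.0, 12.666..., 13.333..., 15.666...]
```

In practice the same operation would run over a distributed DataFrame (e.g. a window function in PySpark on Databricks), but the arithmetic is exactly this.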
Jan 07, 2026
Full time
Machine Learning Scientist II
PowerToFly
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us?

To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated and when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.

Introduction to the team

The Machine Learning Scientist II role sits on the Content Relevance Ranking AI team in the Expedia Technology division of Expedia Group. This team develops and optimizes ranking models with state-of-the-art machine learning techniques to power the selection and ranking of property images and reviews for the multiple brands in our portfolio. In this role, your expertise and passion for innovation, developing cutting-edge technology and implementing industry-leading solutions, will improve the experience of millions of travelers and travel partners each year. This is an applied research role: your models will be deployed to our production systems, and your results will be measured objectively via A/B testing, directly impacting our business results. We collaborate closely with the analytics, product, and engineering teams.

In this role, you will:
  • Work with product management to understand business problems, identify challenges and machine learning opportunities, and scope solutions.
  • Conduct exploratory data analysis, formulate machine learning problems, and build effective models.
  • Partner with data and software engineering teams to deliver your solutions into production.
  • Develop a deep understanding of our data and ML infrastructure.
  • Document the technical details of your work.
  • Present your ideas and results to product management, stakeholders, and leadership teams in a clear and effective manner.
  • Collaborate and brainstorm with other team members and across the company.
  • Stay current with advances in ML and GenAI to drive innovation within the team.

Minimum Qualifications
  • Master's degree or Ph.D. in Computer Science, Statistics, Math, Engineering, or a related technical field; or equivalent related professional experience.
  • 2+ years of hands-on experience with ML in production: building datasets, selecting and engineering features, and building and optimizing algorithms.
  • Expertise with Python and related machine learning tools, deep learning frameworks such as TensorFlow or PyTorch, and SQL-like query languages for data extraction, transformation, and loading.
  • A strong foundation in machine learning fundamentals, statistics, and experimentation.
  • Real-world experience working with large data sets in a distributed computing environment such as Spark.
  • Good programming practices and the ability to write readable, fast code.
  • Intellectual curiosity and a desire to learn new things, techniques, and technologies.

Preferred Qualifications
  • Experience with ranking systems and recent Large Language Models (LLMs), including fine-tuning, efficient deployment, and architectures.
  • Comfort working with ML platforms like Databricks, cloud platforms such as AWS, and Docker.
  • Hands-on experience with workflow orchestration tools (e.g., Airflow, Flyte).

Accommodation requests

If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request.

We are proud to be named a Best Place to Work on Glassdoor in 2024 and to be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia, Expedia Partner Solutions, Vrbo, trivago, Orbitz, Travelocity, Hotwire, Wotif, ebookers, CheapTickets, Expedia Group Media Solutions, Expedia Local Expert and Expedia Cruises. 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners.

Employment opportunities and job offers at Expedia Group will always come from Expedia Group's Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you're confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact.

Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.
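This role centres on ranking models validated online via A/B testing; offline, ranking quality is commonly scored with metrics such as NDCG. A minimal Python sketch of NDCG@k (the standard metric definition, not Expedia's internal implementation; the relevance grades below are made up):

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked items."""
    return sum(
        (2 ** rel - 1) / math.log2(rank + 2)
        for rank, rel in enumerate(relevances[:k])
    )

def ndcg_at_k(relevances, k):
    """Normalized DCG: 1.0 means the ranking is ideal for these grades."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Made-up relevance grades for four property images, in model order:
# swapping the last two items costs a little versus the ideal order.
print(round(ndcg_at_k([3, 2, 0, 1], k=4), 3))  # → 0.993
```

The log-position discount is what makes the metric reward putting highly relevant images and reviews at the top, which is exactly what an A/B test on click-through would measure online.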
Jan 06, 2026
Full time
Barclays Bank Plc
AI/MLOps Platform Engineer
Barclays Bank Plc City, Glasgow
Join Us in Shaping the Future of AI at Barclays.

We're launching an exciting new initiative at Barclays to design, build, and scale next-generation platform components that empower developers - including Quants and Strats - to create high-performance, AI-driven applications. As an AI/MLOps Platform Engineer, you'll play a pivotal role in this transformation, working hands-on to develop the infrastructure and tooling that supports the full lifecycle of machine learning and generative AI workloads. This is more than an engineering role - it's an opportunity to influence technical direction, collaborate across diverse teams, and help define how AI and GenAI are delivered at scale.

To be successful as an AI/MLOps Platform Engineer at this level, you should have:
  • Proficient Python engineering skills, especially in backend systems and infrastructure.
  • Deep AWS expertise, including services like SageMaker, Lambda, ECS, Step Functions, S3, IAM, KMS, CloudFormation, and Bedrock.
  • Proven experience building and scaling MLOps platforms and supporting GenAI workloads in production.
  • A strong understanding of secure software development, cloud cost optimization, and platform observability.
  • The ability to communicate complex technical concepts clearly to both technical and non-technical audiences.
  • Demonstrated leadership in setting technical direction while remaining hands-on.

Some other highly valued skills may include:
  • Experience with MLOps platforms such as Databricks or SageMaker, and familiarity with hybrid cloud strategies (Azure, on-prem Kubernetes).
  • A strong understanding of AI infrastructure for scalable model serving, distributed training, and GPU orchestration.
  • Expertise in Large Language Models (LLMs) and Small Language Models (SLMs), including fine-tuning and deployment for enterprise use cases.
  • Hands-on experience with Hugging Face libraries and tools for model training, evaluation, and deployment.
  • Knowledge of agentic frameworks (e.g., LangChain, AutoGen) and Model Context Protocol (MCP) for building autonomous AI workflows and interoperability.
  • Awareness of emerging trends in GenAI platforms, open-source MLOps, and cloud-native AI solutions.

You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role can be based out of our Glasgow or Canary Wharf office.

Purpose of the role

To design, develop and improve software, utilising various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues.

Accountabilities
  • Development and delivery of high-quality software solutions using industry-aligned programming languages, frameworks, and tools, ensuring that code is scalable, maintainable, and optimized for performance.
  • Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives.
  • Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing.
  • Staying informed of industry technology trends and innovations, and actively contributing to the organization's technology communities to foster a culture of technical excellence and growth.
  • Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions.
  • Implementation of effective unit testing practices to ensure proper code design, readability, and reliability.

Assistant Vice President Expectations

To advise and influence decision making, contribute to policy development, and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives, and determine reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L - Listen and be authentic, E - Energise and inspire, A - Align across the enterprise, D - Develop others.

Alternatively, as an individual contributor, they will lead collaborative assignments, guide team members through structured assignments, and identify the need to include other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes. They will consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues; identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda; take ownership for managing risk and strengthening controls in relation to the work done; perform work closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function; collaborate with business-aligned support areas to keep up to speed with business activity and the business strategy; engage in complex analysis of data from multiple internal and external sources, such as procedures and practices (in other areas, teams, companies, etc.), to solve problems creatively and effectively; and communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship - our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset - to Empower, Challenge and Drive - the operating manual for how we behave.
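Several of the platform skills this role lists (Step Functions, MLOps pipelines, agentic workflows) reduce to running steps in dependency order. A toy Python sketch of that core idea - a topological sort over a hand-written step graph using the standard library, not any Barclays or AWS API; the step names are invented:

```python
from graphlib import TopologicalSorter

# Hypothetical ML pipeline steps and their upstream dependencies,
# in the spirit of a Step Functions or Airflow DAG definition.
pipeline = {
    "ingest": set(),
    "validate": {"ingest"},
    "train": {"validate"},
    "evaluate": {"train"},
    "deploy": {"evaluate"},
}

# static_order() yields steps so every dependency runs before its dependents.
order = list(TopologicalSorter(pipeline).static_order())
print(order)  # → ['ingest', 'validate', 'train', 'evaluate', 'deploy']
```

Real orchestrators add retries, parallel branches, and state persistence on top, but the dependency-ordering contract is the same.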
Jan 06, 2026
Full time
Join Us in Shaping the Future of AI at Barclays. We're launching an exciting new initiative at Barclays to design, build, and scale next-generation platform components that empower developers - including Quants and Strats - to create high-performance, AI-driven applications. As an AI/MLOps Platform Engineer, you'll play a pivotal role in this transformation, working hands-on to develop the infrastructure and tooling that supports the full lifecycle of machine learning and generative AI workloads. This is more than an engineering role-it's an opportunity to influence technical direction, collaborate across diverse teams, and help define how AI and GenAI are delivered at scale. To be successful as an AI/MLOps Platform Engineer at this level, you should have experience with: Proficiency in Python engineering skills, especially in backend systems and infrastructure. Deep AWS expertise, including services like SageMaker, Lambda, ECS, Step Functions, S3, IAM, KMS, CloudFormation, and Bedrock. Proven experience building and scaling MLOps platforms and supporting GenAI workloads in production. Strong understanding of secure software development, cloud cost optimization, and platform observability. Ability to communicate complex technical concepts clearly to both technical and non-technical audiences. Demonstrated leadership in setting technical direction while remaining hands-on. Some other highly valued skills may include: Experience with MLOps platforms such as Databricks or SageMaker, and familiarity with hybrid cloud strategies (Azure, on-prem Kubernetes). Strong understanding of AI infrastructure for scalable model serving, distributed training, and GPU orchestration. Expertise in Large Language Models (LLMs) and Small Language Models (SLMs), including fine-tuning and deployment for enterprise use cases. Hands-on experience with Hugging Face libraries and tools for model training, evaluation, and deployment. 
Knowledge of agentic frameworks (e.g., LangChain, AutoGen) and Model Context Protocol (MCP) for building autonomous AI workflows and interoperability. Awareness of emerging trends in GenAI platforms, open-source MLOps, and cloud-native AI solutions. You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role can be based out of our Glasgow or Canary Wharf office. Purpose of the role To design, develop and improve software, utilising various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues. Accountabilities Development and delivery of high-quality software solutions by using industry-aligned programming languages, frameworks, and tools. Ensuring that code is scalable, maintainable, and optimized for performance. Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives. Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing. Staying informed of industry technology trends and innovations and actively contributing to the organization's technology communities to foster a culture of technical excellence and growth. Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions. Implementation of effective unit testing practices to ensure proper code design, readability, and reliability. Assistant Vice President Expectations To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions. 
Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives and determine reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L - Listen and be authentic, E - Energise and inspire, A - Align across the enterprise, D - Develop others. OR, for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identifying the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes. Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda. Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple internal and external sources, such as procedures and practices (in other areas, teams, companies, etc.) to solve problems creatively and effectively. Communicate complex information. 
'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship - our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset - to Empower, Challenge and Drive - the operating manual for how we behave.
Machine Learning Scientist II
Expedia, Inc.
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success. Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us. Introduction to the team The Machine Learning Scientist II role sits on the Content Relevance Ranking AI team in the Expedia Technology division of Expedia Group. This team develops and optimizes ranking models with state-of-the-art machine learning techniques to power the selection and ranking of property images and reviews for the multiple brands in our portfolio. In this role, your expertise and passion for innovation, developing cutting-edge technology and implementing industry-leading solutions, will improve the experience of millions of travelers and travel partners each year. This is an applied research role: your models will be deployed to our production systems, and your results will be measured objectively via A/B testing, directly impacting our business results. We collaborate closely with the analytics, product, and engineering teams. In this role, you will Work with product management to understand business problems, identify challenges and machine learning opportunities, and scope solutions. Conduct exploratory data analysis, formulate machine learning problems, and build effective models. 
Partner with data and software engineering teams to deliver your solutions into production. Develop a deep understanding of our data and ML infrastructure. Document the technical details of your work. Present your ideas and results to product management, stakeholders, and leadership teams in a clear and effective manner. Collaborate and brainstorm with other team members and across the company. Stay current with advances in ML and GenAI to drive innovation within the team. Minimum Qualifications Master's degree or Ph.D. in Computer Science, Statistics, Math, Engineering, or a related technical field; or equivalent related professional experience. You have 2+ years of hands-on experience with ML in production: building datasets, selecting and engineering features, building and optimizing algorithms. You have expertise with Python and related machine learning tools, deep learning frameworks such as TensorFlow or PyTorch, and SQL-like query languages for data extraction, transformation, and loading. You have a strong foundation in Machine Learning fundamentals, statistics, and experimentation. You have real-world experience working with large data sets in a distributed computing environment such as Spark. You have good programming practices and the ability to write readable, fast code. You have intellectual curiosity and a desire to learn new things, techniques and technologies. Preferred Qualifications Experience with ranking systems and recent Large Language Models (LLMs), including fine-tuning, efficient deployment, and architectures. Comfortable working with ML platforms like Databricks, cloud platforms such as AWS, and Docker. Hands-on experience with workflow orchestration tools (e.g., Airflow, Flyte). Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. 
Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.
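The ranking work this listing describes ultimately reduces to scoring candidate items with a model and ordering them by score. A toy pointwise sketch in plain Python; the field names and the scoring function below are invented for illustration and are not Expedia's:

```python
def rank_items(items, score_fn, k=3):
    """Toy pointwise ranker: score each candidate with score_fn and
    return the top-k highest-scoring items. In practice the score
    would come from a trained model rather than a hand-written rule."""
    scored = sorted(items, key=score_fn, reverse=True)
    return scored[:k]

# Hypothetical review candidates (schema invented for the example).
reviews = [
    {"id": "a", "helpful_votes": 4, "length": 120},
    {"id": "b", "helpful_votes": 9, "length": 40},
    {"id": "c", "helpful_votes": 1, "length": 300},
]
top = rank_items(reviews, lambda r: r["helpful_votes"], k=2)
# top is reviews "b" then "a"
```

In a production setting the score function would be a model prediction, and the resulting orderings would be compared via A/B tests as the listing notes.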
Jan 05, 2026
Full time
Huxley Associates
Python Data Engineer - Hedge Fund
Huxley Associates
Python Data Engineer - Multi-Strategy Hedge Fund Location: London Hybrid: 2 days per week on-site Type: Full-time About the Role A leading multi-strategy hedge fund is seeking a highly skilled Python Data Engineer to join its technology and data team. This is a hands-on role focused on building and optimising data infrastructure that powers quantitative research, trading strategies, and risk management. Key Responsibilities Develop and maintain scalable Python-based ETL pipelines for ingesting and transforming market data from multiple sources. Design and manage cloud-based data lake solutions (AWS, Databricks) for large volumes of structured and unstructured data. Implement rigorous data quality, validation, and cleansing routines to ensure accuracy of financial time-series data. Optimise workflows for low latency and high throughput, critical for trading and research. Collaborate with portfolio managers, quantitative researchers, and traders to deliver tailored data solutions for modelling and strategy development. Contribute to the design and implementation of the firm's security master database. Analyse datasets to extract actionable insights for trading and risk management. Document system architecture, data flows, and technical processes for transparency and reproducibility. Requirements Strong proficiency in Python (pandas, NumPy, PySpark) and ETL development. Hands-on experience with AWS services (S3, Glue, Lambda) and Databricks. Solid understanding of financial market data, particularly time-series. Knowledge of data quality frameworks and performance optimisation techniques. Degree in Computer Science, Engineering, or related field. Preferred Skills SQL and relational database design experience. Exposure to quantitative finance or trading environments. Familiarity with containerisation and orchestration (Docker, Kubernetes). What We Offer Competitive compensation and performance-based bonus. Hybrid working model: 2 days per week on-site in London. 
Opportunity to work on mission-critical data systems for a global hedge fund. Collaborative, high-performance culture with direct exposure to front-office teams To Avoid Disappointment, Apply Now! To find out more about Huxley, please visit (url removed) Huxley, a trading division of SThree Partnership LLP is acting as an Employment Business in relation to this vacancy Registered office 8 Bishopsgate, London, EC2N 4BQ, United Kingdom Partnership Number OC(phone number removed) England and Wales
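The validation and cleansing responsibilities described in this listing (deduplication, time-ordering, sanity checks on financial time-series) can be sketched framework-free in plain Python. The function name and the (timestamp, price) tuple schema are illustrative assumptions, not details from the listing:

```python
from datetime import datetime

def clean_ticks(ticks):
    """Minimal cleansing pass over (timestamp, price) tuples:
    drop duplicate timestamps and non-positive prices, then
    enforce monotonic time order. Illustrative only."""
    seen, out = set(), []
    for ts, px in ticks:
        if ts in seen or px <= 0:   # skip duplicate ticks and bad prints
            continue
        seen.add(ts)
        out.append((ts, px))
    return sorted(out)              # sort by timestamp ascending

raw = [
    (datetime(2024, 1, 2), 101.5),
    (datetime(2024, 1, 1), 100.0),
    (datetime(2024, 1, 2), 101.5),       # duplicate tick
    (datetime(2024, 1, 1, 12), -5.0),    # impossible price
]
clean = clean_ticks(raw)
# clean keeps two rows, in time order
```

A real pipeline of the kind the role describes would express the same checks with pandas or PySpark over far larger volumes, but the invariants (uniqueness, ordering, plausibility) are the same.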
Jan 03, 2026
Full time
Data Platform Engineer
easyJet Airline Company PLC City, London
Overview Luton/Hybrid Company When it comes to innovation and achievement there are few organisations with a better track record. Join us and you'll be able to play a big part in the success of our highly successful, fast-paced business that opens up Europe so that people can exercise their get-up-and-go. With almost 300 aircraft flying over 1,000 routes to more than 32 countries, we're the UK's largest airline, the fourth largest in Europe and the tenth largest in the world. Set to fly more than 90 million passengers this year, we employ over 10,000 people. It's big-scale stuff and we're still growing. Job Purpose With a big investment into Databricks and a large amount of interesting data, this is the chance for you to come and be part of an exciting transformation in the way we store, analyse and use data in a fast-paced organisation. You will join a team of committed data specialists as a Data Platform Engineer. As our data strategy begins to take shape, we aim to become the most data-driven airline in the world. To achieve this goal, we want to use our data to improve all our business processes, and that means allowing our teams to experiment and innovate as they deem fit. Within data platform engineering, this translates to building the frameworks and tooling that data engineers, analysts and scientists utilise, enabling platform users to focus on value delivery. Job Accountabilities Support the business in harnessing the power of data within easyJet. Work in a fast-paced agile scrum environment with a "release within sprint" mentality. Responsible for building and maintaining deployment frameworks. Build best practice and constantly improve processes and core code bases. Work in partnership with Data Engineers to conform to/define patterns and standards. Work with Business Analysts to deliver against requirements and realise business benefits. Work with a mixed team of onshore and offshore resources to consistent standards and frameworks. 
Requirements of the Role Key Skills Required Technical ability: has a high level of current technical competence in relevant technologies, and is able to independently learn new technologies and techniques as our stack changes. The role requires close coordination to design and implement data pipelines that support machine learning models, analytical dashboards, and experimental frameworks. Clear communication: can communicate effectively in both written and verbal forms with technical and non-technical audiences alike. Complex problem-solving ability: structured, organised, process-driven and outcome-oriented; able to use historical experience to inform future innovations. Passionate about data: enjoys being hands-on and learning about new technologies, particularly in the data field. Technical Skills Required Hands-on software development experience with Python and experience with modern software development and release engineering practices (e.g. TDD, CI/CD). Experience with Apache Spark or any other distributed data programming framework. Comfortable writing efficient SQL and debugging on cloud warehouses like Databricks SQL or Snowflake. Experience with cloud infrastructure like AWS or Azure. Experience with Linux, shell scripting, and containerisation (e.g. Docker). Understanding of data modelling and data cataloguing principles. Understanding of data management principles (security and data privacy) and how they can be applied to data engineering processes/solutions (e.g. access management, data privacy, handling of sensitive data under GDPR). Experience with CI/CD tools, in particular GitHub Actions. Hands-on IaC development experience with Terraform or CloudFormation. Hands-on development experience in an airline, e-commerce or retail industry. Experience in event-driven architecture, ingesting data in real time in a commercial production environment with Spark Streaming, Kafka, DLT or Beam. 
Experience implementing end-to-end monitoring, quality checks, lineage tracking and automated alerts to ensure reliable and trustworthy data across the platform. Experience of building a data transformation framework with dbt. Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture. What you'll get in return Competitive base salary Up to 20% bonus BAYE, SAYE & Performance share schemes Flexible benefits package Excellent staff travel benefits About easyJet At easyJet our aim is to make low-cost travel easy - connecting people to what they value using Europe's best airline network, great value fares, and friendly service. It takes a real team effort to carry over 90 million passengers a year across 35 countries. Whether you're working as part of our front-line operations or in our corporate functions, you'll find people that are positive, inclusive, ready to take on a challenge, and that have your back. We call that our 'Orange Spirit', and we hope you'll share that too. Apply Complete your application on our careers site. We encourage individuality, empower our people to seize the initiative, and never stop learning. We see people first and foremost for their performance and potential and we are committed to building a diverse and inclusive organisation that supports the needs of all. As such we will make reasonable adjustments at interview through to employment for our candidates. Business Area Primary Location
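The monitoring and quality-check requirement this listing describes can be illustrated with a minimal row-level gate that emits alert strings for bad records; the field names and alert format below are invented for the example and are not easyJet's:

```python
def run_quality_checks(rows, required=("flight_id", "departure", "arrival")):
    """Hypothetical quality gate: scan rows (dicts) and return a list
    of alert strings for missing required fields and duplicate keys.
    A real platform would wire these alerts into automated monitoring."""
    alerts, seen = [], set()
    for i, row in enumerate(rows):
        for field in required:
            if not row.get(field):                  # missing or empty field
                alerts.append(f"row {i}: missing {field}")
        key = row.get("flight_id")
        if key in seen:                             # duplicate primary key
            alerts.append(f"row {i}: duplicate flight_id {key}")
        seen.add(key)
    return alerts

rows = [
    {"flight_id": "EZY1", "departure": "LTN", "arrival": "AMS"},
    {"flight_id": "EZY1", "departure": "LTN", "arrival": ""},  # dup + missing
]
alerts = run_quality_checks(rows)
```

On a platform like the one described, the same kind of constraint would typically be declared in the pipeline tooling (e.g. as dbt tests or Delta Live Tables expectations) rather than hand-rolled, but the underlying checks are of this shape.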
Jan 01, 2026
Full time
Senior Palantir Engineer
Kainos Group plc City, London
Senior Palantir Engineer Locations: Homeworker - UK; London; Belfast; Birmingham. Job requisition id: JR\_16664 # Join Kainos and Shape the Future At Kainos, we're problem solvers, innovators, and collaborators, driven by a shared mission to create real impact. Whether we're transforming digital services for millions, delivering cutting-edge Workday solutions, or pushing the boundaries of technology, we do it together. We believe in a people-first culture, where your ideas are valued, your growth is supported, and your contributions truly make a difference. Here, you'll be part of a diverse, ambitious team that celebrates creativity and collaboration. Join us and be part of something bigger. MAIN PURPOSE OF THE ROLE & RESPONSIBILITIES IN THE BUSINESS: As a Senior Palantir Engineer (Senior Associate) at Kainos, you will be responsible for designing and developing data processing and data persistence software components for solutions which handle data at scale. Working in agile teams, Senior Data Engineers provide strong development leadership and take responsibility for significant technical components of data systems. You will work within a multi-skilled agile team to design and develop large-scale data processing software to meet user needs in demanding production environments. YOUR RESPONSIBILITIES WILL INCLUDE: • Working to develop data processing software primarily for deployment in Big Data technologies. The role encompasses the full software lifecycle including design, code, test and defect resolution. • Working with Architects and Lead Engineers to ensure the software supports non-functional needs. • Collaborating with colleagues to resolve implementation challenges and ensure code quality and maintainability remain high. 
Leads by example in code quality. • Working with operations teams to ensure operational readiness. • Advising customers and managers on the estimated effort and technical implications of user stories and user journeys. • Coaching and mentoring team members. MINIMUM (ESSENTIAL) REQUIREMENTS: • Strong software development experience in one of Java, Scala, or Python. • Software development experience with data-processing platforms from vendors such as AWS, Azure, GCP, Databricks. • Experience of developing substantial components for large-scale data processing solutions and deploying them into a production environment. • Proficient in SQL and SQL extensions for analytical queries. • Solid understanding of ETL/ELT data processing pipelines and design patterns. • Aware of key features and pitfalls of distributed data processing frameworks, data stores and data serialisation formats. • Able to write quality, testable code, with experience of automated testing. • Experience with Continuous Integration and Continuous Deployment techniques. DESIRABLE: • Experience of performance tuning. • Experience of data visualisation and complex data transformations. • Experience with streaming and event-processing architectures, including technologies such as Kafka and change-data-capture (CDC) products. • Expertise in continuous improvement and sharing input on data best practice. # Embracing our differences At Kainos, we believe in the power of diversity, equity and inclusion. We are committed to building a team that is as diverse as the world we live in, where everyone is valued, respected, and given an equal chance to thrive. We actively seek out talented people from all backgrounds, regardless of age, race, ethnicity, gender, sexual orientation, religion, disability, or any other characteristic that makes them who they are. We also believe every candidate deserves a level playing field. 
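Change-data-capture, mentioned among the desirable skills in this listing, amounts to folding a stream of insert/update/delete events into current state. A minimal in-memory sketch; the event schema here is assumed for illustration, not any particular CDC product's format:

```python
def apply_cdc(state, events):
    """Fold a sequence of change-data-capture events (dicts with
    'op', 'key', 'value') into a keyed in-memory store. Real CDC
    products (e.g. Debezium) emit richer envelopes, but the core
    state-folding logic looks like this."""
    for ev in events:
        op, key = ev["op"], ev["key"]
        if op in ("insert", "update"):
            state[key] = ev["value"]     # upsert the latest value
        elif op == "delete":
            state.pop(key, None)         # tombstone: drop the row
    return state

events = [
    {"op": "insert", "key": 1, "value": {"name": "alice"}},
    {"op": "update", "key": 1, "value": {"name": "alicia"}},
    {"op": "insert", "key": 2, "value": {"name": "bob"}},
    {"op": "delete", "key": 2, "value": None},
]
store = apply_cdc({}, events)
```

In a production event-driven architecture the same fold would run continuously over a Kafka topic or Spark Streaming source rather than an in-memory list.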
Our friendly talent acquisition team is here to support you every step of the way, so if you require any accommodations or adjustments, we encourage you to reach out. We understand that everyone's journey is different, and by having a private conversation we can ensure that our recruitment process is tailored to your needs. At Kainos, we use technology to solve real problems for our customers, overcome big challenges for businesses, and make people's lives easier. We build strong relationships with our customers and go beyond to change the way they work today and the impact they have tomorrow. Our two specialist practices, Digital Services and Workday, work globally for clients across healthcare, commercial and the public sector to make the world a little bit better, day by day. Our people love the exciting work, the cutting-edge technologies and the benefits we offer. That's why we've been ranked in the Sunday Times Top 100 Best Companies on numerous occasions.
Jan 01, 2026
Full time
(INV) Senior Consultant, Data Engineer, AI&Data, UKI
Ernst & Young Advisory Services Sdn Bhd City, Belfast
Location: Belfast. Other locations: Primary Location Only. Requisition ID:
At EY, we're all in to shape your future with confidence. We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
Data Engineer Senior Consultant - Job Specification
Position Overview
We are seeking a highly skilled Data Engineer Senior Consultant with hands-on experience designing, building, and optimizing data solutions that enable advanced analytics and AI-driven business transformation. This role requires expertise in modern data engineering practices, cloud platforms, and the ability to deliver robust, scalable data pipelines for diverse business domains such as finance, supply chain, energy, and commercial operations.
Your Client Impact
Design, develop, and deploy end-to-end data pipelines for complex business problems, supporting analytics, modernising data infrastructure and AI/ML initiatives. Design and implement data models, ETL/ELT workflows, and data integration solutions across structured and unstructured sources. Collaborate with AI engineers, data scientists, and business analysts to deliver integrated solutions that unlock business value. Ensure data quality, integrity, and governance throughout the data lifecycle. Optimize data storage, retrieval, and processing for performance and scalability on cloud platforms (Azure, AWS, GCP, Databricks, Snowflake). Translate business requirements into technical data engineering solutions, including architecture decisions and technology selection. Contribute to proposals, technical assessments, and internal knowledge sharing. Carry out data preparation, feature engineering, and MLOps activities in collaboration with AI engineers, data scientists, and business analysts.
Essential Qualifications
Degree or equivalent certification in Computer Science, Data Engineering, Information Systems, Mathematics, or a related quantitative field. Proven experience building and maintaining large-scale data pipelines using tools such as Databricks, Azure Data Factory, Snowflake, or similar. Strong programming skills in Python and SQL, with proficiency in data engineering libraries (pandas, PySpark, dbt). Deep understanding of data modelling, ETL/ELT processes, and Lakehouse concepts. Experience with data quality frameworks, data governance, and compliance requirements. Familiarity with version control (Git), CI/CD pipelines, and workflow orchestration tools (Airflow, Prefect).
Soft Skills
Strong analytical and problem-solving mindset with attention to detail. A good team player with effective communication and storytelling with data and insights. Consulting skills, including development of presentation decks and client-facing documentation.
Preferred Criteria
Experience with real-time data processing (Kafka, Kinesis, Azure Event Hub). Knowledge of big data storage solutions (Delta Lake, Parquet, Avro). Experience with data visualization tools (Power BI, Tableau, Looker). Understanding of AI/ML concepts and collaboration with AI teams.
Preferred Qualifications
Certifications such as: AWS Certified Data Analytics - Specialty; SnowPro Advanced: Data Engineer.
EY - Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions.
Fueled by sector insights, a globally connected, multidisciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
Jan 01, 2026
Full time
Senior Manager - Databricks Engineer
Kubrick
Number of Positions: 1. City: Mansion House. Province: Greater London. Postal Code: EC4. Country: United Kingdom. Job Type: NA.
Job Description / Summary
Who we are: Kubrick is a next-generation technology consultancy, designed to accelerate delivery and build amazing teams. We deliver services across data, AI, and cloud, and we're building the next generation of tech leaders. Since 2017, we have established a market-leading position helping our clients build their data and technology teams and deliver enduring solutions.
About Kubrick Advanced: Kubrick Advanced is the delivery arm of Kubrick. It is a fast-growing team working across an array of practices tasked with delivering client projects and leading teams of Kubrick consultants. We are at the forefront of Kubrick's growth and instrumental in developing services and solutions for our clients and technology partners. Kubrick Advanced offers opportunities for technical and commercial exposure across a wide range of industries and technical use cases. It is a tight-knit team of motivated technology professionals where ongoing learning and development are central to our ethos and internal culture.
The Role
We are seeking a highly skilled and experienced Senior Machine Learning Engineer to join our growing community specialising in Databricks. The successful applicant will have a strong background in training models to support a range of problem domains and be well versed in delivering and maintaining models in a production environment by applying MLOps best practice. The role requires familiarity with the relevant capabilities of Databricks and at least one of the major cloud service providers (AWS, Azure, or GCP). Advanced proficiency in Python and SQL is essential, and an academic background in ML theory or Maths is preferred.
As a Databricks Engineer working in our Kubrick Advanced team, you will play a key role in delivering high-quality data engineering projects to our clients, often in collaboration with Databricks' professional services team. In addition to hands-on technical work, you will often play a leadership role in Kubrick's delivery squads, providing technical guidance and ensuring best practices are followed throughout the project lifecycle. You will work closely with clients and internal stakeholders to translate business requirements into robust technical solutions, ensuring projects are delivered on time, within scope, and aligned with client expectations. You will also support the growth and capability development of Kubrick Advanced, particularly with respect to our Databricks delivery capabilities. This will include assuming line management and/or technical leadership roles within the team.
Key Responsibilities
Lead technical delivery within Kubrick's squads deployed on client project engagements, ensuring that Kubrick is known for the quality of the technical solutions it provides. Work with Kubrick and client staff of other disciplines to understand and assess requirements, design solutions and inform delivery planning. Seek, build, and maintain effective client relationships, contributing to Kubrick's commercial priorities and supporting our partnering approach to working with clients. Line-manage developers within the team, supporting their professional and personal development. Promote a culture of engineering excellence within Kubrick Advanced through curiosity, collaboration and contributions to the internal knowledge base. Participate in self-directed or group learning and upskilling (gaining Kubrick-funded certifications) to ensure your technical skills stay up to date and industry relevant.
Required Skills & Experience
Strong, recent experience in Machine Learning and/or Data Science, including deployment and maintenance of production-grade ML models.
Demonstrable practical experience of training and deploying ML models on Databricks (ideally having attained a Databricks ML Engineer certification). Ability to "pick the right tool for the job" when selecting the type of learning model and framework to apply to a given problem statement. Awareness of the cost implications of training, fine-tuning, testing and serving ML models. AI/ML subject matter expertise coupled with strong communication skills. Degree or postdoctoral qualification in AI/ML theory, Maths, statistics or a related subject. Ability to influence key technical and business decision makers. Experience in both delivery/technical leadership and line management. Experience mentoring junior technical personnel.
Location: Central London office space by Mansion House/Cannon Street/Monument (EC4V).
Working Pattern: Hybrid - an expectation of 2-3 days per week in our office and/or working at client locations.
Diversity Statement
At Kubrick, we not only strive to bridge the skills gap in data and next-generation technology, but we are also committed to playing a key role in improving diversity in the tech industry. To that effect, we welcome candidates from all backgrounds and particularly encourage applications from groups currently underrepresented in the industry, including women, people from black and ethnic minority backgrounds, LGBTQ+ people, people with disability and those who are neurodivergent. We know that potential applicants are sometimes put off if they don't meet 100% of the requirements. We think individual experience, skills and passion make all the difference, so if you meet a good proportion of the criteria, we'd love to hear from you. We are committed to ensuring that all candidates have an equally positive experience, and equal chances for success regardless of any personal characteristics. Please speak to us if we can support you with any adjustments to our recruitment process.
Jan 01, 2026
Full time
Principal AI Engineer
Hypercube
Hypercube Consulting is a rapidly growing data and AI consultancy dedicated to transforming the energy sector through cutting-edge technology. Specialising in advanced AI systems, including Agentic AI workflows and large language models (LLMs), we help clients unlock profound value from their data assets. Join our expert team in shaping the future of AI-driven energy solutions. To get a better understanding of how we think and some of the ways we work, check out our founder's blog here:
We are seeking a Principal AI Engineer with expertise in Agentic AI systems and Large Language Models to lead the design, development, and deployment of advanced AI solutions. You will collaborate closely with data engineering, analytics, and cloud teams to deliver transformative AI capabilities for our clients. As an influential hire in a growing organisation, your impact will be substantial: shaping technical strategy, cultivating our AI-focused culture, and setting delivery standards.
You will: Engage clients to understand their challenges, designing innovative Agentic AI and LLM-driven solutions. Architect and implement robust AI systems, including end-to-end ML/LLM pipelines and autonomous agentic workflows. Promote and establish best practices in LLMOps, AI lifecycle management, and cloud-native AI infrastructure. Mentor and develop team expertise, positioning Hypercube as a leader in AI engineering excellence.
Key responsibilities
Technical leadership & strategy
Act as the AI and LLM subject matter expert for internal teams and client engagements. Drive the strategic design and implementation of sophisticated AI solutions leveraging cutting-edge Agentic AI architectures and LLM frameworks. Design, build, and maintain scalable AI and LLM-based pipelines using AWS or Azure services (e.g., SageMaker, Azure ML, Databricks, OpenAI integrations).
Oversee AI model lifecycles from data preprocessing and prompt engineering through to deployment and continuous monitoring in production environments. Coordinate with cross-functional teams (data engineers, data scientists, DevOps, stakeholders) to define and deliver client-focused AI solutions. Communicate complex AI and LLM methodologies clearly to both technical peers and non-technical stakeholders.
Thought leadership & evangelism
Advocate for best practices in LLMOps and Agentic AI (prompt engineering, evaluation, agent architectures, CI/CD). Engage with the AI community through blogs, speaking engagements, and open-source contributions. Support business development through demos, proposals, and technical pre-sales activities. Foster strong client relationships, advising on AI and LLM strategic directions. Mentor colleagues, enhancing the team's collective capabilities.
Technical skills
Please apply even if you meet only some criteria - we value potential alongside experience.
Core skills
Agentic AI & LLMs: Hands-on experience building, deploying, and managing large language models and agent-based AI workflows. Cloud AI (AWS/Azure): Demonstrated experience delivering AI solutions in production cloud environments. Advanced Python: Expertise in developing efficient, production-grade AI/ML code. LLMOps & AI Model Management: Experience with tools like MLflow, LangChain, Hugging Face, Kubeflow, or similar platforms. Data Processing: Proficiency with Databricks/Spark for large-scale AI data processing. SQL: Strong capabilities in data querying and preparation. Data Architectures: Understanding of modern data infrastructure (lakehouses, data lakes, vector databases).
Additional (nice-to-have) skills
Infrastructure as Code: Terraform or similar. Streaming: Kafka, Kinesis, Event Hubs. AWS or Azure certifications. Consulting or energy-sector experience. Public thought leadership (blogs, conferences, open source).
Effective stakeholder engagement and business requirements translation. Integration with complex external or hybrid cloud systems. Excellent communication across diverse technical audiences.
What's in it for you?
High Impact: Drive innovation in energy-sector AI solutions, directly influencing client outcomes. Career Growth: Benefit from senior mentorship, dedicated training budgets, and clear growth pathways. Flexible Environment: Open to various flexible working arrangements to suit your lifestyle. Start-up Culture: Contribute significantly to shaping our culture, processes, and technologies. Personal Branding: Encouraged and supported in building your public professional profile.
Benefits include: enhanced pension; performance-related bonus; enhanced maternity/paternity; cycle-to-work scheme; events and community participation; private health insurance; health cash plan; EV leasing scheme; training and events budget.
Diversity & Inclusion
Hypercube is committed to creating an inclusive environment reflective of society. We actively encourage applications from all backgrounds and experiences.
Ready to Apply?
If this role excites you, please apply via our careers page or reach out directly - even if you meet some but not all of the criteria. We're excited to explore how your expertise can help transform data and AI in the energy sector! N.B. We are currently not able to sponsor visas.
Apply to work at Hypercube
We want to ensure that you remain in control of your privacy and personal data. Part of this is making sure you understand your legal rights. To find out more, see our Privacy Policy, which can be found in our footer.
Jan 01, 2026
Full time
Hypercube Consulting is a rapidly growing data and AI consultancy dedicated to transforming the energy sector through cutting edge technology. Specialising in advanced AI systems, including Agentic AI workflows and large language models (LLMs), we help clients unlock profound value from their data assets. Join our expert team in shaping the future of AI driven energy solutions. To get a better understanding of how we think and some of the ways we work, check out our founder's blog here: We are seeking a Principal AI Engineer with expertise in Agentic AI systems and Large Language Models to lead the design, development, and deployment of advanced AI solutions. You will collaborate closely with data engineering, analytics, and cloud teams to deliver transformative AI capabilities for our clients. As an influential hire in a growing organisation, your impact will be substantial, shaping technical strategy, cultivating our AI focused culture, and setting delivery standards. You will: Engage clients to understand their challenges, designing innovative Agentic AI and LLM driven solutions. Architect and implement robust AI systems, including end to end ML/LLM pipelines and autonomous agentic workflows. Promote and establish best practices in LLMOps, AI lifecycle management, and cloud native AI infrastructure. Mentor and develop team expertise, positioning Hypercube as a leader in AI engineering excellence. Key responsibilities Technical leadership & strategy Act as the AI and LLM subject matter expert for internal teams and client engagements. Drive the strategic design and implementation of sophisticated AI solutions leveraging cutting edge Agentic AI architectures and LLM frameworks. Design, build, and maintain scalable AI and LLM based pipelines using AWS or Azure services (e.g., SageMaker, Azure ML, Databricks, OpenAI integrations). 
Oversee AI model lifecycles from data preprocessing and prompt engineering through to deployment and continuous monitoring in production environments. Coordinate with cross-functional teams (data engineers, data scientists, DevOps, stakeholders) to define and deliver client-focused AI solutions. Communicate complex AI and LLM methodologies clearly to both technical peers and non-technical stakeholders. Thought leadership & evangelism Advocate for best practices in LLMOps and Agentic AI (prompt engineering, evaluation, agent architectures, CI/CD). Engage with the AI community through blogs, speaking engagements, and open-source contributions. Support business development through demos, proposals, and technical pre-sales activities. Foster strong client relationships, advising on AI and LLM strategic directions. Mentor colleagues, enhancing the team's collective capabilities. Technical skills Please apply even if you meet only some criteria - we value potential alongside experience. Core skills Agentic AI & LLMs: Hands-on experience building, deploying, and managing large language models and agent-based AI workflows. Cloud AI (AWS/Azure): Demonstrated experience delivering AI solutions in production cloud environments. Advanced Python: Expertise in developing efficient, production-grade AI/ML code. LLMOps & AI Model Management: Experience with tools like MLflow, LangChain, Hugging Face, Kubeflow, or similar platforms. Data Processing: Proficient with Databricks/Spark for large-scale AI data processing. SQL: Strong capabilities in data querying and preparation. Data Architectures: Understanding of modern data infrastructure (lakehouses, data lakes, vector databases). Additional (nice-to-have) skills Infrastructure as Code: Terraform or similar. Streaming: Kafka, Kinesis, Event Hubs. AWS or Azure certifications. Consulting or Energy sector experience. Public Thought Leadership (blogs, conferences, open source).
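The autonomous agentic workflows described above boil down to a plan-act-observe loop: the model proposes an action, the system executes it as a tool call, and the observation is fed back until the model produces a final answer. The sketch below is a deliberately minimal illustration in Python; `call_llm` is a hard-coded stub standing in for a real model endpoint, and the single `calculator` tool and the `ACTION:`/`FINAL:` protocol are invented for the example.

```python
def call_llm(prompt: str) -> str:
    # Stub standing in for a real LLM API call (e.g. an OpenAI or Azure endpoint).
    # It "decides" to use the calculator tool once, then finishes.
    if "Observation" not in prompt:
        return "ACTION: calculator: 21 * 2"
    return "FINAL: 42"

def calculator(expression: str) -> str:
    # Toy tool: evaluate a simple two-operand arithmetic expression.
    a, op, b = expression.split()
    return str(int(a) * int(b)) if op == "*" else str(int(a) + int(b))

def run_agent(task: str, max_steps: int = 5) -> str:
    """Minimal plan-act-observe loop: the model picks a tool, we run it,
    feed the observation back into the prompt, and stop on FINAL."""
    prompt = f"Task: {task}"
    for _ in range(max_steps):
        reply = call_llm(prompt)
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL:").strip()
        if reply.startswith("ACTION: calculator:"):
            observation = calculator(reply.removeprefix("ACTION: calculator:").strip())
            prompt += f"\nObservation: {observation}"
    return "max steps reached"

print(run_agent("What is 21 * 2?"))  # prints 42
```

Frameworks such as LangChain wrap the same loop with real model calls, tool schemas, and memory; the control flow, however, stays recognisably this shape.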
Effective stakeholder engagement and business requirements translation. Integration with complex external or hybrid cloud systems. Excellent communication across diverse technical audiences. What's in it for you? High Impact: Drive innovation in energy sector AI solutions, directly influencing client outcomes. Career Growth: Benefit from senior mentorship, dedicated training budgets, and clear growth pathways. Flexible Environment: Open to various flexible working arrangements to suit your lifestyle. Start-up Culture: Contribute significantly to shaping our culture, processes, and technologies. Personal Branding: Encouraged and supported in building your public professional profile. Benefits include an enhanced pension, a performance-related bonus, enhanced maternity/paternity leave, a cycle-to-work scheme, events and community participation, private health insurance, a health cash plan, an EV leasing scheme, and a training and events budget. Diversity & Inclusion Hypercube is committed to creating an inclusive environment reflective of society. We actively encourage applications from all backgrounds and experiences. Ready to Apply? If this role excites you, please apply via our careers page or reach out directly - even if you meet some but not all criteria. We're excited to explore how your expertise can help transform data and AI in the energy sector! N.B. We are currently not able to sponsor visas. Apply to work at Hypercube We want to ensure that you remain in control of your privacy and personal data. Part of this is making sure you understand your legal rights. To find out more about your rights, see our Privacy Policy, which can be found in our footer.
Senior Software Engineer, Transaction Tracing
Paradigm City, London
Overview The engineering team at Chainalysis is inspired by solving the hardest technical challenges and creating products that build trust in cryptocurrencies. We're a global organization with teams in Denmark, the UK, Canada, Israel, and the USA who thrive on the challenging work we do and doing it with other exceptionally talented teammates. Our industry changes every day, and our job is to create user-facing products supported by a flexible and scalable data platform, allowing us to adapt to those rapid changes and bring value to our customers. As a Software Engineer within the Investigations group, you'll join a remote team of other engineers and share the responsibility of providing the right tools for the right data in order to build best-in-class experiences for our customers. Thus, you'll find the opportunity to lead, build and maintain customer-facing web frontend features as well as the services and data pipelines backing them. Part of this responsibility is also to learn and understand the domain as well as the underlying data platform in order to successfully partner with product managers and designers to deliver impactful solutions to our customers. Responsibilities Become part of an established team adept at collaboration and task allocation - a team which can focus on generating direct impact. Make key contributions to high-availability solutions in close collaboration with your team through your ability and willingness to take ownership and assist where needed. Your contributions will be thorough and result in sustaining a low maintenance overhead. Gain customer understanding as well as valuable insights into our data platform through your curiosity around cryptocurrencies/decentralized finance. Demonstrate your passion for learning by seeking out knowledge and collaborating with people close to our products and solutions. Display your bias to ship and iterate alongside product management and design partners.
Qualifications Expertise in writing and maintaining Java/Spring-based backend services. Experience in the full lifecycle of service management, from initial development to continuous operation. A deep understanding of the critical aspects of service scalability and high availability, as well as of monitoring and maintaining deployed features and services to ensure optimal performance and reliability. Database management systems experience, including replication, high availability, performance tuning, and complex query optimization. A background in frontend development. Strong knowledge of container orchestration using Kubernetes. Nice to have An understanding of event streaming platforms. Hands-on experience with infrastructure as code. A track record in mentoring other engineers, leading cross-team projects without authority, and driving design and technology decisions. Technologies we use (nice to have experience):
• Monitoring and alerting: Datadog, Falcon LogScale (formerly Humio)
• Database management systems: PostgreSQL, ClickHouse
• Deployment tools: Flux, Helm, Kustomize
• Frontend frameworks: React, Angular
• Infrastructure as code: Terraform, Terragrunt
• Cloud provider: AWS
• Event streaming platform: Kafka
• Big data processing: Databricks
About Chainalysis Blockchain technology is powering a growing wave of innovation. Businesses and governments around the world are using blockchains to make banking more efficient, connect with their customers, and investigate criminal cases. As adoption of blockchain technology grows, more and more organizations seek access to all this ecosystem has to offer. That's where Chainalysis comes in. We provide complete knowledge of what's happening on blockchains through our data, services, and solutions. With Chainalysis, organizations can navigate blockchains safely and with confidence. You belong here. At Chainalysis, we believe that diversity of experience and thought makes us stronger.
With both customers and employees around the world, we are committed to ensuring our team reflects the unique communities around us. We make sure we keep learning by committing to continually revisiting and reevaluating our diversity culture. We encourage applicants of any race, ethnicity, gender/gender expression, age, spirituality, ability, experience and more. If you need any accommodations to make our interview process more accessible to you due to a disability, don't hesitate to let us know. You can learn more about our process and rights. We can't wait to meet you.
Jan 01, 2026
Full time
Senior Data Platform Architect (Azure/PySpark/Databricks)
PEXA Group Limited Leeds, Yorkshire
A leading tech company in the property sector seeks a Principal Data Engineer to define and scale the technical strategy of their data platform. The role involves overseeing data systems, ensuring quality and compliance with regulations. Ideal candidates will have cloud experience in AWS and Azure, exceptional skills in PySpark, and technical leadership. Offering a salary between £80,000 and £100,000 a year. This is an opportunity to mentor engineers and influence data strategy across the organization.
Jan 01, 2026
Full time
Senior Data Platform Engineer
easyJet Airline Company PLC City, London
Overview Location: Luton/Hybrid When it comes to innovation and achievement there are few organisations with a better track record. Join us and you'll be able to play a big part in the success of our highly successful, fast-paced business that opens up Europe so that people can exercise their get-up-and-go. With almost 300 aircraft flying over 1,000 routes to more than 32 countries, we're the UK's largest airline, the fourth largest in Europe and the tenth largest in the world. Set to fly more than 90 million passengers this year, we employ over 10,000 people. It's big-scale stuff, and we're still growing. Job Purpose With a big investment into Databricks, and with a large amount of interesting data, this is the chance for you to come and be part of an exciting transformation in the way we store, analyse and use data in a fast-paced organisation. You will join as a Senior Platform Data Engineer providing technical leadership to the Data Engineering team. You will work closely with our Data Scientists and business stakeholders to ensure value is delivered through our solutions. Responsibilities Develop robust, scalable data pipelines to serve the easyJet analyst and data science community. Bring highly competent hands-on experience with relevant Data Engineering technologies, such as Databricks, Spark, the Spark API, Python, SQL Server and Scala. Work with data scientists, machine learning engineers and DevOps engineers to develop and deploy machine learning models and algorithms aimed at addressing specific business challenges and opportunities. Coach and mentor the team (including contractors) to improve development standards. Work with Business Analysts to deliver against requirements and realise business benefits. Build a documentation library and data catalogue for developed code/products. Provide oversight of project deliverables and code quality going into each release.
Qualifications Key Skills Required Technical Ability: has a high level of current technical competence in relevant technologies, and is able to independently learn new technologies and techniques as our stack changes. Clear communication: can communicate effectively in both written and verbal forms with technical and non-technical audiences alike. Complex problem-solving ability: structured, organised, process-driven and outcome-oriented; able to use historical experience to help with future innovations. Passionate about data: enjoys being hands-on and learning about new technologies, particularly in the data field. Self-directed and independent: able to take general guidance and the overarching data strategy and identify practical steps to take. Technical Skills Required Significant experience designing and building data solutions on a cloud-based, big data distributed system. Hands-on software development experience with Python, and experience with modern software development and release engineering practices (e.g. TDD, CI/CD) and software deployment automation with GitHub Actions or Azure DevOps. Experience in testing automation of data transformation pipelines, using frameworks like Pytest or dbt unit tests. Comfortable writing and debugging efficient SQL. Data warehouse operations and tuning experience in schema evolution, indexing, and partitioning. Hands-on IaC development experience with Terraform or CloudFormation. Understanding of the ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or any other distributed data programming framework (e.g. Flink, Hadoop, Beam). Familiarity with Databricks as a data and AI platform, or with the Lakehouse architecture. Experience with data quality and/or data lineage frameworks like Great Expectations, dbt data quality, OpenLineage or Marquez, and with data drift detection and alerting.
Understanding of Data Management principles (security and data privacy) and how they can be applied to Data Engineering processes/solutions (e.g. access management, data privacy, handling of sensitive data under the GDPR). Experience in event-driven architecture, ingesting data in real time in a commercial production environment with Spark Streaming, Kafka, DLT or Beam. Understanding of the challenges faced in the design and development of a streaming data pipeline and the different options for processing unbounded data (pub/sub, message queues, event streaming, etc.). Understanding of the most commonly used Data Science and Machine Learning models, libraries and frameworks. Knowledge of the development lifecycle of analytical solutions using visualisation tools (e.g. Tableau, Power BI, ThoughtSpot). Hands-on development experience in an airline, e-commerce or retail industry. Experience working within the AWS cloud ecosystem. Experience of building a data transformation framework with dbt. Location & Hours of Work: We operate a hybrid working policy of 40% of the month spent with colleagues. Application Process Interested candidates should apply through our careers portal. Reasonable Adjustments At easyJet, we are dedicated to fostering an inclusive workplace that reflects the diverse customers we serve across Europe. We welcome candidates from all backgrounds. If you require specific adjustments or support during the application or recruitment process, such as extra time for assessments or accessible interview locations, please contact us at . We are committed to providing reasonable adjustments throughout the recruitment process to ensure accessibility and accommodation.
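Test automation of data transformation pipelines, as listed in the skills above, typically pairs each transformation with small fixture-based tests. A minimal sketch in the Pytest style follows; the `clean_bookings` function and its column names are invented for illustration and are not easyJet's actual schema.

```python
def clean_bookings(rows):
    """Toy transformation: drop rows with a missing fare and normalise
    route codes. Columns are illustrative, not a real schema."""
    return [
        {**r, "route": r["route"].strip().upper()}
        for r in rows
        if r.get("fare") is not None
    ]

# Pytest discovers functions named test_*; bare asserts give readable failures.
def test_drops_rows_with_missing_fare_and_normalises_routes():
    rows = [
        {"route": " ltn-ams ", "fare": 49.99},
        {"route": "LTN-CDG", "fare": None},  # should be dropped
    ]
    assert clean_bookings(rows) == [{"route": "LTN-AMS", "fare": 49.99}]

test_drops_rows_with_missing_fare_and_normalises_routes()  # or run via `pytest`
```

The same pattern scales up to Spark/Databricks pipelines by asserting on small local DataFrames, and dbt unit tests apply the same fixture-in, expected-rows-out idea declaratively in YAML.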
Jan 01, 2026
Full time
Senior Software Engineer - Energy & Resources Analytics Platform
Baringa Partners LLP City, London
Senior Software Engineer - Energy & Resources Analytics Platform London, United Kingdom We set out to build the world's most trusted consulting firm - creating lasting impact for clients and pioneering a positive, people-first way of working. We work with everyone from FTSE 100 names to bright new start-ups, in every sector. You'll find us collaborating shoulder-to-shoulder with our clients, from the big picture right down to the detail: helping them define their strategy, deliver complex change, spot the right commercial opportunities, manage risk, or bring their purpose and sustainability goals to life. Our clients love how we get to know what makes their organisations tick - slotting seamlessly into their teams and being proudly geeky about solving their challenges. We have hubs in Europe, the US, Asia and Australia, and we work all around the world - from a wind farm in Wyoming to a boardroom in Berlin. Our Market, Advisory and Analytics practice are looking for experienced Senior Platform Software Engineers to join the team. Baringa are world leaders in modelling energy markets and in using the insights these models provide to drive change in a decarbonising energy industry. We deal with highly varied modelling, data, and processing - everything from kilobytes of academic papers to terabytes of hourly scenario projections for 50+ years. Our staff come from diverse backgrounds, are based across multiple geographies, and utilise a variety of technologies, tools and analytical modelling approaches. They need rapid access to data, analytics outputs, and processing through GUIs, APIs and other systems, whilst being given the confidence that everything is compliant, licenced and well-governed. Our scale and model complexity have now reached a point where, to continue to achieve our business vision, we are building a dedicated global Platform team.
We are looking for experienced software engineers to join as core members of this team to help architect, implement and support the tools that will be fundamental to Baringa's ongoing growth and success. This will be a high-calibre team, where curiosity and a thirst to understand the problem space is not just encouraged, but prioritised. There will be a range of engineering challenges to solve at all levels, requiring the flexibility to adopt the right technology for a given problem space. Our next generation platform will be core to driving significant improvements to the lives and capabilities of our Baringa colleagues in the energy modelling space. Successful candidates will be given responsibility and freedom from day one, trusted to challenge and be challenged in turn, in an environment that rewards creativity and entrepreneurship as we build the foundations for Baringa's future. Come and join us to be part of the energy transition, the defining challenge of our lifetimes, where your technical skills and experience can have real impact. What you will be doing We are looking for an experienced software engineer to be a core member of the new Platform team, working closely with the Engineering Lead. You will be working within our Energy and Resources group to: Work closely with the Engineering Lead to engage tool developers, energy system modellers, client-facing energy experts and users across the business to build a platform and supporting toolchain that fulfils the needs of your Baringa colleagues. Be a driving force in the development, automated testing and deployment of the new platform, utilising best practices to ensure quality, reliability and monitoring are built in from inception. Work with the team to build a DevOps pipeline with robust CI and CD practices, with a focus on evolving our processes to improve our team's engineering experience.
Be an active part of the team's agile development approach, from refinement through to demonstration and feedback, helping drive the continuous improvement of team processes as we grow and mature. Your skills and experience We're looking for people to join the team who will be committed to designing and building high-quality and fit-for-purpose systems, enabling our staff to maximise the creation, utilisation and management of the various models, tools and data frameworks that enable market-leading insights for our clients. You are passionate about building the 'right' solution to problems, and about understanding the 'why' behind what you're building to support Baringa's work in the energy sector. You have practical experience as a senior engineer in highly motivated engineering teams, collaborating closely with colleagues and taking pride in what you create. You are great at problem solving and see all technologies/engineering as a means to achieve this. You have advanced working knowledge of a general programming language (e.g. Scala, Python, Java, C#) and understand both domain modelling and application programming. You have working knowledge of data management platforms (SQL, NoSQL, Spark/Databricks, etc.). You have working knowledge of modern software engineering tools (Git, CI/CD pipelines), cloud technologies (Azure, AWS) and IaC (e.g. Terraform, Pulumi). You have worked with different frameworks throughout technology stacks (e.g. React/Angular/Vue/Blazor frontends, FastAPI/Spring Boot/Django/.NET backends). You have experience building and working across different architectural approaches, and are confident in justifying your technology and architectural choices. You are passionate about communicating complex concepts succinctly to both technical and non-technical colleagues and clients to reach a common understanding. You have experience working with agile methodologies (e.g.
Scrum/Kanban), with an understanding of the key principles that underpin their effective use. We've seen the research that says that some candidates will not apply to a role if they don't meet every requirement, so don't let this put you off. If you think you are a good overall match please do get in touch - we look carefully at all applications and you may well be our ideal candidate. What a career at Baringa will give you Putting People First. Baringa is a People First company and wellbeing is at the forefront of our culture. We recognise the importance of work-life balance and flexible working and provide our staff with amazing benefits. Some of these benefits include: Generous Annual Leave Policy: We recognise everyone needs a well-deserved break. We provide our employees with 5 weeks of annual leave, fully available at the start of each year. In addition to this, we have introduced our 5-Year Recharge benefit which allows all employees an additional 2 weeks of paid leave after 5 years' continuous service. Flexible Working: We know that the 'ideal' work-life balance will vary from person to person and change at different stages of our working lives. To accommodate this, we have implemented a hybrid working policy and introduced more flexibility around taking unpaid leave. Corporate Responsibility Days: Our world is important to us, so all our employees get 3 days every year to help social and environmental causes and increase our impact on the communities that mean the most to us. Wellbeing Fund: We want to encourage all employees to take charge and prioritise their own wellbeing. We've introduced our annual People Fund to support this by offering every individual a fund to support and manage their wellbeing through an activity of their choice. Profit Share Scheme: All employees participate in the Baringa Group Profit Share Scheme, so everyone has a stake in the company's success. Diversity and Inclusion We are proud to be an Equal Opportunity Employer.
We believe that creating an environment where everyone feels a sense of belonging is central to our culture and that diversity is paramount to driving creativity, innovation, and value for our clients and for our people. You can be a part of our 'Great Place to Work' - with our commitment to women and well-being in the workplace for all. Click here to see some of our recent awards and how we've achieved this. Using business as a force for good. We maintain high standards of environmental performance and transparency, which can be seen through our commitment to Net Zero with our SBTI-verified Scope 1, 2 and 3 emissions reduction targets and our support of the Better Business Act. We report our progress publicly and ensure that we are also externally assessed and scored through organisations like CDP and EcoVadis - helping us to continually identify where we can improve. We have a long legacy of supporting the communities in which we work, and offer a variety of ways to contribute, by putting people first and creating impact that lasts. Our Corporate Social Responsibility (CSR) agenda is about giving back to the communities in which we live and work by sharing our skills, talent and time. In essence, we aim to empower and encourage everyone in the firm to contribute to the things we care about, and support registered charities and organisations with a clear social or environmental purpose to increase the positive impact they can have. All applications will receive consideration for employment without regard to race, ethnicity, religion, gender, gender identity or expression, sexual orientation, nationality, disability, age, faith or social background. We do not filter applications by university background and encourage those who have taken alternative educational and career paths to apply. We would like to actively encourage applications from those who identify with less represented and minority groups. We operate an inclusive recruitment process . 
click apply for full job details
Jan 01, 2026
Full time
Senior Software Engineer - Energy & Resources Analytics Platform London, United Kingdom We set out to build the world's most trusted consulting firm - creating lasting impact for clients and pioneering a positive, people-first way of working. We work with everyone from FTSE 100 names to bright new start-ups, in every sector. You'll find us collaborating shoulder-to-shoulder with our clients, from the big picture right down to the detail: helping them define their strategy, deliver complex change, spot the right commercial opportunities, manage risk, or bring their purpose and sustainability goals to life. Our clients love how we get to know what makes their organisations tick - slotting seamlessly into their teams and being proudly geeky about solving their challenges. We have hubs in Europe, the US, Asia and Australia, and we work all around the world - from a wind farm in Wyoming to a boardroom in Berlin. Our Market, Advisory and Analytics practice is looking for experienced Senior Platform Software Engineers to join the team. Baringa are world leaders in modelling energy markets and using the insights these models provide to drive change in a decarbonising energy industry. We deal with highly varied modelling, data, and processing - everything from kilobytes of academic papers to terabytes of hourly scenario projections for 50+ years. Our staff come from diverse backgrounds, based across multiple geographies, and utilise a variety of technologies, tools and analytical modelling approaches. They need rapid access to data, analytics outputs, and processing through GUIs, APIs and other systems, whilst being given the confidence that everything is compliant, licensed and well-governed. Our scale and model complexity have now reached a point where, to continue to achieve our business vision, we are building a dedicated global Platform team. 
We are looking for experienced software engineers to join as core members of this team to help architect, implement and support the tools that will be fundamental to Baringa's ongoing growth and success. This will be a high calibre team, where curiosity and a thirst to understand the problem space is not just encouraged, but prioritised. There will be a range of engineering challenges to solve at all levels, requiring the flexibility to adopt the right technology for a given problem space. Our next generation platform will be core to driving significant improvements to the lives and capabilities of our Baringa colleagues in the energy modelling space. Successful candidates will be given responsibility and freedom from day one, trusted to challenge and be challenged in turn, in an environment that rewards creativity and entrepreneurship as we build the foundations for Baringa's future. Come and join us to be part of the energy transition, the defining challenge of our lifetimes, where your technical skills and experience can have real impact. What you will be doing We are looking for an experienced software engineer to be a core member of the new Platform team, working closely with the Engineering Lead. You will be working within our Energy and Resources group to: Work closely with the Engineering Lead to engage tool developers, energy system modellers, client facing energy experts and users across the business to build a platform and supporting toolchain that fulfils the needs of your Baringa colleagues. Be a driving force in the development, automated testing and deployment of the new platform, utilising best-practices to ensure quality, reliability and monitoring is built-in from inception. Work with the team to build a DevOps pipeline with robust CI and CD practices, with a focus on evolving our processes to improve our team's engineering experience. 
Be an active part of the team's agile development approach, from refinement through to demonstration and feedback, helping drive the continuous improvement of team processes as we grow and mature. Your skills and experience We're looking for people to join the team who will be committed to designing and building high quality and fit-for-purpose systems, enabling our staff to maximise the creation, utilisation and management of the various models, tools and data frameworks that enable market-leading insights for our clients. You are passionate about building the 'right' solution to problems, and understanding the 'why' behind what you're building to support Baringa's work in the energy sector. You have practical experience as a senior engineer in highly motivated engineering team(s), collaborating closely with colleagues and taking pride in what you create. You are great at problem solving and see all technologies/engineering as a means to achieve this. You have advanced working knowledge of a general programming language (e.g. Scala, Python, Java, C#) and understand both domain modelling and application programming. You have working knowledge of data management platforms (SQL, NoSQL, Spark/Databricks etc.). You have working knowledge of modern software engineering tools (Git, CI/CD pipelines), cloud technologies (Azure, AWS) and IaC (e.g. Terraform, Pulumi). You have worked with different frameworks throughout technology stacks (e.g. React/Angular/Vue/Blazor frontends, FastAPI/Spring Boot/Django/.NET backends). You have experience building and working across different architectural approaches, and are confident in justifying your technology and architectural choices. You are passionate about communicating complex concepts succinctly to both technical and non-technical colleagues and clients to reach a common understanding. You have experience working with agile methodologies (e.g. 
Scrum/Kanban), with an understanding of the key principles that underpin their effective use. We've seen the research that says some candidates will not apply to a role if they don't meet every requirement, so don't let this put you off. If you think you are a good overall match, please do get in touch - we look carefully at all applications and you may well be our ideal candidate. What a career at Baringa will give you Putting People First. Baringa is a People First company and wellbeing is at the forefront of our culture. We recognise the importance of work-life balance and flexible working and provide our staff with amazing benefits. Some of these benefits include: Generous Annual Leave Policy: We recognise everyone needs a well-deserved break. We provide our employees with 5 weeks of annual leave, fully available at the start of each year. In addition to this, we have introduced our 5-Year Recharge benefit, which allows all employees an additional 2 weeks of paid leave after 5 years' continuous service. Flexible Working: We know that the 'ideal' work-life balance will vary from person to person and change at different stages of our working lives. To accommodate this, we have implemented a hybrid working policy and introduced more flexibility around taking unpaid leave. Corporate Responsibility Days: Our world is important to us, so all our employees get 3 days every year to help social and environmental causes and increase our impact on the communities that mean the most to us. Wellbeing Fund: We want to encourage all employees to take charge and prioritise their own wellbeing. We've introduced our annual People Fund to support this by offering every individual a fund to support and manage their wellbeing through an activity of their choice. Profit Share Scheme: All employees participate in the Baringa Group Profit Share Scheme so everyone has a stake in the company's success. Diversity and Inclusion We are proud to be an Equal Opportunity Employer. 
We believe that creating an environment where everyone feels a sense of belonging is central to our culture and that diversity is paramount to driving creativity, innovation, and value for our clients and for our people. You can be a part of our 'Great Place to Work' - with our commitment to women and well-being in the workplace for all. Click here to see some of our recent awards and how we've achieved this. Using business as a force for good. We maintain high standards of environmental performance and transparency, which can be seen through our commitment to Net Zero with our SBTi-verified Scope 1, 2 and 3 emissions reduction targets and our support of the Better Business Act. We report our progress publicly and ensure that we are also externally assessed and scored through organisations like CDP and EcoVadis - helping us to continually identify where we can improve. We have a long legacy of supporting the communities in which we work, and offer a variety of ways to contribute, by putting people first and creating impact that lasts. Our Corporate Social Responsibility (CSR) agenda is about giving back to the communities in which we live and work by sharing our skills, talent and time. In essence, we aim to empower and encourage everyone in the firm to contribute to the things we care about, and support registered charities and organisations with a clear social or environmental purpose to increase the positive impact they can have. All applications will receive consideration for employment without regard to race, ethnicity, religion, gender, gender identity or expression, sexual orientation, nationality, disability, age, faith or social background. We do not filter applications by university background and encourage those who have taken alternative educational and career paths to apply. We would like to actively encourage applications from those who identify with less represented and minority groups. We operate an inclusive recruitment process. 
click apply for full job details
Development & Cloud Solutions Architect
Goaco Ltd
Do you strive to make a difference? Goaco is a Digital and Cyber Security Consultancy, and we are looking to build a team to continue solving problems using software and technology for our clients. We are developers at heart - and by the mind too. We thrive on challenges and live for logical thinking. Formed over a decade ago, we have built on our successes with clients who have benefitted from our level-headed software solutions. Our team is made up of like-minded individuals, each with a drive to succeed in their own field. As a Development & Cloud Solutions Architect, you will lead the design and delivery of scalable, secure, and high-performing cloud-based solutions tailored to meet the needs of private and public sector clients. RESPONSIBILITIES Design and deliver flexible, secure data architectures across AWS, Azure, and GCP. Deliver enterprise-wide data models, pipelines, and AI frameworks. Orchestrate real-time and batch data processing using Spark, Kinesis, and Pub/Sub. Create ML pipelines for effective model training, deployment, and monitoring. Build AI-powered analytics systems, data lakes, and cloud-hosted data warehouses. Ensure that data security, privacy, and integrity policies are followed. Provide technical leadership in data engineering, artificial intelligence, and machine learning. Automate AI deployment using CI/CD and MLOps best practices. Optimise the use of cloud resources while guaranteeing high availability and performance. Lead teams in selecting technology-agnostic data and AI tools. Work with the sales team to help shape AI-driven products and respond to RFPs. Deliver technical presentations, proofs of concept, and AI demos to customers. Encourage stakeholders to modernise and to embrace the cloud and AI. 
Engage engineering teams and senior management to align AI approaches with corporate aims. Run training seminars, hackathons, and workshops to stimulate creativity. Mentor teams on best practices in artificial intelligence, data engineering, and cloud architecture. Build AI-powered systems that meet security standards and regulatory requirements. Communicate complex technical ideas to stakeholders who are not themselves experts. EXPERIENCE & SKILLS REQUIRED Experience in AI/ML platform design, data engineering, and solution architecture. Skilled in modern data technologies (Kafka, Spark, Snowflake, Databricks, BigQuery). Proficient in deep learning frameworks (TensorFlow, PyTorch) as well as AI/ML platforms (SageMaker, Azure ML, Vertex AI). Good understanding of responsible AI practices, model interpretability, and AI ethics. Experience integrating AI with event-driven architectures, API-first approaches, and microservices. Demonstrated ability to create AI/ML solutions end to end - from data acquisition to deployment. Knowledge of regulatory compliance, data strategy, and governance across different industries. Experience with cognitive services, decision-making systems, and AI-driven automation. Technical pre-sales knowledge, proposal drafting, and stakeholder engagement. Good consulting skills, marrying technical feasibility with business requirements. Track record of providing AI/data services to enterprise and government clients. Full knowledge of public sector data policies, constraints, and compliance requirements. Expertise in the cybersecurity, healthcare, and financial industries. Trained in Azure, AWS, and common AI/ML platforms. BENEFITS We prioritise employee well-being and mental health by offering a comprehensive range of benefits to enhance both health and career growth. 
Competitive Salary: Salary depending on experience and background. Health Benefits: 24/7 GP Access, Counselling Services, Virtual Physiotherapy, Discounted Gym Memberships, Virtual Gym Classes, Discounted Private Health Cover, Eye Care Discounts. Wealth Benefits: Shopping Discounts, Debt Support, Money Advice, Free Credit Reports, Travel Money Savings. Education Benefits: Learning Courses, Business Skills Training. Offered only to employees based in the UK.
Jan 01, 2026
Full time
Senior Ruby Backend Engineer
Kitrum Braintree, Essex
We're looking for a Senior Ruby Engineer to join the Payments team and help evolve the platform that powers millions of subscription bills worldwide. You'll design and deliver features across our Ruby on Rails ecosystem, integrate with multiple payment service providers (PSPs), and improve resiliency and user experience at a global scale. This is a full-time role with long-term engagement. This remote position, ideally suited for candidates located in LATAM or US time zones, is perfect for someone with a deep technical background in the Ruby backend stack and AWS cloud solutions. Must-have for the position 5+ years of backend experience with Ruby and Ruby on Rails for high-scale, production systems; Payments domain experience, including integrations with PSPs such as Stripe, Braintree, Adyen; 3+ years of experience with AWS, building, operating, and monitoring cloud services; Strong ownership, clear communication, and ability to drive projects end-to-end; English Level: Upper-Intermediate English or higher. Will be a strong plus: Databricks exposure (e.g., analytics/observability workflows around payments); Hands-on use of AI productivity tools for coding (e.g., Cursor, ChatGPT/Copilot-style assistants). Responsibilities Own and enhance the payments & billing platform, ensuring accuracy, reliability, and great UX across the subscription lifecycle; Design, implement, and ship new features and services in Ruby/Rails to support billing, retries, dunning, and PSP capabilities; Integrate and optimize multi-PSP payment flows (auth, capture, refunds, chargebacks, reconciliations); Improve observability & resilience (logging/metrics/alerts, graceful degradation, idempotency, retries); Partner with product, data, finance, and support to troubleshoot production issues and deliver high-quality outcomes; Contribute to code reviews, technical design docs, and team standards. 
About the project The client is an American e-book and audiobook subscription service with a catalogue of one million titles. The platform hosts 60 million documents on its open publishing platform. It allows anyone to share their ideas with the world, and offers access to audiobooks, music from composers who publish there, and articles from private publishers and world magazines. The payments and billing platform powers millions of subscription bills globally, integrating with multiple PSPs to process payments for a global subscription platform (authorization, settlement, refunds, invoicing, dunning). Tech Stack and Team Composition Tech Stack: Ruby on Rails application running on AWS, integrating with multiple PSPs (Stripe, Braintree, Adyen). Team Composition: You'll work within a payments squad of 3-4 engineers, collaborating with product and adjacent platform teams. Working conditions Work schedule: US business-hours overlap (between EST and PST time zones); Engagement: long-term, full-time; Fully Remote: This role offers the flexibility to work from anywhere. Interview Process HR Interview: Initial discussion with our recruiter. KITRUM's Technical Interview. Client Interviews: Technical coding round (hands-on Ruby/Rails problem-solving); Hiring Manager round (deep dive into prior projects, system design, and problem-solving). 
Why you'll love working here Competitive Pay: We offer compensation that reflects your skills and experience; Remote Flexibility: Work from anywhere - our team is distributed across the globe; Professional Growth: Access to continuous learning opportunities, including paid courses, certifications, mentorship; Work-Life Balance: 30 days of paid vacation and 6 paid sick days per year, plus flexible hours; Inclusive Culture: We embrace diversity and foster a culture of trust, transparency, and mutual respect; Cool Perks: Join our virtual team events, get a budget for your home office setup, and enjoy access to exclusive content and tools.
Jan 01, 2026
Full time
Principal Data Engineer (Azure, PySpark, Databricks)
PEXA Group Limited Thame, Oxfordshire
Hi, we're Smoove, part of the PEXA Group. Our vision is to simplify and revolutionise the home moving and ownership experience for everyone. We are on a mission to deliver products and services that remove the pain, frustration, uncertainty, friction and stress that the current process creates. We are a leading provider of tech in the property sector - founded in 2003, our product focus has been our two-sided conveyancer marketplace, connecting consumers with a range of quality conveyancers to choose from at competitive prices via our easy-to-use tech platform. We are now building out our ecosystem so consumers can benefit from our services either via their Estate Agent or their Mortgage Broker, through smarter conveyancing platforms, making the home buying or selling process easier, quicker, safer and more transparent. Why join Smoove? Great question! We pride ourselves on attracting, developing and retaining a diverse range of people in an equally diverse range of roles and specialisms - who together achieve outstanding results. Our transparent approach and open-door policy make Smoove a great place to work, and as our business expands, we are looking for ambitious, talented people to join us. We are seeking an experienced Principal Data Engineer to define, lead, and scale the technical strategy of our data platform. This is a senior, hands-on leadership role at the intersection of architecture, governance, and engineering excellence, where you will shape how data is collected, processed, and delivered across the organisation. You will own the end-to-end quality, performance, and scalability of our data systems - from raw ingestion through to trusted datasets powering business-critical analytics and reporting. This includes setting standards and influencing the strategic roadmap for data infrastructure. Our stack is built on both AWS and Azure, using Databricks across data domains, and you will lead the evolution of this ecosystem to meet future business needs. 
You'll ensure that data is secure, compliant, discoverable, and business-ready, enabling analysts, data scientists, and stakeholders to make confident, data-driven decisions. This role is ideal for a highly technical leader who thrives at both the strategic and execution levels: someone equally comfortable defining architecture with executives, mentoring senior engineers, and optimising distributed pipelines at scale. Role Responsibilities Design and oversee scalable, performant, and secure architectures on Databricks and distributed systems. Anticipate scaling challenges and ensure platforms are future-proof. Lead the design and development of robust, high-performance data pipelines using PySpark and Databricks. Define and ensure testing frameworks for data workflows. Ensure end-to-end data quality from raw ingestion to curated, trusted datasets powering analytics. Establish and enforce best practices for data governance, lineage, metadata, and security controls. Ensure compliance with GDPR and other regulatory frameworks. Act as a technical authority and mentor, guiding data engineers. Influence cross-functional teams to align on data strategy, standards, and practices. Partner with product, engineering, and business leaders to prioritise and deliver high-impact data initiatives. Build a culture of data trust, ensuring downstream analytics and reporting are always accurate and consistent. Evaluate and recommend emerging technologies where they add value to the ecosystem. Skills & Experience Required Broad experience as a Data Engineer, including technical leadership. Broad cloud experience, ideally both Azure and AWS. Deep expertise in PySpark and distributed data processing at scale. Extensive experience designing and optimising in Databricks. Advanced SQL optimisation and schema design for analytical workloads. Strong understanding of data security, privacy, and GDPR/PII compliance. Experience implementing and leading data governance frameworks. 
• Proven experience leading the design and operation of a complex data platform.
• Track record of mentoring engineers and raising technical standards.
• Ability to influence senior stakeholders and align data initiatives with wider business goals.
• Strategic mindset with a holistic view of data reliability, scalability, and business value.

£80,000 - £100,000 a year

Sound like you? We at Smoove are ready, so apply today.
Jan 01, 2026
Full time
LexisNexis Risk Solutions
Principal Data Scientist (H/F)
LexisNexis Risk Solutions City, London
Data, Research & Analytics

Principal Data Scientist (H/F)

Preferred location: Paris, France

LexisNexis Risk Solutions is a global leader in technology and data analytics, tackling some of the world's most complex and meaningful challenges - from stopping cybercriminals to enabling frictionless experiences for legitimate consumers. As a Principal Data Scientist, you will play a key role in shaping the future of our AI capabilities across multiple products within our Fraud, Identity, and Financial Crime Compliance portfolio. You will lead the ideation, research, modeling, and implementation of new AI-driven features, with a strong focus on Large Language Models (LLMs), Generative AI, and advanced Machine Learning. Your work will directly impact millions of identity verifications and fraud prevention decisions every day, helping global organizations operate safely and efficiently. You will also contribute to the company's long-term AI strategy and act as a thought leader and role model within the data science teams.

Operating within a global organization, you will collaborate closely with engineering labs, analytics teams, and professional services, while staying attuned to customer feedback and business priorities. A strong business mindset, proactive communication, and the ability to drive innovation across teams are key to success in this role. You will work primarily on a European schedule but engage frequently with colleagues across multiple time zones and travel when needed.

Key Responsibilities
• Lead the research, prototyping, and productionization of new AI and ML features across our product portfolio.
• Partner with Product Managers and Engineering teams to design and deliver impactful, data-driven enhancements.
• Deeply understand existing products and data assets to identify opportunities for AI-driven improvement.
• Design and execute experiments to validate new research ideas and evaluate model performance.
• Train, fine-tune, and optimize LLM and ML models on structured and unstructured data derived from APIs and customer workflows.
• Develop strategies for real-time model inference and scalable deployment.
• Collaborate with external vendors for data collection, annotation, and research initiatives.
• Engage with customers and regional professional services teams to understand evolving fraud patterns and integrate insights into product development.
• Mentor and support other data scientists, fostering technical excellence and innovation across the organization.

Education
• Master's degree or PhD in Computer Science, Artificial Intelligence, Applied Mathematics, or a related field.
• A degree from a leading engineering school (Grande École) or university with a strong quantitative curriculum is highly valued.

Requirements
• 8+ years of experience building, training, and evaluating Deep Learning and Machine Learning models using tools such as PyTorch, TensorFlow, scikit-learn, Hugging Face, or LangChain.
• Experience in a start-up or a cross-functional team is a plus.
• Experience in Natural Language Processing (NLP) is a plus.
• Strong programming skills in Python, including data wrangling, analysis, and visualization.
• Solid experience with SQL and database querying for data exploration and preparation.
• Familiarity with cloud platforms (AWS, Azure) and modern data stack tools (Snowflake, Databricks).
• Proven ability to tackle ambiguous problems, develop data-informed strategies, and define measurable success criteria.
• Familiarity with object-oriented or functional programming languages such as C++, Java, or Rust is a plus.
• Experience with software engineering tools and practices (e.g. Docker, Kubernetes, Git, CI/CD pipelines) is a plus.
• Knowledge of MLOps, model deployment, and monitoring frameworks.
• Understanding of fraud prevention, authentication, or identity verification methodologies is a plus.
• Excellent communication skills with both technical and non-technical stakeholders.
• Strong English proficiency (C1/C2) and proven experience working in multicultural, international environments.
• Ability to collaborate across time zones and travel occasionally as required.

Why Join Us
At LexisNexis Risk Solutions, you'll join a global community of innovators using AI to make the world a safer place. You'll have the autonomy to explore new ideas, the resources to bring them to life, and the opportunity to shape how AI transforms fraud and identity verification on a global scale.

Additional location(s): Wales; UK - London (Bishopsgate)

We are committed to providing a fair and accessible hiring process. If you have a disability or other need that requires accommodation or adjustment, please let us know by completing our Applicant Request Support Form or please contact 1-. Criminals may pose as recruiters asking for money or personal information. We never request money or banking details from job applicants. Learn more about spotting and avoiding scams here. Please read our Candidate Privacy Policy.

USA Job Seekers: We are an equal opportunity employer: qualified applicants are considered for and treated during employment without regard to race, color, creed, religion, sex, national origin, citizenship status, disability status, protected veteran status, age, marital status, sexual orientation, gender identity, genetic information, or any other characteristic protected by law. EEO Know Your Rights.
Jan 01, 2026
Full time
Data, Research & Analytics Principal Data Scientist (H/F) Job Description in French (English version at the bottom) Preferred location: Paris, France LexisNexis Risk Solutions is a global leader in technology and data analytics, tackling some of the world's most complex and meaningful challenges - from stopping cybercriminals to enabling frictionless experiences for legitimate consumers. As a Principal Data Scientist, you will play a key role in shaping the future of our AI capabilities across multiple products within our Fraud, Identity, and Financial Crime Compliance portfolio. You will lead the ideation, research, modeling, and implementation of new AI-driven features - with a strong focus on Large Language Models (LLMs), Generative AI, and advanced Machine Learning. Your work will directly impact millions of identity verifications and fraud prevention decisions every day, helping global organizations operate safely and efficiently. You will also contribute to the company's long term AI strategy and act as a thought leader and role model within the data science teams. Operating within a global organization, you will collaborate closely with engineering labs, analytics teams, and professional services, while staying attuned to customer feedback and business priorities. A strong business mindset, proactive communication, and ability to drive innovation across teams are key to success in this role. You will work primarily on a European schedule but engage frequently with colleagues across multiple time zones and travel when needed. Key Responsibilities Lead the research, prototyping, and productionization of new AI and ML features across our product portfolio. Partner with Product Managers and Engineering teams to design and deliver impactful, data-driven enhancements. Deeply understand existing products and data assets to identify opportunities for AI driven improvement. Design and execute experiments to validate new research ideas and evaluate model performance. 
Train, fine tune, and optimize LLM and ML models on structured and unstructured data derived from APIs and customer workflows. Develop strategies for real time model inference and scalable deployment. Collaborate with external vendors for data collection, annotation, and research initiatives. Engage with customers and regional professional services teams to understand evolving fraud patterns and integrate insights into product development. Mentor and support other data scientists, fostering technical excellence and innovation across the organization. Education Master's degree or PhD in Computer Science, Artificial Intelligence, Applied Mathematics, or a related field. Degree from a leading Engineering School (Grand Ecole) or University with a strong quantitative curriculum is highly valued. Requirements 8+ years of experience building, training, and evaluating Deep Learning and Machine Learning models using tools such as PyTorch, TensorFlow, scikit learn, HuggingFace, or LangChain. Experience in a start up or a cross functional team is a plus Experience in Natural Language Processing (NLP) is a plus Strong programming skills in Python, including data wrangling, analysis, and visualization. Solid experience with SQL and database querying for data exploration and preparation. Familiarity with cloud platforms (AWS, Azure, ) and modern data stack tools (Snowflake, Databricks, ) Proven ability to tackle ambiguous problems, develop data informed strategies, and define measurable success criteria. Familiarity with object oriented or functional programming languages such as C++, Java, or Rust is a plus Experience with software engineering tools and practices (e.g. Docker, Kubernetes, Git, CI/CD pipelines) is a plus. Knowledge of ML Ops, model deployment, and monitoring frameworks. Understanding of fraud prevention, authentication, or identity verification methodologies is a plus. Excellent communication skills with both technical and non technical stakeholders. 
Strong English proficiency (C1/C2) and proven experience working in multicultural, international environments. Ability to collaborate across time zones and travel occasionally as required. Why Join Us At LexisNexis Risk Solutions, you'll join a global community of innovators using AI to make the world a safer place. You'll have the autonomy to explore new ideas, the resources to bring them to life, and the opportunity to shape how AI transforms fraud and identity verification on a global scale. Additional location(s) Wales; UK - London (Bishopsgate) We are committed to providing a fair and accessible hiring process. If you have a disability or other need that requires accommodation or adjustment, please let us know by completing our Applicant Request Support Form or please contact 1-. Criminals may pose as recruiters asking for money or personal information. We never request money or banking details from job applicants. Learn more about spotting and avoiding scams here. Please read our Candidate Privacy Policy. USA Job Seekers: We are an equal opportunity employer: qualified applicants are considered for and treated during employment without regard to race, color, creed, religion, sex, national origin, citizenship status, disability status, protected veteran status, age, marital status, sexual orientation, gender identity, genetic information, or any other characteristic protected by law. EEO Know Your Rights.
Principal Data Engineer
Made Tech Limited Barnet, London
Our Principal Data Engineers are responsible for leading and delivering strategically significant, complex client engagements across our portfolio of clients. We believe that great delivery stems from a thorough understanding of our clients and their needs, strong discipline skills and subject matter expertise, excellent leadership and a clear vision of lasting and effective change in a public sector environment. We expect our Principal Data Engineers to bring all of that and enthuse our delivery teams with the same passion. The successful candidate will lead the Data Engineering aspects of our client engagements while overseeing the wider delivery within the account (or industry) when appropriate. They will coach and develop team members on their engagements providing them with detailed performance feedback, as well as monitoring overall delivery to achieve the highest levels of client satisfaction. In addition, our Principal Data Engineers are responsible for engaging with our clients to understand their challenges and build lasting, trusted advisor relationships. They will also oversee multiple, concurrent client deliveries to help ensure quality and drive the sharing of best practice across our engagements and industries. Key responsibilities Our Principal Data Engineers are members of the Data & AI Practice leadership team with the responsibility to develop the capability of the practice to meet business needs and to accelerate the growth of the practice, their account and the wider business. You will be responsible for the practice and service line-specific delivery elements of your engagement/account as well as a shared ownership for the overall delivery of client outcomes. You will leverage your client and delivery insight to support the account and industry teams to identify opportunities and develop client solutions. 
The right person for this role will do this by combining their technical discipline/craft experience, leadership skills and industry network with Made Tech's unparalleled experience of delivering digital services and digital transformation for the Public Sector.

Responsibilities
• Collaborate with clients to understand their needs, provide solution advice in your role as a trusted advisor and shape solutions that leverage Made Tech's wider capabilities and credentials.
• Assess project performance as a part of the billable delivery team, Quality Assure (QA) the deliverables and outcomes, and ensure client satisfaction.
• Coach and mentor team members as well as providing direction to enable them to achieve their engagement outcomes and to develop their careers.
• Act as a Technical Authority of the Data & AI capability to provide oversight and ensure alignment with internal and industry best practices.
• Ensure engagement experience is captured and used to improve standards and contribute to Made Tech knowledge.
• Participate in business development activities, including bids and pre-sales within the account, industry and practice.
• Coach team members on their contributions and oversee the relevant technical aspects of the proposal submission.
• Undertake people management responsibilities, including performance reviews and professional development of your engagement and practice colleagues.
• Serve as a thought leader within Made Tech, our account engagements and the wider public sector, and represent the company at industry events.

Skills, knowledge and expertise

Clients
• Understanding of the issues and challenges that the public sector faces in delivering services that make the best use of data and digital capabilities, transforming legacy infrastructure, and taking an innovative and user-centric approach.
• Ability to innovate and take learnings from the commercial sector, other countries and advances in technology and apply them to UK Public Sector challenges to create tangible solutions for our clients.
• Experience building trusted advisor relationships with senior client stakeholders within the public sector.

Leadership
• Experience of building and leading high-performing consulting teams and creating the leveraged engagements to provide a cost-effective, profitable, successful client-facing delivery.
• Leadership of bids and solution shaping to produce compelling proposals that help Made Tech win new business and grow the industry.
• Experience of managing third-party partnerships and suppliers (in conjunction with Made Tech colleagues) to provide a consolidated and seamless delivery team to clients.

Practice
• Experience in delivering complex and difficult engagements that span multiple capabilities for user-facing digital and data services in the public sector.
• Experience in identifying opportunities based on client needs and developing targeted solutions to progress the development of the opportunity.
• Experience of working with sales professionals and commercial responsibility for strategic organisational goals.
While this is not a hands-on coding technical role, the importance of credibility in approach to digital, data and technology in the public sector cannot be overstated. You will be expected to maintain a broad technical knowledge of modern data practices, be able to shape data strategy and roadmaps, and hold others to account for technical quality.
• Experience working directly with customers and users within a technology consultancy.
• Expertise in shaping data strategy and approaches to quality, ethics and governance.
• Experience in developing a data capability or function, combining data science, analytics and engineering.
• Strong understanding of various architectures including data warehouses, data lakes, data lakehouses and data mesh.
• Strong understanding of best-practice DataOps and MLOps.
• Up-to-date understanding of various data engineering technologies including Apache Spark, Databricks and Hadoop.
• Strong understanding of agile ways of working.
• Up-to-date understanding of various programming languages including Python, Scala, R and SQL.
• Up-to-date understanding of various databases and cloud-based datastores, including SQL and NoSQL.
• Up-to-date understanding of cloud platforms including AWS and/or Azure and their data offerings.
• Evidence of self-development - we value keen learners.

Support in applying
If you need this job description in another format, or other support in applying, please email .

We believe we can use tech to make public services better. We also believe this can happen best when our own team represents the society that actually uses the services we work on. We're collectively continuing to grow a culture that is happy, healthy, safe and inspiring for people of all backgrounds and experiences, so we encourage people from underrepresented groups to apply for roles with us. When you apply, we'll put you in touch with a talent partner who can help with any needs or adjustments we may need to make to help with your application.
This includes alternative formats for documents, the time allotted for interviews and any other needs. We also welcome any feedback on how we can improve the experience for future candidates. Like many organisations, we use Slack to foster a sense of community and connection. As well as special interest groups such as music, food and pets, we also have 10+ Slack channels dedicated to specific communities, allies, and identities, as well as dedicated learning spaces called communities of practice (COPs). If you'd like to speak to someone from one of these groups about their experience as an employee, please do let a member of the Made Tech talent team know.

We are always listening to our growing teams and evolving the benefits available to our people. As we scale, so do our benefits, and we are scaling quickly. We've recently introduced a flexible benefits platform which includes a Smart Tech scheme, a Cycle to Work scheme, and an individual benefits allowance which you can invest in a healthcare cash plan or pension plan. We're also big on connection and have an optional social and wellbeing calendar of events for all employees to join should they choose to. Here are some of our most popular benefits:
• 30 days holiday - we offer 30 days of paid annual leave + bank holidays!
• Remote working - we offer part-time remote working for all our staff.
• Paid counselling - we offer paid counselling as well as financial and legal advice.

An increasing number of our customers are specifying a minimum of SC (Security Check) clearance in order to work on their projects. As a result, we're looking for all successful candidates for this role to have eligibility. Eligibility for SC requires 5 years of continuous UK residency. Please note that if at any point during the interview process it is apparent that you may not be eligible for SC, we won't be able to progress your application and we will contact you to let you know why.
Our hiring process is designed to be thorough, transparent, and supportive, guiding candidates through each step. The exact process may vary slightly depending on the role but these are the typical steps candidates can expect. We'll keep you updated throughout the process and provide helpful feedback at each stage. No matter the outcome, we make sure the feedback is useful and supportive, so you feel informed and can learn from the experience. Register your interest to be notified of any roles that come along that meet your criteria.
Jan 01, 2026
Full time
Our Principal Data Engineers are responsible for leading and delivering strategically significant, complex client engagements across our portfolio of clients. We believe that great delivery stems from a thorough understanding of our clients and their needs, strong discipline skills and subject matter expertise, excellent leadership and a clear vision of lasting and effective change in a public sector environment. We expect our Principal Data Engineers to bring all of that and enthuse our delivery teams with the same passion. The successful candidate will lead the Data Engineering aspects of our client engagements while overseeing the wider delivery within the account (or industry) when appropriate. They will coach and develop team members on their engagements providing them with detailed performance feedback, as well as monitoring overall delivery to achieve the highest levels of client satisfaction. In addition, our Principal Data Engineers are responsible for engaging with our clients to understand their challenges and build lasting, trusted advisor relationships. They will also oversee multiple, concurrent client deliveries to help ensure quality and drive the sharing of best practice across our engagements and industries. Key responsibilities Our Principal Data Engineers are members of the Data & AI Practice leadership team with the responsibility to develop the capability of the practice to meet business needs and to accelerate the growth of the practice, their account and the wider business. You will be responsible for the practice and service line-specific delivery elements of your engagement/account as well as a shared ownership for the overall delivery of client outcomes. You will leverage your client and delivery insight to support the account and industry teams to identify opportunities and develop client solutions. 
The right person for this role will do this by combining their technical discipline/craft experience, leadership skills and industry network with Made Tech's unparalleled experience of delivering digital services and digital transformation for the Public Sector. Responsibilities Collaborate with clients to understand their needs, provide solution advice in your role as a trusted advisor and shape solutions that leverage Made Tech's wider capabilities and credentials Assess project performance as a part of the billable delivery team, Quality Assure (QA) the deliverables and outcomes, and ensure client satisfaction. Coach and mentor team members as well as providing direction to enable them to achieve their engagement outcomes and to develop their careers Act as a Technical Authority of the Data & AI capability to provide oversight and ensure alignment with internal and industry best practices. Ensure engagement experience is captured and used to improve standards and contribute to Made Tech knowledge Participate in business development activities, including bids and pre-sales within the account, industry and practice. 
Coach team members on their contributions and oversee the relevant technical aspects of the proposal submission Undertake people management responsibilities, including performance reviews and professional development of your engagement and practice colleagues Serve as a thought leader within Made Tech, our account engagements and the wider public sector and represent the company at industry events Skills, knowledge and expertise Clients Understanding of the issues and challenges that the public sector faces in delivering services that make the best use of data and digital capabilities, transforming legacy infrastructure, and taking an innovative and user-centric approach Ability to innovate and take learnings from the commercial sector, other countries and advances in technology and apply them to UK Public Sector challenges to create tangible solutions for our clients Experience building trusted advisor relationships with senior client stakeholders within the public sector. Leadership Experience of building and leading high performing, consulting teams and creating the leveraged engagements to provide a cost-effective, profitable, successful client-facing delivery Leadership of bids and solution shaping to produce compelling proposals that help Made Tech win new business and grow the industry Experience of managing third-party partnerships and suppliers (in conjunction with Made Tech colleagues) to provide a consolidated and seamless delivery team to clients. Practice Experience in delivering complex and difficult engagements that span multiple capabilities for user-facing digital and data services in the public sector Experience in identifying opportunities based on client needs and developing targeted solutions to progress the development of the opportunity Experience of working with sales professionals and commercial responsibility for strategic organisational goals. 
While this is not a hands-on coding technical role, the importance of credibility in approach to digital, data and technology in the public sector cannot be understated. You will be expected to maintain a broad technical knowledge of modern data practices, be able to shape data strategy and roadmaps, and hold others to account for technical quality. Experience working directly with customers and users within a technology consultancy Expertise in shaping data strategy and approaches to quality, ethics and governance. Experience in developing a data capability or function Combining data science, analytics and engineering Strong understanding of various architectures including data warehouses, data lakes, data lake houses and data mesh Strong understanding of best practice DataOps and MLOps Up-to-date understanding of various data engineering technologies including Apache Spark, Databricks and Hadoop Strong understanding of agile ways of working Up-to-date understanding of various programming languages including Python, Scala, R and SQL Up-to-date understanding of various databases and cloud-based datastores including SQL and NoSQL Up-to-date understanding of cloud platforms including AWS and/or Azure and their data offerings Evidence of self-development - we value keen learners Support in applying If you need this job description in another format, or other support in applying, please email . We believe we can use tech to make public services better. We also believe this can happen best when our own team represents the society that actually uses the services we work on. We're collectively continuing to grow a culture that is happy, healthy, safe and inspiring for people of all backgrounds and experiences, so we encourage people from underrepresented groups to apply for roles with us. When you apply, we'll put you in touch with a talent partner who can help with any needs or adjustments we may need to make to help with your application. 
This includes alternative formats for documents, the time allotted for interviews, and any other needs. We also welcome any feedback on how we can improve the experience for future candidates.

Like many organisations, we use Slack to foster a sense of community and connection. As well as special interest groups such as music, food and pets, we have 10+ Slack channels dedicated to specific communities, allies, and identities, as well as dedicated learning spaces called communities of practice (CoPs). If you'd like to speak to someone from one of these groups about their experience as an employee, please let a member of the Made Tech talent team know.

We are always listening to our growing teams and evolving the benefits available to our people. As we scale, so do our benefits, and we are scaling quickly. We've recently introduced a flexible benefits platform, which includes a Smart Tech scheme, a Cycle to Work scheme, and an individual benefits allowance that you can invest in a healthcare cash plan or pension plan. We're also big on connection and have an optional social and wellbeing calendar of events for all employees to join should they choose to. Here are some of our most popular benefits:

• 30 days holiday - we offer 30 days of paid annual leave plus bank holidays
• Remote working - we offer part-time remote working for all our staff
• Paid counselling - we offer paid counselling, as well as financial and legal advice

An increasing number of our customers specify a minimum of SC (Security Check) clearance in order to work on their projects. As a result, we're looking for all successful candidates for this role to be eligible. Eligibility for SC requires 5 years of continuous UK residency. Please note that if at any point during the interview process it becomes apparent that you may not be eligible for SC, we won't be able to progress your application and we will contact you to let you know why.
Our hiring process is designed to be thorough, transparent, and supportive, guiding candidates through each step. The exact process may vary slightly depending on the role, but these are the typical steps candidates can expect. We'll keep you updated throughout the process and provide helpful feedback at each stage. Whatever the outcome, we make sure the feedback is useful and supportive, so you feel informed and can learn from the experience. Register your interest to be notified of any roles that come along that meet your criteria.
AI Engineer - FDE (Forward Deployed Engineer)
Menlo Ventures
AI Engineer - FDE (Forward Deployed Engineer) (ALL LEVELS)

Mission
The AI Forward Deployed Engineering (AI FDE) team is a highly specialised, customer-facing AI team at Databricks. We deliver professional services engagements to help our customers build and productionise first-of-their-kind AI applications. We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, and we support internal subject matter expert (SME) teams. We view our team as an ensemble: we look for individuals with strong, unique specialisations to improve the overall strength of the team. This team is the right fit for you if you love working with customers and teammates, and fuelling your curiosity for the latest trends in GenAI, LLMOps, and ML more broadly.

The impact you will have
• Develop cutting-edge GenAI solutions, incorporating the latest techniques from our Mosaic AI research to solve customer problems
• Own production rollouts of consumer-facing and internally facing GenAI applications
• Serve as a trusted technical advisor to customers across a variety of domains
• Present at conferences such as Data + AI Summit, and be recognised as a thought leader internally and externally
• Collaborate cross-functionally with the product and engineering teams to influence priorities and shape the product roadmap

What we look for
• Experience building GenAI applications (including RAG, multi-agent systems, Text2SQL, and fine-tuning) with tools such as HuggingFace, LangChain, and DSPy
• Expertise in deploying production-grade GenAI applications, including evaluation and optimisation
• Several years of hands-on industry data science experience with common machine learning and data science tools, e.g. pandas, scikit-learn, and PyTorch
• Experience building production-grade machine learning deployments on AWS, Azure, or GCP
• Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.), or equivalent practical experience
• Experience communicating and/or teaching technical concepts to technical and non-technical audiences alike
• Passion for collaboration, lifelong learning, and driving business value through AI

Preferred
• Experience using the Databricks Intelligence Platform and Apache Spark to process large-scale distributed datasets

About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide - including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 - rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit .

Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Jan 01, 2026
Full time

© 2008-2026 Jobsite Jobs | Designed by Web Design Agency