• Home
  • Search Jobs
  • Register CV
  • Post a Job
  • Employer Pricing
  • Contact Us
Sorry, that job is no longer available. Here are some results that may be similar to the job you were looking for.

18 jobs found

Current Search
athena data engineer
Data Engineer
updraft.com Richmond, Surrey
Updraft. Helping you make changes that pay off.

Updraft is an award-winning, FCA-authorised, high-growth fintech based in London. Our vision is to revolutionise the way people spend and think about money by automating the day-to-day decisions involved in managing money and mainstream borrowing such as credit cards, overdrafts and other loans.

What we offer users:
  • A 360-degree spending view across all your financial accounts (using Open Banking)
  • A free credit report with tips and guidance to help improve your credit score
  • AI-led personalised financial planning to help users manage money, pay off their debts and improve their credit scores
  • Intelligent lending products to help reduce the cost of credit

We have built scale and are well recognised in the UK fintech ecosystem:
  • 800k+ users of the mobile app, which has helped users swap c. £500m of costly credit-card debt for smarter credit, putting hundreds of thousands on a path to better financial health
  • Highly rated by our customers: 4.8 on Trustpilot, 4.8 on the Play Store and 4.4 on the iOS App Store
  • Selected for Tech Nation Future Fifty 2025, a programme that recognises and supports successful, innovative scaleups through to IPO; 30% of UK unicorns have come out of this programme
  • Once again featured on the Sifted 100 UK startups, among only 25 companies to have made the list in both 2024 and 2025

We are looking for exceptional talent to join us at our next stage of growth with a compelling proposition: purpose you can feel, impact you can measure, and ownership you'll actually hold. Expect a hybrid, London-hub culture where cross-functional squads tackle real-world problems with cutting-edge tech; generous learning budgets and wellness benefits; and the freedom to experiment, ship, and see your work reflected in customers' financial freedom. At Updraft, you'll help build a fairer credit system.

Role and Responsibilities
Join our Analytics team to deliver cutting-edge solutions:
  • Support business and operations teams in making better data-driven decisions by ingesting new data sources, creating intuitive dashboards and producing data insights
  • Build new data processing workflows to extract data from core systems for analytics products
  • Maintain and improve existing data processing workflows, contributing to the optimisation of production data pipelines, including system and process improvements
  • Contribute to the development of analytical products and dashboards, integrating internal and third-party data sources/APIs
  • Contribute to cataloguing and documentation of data

Requirements:
  • Bachelor's degree in mathematics, statistics, computer science or a related field
  • 2-5 years' experience in data engineering/analytics or related fields
  • Strong analytical skills and experience relating data insights to business problems and creating appropriate dashboards
  • High proficiency in ETL, SQL and database management (required)
  • Experience with AWS services such as Glue, Athena, Redshift, Lambda and S3
  • Python programming experience using data libraries such as pandas and NumPy
  • Interest in machine learning, logistic regression and emerging solutions for data analytics
  • Comfortable working without direct supervision on outcomes that have a direct impact on the business
  • Curious about the data, with a desire to ask "why?"

Good to have (not required):
  • Experience in a startup or fintech
  • Awareness of or hands-on experience with ML/AI implementation or MLOps
  • AWS foundational certification

Why join:
  • Opportunities to take ownership: work on high-impact projects with real autonomy
  • Fast career growth: gain exposure to multiple business areas and advance quickly
  • Be at the forefront of innovation: work on cutting-edge technologies and disruptive ideas
  • Collaborative, flat hierarchy: work closely with leadership and have a real voice
  • Dynamic, fast-paced environment: no two days are the same; challenge yourself every day
  • A mission-driven company: be part of something that makes a difference
Jan 01, 2026
Full time
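The Updraft listing above asks for high proficiency in ETL and SQL alongside Python. As a purely illustrative sketch of that skill combination (not taken from the job ad), a minimal extract-transform-load step might look like the following, using Python's built-in sqlite3 as a stand-in for a warehouse such as Redshift or Athena; the table, column and category names are invented for the example:

```python
import sqlite3

# Invented sample data standing in for an extracted source-system export.
transactions = [
    ("2025-01-03", "groceries", 42.10),
    ("2025-01-04", "credit-card-interest", 18.50),
    ("2025-01-05", "groceries", 13.40),
]

# Load into an in-memory SQLite database (a stand-in for a real warehouse).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (day TEXT, category TEXT, amount REAL)")
conn.executemany("INSERT INTO txns VALUES (?, ?, ?)", transactions)

# Transform: aggregate spend per category -- the kind of query that would
# back a spending-overview dashboard.
rows = conn.execute(
    "SELECT category, ROUND(SUM(amount), 2) FROM txns "
    "GROUP BY category ORDER BY SUM(amount) DESC"
).fetchall()

print(rows)  # [('groceries', 55.5), ('credit-card-interest', 18.5)]
```

The same aggregate-per-category pattern carries over directly to Athena, which also speaks standard SQL over data in S3.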
Data Engineer (Multiple Roles) - AI SaaS
Vortexa Ltd Barnet, London
About Us: Vortexa is a fast-growing international technology business founded to solve the immense information gap in the energy industry. Using massive amounts of new satellite data and pioneering work in artificial intelligence, Vortexa creates an unprecedented real-time view of global seaborne energy flows, bringing transparency and efficiency to the energy markets and society as a whole.

The Role: Processing thousands of rich data points per second from many vastly different external sources, moving terabytes of data while processing it in real time, running complex prediction and forecasting AI models, coupling their output into a hybrid human-machine data refinement process, and presenting the result through a nimble low-latency SaaS solution used by customers around the globe is no small feat of science and engineering. This processing requires models that can survive the scrutiny of industry experts, data analysts and traders, with the performance, stability, latency and agility that a fast-moving startup influencing multi-million-dollar transactions requires.

The Data Production Team is responsible for all of Vortexa's data. Its work ranges from mixing raw satellite data from 600,000 vessels with rich but incomplete text data to generating high-value forecasts such as vessel destination, cargo onboard, ship-to-ship transfer detection, dark vessels, congestion and future prices. The team has built a variety of procedural, statistical and machine-learning models that enable us to provide the most accurate and comprehensive view of energy flows. We take pride in applying cutting-edge research to real-world problems in a robust, long-lasting and maintainable way. The quality of our data is continuously benchmarked and assessed by experienced in-house market and data analysts to ensure the quality of our predictions.

You'll be instrumental in designing and building infrastructure and applications to propel the design, deployment and benchmarking of existing and new pipelines and ML models. Working with software and data engineers, data scientists and market analysts, you'll help bridge the gap between scientific experiments and commercial products by ensuring 100% uptime and bulletproof fault tolerance for every component of the team's data pipelines.

You Are:
  • Experienced in building and deploying distributed, scalable backend data processing pipelines that can work through terabytes of data daily using AWS, K8s and Airflow
  • Grounded in solid software engineering fundamentals and fluent in both Java and Python (Rust is good to have)
  • Knowledgeable about data lake systems such as Athena and big-data storage formats such as Parquet, HDF5 and ORC, with a focus on data ingestion
  • Driven by working in an intellectually engaging environment with the top minds in the industry, where constructive, friendly challenges and debates are encouraged, not avoided
  • Excited about working in a start-up environment: not afraid of challenges, keen to bring new ideas to production, and a positive can-do, will-do person willing to push the boundaries of your role
  • Passionate about coaching developers, helping them improve their skills and grow their careers
  • Deeply experienced in the full software development life cycle (SDLC), including technical design, coding standards, code review, source control, build, test, deploy and operations

Awesome If You:
  • Have experience with Apache Kafka and streaming frameworks such as Flink
  • Are familiar with observability principles such as logging, monitoring and tracing
  • Have experience with web scraping technologies and information extraction

What we offer:
  • A vibrant, diverse company pushing ourselves and the technology to deliver beyond the cutting edge
  • A team of motivated characters and top minds striving to be the best at what we do, constantly learning and exploring new tools and technologies
  • Acting as company owners (all Vortexa staff have equity options) in a business-savvy and responsible way
  • A collaborative culture motivated by working and achieving together
  • A flexible working policy accommodating both remote and home working, with regular staff events
  • Private health insurance offered via Vitality to help you look after your physical health
  • Global Volunteering Policy to help you 'do good' and feel better
Jan 01, 2026
Full time
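The Vortexa listing above stresses fault tolerance across every pipeline component. As a generic, illustrative sketch of one common building block (not Vortexa's actual code), a retry wrapper with exponential backoff lets a transiently failing ingestion step recover without crashing the pipeline; the task and failure counts here are invented:

```python
import time
from functools import wraps

def retry(attempts=3, base_delay=0.01):
    """Retry a flaky callable, doubling the pause between attempts."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of retries: surface the error
                    time.sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3)
def flaky_ingest():
    # Simulated ingestion step that fails twice before succeeding.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(flaky_ingest())  # ok
```

Production orchestrators such as Airflow offer this per task out of the box; the decorator above just shows the underlying idea.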
AWS Data Platform Engineer
Savant Recruitment
AWS Data Platform Support Engineer - Telecom - Leading Global Consultancy - London - Inside IR35 - 6 Months - Hybrid (1-2 days onsite)

We are recruiting on behalf of our client, a leading global consultancy, for an AWS Data Platform Support Engineer to support large-scale AWS data environments within the telecom domain. This is a confirmed 6-month contract, hybrid in London, Inside IR35.

Responsibilities:
  • Support and troubleshoot AWS-based data platforms for a leading global consultancy
  • Perform incident management, monitoring, and root cause analysis
  • Work with AWS services: Lambda, EC2, S3, Glue, MSK, CloudWatch/CloudTrail, SQS/SNS, Step Functions
  • Manage AWS networking: VPC, DNS, route tables, security groups, NACLs, NAT, endpoints
  • Support Glue Catalog, Athena, and basic RDS/Redshift
  • Produce clear documentation and communicate effectively with internal and client stakeholders

Required Skills:
  • Strong understanding of data pipeline concepts
  • Solid AWS service and IAM policy knowledge
  • Strong troubleshooting and communication skills
Jan 01, 2026
Full time
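The Savant role above calls for solid IAM policy knowledge. As a hypothetical illustration of the least-privilege style such work involves (bucket and statement names invented, actions limited to running Athena queries and touching one results bucket), a policy document can be assembled and serialised like this:

```python
import json

# Hypothetical least-privilege policy: allow running Athena queries and
# reading/writing their results in a single, named S3 bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RunAthenaQueries",
            "Effect": "Allow",
            "Action": [
                "athena:StartQueryExecution",
                "athena:GetQueryExecution",
                "athena:GetQueryResults",
            ],
            "Resource": "*",
        },
        {
            "Sid": "ReadWriteQueryResults",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-athena-results/*",
        },
    ],
}

document = json.dumps(policy, indent=2)
print(document)
```

Scoping the S3 statement to one bucket prefix, rather than granting `s3:*`, is the kind of troubleshooting detail the role's "solid IAM policy knowledge" requirement points at: an over-broad or mis-scoped `Resource` is a frequent root cause of both access failures and audit findings.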
Consulting/Principal Software Engineer
LexisNexis Risk Solutions Southampton, Hampshire
Consulting/Principal Software Engineer
Location: UK - Grosvenor House (Southampton)
Time type: Full time
Job requisition id: R104268

About Cirium: Cirium is transforming the way the world understands aviation data. We connect the industry through innovative analytics, helping airlines, airports, travel companies, tech giants, manufacturers and financial institutions accelerate their digital transformation. Join us to shape the future of aviation.

About the Team: You'll join a collaborative, curious team of professionals at all levels. We value diverse perspectives and encourage ownership in solving challenges end-to-end, from exploring new data sources to designing and deploying predictive models.

About the role: As a member of our team, you'll contribute to building, maintaining and supporting our infrastructure and applications. You'll work with talented colleagues, communicate effectively, and thrive in a fast-paced environment.

Responsibilities:
  • Deliver resilient solutions using DevOps practices and "Infrastructure as Code"
  • Diagnose and resolve complex technical issues
  • Write and maintain documentation for technical and non-technical audiences
  • Collaborate across teams to drive innovation and continuous improvement
  • Uphold our commitment to accessibility, inclusion and data-driven decision-making

Requirements:
  • Experience in AWS, Terraform, Python, Athena, Glue, Lambda, Databricks, S3, EKS, ECS, SQL, Rust, Snowflake
  • Ability to communicate clearly and respectfully with diverse audiences
  • Commitment to learning and adapting in a dynamic environment
  • Willingness to attend our Southampton office 2 days per week (reasonable accommodations available)
  • Experience in data platforms, distributed systems and data pipelines in cloud-native environments

Working for you: We know that your wellbeing and happiness are key to a long and successful career. These are some of the benefits we are delighted to offer:
  • Generous holiday allowance with the option to buy additional days
  • Health screening, eye care vouchers and private medical benefits
  • Wellbeing programmes and life assurance
  • Access to a competitive contributory pension scheme
  • Save As You Earn share option scheme
  • Travel season ticket loan and Electric Vehicle Scheme
  • Optional dental insurance
  • Maternity, paternity and shared parental leave
  • Employee Assistance Programme
  • Access to emergency care for both the elderly and children
  • RECARES days, giving you time to support the charities and causes that matter to you
  • Access to employee resource groups with dedicated time to volunteer
  • Access to extensive learning and development resources
  • Access to employee discounts scheme via Perks at Work

Learn more about the LexisNexis Risk team and how we work. We are committed to providing a fair and accessible hiring process; if you have a disability or other need that requires accommodation or adjustment, please let us know. Criminals may pose as recruiters asking for money or personal information. We never request money or banking details from job applicants. We are an equal opportunity employer: qualified applicants are considered for and treated during employment without regard to race, color, creed, religion, sex, national origin, citizenship status, disability status, protected veteran status, age, marital status, sexual orientation, gender identity, genetic information, or any other characteristic protected by law.

Cirium offers aviation and air travel data and analytics to help keep the world in motion. Our people are at the center of who we are and what we do. We put the interests of our customers unmistakably first, we are empowered by the trust we earn from each other and our customers, we share a common global vision for Cirium based on diversity, inclusion and collaboration, and our passion for discovery will transform industries. Our team delivers insight, built from decades of experience in the sector, enabling travel companies, aircraft manufacturers, airports, airlines and financial institutions, among others, to make logical and informed decisions which shape the future of travel, growing revenues and enhancing customer experiences.
Jan 01, 2026
Full time
Senior Data Platform Engineer - Recommendations
DICE City, London
Live shows make us feel good. They're a time to hang with our friends, discover new artists or lose ourselves on a dancefloor. We're on a mission to bring all of this to more fans, more often, and that's where you come in. We're looking for a Senior Data Platform Engineer to join our Data Platform team and help us scale our data platform to make DICE more reliable and impactful for its partners and customers. At DICE, you'll be part of the company that's redefining live entertainment. It's a place where you can be yourself, influence the culture, and create work that you're proud of.

About the role
You're an experienced engineer who is passionate about building and scaling data products and enhancing data platforms, and comfortable supporting the diverse needs of multiple teams. You'll be in an inspiring and fast-moving environment as part of a cross-functional team, working closely with developers, data scientists, data analysts, data engineers, analytics engineers and product managers. You'll be building data tools, designing highly scalable systems, and arriving at essential insights that will help us make the right platform choices for our use cases. You will contribute to improving our ways of working and establishing standards across the data team. We work iteratively, designing, building and trialling new concepts quickly to test our assumptions and create the best service for our fans and partners. We also continue to pursue the best approaches to the many challenges we face in delivering highly scaled and richly personalised services. We want engineers who are open to collaboration and want to be an integral part of the product improvement process.

You'll be
  • Joining a team of experienced engineers focused on evolving our core data platform, with responsibilities ranging from deprecating legacy systems to designing and building next-generation platform capabilities
  • Working with Engineering Managers and Software Engineers to support the formulation and implementation of a data platform strategy
  • Taking ownership of end-to-end platform initiatives, from scoping and architecture to implementation and rollout, delivering scalable and maintainable systems that empower data consumers across the org
  • Helping drive the technical maturity of the platform team by introducing engineering best practices, fostering reusability, and actively breaking down silos to promote shared ownership and collaboration
  • Working closely with the Product Manager for the Data Platform team, along with Data Engineers, Analytics Engineers, Analysts and Data Scientists, to ensure our platform solutions are robust, efficient and future-ready

You are
  • Passionate, humble and talented
  • A fan of music and culture
  • Actively responsible and a problem solver
  • Inspiring and collaborative
  • Eager to build apps that make a positive impact on the world
  • Up to date with technologies and techniques, and you know when to use them
  • Independent and requiring minimal supervision

You'll need
  • Proven experience as a Data Platform Engineer (4+ years), with a track record of building and scaling modern data platforms and tooling, ideally in complex or high-growth environments
  • Strong understanding of platform architecture and infrastructure design, including security, scalability, observability and cost efficiency
  • Experience building reusable, self-serve tooling and frameworks that improve developer productivity and standardise data workflows across teams
  • Extensive knowledge of the AWS tech stack
  • Deep familiarity with streaming technologies such as Kafka, Kinesis or similar event-driven architectures
  • Experience with orchestration frameworks (e.g. Prefect and Airflow) and setting up scalable, observable DAG infrastructure
  • Strong understanding of data infrastructure best practices, including CI/CD, DevOps, Infrastructure-as-Code (Terraform) and containerisation (Docker/Kubernetes), and experience using these technologies to build scalable data platforms
  • Deep experience with cloud-native data stacks, especially on AWS (e.g. S3, Glue, Redshift, Athena, RDS)
  • Solid development experience with Python (other programming languages are a plus)
  • Strong grasp of version control and CI workflows (e.g. GitHub Actions), and comfort working in Unix environments

Nice to have
  • Knowledge of dbt
  • Knowledge of A/B testing or multivariate testing frameworks
  • Knowledge of data governance, security & privacy, lineage, and quality frameworks in cloud environments
  • AWS certification

About DICE
DICE is based throughout Europe, North America, Australia and India, and is rapidly growing worldwide. We're constantly innovating to bring amazing products to fans, artists, venues and promoters. We know that having a variety of perspectives makes us a better company; it's why we strongly encourage members of underrepresented communities to apply. Find out how we're creating a more diverse, equitable and inclusive DICE.

Benefits
  • Unlimited paid holiday
  • Monthly DICE credits
  • Private health insurance with Vitality, with tons of perks
  • Workplace pension with Penfold
  • Coaching and CBT sessions
  • ClassPass
  • Summer Fridays
  • Eye care vouchers
  • Cycle 2 Work
  • Season ticket loan

We recognise the benefits of hybrid working and want to create the best balance to ensure we can continue working together effectively. For our UK team, we have a hybrid work policy of three days in the office and two days from anywhere. You can chat about your specific team's days and expectations during the interview process.

Application process
Our process usually involves a quick chat on the phone, a portfolio review or task, and a couple of interviews where you'll meet the people you'll work with. We'll keep you fully informed along the way.
We want you to know your data is safe with us.
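The posting above asks for experience setting up scalable, observable DAG infrastructure with orchestration frameworks such as Prefect or Airflow. As an illustrative sketch only - not DICE's actual stack - the core idea of dependency-ordered task execution behind any such orchestrator can be shown with Python's standard-library topological sorter; the task names here are hypothetical:

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical pipeline: each task maps to the set of tasks it depends on.
tasks = {
    "extract_events": set(),
    "extract_users": set(),
    "transform_joined": {"extract_events", "extract_users"},
    "load_warehouse": {"transform_joined"},
}

def run_order(graph):
    """Return one valid execution order for the task graph."""
    return list(TopologicalSorter(graph).static_order())

order = run_order(tasks)
# Both extracts precede the transform; the warehouse load comes last.
assert order.index("transform_joined") > order.index("extract_events")
assert order[-1] == "load_warehouse"
```

Real orchestrators layer scheduling, retries and observability on top of exactly this ordering guarantee.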
Jan 01, 2026
Full time
Senior QA Automation Engineer (UK)
Atreides LLC.
Job Title: Senior QA Automation Data Engineer (Remote CAN)

Company Overview:
Atreides helps organizations transform large and complex multi-modal datasets into information-rich geospatial data subscriptions that can be used across a wide spectrum of use cases. Currently, Atreides focuses on providing high-fidelity data solutions that enable customers to derive insights quickly. We are a fast-moving, high-performance startup. We value a diverse team and believe inclusion drives better performance. We trust our team with autonomy, believing it leads to better results and job satisfaction. With a mission-driven mindset and entrepreneurial spirit, we are building something new and helping unlock the power of massive-scale data to make the world safer, stronger, and more prosperous.

Team Overview:
We are a passionate team of technologists, data scientists, and analysts with backgrounds in operational intelligence, law enforcement, large multinationals, and cybersecurity operations. We obsess over designing products that will change the way global companies, governments and nonprofits protect themselves from external threats and global adversaries.

Position Overview:
We are seeking a QA Automation Data Engineer to ensure the correctness, performance, and reliability of our data pipelines, data lakes, and enrichment systems. In this role, you will design, implement, and maintain automated validation frameworks for our large-scale data workflows. You will work closely with data engineers, analysts, and platform engineers to embed test coverage and data quality controls directly into the CI/CD lifecycle of our ETL and geospatial data pipelines. You should be deeply familiar with test automation in data contexts, including schema evolution validation, edge-case generation, null/duplicate detection, statistical drift analysis, and pipeline integration testing. This is not a manual QA role - you will write code, define test frameworks, and help enforce reliability through automation.

Team Principles:
Remain curious and passionate in all aspects of our work.
Promote clear, direct, and transparent communication.
Embrace the 'measure twice, cut once' philosophy.
Value and encourage diverse ideas and technologies.
Lead with empathy in all interactions.

Responsibilities:
Develop automated test harnesses for validating Spark pipelines, Iceberg table transformations, and Python-based data flows.
Implement validation suites for data schema enforcement, contract testing, and null/duplication/anomaly checks.
Design test cases for validating geospatial data processing pipelines (e.g., geometry validation, bounding-box edge cases).
Integrate data pipeline validation with CI/CD tooling.
Monitor and alert on data quality regressions using metric-driven validation (e.g., row-count deltas, join-key sparsity, referential integrity).
Write and maintain mock data generators and property-based test cases for data edge cases and corner conditions.
Contribute to team standards for testing strategy, coverage thresholds, and release-readiness gates.
Collaborate with data engineers on pipeline observability and reproducibility strategies.
Participate in root-cause analysis and post-mortems for failed data releases or quality incidents.
Document infrastructure design and data engineering processes, and maintain comprehensive documentation.

Desired Qualifications:
5+ years of experience in data engineering or data QA roles with an automation focus.
Strong proficiency in Python and PySpark, including writing testable, modular data code.
Experience with Apache Iceberg, Delta Lake, or Hudi, including schema evolution and partitioning.
Familiarity with data validation libraries (e.g., Great Expectations, Deequ, Soda SQL) or homegrown equivalents.
Understanding of geospatial formats (e.g., GeoParquet, GeoJSON, Shapefiles) and related edge cases.
Experience with test automation frameworks such as pytest, hypothesis, and unittest, and with their integration into CI pipelines.
Familiarity with cloud-native data infrastructure, especially AWS (Glue, S3, Athena, EMR).
Knowledge of data lineage, data contracts, and observability tools is a plus.
Strong communication skills and the ability to work cross-functionally with engineers and analysts.

You'll Succeed If You:
Enjoy catching issues before they hit production and designing coverage to prevent them.
Believe that data quality is a first-class concern, not an afterthought.
Thrive in environments where automated tests are part of the engineering pipeline, not separate from it.
Can bridge the gap between engineering practices and analytics/ML testing needs.
Have experience debugging distributed failures (e.g., skewed partitions, schema mismatches, memory pressure).

Compensation and Benefits:
Competitive salary
Comprehensive health, dental, and vision insurance plans
Flexible hybrid work environment
Additional benefits such as flexible hours, work travel opportunities, competitive vacation time and parental leave

While meeting all of these criteria would be ideal, we understand that some candidates may meet most, but not all. If you're passionate, curious and ready to "work smart and get things done," we'd love to hear from you.
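Several of the responsibilities in this posting (null/duplication checks, mock data, validation suites) come down to programmatic data-quality assertions of the kind libraries like Great Expectations formalise. The following is a minimal, dependency-free sketch of that idea in plain Python; the function and field names are invented for illustration and are not Atreides code:

```python
def validate_rows(rows, key, required):
    """Basic data-quality checks over a list of dict rows:
    flag null required fields and duplicate primary keys."""
    issues = {"nulls": [], "duplicate_keys": []}
    seen = set()
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) is None:
                issues["nulls"].append((i, col))
        k = row.get(key)
        if k in seen:
            issues["duplicate_keys"].append((i, k))
        seen.add(k)
    return issues

# Hypothetical geospatial rows: the second repeats an id and lacks geometry.
rows = [
    {"id": 1, "geom": "POINT(0 0)"},
    {"id": 1, "geom": None},
]
report = validate_rows(rows, key="id", required=["geom"])
assert report["duplicate_keys"] == [(1, 1)]
assert report["nulls"] == [(1, "geom")]
```

In a CI pipeline, a non-empty issues report would typically fail the build or block the data release.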
Jan 01, 2026
Full time
Senior Data Platform Engineer - Recommendations
DICE
Senior Data Platform Engineer - Recommendations
Remote

Live shows make us feel good. They're a time to hang with our friends, discover new artists or lose ourselves on a dancefloor. We're on a mission to bring all of this to more fans, more often - and that's where you come in. We're looking for a Senior Data Platform Engineer to join our Data Platform team, to help us scale our data platform and make DICE more reliable and impactful for its partners and customers. At DICE, you'll be part of the company that's redefining live entertainment. It's a place where you can be yourself, influence the culture, and create work that you're proud of.

About the role
You're an experienced engineer who is passionate about building and scaling data products and enhancing data platforms, and who is comfortable supporting the diverse needs of multiple teams. You'll be in an inspiring and fast-moving environment as part of a cross-functional team, working closely with developers, data scientists, data analysts, data engineers, analytics engineers and product managers. You'll build data tools, design highly scalable systems, and arrive at essential insights that help us make the right platform choices for our use cases. You will contribute to improving our ways of working and establishing standards across the data team. We work iteratively, designing, building and trialling new concepts quickly to test our assumptions and create the best service for our fans and partners. We also continue to pursue the best approaches to the many challenges of delivering highly scaled and richly personalised services. We want engineers who are open to collaboration and want to be an integral part of the product improvement process.

You'll be
Joining a team of experienced engineers focused on evolving our core data platform, with responsibilities ranging from deprecating legacy systems to designing and building next-generation platform capabilities.
Working with Engineering Managers and Software Engineers to support the formulation and implementation of a data platform strategy.
Taking ownership of end-to-end platform initiatives, from scoping and architecture to implementation and rollout, delivering scalable and maintainable systems that empower data consumers across the organisation.
Helping drive the technical maturity of the platform team by introducing engineering best practices, fostering reusability, and actively breaking down silos to promote shared ownership and collaboration.
Working closely with the Product Manager for the Data Platform team, along with Data Engineers, Analytics Engineers, Analysts, and Data Scientists, to ensure our platform solutions are robust, efficient, and future-ready.

You are
Passionate, humble and talented.
A fan of music and culture.
Actively responsible and a problem solver.
Inspiring and collaborative.
Eager to build apps that make a positive impact on the world.
Up to date with technologies and techniques, and you know when to use them.
Independent, requiring minimal supervision.

You'll need
4+ years of proven experience as a Data Platform Engineer, with a track record of building and scaling modern data platforms and tooling, ideally in complex or high-growth environments.
A strong understanding of platform architecture and infrastructure design, including security, scalability, observability, and cost efficiency.
Experience building reusable, self-serve tooling and frameworks that improve developer productivity and standardise data workflows across teams.
Extensive knowledge of the AWS tech stack.
Deep familiarity with streaming technologies such as Kafka, Kinesis, or similar event-driven architectures.
Experience with orchestration frameworks (e.g. Prefect or Airflow) and setting up scalable, observable DAG infrastructure.
A strong understanding of data infrastructure best practices, including CI/CD, DevOps, Infrastructure-as-Code (Terraform), and containerisation (Docker/Kubernetes), with hands-on experience using these technologies to build scalable data platforms.
Deep experience with cloud-native data stacks, especially on AWS (e.g. S3, Glue, Redshift, Athena, RDS).
Solid development experience with Python (other programming languages are a plus).
A strong grasp of version control and CI workflows (e.g. GitHub Actions), and comfort working in Unix environments.
Knowledge of dbt.
Knowledge of A/B testing or multivariate testing frameworks.
Knowledge of data governance, security and privacy, lineage, and quality frameworks in cloud environments.
AWS certification.

About DICE
DICE is based throughout Europe, North America, Australia and India, and is rapidly growing worldwide. We're constantly innovating to bring amazing products to fans, artists, venues and promoters. We know that having a variety of perspectives makes us a better company - it's why we strongly encourage members of underrepresented communities to apply. Find out how we're creating a more diverse, equitable and inclusive DICE.

Application process
Our process usually involves a quick chat on the phone, a portfolio review or task, and a couple of interviews where you'll meet the people you'll work with. We'll keep you fully informed along the way.
Jan 01, 2026
Full time
Linuxrecruit
DevOps/Data Engineer - Amazon Connect Migration
Linuxrecruit City, London
Your DevOps and Data experience is required to aid a brand-new artificial intelligence start-up in a mission-critical project. The client in question incorporated on April 2nd, so you're getting in as early as it gets, with the chance of a flurry of work coming in - it could be extensions galore. As a DevOps Engineer with a strong focus on data, you will be working on a high-impact project involving the migration of a call centre system to Amazon Connect; previous experience migrating to Amazon Connect is a MUST. You'll play a pivotal role in architecting and implementing robust DevOps solutions while leveraging your expertise in AWS, data management, and infrastructure automation.

Key Responsibilities
Lead the migration of the call centre system to Amazon Connect, ensuring seamless integration and optimal performance.
Utilise AWS services, including Kinesis and Terraform, to design and implement scalable and reliable infrastructure.
Develop and maintain automated pipelines for data processing and analysis.
Write and optimise SQL queries for data retrieval and manipulation.
Implement distributed logging solutions to facilitate monitoring and troubleshooting.

Requirements
Amazon Connect migration experience.
Proven experience in DevOps, with a focus on data and cloud technologies.
Strong proficiency in AWS services, including but not limited to EC2, S3, and IAM.
Hands-on experience with Infrastructure as Code (IaC) tools such as Terraform.
Familiarity with data streaming technologies, such as Kinesis.
Proficiency in writing and optimising SQL queries for data analysis.
Experience with distributed logging solutions and monitoring tools.
Prior exposure to Glue, Athena, Genesys, and ETL pipelines is highly desirable.
Consulting background preferred, with a focus on delivering high-quality solutions and adding value to clients.
Knowledge of MLOps and machine learning concepts is a plus - if not, you will get exposure in this role!
If you are someone who really quality-controls your output and takes a consultative approach to your work - often going beyond your remit and offering opinions on how to improve and add value to other aspects of the project - then this one is for you. Apply now for the full details!
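The role above calls for writing and optimising SQL queries. As a small, hedged illustration using only Python's built-in sqlite3 module (the table and column names are hypothetical, not the client's schema), adding an index on the filtered column lets the query planner search the index instead of scanning the whole table:

```python
import sqlite3

# Hypothetical call-centre table with an agent filter we want to speed up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE calls (id INTEGER PRIMARY KEY, agent TEXT, duration REAL)")
con.executemany("INSERT INTO calls (agent, duration) VALUES (?, ?)",
                [("alice", 120.0), ("bob", 45.5), ("alice", 300.0)])

# Without an index this WHERE clause scans the table; with one it searches.
con.execute("CREATE INDEX idx_calls_agent ON calls(agent)")
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(duration) FROM calls WHERE agent = ?",
    ("alice",)).fetchone()
total = con.execute(
    "SELECT SUM(duration) FROM calls WHERE agent = ?", ("alice",)).fetchone()[0]

assert total == 420.0                    # 120.0 + 300.0 for alice
assert "idx_calls_agent" in plan[-1]     # plan detail mentions the index
```

The same scan-vs-search reasoning carries over to Athena and other warehouse engines, where partitioning plays the role the index plays here.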
Jan 01, 2026
Full time
CapGemini
AWS Data Architect
CapGemini City, Birmingham
As an AWS Data Architect within Capgemini's Insights and Data Global Practice, you will be part of the Cloud Data Platforms team, which includes Data Engineers, Platform Engineers, Solutions Architects, and Business Analysts. This team drives digital and data transformation journeys using modern cloud platforms like AWS, Azure, and GCP. You will be responsible for designing and delivering innovative data solutions using AWS technologies, contributing to client transformation initiatives across various sectors. Your Role Lead the design and implementation of cloud-native data platforms using AWS. Architect and deliver data modernisation strategies, transitioning legacy systems to scalable, cloud-native solutions. Design and implement lakehouse architectures that unify data lakes and data warehouses for advanced analytics and AI/ML workloads. Collaborate with other solution architects to ensure alignment with enterprise architecture. Act as a technical liaison between Sales, Delivery, and Client teams. Support proposal writing, solution direction, pricing, and costing. Define and implement data governance, security, and compliance strategies. Work hands-on with AWS services such as Redshift, Glue, Lake Formation, SageMaker, Athena, and more. Contribute to pre-sales activities and client bid responses. Mentor junior team members and contribute to internal capability building. Your Skills and Experience Required Skills & Experience: Proven experience in AWS cloud architecture, particularly in data and analytics. Strong hands-on expertise with AWS services (e.g. Redshift, Glue, Lake Formation, SageMaker). Experience designing scalable data platforms, including data lakes, lakehouses, and real-time analytics. Demonstrated success in data modernisation projects, including migration from on-premise to cloud-native platforms. Knowledge of automation tooling (e.g., CI/CD with AWS DevOps). Familiarity with containerization and orchestration tools (Docker, Kubernetes). 
Understanding of IaaS, PaaS, SaaS models. Experience with other cloud platforms (Azure, GCP) is a plus. Excellent communication and stakeholder engagement skills. AWS Certified Solutions Architect and/or industry certifications such as TOGAF 9 or equivalent. Experience with data platforms like Databricks, Snowflake, Quantexa, Palantir or SAS. Exposure to AI/ML use cases and GenAI technologies. Background in Public Sector or other regulated industries. Your Security Clearance To be successfully appointed to this role, it is a requirement to obtain Security Check (SC) clearance. To obtain SC clearance, the successful applicant must have resided continuously within the United Kingdom for the last 5 years, along with other criteria and requirements. Throughout the recruitment process, you will be asked questions about your security clearance eligibility such as, but not limited to, country of residence and nationality. Some posts are restricted to sole UK Nationals for security reasons; therefore, you may be asked about your citizenship in the application process. What does 'Get The Future You Want' mean for you? Your wellbeing You'd be joining an accredited Great Place to Work for Wellbeing in 2023. Employee wellbeing is vitally important to us as an organisation. We see a healthy and happy workforce as a critical component for us to achieve our organisational ambitions. To help support wellbeing, we have trained 'Mental Health Champions' across each of our business areas, and we have invested in wellbeing apps such as Thrive and Peppy. Shape your path You will be empowered to explore, innovate, and progress. You will benefit from Capgemini's 'learning for life' mindset, meaning you will have countless training and development opportunities from thinktanks to hackathons, and access to 250,000 courses with numerous external certifications from AWS, Microsoft, Harvard ManageMentor, Cybersecurity qualifications and much more. 
Why you should consider Capgemini Growing clients' businesses while building a more sustainable, more inclusive future is a tough ask. When you join Capgemini, you'll join a thriving company and become part of a diverse collective of free-thinkers, entrepreneurs and industry experts. We find new ways technology can help us reimagine what's possible. It's why, together, we seek out opportunities that will transform the world's leading businesses, and it's how you'll gain the experiences and connections you need to shape your future. By learning from each other every day, sharing knowledge, and always pushing yourself to do better, you'll build the skills you want. You'll use your skills to help our clients leverage technology to innovate and grow their business. So, it might not always be easy, but making the world a better place rarely is. About Capgemini Capgemini is a global business and technology transformation partner, helping organisations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fuelled by its market leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2024 global revenues of €22.1 billion. When you join Capgemini, you don't just start a new job. You become part of something bigger. Learn about how the recruitment process works - how to apply, where to follow your application, and next steps. 
To help you bring out the best of yourself during the interview process, we've got some great interview tips to share before the big day.
Jan 01, 2026
Full time
Staff Data Engineer
Bazaarvoice City, Belfast
At Bazaarvoice, we create smart shopping experiences. Through our expansive global network, product-passionate community & enterprise technology, we connect thousands of brands and retailers with billions of consumers. Our solutions enable brands to connect with consumers and collect valuable user-generated content, at an unprecedented scale. This content achieves global reach by leveraging our extensive and ever-expanding retail, social & search syndication network. And we make it easy for brands & retailers to gain valuable business insights from real-time consumer feedback with intuitive tools and dashboards. The result is smarter shopping: loyal customers, increased sales, and improved products. The problem we are trying to solve: Brands and retailers struggle to make real connections with consumers. It's a challenge to deliver trustworthy and inspiring content in the moments that matter most during the discovery and purchase cycle. The result? Time and money spent on content that doesn't attract new consumers, convert them, or earn their long-term loyalty. Our brand promise: closing the gap between brands and consumers. Founded in 2005, Bazaarvoice is headquartered in Austin, Texas with offices in North America, Europe, Asia and Australia. It's official: Bazaarvoice is a Great Place to Work in the US, Australia, India, Lithuania, France, Germany and the UK! Who we want Are you ready to combine your talent for crafting solid data systems and enthusiasm for cutting-edge technology to harness the power of data at Bazaarvoice? We're looking for a strong data engineer who thrives on building large-scale, robust, distributed data systems and pipelines, who understands the importance of good software engineering practices to get it done. If you're excited about shaping the future of data at Bazaarvoice, come join us. 
How you will make an impact As a key member of the Insights team, you'll be tasked with designing, building, and supporting large-scale, distributed data systems that drive our organization's data infrastructure forward, and power our products and services. Your responsibilities will include developing data pipelines, optimizing data storage and retrieval processes, and ensuring the reliability and scalability of our data architecture. You'll collaborate closely with cross-functional teams to understand data requirements, implement solutions, and troubleshoot issues as they arise. You'll also play a pivotal role in advocating for and implementing software engineering best practices to ensure the efficiency, maintainability, and robustness of our data systems. This role offers an exciting opportunity to work on cutting-edge technology and contribute to shaping the future of data-driven decision making within our organization. Who you are BSc in Computer Science or related discipline. 7+ years' experience designing and building robust, scalable, distributed data systems and pipelines, using open-source and public cloud technologies. Strong experience with data orchestration tools: e.g. Apache Airflow, Dagster. Experience with big data storage and processing technologies: e.g. DBT, Spark, SQL, Athena/Trino, Redshift, Snowflake, RDBMSs (PostgreSQL/MySQL). Knowledge of event-driven architectures and streaming technologies: e.g. Apache Kafka, Kafka Streams, Apache Flink. Experience with public cloud environments: e.g. AWS, GCP, Azure, Terraform. Strong knowledge of software engineering practices: e.g. testing, CI/CD (Jenkins, GitHub Actions), agile development, git/version control, containers etc. Strong technical leadership, problem-solving skills and analytical thinking. Passion for staying up to date with emerging data engineering technologies and trends. Why join Bazaarvoice? Customer is key We see our own success through our customers' outcomes. 
We approach every situation with a customer-first mindset. Transparency & Integrity Builds Trust We believe in the power of authentic feedback because it's in our DNA. We do the right thing when faced with hard choices, and transparency and trust accelerate our collective performance. Passionate Pursuit of Performance Our energy is contagious because we hire for passion, drive & curiosity. We love what we do because we're laser-focused on our mission. Innovation over Imitation We seek to innovate as we are not content with the status quo. We embrace agility and experimentation as an advantage. Stronger Together We bring our whole selves to the mission and find value in diverse perspectives. We champion what's best for Bazaarvoice before individuals or teams. As a stronger company we build a stronger community. Commitment to diversity and inclusion Bazaarvoice provides equal employment opportunities (EEO) to all team members and applicants according to their experience, talent, and qualifications for the job without regard to race, color, national origin, religion, age, disability, sex (including pregnancy, gender stereotyping, and marital status), sexual orientation, gender identity, genetic information, military/veteran status, or any other category protected by federal, state, or local law in every location in which the company has facilities. Bazaarvoice believes that diversity and an inclusive company culture are key drivers of creativity, innovation and performance. Furthermore, a diverse workforce and the maintenance of an atmosphere that welcomes versatile perspectives will enhance our ability to fulfill our vision of creating the world's smartest network of consumers, brands, and retailers. Please note: A basic background check will be required for the successful candidate.
Jan 01, 2026
Full time
Lead Data Engineer (Databricks/Python) in UK - Eleks
WorksHub
Eleks United Kingdom Posted about 1 month ago This is a job posted by our partner Jooble. Below is a snippet of the job description. To read the full text, please click on the "Apply Now" link. Qualifications Exception handling 2+ years of hands-on experience with Databricks 2+ years in a technical leadership role Strong knowledge of Python Practical experience with AWS services (e.g. S3, Redshift, Athena, Glue, Lambda) Upper-Intermediate or higher level of English ELEKS is a global provider of software engineering and technology consulting services, specializing in Data Science, Mobility, and Financial solutions for industry leaders.
Jan 01, 2026
Full time
Senior Backend Engineer
Popsa City Of Westminster, London
The mission for the role: The backend team at Popsa is responsible for the entirety of the backend, from the underlying AWS infrastructure to the microservices and code that runs on it. As guardians of Popsa's infrastructure the team are involved in the design and development of features from the get go; supporting with domain knowledge in API design, security and infrastructure; enabling Popsa to bring exciting features from inception to implementation. The backend team can be considered cross discipline, operating across both the backend services and platform/devops domains - this richness keeps the workload varied and exciting. We're looking for a highly skilled and driven Senior Backend Engineer who can cover a wide scope of responsibilities, including user-facing feature development, infrastructure reliability and security, and development of internal services. Sitting at the heart of the company, this role will work directly with product, front-end engineering, data science, customer service and operational teams. This is a really exciting opportunity with the potential to directly influence the company's growth, through innovative technical design and freedom to explore novel approaches. 
What we are looking for Strong recent experience with AWS and its managed/serverless ecosystem A problem-solving mindset and a constructive, collaborative approach Clear communicator who works well across engineering and product teams Solid experience developing in Go, familiarity with languages such as Python or TypeScript would be a plus Hands on experience with Kubernetes for orchestration Proficient with Terraform for infrastructure configuration and provisioning Practical knowledge of observability tooling (CloudWatch, Grafana, Prometheus) Comfortable writing SQL for analytical workloads (e.g., Athena) Experience with ElasticSearch/OpenSearch (nice to have) Familiarity with GitHub Actions (nice to have) Interest in and active use of AI based tooling to support efficient engineering practices A technically strong, product focused mindset that balances engineering quality with product priorities Some of our exciting technical challenges Scaling infrastructure globally to provide a low latency experience to our users Enabling real time design collaboration between our users Developing social graphs to help users enrich their stories Tech Stack highlights Core Platform Cloud hosted infrastructure running 30+ micro services on AWS using Kubernetes (EKS) and gRPC for interservice communication Serverless stack with over 250 Lambda functions for event processing Terraform managed infrastructure DynamoDB application database Prometheus, Grafana, Jaeger and Splunk for observability and alerting User facing Apps 100% native iOS app built in Swift using the Coordinators (C MVVM) pattern 100% native Android app built in Kotlin, using JetPack Compose (Both mobile apps leverage native vision and machine learning frameworks to perform deep analysis on photos using our in house trained models) Modern, high performance Typescript web application deployed on Vercel Fully automated deployment workflows for Web development Data Architecture S3 data lake with Athena and Apache 
Spark for analytical workloads AWS Batch for orchestration of user facing data rich features like Memory generation SageMaker for model training and evaluation Bedrock and AgentCore for agents workflows Ops Linear used for work management across all teams Figma used for product design and front end prototyping Confluence (moving to Coda) for knowledge management Slack for internal comms Mixpanel and Growthbook for behavioural analytics and multi variate testing ChatGPT, Claude and AI enabled IDEs available to all team members
Jan 01, 2026
Full time
The mission for the role: The backend team at Popsa is responsible for the entirety of the backend, from the underlying AWS infrastructure to the microservices and code that runs on it. As guardians of Popsa's infrastructure the team are involved in the design and development of features from the get go; supporting with domain knowledge in API design, security and infrastructure; enabling Popsa to bring exciting features from inception to implementation. The backend team can be considered cross discipline, operating across both the backend services and platform/devops domains - this richness keeps the workload varied and exciting. We're looking for a highly skilled and driven Senior Backend Engineer who can cover a wide scope of responsibilities, including user-facing feature development, infrastructure reliability and security, and development of internal services. Sitting at the heart of the company, this role will work directly with product, front-end engineering, data science, customer service and operational teams. This is a really exciting opportunity with the potential to directly influence the company's growth, through innovative technical design and freedom to explore novel approaches. 
What we are looking for
- Strong recent experience with AWS and its managed/serverless ecosystem
- A problem-solving mindset and a constructive, collaborative approach
- A clear communicator who works well across engineering and product teams
- Solid experience developing in Go; familiarity with languages such as Python or TypeScript would be a plus
- Hands-on experience with Kubernetes for orchestration
- Proficiency with Terraform for infrastructure configuration and provisioning
- Practical knowledge of observability tooling (CloudWatch, Grafana, Prometheus)
- Comfortable writing SQL for analytical workloads (e.g., Athena)
- Experience with ElasticSearch/OpenSearch (nice to have)
- Familiarity with GitHub Actions (nice to have)
- Interest in and active use of AI-based tooling to support efficient engineering practices
- A technically strong, product-focused mindset that balances engineering quality with product priorities

Some of our exciting technical challenges
- Scaling infrastructure globally to provide a low-latency experience to our users
- Enabling real-time design collaboration between our users
- Developing social graphs to help users enrich their stories

Tech Stack highlights
Core Platform
- Cloud-hosted infrastructure running 30+ microservices on AWS, using Kubernetes (EKS) and gRPC for inter-service communication
- Serverless stack with over 250 Lambda functions for event processing
- Terraform-managed infrastructure
- DynamoDB application database
- Prometheus, Grafana, Jaeger and Splunk for observability and alerting

User-facing Apps
- 100% native iOS app built in Swift using the Coordinators (C-MVVM) pattern
- 100% native Android app built in Kotlin, using Jetpack Compose
- (Both mobile apps leverage native vision and machine learning frameworks to perform deep analysis on photos using our in-house-trained models)
- Modern, high-performance TypeScript web application deployed on Vercel
- Fully automated deployment workflows for web development

Data Architecture
- S3 data lake with Athena and Apache Spark for analytical workloads
- AWS Batch for orchestration of user-facing, data-rich features like Memory generation
- SageMaker for model training and evaluation
- Bedrock and AgentCore for agent workflows

Ops
- Linear for work management across all teams
- Figma for product design and front-end prototyping
- Confluence (moving to Coda) for knowledge management
- Slack for internal comms
- Mixpanel and Growthbook for behavioural analytics and multivariate testing
- ChatGPT, Claude and AI-enabled IDEs available to all team members
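The data architecture above pairs an S3 data lake with Athena, where query cost scales with the bytes scanned, so analytical queries are usually pruned to the relevant partitions. A minimal sketch in Python of composing such a partition-pruned, Athena-style query; the table name, `dt` partition column, and `event_type` column are hypothetical, not taken from the listing:

```python
from datetime import date


def daily_events_query(table: str, day: date) -> str:
    """Compose an Athena-style SQL query restricted to one day's
    partition (dt=YYYY-MM-DD), so only that partition's S3 objects
    are scanned. Table and column names are illustrative."""
    return (
        f"SELECT event_type, COUNT(*) AS n "
        f"FROM {table} "
        f"WHERE dt = DATE '{day.isoformat()}' "
        f"GROUP BY event_type ORDER BY n DESC"
    )


q = daily_events_query("analytics.app_events", date(2026, 1, 1))
```

The composed string would then be submitted via the usual Athena client (e.g. boto3's `start_query_execution`); the point is the `WHERE dt = ...` predicate, which lets Athena skip every other partition.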
Trainline
Head of Data Science
Trainline City, London
About us
We are champions of rail, inspired to build a greener, more sustainable future of travel. Trainline enables millions of travellers to find and book the best value tickets across carriers, fares, and journey options through our highly rated mobile app, website, and B2B partner channels. Great journeys start with Trainline. Now Europe's number 1 downloaded rail app, with over 125 million monthly visits and £5.9 billion in annual ticket sales, we collaborate with 270+ rail and coach companies in over 40 countries. We want to create a world where travel is as simple, seamless, eco-friendly and affordable as it should be. Today, we're a FTSE 250 company driven by our incredible team of over 1,000 Trainliners from 50+ nationalities, based across London, Paris, Barcelona, Milan, Edinburgh and Madrid. With our focus on growth in the UK and Europe, now is the perfect time to join us on this high-speed journey.

Introducing Data Science at Trainline
Data Science is central to how we build products, delight our customers and grow our business. Our Data Scientists are embedded in cross-functional teams across Product and Marketing, empowered with a high degree of autonomy to drive outcomes using all data and techniques at their disposal. As the Head of Data Science, you will lead a team of high-performing Data Science Managers and play a pivotal role in shaping a large cross-functional organisation spanning Product, Engineering, Marketing and Data. You will be a key decision-maker, helping define and deliver a product experience that provides the right inventory, enables a seamless purchase journey, and drives forward our future ticketing opportunities. Doing this well requires deeply understanding our users, identifying their Jobs-to-Be-Done, evaluating whether we are successfully meeting their needs, and accelerating the pace of product discovery and iteration.
Your team will influence strategic product thinking, strengthen experimentation and measurement practices, and shape how AI and data power our product experience. In this role, your leadership spans two complementary dimensions:
- Functional leadership, setting the bar for excellence in Data Science & Analytics.
- Strategic business partnership, working closely with Product, Engineering, Commercial and Marketing to define long-term direction and deliver impactful outcomes.

As a Head of Data Science at Trainline, you will
Lead & Develop a High-Performing Data Science Organisation
- Lead an org of 3 Data Science Managers and their respective teams.
- Build a culture focused on experimentation, learning, and measurable business impact.
- Ensure Data Science & Analytics talent is embedded effectively into cross-functional squads and operating at a high bar.
Shape Strategy Through Data
- Act as a co-leader of a large cross-functional strategic area of 150 people, defining long-term vision and strategy.
- Provide data-driven frameworks to structure product thinking - user classifications, Jobs-to-Be-Done, north-star metrics, success criteria, and evaluation methods.
- Influence prioritisation and roadmap decisions by grounding strategic choices in evidence and insight.
Advance Experimentation, Measurement & Goaling
- Champion and mature experimentation practices across teams.
- Develop clear goaling methodologies enabling rapid iteration and learning.
- Ensure robust evaluation of product changes, including holdouts and causal inference methods.
Elevate Data, AI & Infrastructure Capabilities
- Work with our ML Engineering counterparts to help shape our wider AI/ML strategy.
- Influence Data Engineering, BI and Platform priorities to improve data maturity, quality and tooling.
- Ensure foundational datasets and metrics are trusted, consistent and scalable.
Drive High-Impact Outcomes & Senior Communication
- Hold the organisation to a high bar for analytical rigour and business impact.
- Communicate insights, strategy, and progress to senior leadership.
- Drive alignment and influence decision-making across the company.

We'd love to hear from you if you have
- Experience leading data-driven teams in the product space within tech organisations.
- Proven experience managing Data Science Managers or Data Scientists & Analysts.
- A demonstrated record of driving growth and influencing strategy in online products.
- Experience setting strategic direction, thinking big, and executing effectively.
- The ability to distil complex analysis into clear, actionable communication for all levels.
- Strong experience guiding experimentation and test-and-learn cultures.
- The ability to navigate ambiguous datasets and translate them into insights.
- Strong stakeholder management and cross-functional leadership experience.
- Strong data visualisation and communication skills.
- Knowledge of statistical and causal inference methods.

Tech stack: SQL, Python, dbt, Tableau, Trino, AWS Athena + more.

More information: Enjoy fantastic perks like private healthcare & dental insurance, a generous work-from-abroad policy, 2-for-1 share purchase plans, an EV scheme to further reduce carbon emissions, extra festive time off, and excellent family-friendly benefits. We prioritise career growth with clear career paths, transparent pay bands, personal learning budgets, and regular learning days. Jump on board and supercharge your career from day one! We operate a hybrid working model and ask that Trainliners work from the office a minimum of 60% of their time over a 12-week period. We also have a 28-day Work from Abroad policy. Our values represent the things that matter most to us and what we live and breathe every day, in everything we do:
- Think Big - We're building the future of rail
- Own It - We focus on every customer, partner and journey
- Travel Together - We're one team
- Do Good - We make a positive impact

We know that having a diverse team makes us better and helps us succeed.
And we mean all forms of diversity - gender, ethnicity, sexuality, disability, nationality and diversity of thought. That's why we're committed to creating inclusive places to work, where everyone belongs and differences are valued and celebrated. Interested in finding out more about what it's like to work at Trainline? Why not check us out on LinkedIn, Instagram and Glassdoor!
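The experimentation and holdout evaluation this role champions ultimately reduces to comparing conversion between a treated group and a held-out control. A minimal two-proportion z-test sketch in Python, using only the standard library; the conversion counts and sample sizes below are made up for illustration:

```python
import math


def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Z-test for the difference between two conversion rates,
    e.g. an A/B experiment arm versus a holdout.
    Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value


# hypothetical experiment: 5.4% vs 4.9% conversion on 10k users per arm
z, p = two_proportion_ztest(conv_a=540, n_a=10_000, conv_b=490, n_b=10_000)
```

In practice teams lean on an experimentation platform rather than hand-rolled tests, but the underlying comparison is the same.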
Jan 01, 2026
Full time
University of Glasgow
Data Science and AI Specialist
University of Glasgow City, Glasgow
Applied Data Scientist - Health and AI (Trusted Research Environment) Research Track

Job Purpose
To provide advanced analytical, epidemiological, and data science support for research projects using NHS data hosted within the Trusted Research Environment (TRE). The postholder will work closely with investigators from NHS Greater Glasgow and Clyde (NHSGGC), the University of Glasgow (UofG), and industry partners to translate research ideas into robust analytical plans, ensure data are appropriately specified and prepared for analysis, and deliver high-quality, reproducible outputs. The role focuses on real-world health data analysis - including study design, data wrangling, phenotype development, data integration, and statistical and machine learning methods - to accelerate project delivery, strengthen grant applications, and advance the overall research capability of the TRE.

Main Duties and Responsibilities
- Support principal investigators by designing and implementing robust analytical and statistical workflows for complex clinical and population health datasets hosted in the TRE - including data wrangling, quality assessment, phenotype development, and exploratory analyses.
- Develop reproducible and transparent analytical pipelines, ensuring data provenance, version control, and adherence to ethical and governance standards.
- Work closely with clinicians, researchers, and data engineers across the NHS and UofG to define project data requirements, optimise analytical design, and translate research questions into executable analyses.
- Lead on technical aspects of data integration, statistical and machine learning model development, validation, interpretability, and deployment within the secure TRE environment.
- Ensure all research activities comply with NHS data governance, ISO standards, and the TRE's ethical frameworks.
- Contribute to demonstration and exemplar projects (e.g., multimodal data integration, digital phenotyping, predictive analytics) that highlight the TRE's analytical and AI capabilities.
- Act as liaison between the NHS Safe Haven, academic researchers, and University Services (e.g., Information Services, Centre for Data Science and AI), advising on data specifications, study design, and appropriate analytical methodologies.
- Support the training and mentoring of researchers and students in applied health data science, statistical methods, and TRE workflows.
- Perform administrative and governance-related tasks relevant to TRE operations, including documentation, data access tracking, and project coordination.
- Keep up to date with current knowledge and recent advances in the field/discipline.
- Contribute to research outputs, grant applications, and dissemination activities that strengthen TRE capabilities and support collaborative funding bids.
- Participate and engage with national and cross-institutional AI/TRE initiatives and networks as appropriate.
- Undertake any other reasonable duties as required by the Head of School / Director of Clinical TRE.
- Contribute to the enhancement of the University's international profile in line with the University Strategy.

Knowledge, Qualifications, Skills and Experience

Knowledge / Qualifications
Essential
A1 Scottish Credit and Qualification Framework level 12 (PhD) in a relevant discipline such as Epidemiology, Biostatistics, Health Data Science, or Health Informatics.
A2 Strong knowledge of epidemiological and biostatistical principles applied to healthcare data, with experience integrating these with data science or AI/ML methods.
A3 Demonstrable understanding of data governance and regulatory requirements for clinical data, including anonymisation, secure data handling protocols and workflows underpinning Trusted Research Environments (TREs).
A4 Understanding of study design, phenotype development, and data quality assessment in real-world healthcare research.
Desirable
B1 Additional formal training or certification in Epidemiology, Biostatistics, Health Informatics, or Applied AI in Healthcare.
B2 Knowledge of data standards and interoperability frameworks (e.g., OMOP, FHIR, SNOMED CT, ICD-10) relevant to real-world data integration.
B3 Understanding of computable phenotypes, data harmonisation, or ontology development for clinical research.
B4 Awareness of federated analytics, privacy-preserving computation, or distributed learning within Trusted Research Environments.

Skills
Essential
C1 Proficiency in R and/or Python, with strong skills in health data wrangling, cleaning, integration, and visualisation; experience with analytical and machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
C2 Ability to manipulate, analyse, and interpret large or complex healthcare datasets within secure computing environments, ensuring reproducibility and integrity.
C3 Excellent communication and interpersonal skills to work across interdisciplinary teams in both academic and clinical environments.
C4 Proven ability to explain analytical findings and complex technical concepts to non-specialist stakeholders, including clinicians, policymakers, and industry partners.
C5 Problem-solving mindset with the ability to work independently and manage multiple priorities.
Desirable
D1 Experience in developing reproducible analysis pipelines using tools such as Git, Docker, or workflow managers.
D2 Strong skills in data visualisation and dashboarding (e.g., R Shiny, Plotly, Dash, Power BI) for communicating insights to clinical and policy audiences.
D3 Familiarity with advanced analytical techniques, such as causal inference, predictive modelling, or survival analysis in health data contexts.

Experience
Essential
E1 Significant experience in applied health data analysis - including study design, data specification, data wrangling, statistical analysis, and (where appropriate) machine learning model development or evaluation.
E2 Experience working with sensitive health or clinical datasets within secure research environments or safe havens.
E3 Experience contributing to research publications, technical reports, or grant-funded projects through provision of analytical and methodological expertise.
E4 Experience working within data governance and ethical frameworks, ideally in healthcare or public sector research.
E5 Proven commitment to supporting the career development of colleagues and to other forms of collegiality appropriate to the career stage.
Desirable
F1 Prior experience supporting Safe Haven/TRE governance committees, data access processes, or technical advisory groups.
F2 Contribution to open-source tools, data models, or methods for healthcare analytics or AI reproducibility.
F3 Experience in preparing grant applications or preliminary data analyses that directly supported successful research funding.
F4 Evidence of continuous professional development in health data science, AI ethics, or digital health innovation.

Informal enquiries should be directed to Professor Sandosh Padmanabhan. Previous applicants should not re-apply for this position.

Terms and Conditions
Salary will be Grade 7, £41,064 - £46,049 per annum. This post is full time (35 hours p/w) and has funding for up to 3 years initially. Relocation assistance will be provided where appropriate.

As a valued member of our team, you can expect:
- A warm, welcoming and engaging organisational culture, where your talents are developed and nurtured, and success is celebrated and shared.
- An excellent employment package with generous terms and conditions, including 41 days of leave for full-time staff, pension (see the pensions handbook), and benefits and discount packages.
- A flexible approach to working.
- A commitment to support your health and wellbeing, including a free six-month UofG Sport membership for all new staff joining the University.

We believe that we can only reach our full potential through the talents of all. Equality, diversity and inclusion are at the heart of our values. Applications are particularly welcome from across our communities, and in particular from people from the Black, Asian and Minority Ethnic (BAME) community and other protected characteristics who are under-represented within the University. Read more on how the University promotes and embeds all aspects of equality and diversity within our community. We endorse the principles of Athena Swan and hold bronze, silver and gold awards across the University. We are investing in our organisation, and we will invest in you too. Please visit our website for more information.

Closing date: 8 January 2026 at 23:45
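Survival analysis, listed above as a desirable technique for real-world health data, estimates the probability of remaining event-free over follow-up while handling censored records. A minimal Kaplan-Meier estimator in pure Python; the follow-up times and event flags below are synthetic, for illustration only:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. `times` are follow-up times;
    `events` are 1 for an observed event, 0 for censoring.
    Returns a list of (time, survival probability) steps."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = 0  # events at time t
        n = 0  # subjects leaving the risk set at time t
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]
            n += 1
            i += 1
        if d:  # survival only drops at event times, not at censorings
            surv *= 1 - d / at_risk
            curve.append((t, surv))
        at_risk -= n
    return curve


# synthetic cohort: 1 = event observed, 0 = censored
curve = kaplan_meier([2, 3, 3, 5, 8, 8], [1, 1, 0, 1, 0, 1])
```

In practice a library such as R's `survival` package or Python's lifelines would be used, but the hand-rolled version shows what the estimator actually does with censored follow-up.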
Jan 01, 2026
Full time
Lectureships/Readerships in Statistics and Data Science
The International Society for Bayesian Analysis Edinburgh, Midlothian
Lectureships/Readerships in Statistics and Data Science
Continuing an ambitious long-term plan, which includes expansion into part of the new £40M Bayes Centre, the School of Mathematics is making a number of permanent appointments in the Mathematical Sciences. We are recruiting candidates with a track record of high-quality research and teaching in Statistics and Data Science, to start on 1 August 2018 or by agreement. The successful applicants will contribute to the growing reputation of the University as an international hub for Statistics and will join the recently established University-wide Centre for Statistics. They will interact with colleagues in the Bayes Centre, a new interdisciplinary Data Science Institute within the University, as well as the Maxwell Institute, a longstanding research partnership between the University of Edinburgh and Heriot-Watt University. They will also have opportunities to be actively involved with the Alan Turing Institute, a UK-wide initiative in Data Science. All applications must be submitted online and include a full CV, a research statement and a teaching statement. We also require details of four referees: three to comment on your research and one on your teaching. Salary Scale: £39,992 - £47,722 per annum. Very strong and experienced applicants may be appointed to a Readership, for which the salary is £50,618 - £56,950 per annum. Applications close at 5pm (UK time) on 3rd January 2018. Informal enquiries may be made to Professor Ruth King (Thomas Bayes' Chair of Statistics). The University of Edinburgh promotes equality and diversity. We strive for a family-friendly School of Mathematics; hold a Bronze Athena SWAN award and support the London Mathematical Society Good Practice Scheme.
Jan 01, 2026
Full time
WISE Campaign
Lecturer in Energy Engineering
WISE Campaign Norwich, Norfolk
Faculty of Science, School of Engineering, Mathematics and Physics. Lecturer A or B in Energy Engineering. Ref: ATR1732. Salary on Lecturer A (grade 7) will be £38,784 per annum, dependent on skills and experience, with an annual increment up to £46,049 per annum. Salary on Lecturer B (grade 8) will be £48,822 per annum, dependent on skills and experience, with an annual increment up to £56,535 per annum. An exciting opportunity has arisen for a Lecturer to join the School of Engineering, Mathematics and Physics at UEA. Are you a high-calibre individual with excellent research? Do you work in sensors, data, diagnostics and/or smart materials with renewable energy or circular economy applications? Are you able to develop engaging teaching materials in energy engineering and sustainability? Can you support growth in student numbers through advocacy for engineering and UEA? You will receive support to develop your research profile, producing high-quality proposals to secure external research funding and disseminating results through academic publications and conferences. You will be encouraged to work with local business to develop consultancy and research impact. Teaching is a key part of this role, and you will be mentored to deliver at undergraduate and postgraduate levels. You must have a PhD (or equivalent) in a relevant subject area, with experience of undergraduate teaching and student assessment. You will have a track record, appropriate to experience, of high-quality publications and have produced research which has the potential to have an impact beyond academia. Evidence of successful grant applications is essential for appointment at Lecturer B. An ability to provide academic leadership and supervise PhD students would be advantageous. This full-time post is available from 20 April 2026 on an indefinite basis. One appointment will be considered at either Lecturer A (grade 7) or Lecturer B (grade 8), dependent on skills and experience.
Within your personal statement, please specify the grade of role you would like to be considered for. You must be able to meet all the essential criteria for that grade set out in the Person Specification; full details can be found in the Candidate Brochure. UEA offers a variety of flexible working options and, although this role is advertised on a full-time basis, we encourage applications from individuals who would prefer a flexible working pattern, including annualised hours, compressed working hours, part time, job share, term time only and/or hybrid working. Details of preferred hours should be stated in the personal statement and will be discussed further at interview. Further information on our great benefits package, including 44 days annual leave inclusive of Bank Holidays and additional University Customary days (pro rata for part time), can be found on our benefits page. Closing date: 19 January 2026. The University holds an Athena Swan Silver Institutional Award in recognition of our advancement towards gender equality.
Jan 01, 2026
Full time
AWS Data Engineer
Stackstudio Digital Ltd. Ipswich, Suffolk
Job title: AWS Data Engineer
Location: Ipswich (onsite, 5 days per week)
Type of employment: Permanent
Job overview: We are seeking an experienced AWS Data Engineer with strong expertise in ETL pipelines, Redshift, Iceberg, Athena, and S3 to support large-scale data processing and analytics initiatives in the telecom domain. Click apply for full job details.
Dec 23, 2025
Full time
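A stack like the one named above (S3, Athena, Iceberg, Redshift) typically begins by registering Iceberg tables over data in S3 so Athena can query them. Below is a minimal sketch of rendering such a DDL statement in Python; the database, table, column and bucket names are hypothetical illustrations, not taken from the posting.

```python
def iceberg_ddl(database: str, table: str, location: str, columns: dict) -> str:
    """Render a CREATE TABLE statement for an Apache Iceberg table in Athena.

    `columns` maps column names to Athena/Iceberg data types.
    """
    cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in columns.items())
    return (
        f"CREATE TABLE {database}.{table} (\n  {cols}\n)\n"
        f"LOCATION '{location}'\n"
        "TBLPROPERTIES ('table_type' = 'ICEBERG')"
    )

# Hypothetical telecom call-record table; all names are illustrative only.
ddl = iceberg_ddl(
    database="telecom",
    table="call_records",
    location="s3://example-bucket/warehouse/call_records/",
    columns={"call_id": "string", "started_at": "timestamp", "duration_s": "int"},
)
print(ddl)
```

In practice the rendered statement would be submitted through the Athena API (for example, boto3's `start_query_execution`), which needs AWS credentials and a query-results location, so that step is omitted here.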
L3 Data Engineer - Support
Stackstudio Digital Ltd.
Job title: L3 Data Engineer - Support
Location: London, UK
Duration: 6 months
Work mode: Hybrid (12 Days Onsite per week)
Job description: Looking for an L3 Data Engineer (Support).
Key skills & expertise:
AWS core services: S3, Redshift, Glue, Athena, Lake Formation, IAM
Data engineering / ETL: building and optimizing ETL pipelines; data ingestion, transformation and orchestration using AWS Glue (PySpark/Python)
Click apply for full job details.
Dec 18, 2025
Contractor
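Glue ingestion jobs of the kind described above are usually written in PySpark, but the core cleanup logic (deduplication and type normalisation) can be sketched in plain Python. This is a stand-in for the DataFrame operations a Glue job would use; the record fields are hypothetical, not from the posting.

```python
from datetime import datetime, timezone


def transform(records: list) -> list:
    """Deduplicate on call_id and normalise field types, mirroring the kind
    of cleanup a Glue (PySpark) ingestion job would apply with DataFrame ops.
    The field names (call_id, started_at, duration_s) are hypothetical."""
    seen, out = set(), []
    for rec in records:
        if rec["call_id"] in seen:
            continue  # drop repeated ingests of the same call record
        seen.add(rec["call_id"])
        out.append({
            "call_id": rec["call_id"],
            # raw feed delivers epoch seconds as strings; cast to aware UTC datetime
            "started_at": datetime.fromtimestamp(int(rec["started_at"]), tz=timezone.utc),
            "duration_s": int(rec["duration_s"]),
        })
    return out


rows = transform([
    {"call_id": "a1", "started_at": "1700000000", "duration_s": "42"},
    {"call_id": "a1", "started_at": "1700000000", "duration_s": "42"},  # duplicate
    {"call_id": "b2", "started_at": "1700000100", "duration_s": "7"},
])
```

In a real Glue job the same shape would be expressed as `dropDuplicates` plus column casts on a DynamicFrame/DataFrame, scheduled and orchestrated by Glue rather than called directly.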
