Sorry, that job is no longer available. Here are some results that may be similar to the job you were looking for.

7 jobs found

Current search: distributed systems engineer apache kafka java python
Senior Technical Lead - Compute Services, SVP
Citigroup Inc.
XiP is building a next-generation cross-asset calculation system for Citi trading desks and enterprise users in the largest global financial markets and exchanges in New York, London, and other major financial hubs.

Team & Role Overview
Our team owns multiple Java Spring Boot services that execute, partition, and track quantitative risk graphs/trades in a distributed environment. These graphs can fail due to their complexity, and our system must adapt quickly to these failures to provide a seamless experience for clients. XiP Compute Services are deployed onto OpenShift and Amazon's Elastic Kubernetes Service (EKS). An important initiative in 2025 will be onboarding Google Kubernetes Engine to further expand our coverage. Our system scales on demand, and we can run up to tens of thousands of replicas of our services across all asset classes.

The role of the Senior Technical Lead is to lead a variety of engineering activities, including design decisions regarding the technical direction of the platform over short-, medium-, and long-term horizons, with a key focus on public cloud onboarding. The project requires constant review of the technologies, patterns, and paradigms used to ensure the system is easy to understand, performant, scalable, testable, robust, and observable. The role combines technical and managerial duties, including line management, while giving technical direction to a growing team of developers globally.

The platform is a greenfield build using standard modern technologies such as Java, Spring Boot, Kubernetes, Kafka, MongoDB, RabbitMQ, Solace, and Apache Ignite. The platform runs in a hybrid mode, both on premise and in AWS, utilising technologies such as EKS, S3, and FSx. The main purpose of this role is to lead continued platform onboarding to AWS as well as the new initiative to deploy into GCP. The project is in a scale-out phase, with a goal of expanding the userbase and workloads towards running billions of financial calculations per day across hundreds of thousands of cores. The aim of the project is to run all finance calculations for Citi's Front Office Markets business globally.
Responsibilities:
• Steering platform onboarding into AWS and Google Cloud, while collaborating with the Citi HPC team and AWS/Google partners
• Challenging proposed and provided solutions in terms of performance, robustness, and cost effectiveness
• Making decisions regarding the technical direction of the platform, including evaluating new technologies and executing proof-of-concept implementations, with a good understanding of various limitations
• Identifying and defining necessary system enhancements to improve current processes and architecture
• Hands-on coding of fixes, features, and improvements
• Investigating reported or observed platform issues
• Reviewing pull requests from other team members and giving robust critique/feedback
• Identifying and proposing teamwork enhancements
• Reviewing requests for new features, balancing user requirements with defending the platform from complexity and low-value features
• Collaborating with key partners across the firm for extending the platform, such as the infrastructure provider group, the quant group, and upstream and downstream systems
• Mentoring/coaching junior developers on coding/architecture approaches and best practices

Skills and Experience:
• Expert knowledge of distributed systems, including event-driven architecture; at-least-once messaging (a minimal sketch of this pattern follows the posting); CAP theorem; horizontal and vertical scaling strategies; massively distributed architectures
• Expert knowledge of Java, the JVM, memory management, and garbage collection
• Thorough understanding of multithreaded environment challenges
• Expert knowledge of Spring, the Spring Boot framework, and associated technologies
• Expert knowledge of test frameworks such as JUnit and Mockito, and of writing easily testable code
• Expertise in Java debugging, including remote debugging of services deployed to K8s
• Expert knowledge of Kubernetes and associated technologies such as KEDA, Karpenter, Cluster Autoscaler, and CoreDNS
• Expert knowledge of SQL and/or NoSQL database technologies
• Expert knowledge of various messaging protocols and technologies such as REST, HTTP/S, AMQP, and WebSocket
• Expert knowledge of Confluent Kafka
• Experience and good understanding of core technologies provided by GCP/AWS, such as S3, FSx, EKS, SQS, SNS, Kinesis, AmazonMQ, DynamoDB, GKE, Cloud Storage, Pub/Sub, and Filestore
• Knowledge of modern observability technologies such as ELK, Splunk, Prometheus, Grafana, and Micrometer
• "What-if" thinking while designing or reviewing solutions, to foresee or catch potential problems as early in the development process as possible

Nice to have:
• Good knowledge of Python, Groovy, Bash
• Basic C++ knowledge/experience
• Good knowledge of the pub/sub model
• Good knowledge of finance, especially large-scale risk calculation
• Good knowledge of representing complex calculations as graphs of instructions which can be horizontally distributed

What we can offer you
We work hard to have a positive financial and social impact on the communities we serve. In turn, we put our employees first and provide the best-in-class benefits they need to be well, live well, and save well.
By joining Citi London, you will not only be part of a business casual workplace with a hybrid working model (up to 2 days working at home per week), but also receive a competitive base salary (which is annually reviewed) and enjoy a whole host of additional benefits, such as:
• Generous holiday allowance starting at 27 days plus bank holidays, increasing with tenure
• A discretionary annual performance-related bonus
• Private medical insurance packages to suit your personal circumstances
• Employee Assistance Program
• Pension Plan
• Paid Parental Leave
• Special discounts for employees, family, and friends
• Access to an array of learning and development resources

Alongside these benefits, Citi is committed to ensuring our workplace is where everyone feels comfortable coming to work as their whole self every day. We want the best talent around the world to be energized to join us, motivated to stay, and empowered to thrive.

Sounds like Citi has everything you need? Then apply to discover the true extent of your capabilities.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time

Citi is an equal opportunity and affirmative action employer. Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Citigroup Inc. and its subsidiaries ("Citi") invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View the "EEO is the Law" poster, the EEO is the Law Supplement, the EEO Policy Statement, and the Pay Transparency Posting.
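The skills list above calls out at-least-once messaging. As a point of reference, here is a minimal sketch of that delivery pattern using the kafka-python client; the broker address, topic, and consumer group are illustrative placeholders, not details from the posting.

```python
# At-least-once consumption: disable auto-commit and commit offsets only
# after a record has been fully processed. If the process crashes between
# processing and commit, the record is redelivered on restart.
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "risk-results",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",  # placeholder broker
    group_id="compute-workers",          # placeholder consumer group
    enable_auto_commit=False,
    auto_offset_reset="earliest",
)

def handle(record):
    # Stand-in for real work; handlers should be idempotent, since
    # at-least-once delivery means duplicates are possible.
    print(record.partition, record.offset, record.value)

for record in consumer:
    handle(record)
    consumer.commit()  # commit after processing, never before
```

Committing before processing would flip the guarantee to at-most-once, which is why the commit sits after the handler.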
Jul 23, 2025
Full time
Data Engineer (London, Singapore)
GSR Markets Limited
Founded in 2013, GSR is a leading market-making and programmatic trading company in the exciting and fast-evolving world of cryptocurrency trading. With more than 200 employees in 5 countries, we provide billions of dollars of liquidity to cryptocurrency protocols and exchanges on a daily basis. We build long-term relationships with cryptocurrency communities and traditional investors by offering exceptional service, expertise, and trading capabilities tailored to their specific needs. GSR works with token issuers, traders, investors, miners, and more than 30 cryptocurrency exchanges around the world. In volatile markets we are a trusted partner to crypto-native builders and to those exploring the industry for the first time. Our team of veteran finance and technology executives from Goldman Sachs, Two Sigma, and Citadel, among others, has developed one of the world's most robust trading platforms, designed to navigate issues unique to the digital asset markets. We have continuously improved our technology throughout our history, allowing our clients to scale and execute their strategies with the highest level of efficiency. Working at GSR is an opportunity to be deeply embedded in every major sector of the cryptocurrency ecosystem.

About the role:
This role sits within GSR's global Data Engineering team, where you'll contribute to the design and development of scalable data systems that support our trading and business operations. You'll work closely with stakeholders across the firm to build and maintain pipelines, manage data infrastructure, and ensure data is reliable, accessible, and secure. It's a hands-on engineering position with scope to shape the way data is handled across the business, working with modern tools in a fast-moving, high-performance environment.

Your responsibilities may include:
• Build and maintain scalable, efficient ETL/ELT pipelines for both real-time and batch processing (a minimal orchestration sketch follows this posting).
• Integrate data from APIs, streaming platforms, and legacy systems, with a focus on data quality and reliability.
• Design and manage data storage solutions, including databases, warehouses, and lakes.
• Leverage cloud-native services and distributed processing tools (e.g., Apache Flink, AWS Batch) to support large-scale data workloads.

Operations & Tooling
• Monitor, troubleshoot, and optimize data pipelines to ensure performance and cost efficiency.
• Implement data governance, access controls, and security measures in line with best practices and regulatory standards.
• Develop observability and anomaly detection tools to support Tier 1 systems.
• Work with engineers and business teams to gather requirements and translate them into technical solutions.
• Maintain documentation, follow coding standards, and contribute to CI/CD processes.
• Stay current with new technologies and help improve the team's tooling and infrastructure.

What We're Looking For
• 8+ years of experience in data engineering or a related field.
• Strong programming skills in Java, Python, and SQL; familiarity with Rust is a plus.
• Proven experience designing and maintaining scalable ETL/ELT pipelines and data architectures.
• Hands-on expertise with cloud platforms (e.g., AWS) and cloud-native data services.
• Comfortable with big data tools and distributed processing frameworks such as Apache Flink or AWS Batch.
• Strong understanding of data governance, security, and best practices for data quality.
• Effective communicator with the ability to work across technical and non-technical teams.
Additional Strengths
• Experience with orchestration tools like Apache Airflow.
• Knowledge of real-time data processing and event-driven architectures.
• Familiarity with observability tools and anomaly detection for production systems.
• Exposure to data visualization platforms such as Tableau or Looker.
• Relevant cloud or data engineering certifications.

What we offer:
• A collaborative and transparent company culture founded on Integrity, Innovation, and Performance.
• Competitive salary with two discretionary bonus payments a year.
• Benefits such as Healthcare, Dental, Vision, Retirement Planning, 30 days holiday, and free lunches when in the office.
• Regular Town Halls, team lunches, and drinks.
• A Corporate and Social Responsibility program, as well as charity fundraising matching and volunteer days.

GSR is proudly an Equal Employment Opportunity employer. We do not discriminate based upon any applicable legally protected characteristics such as race, religion, colour, country of origin, sexual orientation, gender, gender identity, gender expression, or age. We operate a meritocracy: all aspects of people engagement, from the decision to hire or promote to our performance management process, are based on business needs and individual merit and competence in the role. Learn more about us at .
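The posting lists Apache Airflow orchestration among the additional strengths; a minimal sketch of a two-task ETL DAG in the Airflow 2.x style may be useful context. The dag_id, schedule, and the extract/load bodies are invented placeholders, not GSR's actual pipeline.

```python
# A minimal two-task ETL DAG (Airflow 2.x): extract hands its rows to
# load via XCom. All names and task bodies are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # e.g. pull ticks from an exchange API (placeholder data)
    return [{"symbol": "BTC-USD", "price": 64000.0}]

def load(ti):
    rows = ti.xcom_pull(task_ids="extract")
    # e.g. upsert into a warehouse table (stubbed out here)
    print(f"loaded {len(rows)} rows")

with DAG(
    dag_id="market_data_etl",        # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",               # "schedule_interval" on Airflow < 2.4
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```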
Jul 18, 2025
Full time
Senior Data Engineer
Sandtech
Sand Technologies is a fast-growing enterprise AI company that solves real-world problems for large blue-chip companies and governments worldwide. We're pioneers of meaningful AI: our solutions go far beyond chatbots. We are using data and AI to solve the world's biggest issues in telecommunications, sustainable water management, energy, healthcare, climate change, smart cities, and other areas that have a real impact on the world. For example, our AI systems help to manage the water supply for the entire city of London. We created the AI algorithms that enabled the 7th largest telecommunications company in the world to plan its network in 300 cities in record time. And we built a digital healthcare system that enables 30m people in a country to get world-class healthcare despite a shortage of doctors. We've grown our revenues by over 500% in the last 12 months while winning prestigious scientific and industry awards for our cutting-edge technology. We're underpinned by over 300 engineers and scientists working across Africa, Europe, the UK, and the US.

ABOUT THE ROLE
Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, DBT, Docker, Python, SQL, and PySpark, to name a few. We work across a variety of data architectures such as Data Mesh, lakehouse, data vault, and data warehouses. Our data engineers create pipelines that support our data scientists and power our front-end applications. This means we do data-intensive work for both OLTP and OLAP use cases. Our environments are primarily cloud-native, spanning AWS, Azure, and GCP, but we also work on systems running self-hosted open-source services exclusively. We strive towards a strong code-first, data-as-a-product mindset at all times, where testing and reliability, with a keen eye on performance, are non-negotiable.

JOB SUMMARY
A Senior Data Engineer has the primary role of designing, building, and maintaining scalable data pipelines and infrastructure to support data-intensive applications and analytics solutions. In this role, you will be responsible for not only developing data pipelines but also designing data architectures and overseeing data engineering projects. You will work closely with cross-functional teams and contribute to the strategic direction of our data initiatives.

RESPONSIBILITIES
• Data Pipeline Development: Lead the design, implementation, and maintenance of scalable data pipelines for ingesting, processing, and transforming large volumes of data from various sources, using tools such as Databricks, Python, and PySpark (a small PySpark sketch follows this posting).
• Data Architecture: Architect scalable and efficient data solutions using the appropriate architecture design, opting for modern architectures where possible.
• Data Modeling: Design and optimize data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data.
• ETL Processes: Develop, optimize, and automate ETL workflows to extract data from diverse sources, transform it into usable formats, and load it into data warehouses, data lakes, or lakehouses.
• Big Data Technologies: Utilize big data technologies such as Spark, Kafka, and Flink for distributed data processing and analytics.
• Cloud Platforms: Deploy and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), leveraging cloud-native services for data storage, processing, and analytics.
• Data Quality and Governance: Implement and oversee data governance, quality, and security measures.
• Monitoring, Optimization, and Troubleshooting: Monitor data pipeline and infrastructure performance, identify bottlenecks, and optimize for scalability, reliability, and cost-efficiency. Troubleshoot and fix data-related issues.
• DevOps: Build and maintain basic CI/CD pipelines, commit code to version control, and deploy data solutions.
• Collaboration: Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand requirements, define data architectures, and deliver data-driven solutions.
• Documentation: Create and maintain technical documentation, including data architecture diagrams, ETL workflows, and system documentation, to facilitate understanding and maintainability of data solutions.
• Best Practices: Stay current with emerging technologies and best practices in data engineering, cloud architecture, and DevOps.
• Mentoring: Mentor and guide junior and mid-level data engineers.
• Technology Selection: Evaluate and recommend technologies, frameworks, and tools that best suit project requirements and architecture goals.
• Performance Optimization: Optimize software performance, scalability, and efficiency through architectural design decisions and performance tuning.

QUALIFICATIONS
• Proven experience as a Senior Data Engineer, or in a similar role, with hands-on experience building and optimizing data pipelines and infrastructure and designing data architectures.
• Proven experience working with Big Data and the tools used to process it.
• Strong problem-solving and analytical skills, with the ability to diagnose and resolve complex data-related issues.
• Excellent understanding of data engineering principles and practices.
• Excellent communication and collaboration skills to work effectively in cross-functional teams and communicate technical concepts to non-technical stakeholders.
• Ability to adapt to new technologies, tools, and methodologies in a dynamic and fast-paced environment.
• Ability to write clean, scalable, robust code using Python or similar programming languages. A background in software engineering is a plus.
• Knowledge of data governance frameworks and practices.
• Understanding of machine learning workflows and how to support them with robust data pipelines.

DESIRABLE LANGUAGES/TOOLS
• Proficiency in programming languages such as Python, Java, Scala, or SQL for data manipulation and scripting.
• Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling.
• Experience with big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink.
• Experience using modern data architectures, such as lakehouse.
• Experience with CI/CD pipelines, version control systems like Git, and containerization (e.g., Docker).
• Experience with ETL tools and technologies such as Apache Airflow, Informatica, or Talend.
• Strong understanding of data governance and best practices in data management.
• Experience with cloud platforms and services such as AWS, Azure, or GCP for deploying and managing data solutions.
• SQL (for database management and querying)
• Apache Spark (for distributed data processing)
• Apache Spark Streaming, Kafka, or similar (for real-time data streaming)
• Experience using data tools in at least one cloud service - AWS, Azure, or GCP (e.g. S3, EMR, Redshift, Glue, Azure Data Factory, Databricks, BigQuery, Dataflow, Dataproc)

Would you like to join us as we work hard, have fun, and make history?
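As a rough illustration of the pipeline work described above, here is a small batch ETL sketch in PySpark: read raw JSON events, derive a daily aggregate, and write partitioned Parquet. The paths and column names are invented for the example, not Sand's actual schema.

```python
# Batch ETL sketch: raw JSON events -> daily per-customer aggregate ->
# partitioned Parquet. Paths and columns are invented placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-usage-etl").getOrCreate()

events = spark.read.json("s3a://raw-bucket/events/")  # hypothetical source

daily = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "customer_id")
    .agg(
        F.count("*").alias("event_count"),
        F.sum("bytes_sent").alias("bytes_total"),
    )
)

(daily.write
    .mode("overwrite")
    .partitionBy("day")
    .parquet("s3a://curated-bucket/daily_usage/"))  # hypothetical sink

spark.stop()
```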
Jul 15, 2025
Full time
Roke
Data Engineer
Roke City, Manchester
As a Data Engineer, you'll be actively involved in the development of mission-critical technical solutions that focus on data services for our National Security customers. Roke is a leading technology & engineering company with clients spanning National Security, Defence, and Intelligence. You will work alongside our customers to solve their complex and unique challenges.

As our next Data Engineer, you'll be managing and developing data pipelines that transform raw data into valuable insights for Roke's National Security customers, enabling downstream analytics and reporting. You'll be working with diverse data sources (batch, streaming, real-time, and unstructured), applying distributed compute techniques to handle large datasets (a small streaming-ingest sketch follows this posting).

The key requirements
• Able to develop Extract, Transform, Load (ETL) or Extract, Load, Transform (ELT) workflows to move data from source systems to data stores
• You will have used one or more supporting technologies, e.g. Apache Kafka, NiFi, Spark, Flink, or Airflow
• A history of working with SQL and NoSQL databases (PostgreSQL, Mongo, Elasticsearch, Accumulo, or Neo4j, etc.)
• You will be able to code using a modern software language such as Python, Java, or Go
• Experience of distributed computing techniques

Built over a 60-year heritage, Roke offers specialist knowledge in sensors, communications, cyber, AI and ML, and Data Science. We change the way organisations think and act, through dynamic insights from the analysis of multiple layers of data. We take care of the innovative, technical stuff that keeps everyone safe; that's our mission, passion, and motivation. Joining a team united by purpose and ambition, you'll be at the heart of an exciting growth journey: having doubled in size over the last 4 years, we intend to double our headcount by 2027. At Roke, every individual counts. We push technical boundaries, together. We re-invest in product innovation, and we empower our people to make a difference.

Where you'll work
You'll find our Manchester site located in the heart of Manchester, Europe's fastest-growing tech hub. You'll become a key part of Roke's growing local tech community as we support the Government's levelling-up agenda. There is easy, local access to our client community with great transport links.

Why you should join us
We are one Roke. We believe we all have a responsibility to create an environment where we all have the time, trust, and freedom to succeed and where we are encouraged to bring our whole self to work. We are committed to a policy of Equal Opportunity, Diversity, and Inclusion. Our working environment is friendly, creative, and inclusive and supports a diverse workforce and those with additional needs.

The benefits and perks
• Flexi-time: Working hours to suit you and your life
• Annual bonus: Based on profit share and personal performance
• Private medical insurance: Includes cover for existing conditions
• Holiday: You'll receive competitive annual leave plus bank holidays. We also offer the opportunity to buy and sell annual leave
• Chemring Share Save: Monthly savings into a 3 or 5 year plan

Clearances
Due to the nature of this role, we require you to be eligible to achieve DV clearance. As a result, you should be a British Citizen and have resided in the U.K. for the last 10 years.

The next step
Click apply, submitting an up-to-date CV. We look forward to hearing from you.
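For context on the streaming side of these requirements, here is a minimal Spark Structured Streaming sketch that ingests a Kafka topic and lands parsed records as Parquet. The broker, topic, schema, and paths are invented for illustration, not Roke's systems.

```python
# Streaming ingest sketch: Kafka topic -> parsed columns -> Parquet sink
# with checkpointing. Requires the spark-sql-kafka connector on the
# classpath. Broker, topic, schema, and paths are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

schema = StructType([
    StructField("sensor_id", StringType()),
    StructField("reading", DoubleType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder
    .option("subscribe", "sensor-readings")               # placeholder
    .load()
)

# Kafka delivers bytes; cast the value to string and parse the JSON payload.
parsed = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("r"))
       .select("r.*")
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/landing/sensor_readings")
    .option("checkpointLocation", "/data/checkpoints/sensor_readings")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```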
Mar 07, 2025
Full time
Roke
Data Engineer
Roke Innsworth, Gloucestershire
As a Data Engineer, you'll be actively involved in the development of mission-critical technical solutions that focus on data services for our National Security customers. Roke is a leading technology & engineering company with clients spanning National Security, Defence, and Intelligence. You will work alongside our customers to solve their complex and unique challenges.

As our next Data Engineer, you'll be managing and developing data pipelines that transform raw data into valuable insights for Roke's National Security customers, enabling downstream analytics and reporting. You'll be working with diverse data sources (batch, streaming, real-time, and unstructured), applying distributed compute techniques to handle large datasets.

The key requirements
• Able to develop Extract, Transform, Load (ETL) or Extract, Load, Transform (ELT) workflows to move data from source systems to data stores
• You will have used one or more supporting technologies, e.g. Apache Kafka, NiFi, Spark, Flink, or Airflow
• A history of working with SQL and NoSQL databases (PostgreSQL, Mongo, Elasticsearch, Accumulo, or Neo4j, etc.)
• You will be able to code using a modern software language such as Python, Java, or Go
• Experience of distributed computing techniques

Built over a 60-year heritage, Roke offers specialist knowledge in sensors, communications, cyber, AI and ML, and Data Science. We change the way organisations think and act, through dynamic insights from the analysis of multiple layers of data. We take care of the innovative, technical stuff that keeps everyone safe; that's our mission, passion, and motivation. Joining a team united by purpose and ambition, you'll be at the heart of an exciting growth journey: having doubled in size over the last 4 years, we intend to double our headcount by 2027. At Roke, every individual counts. We push technical boundaries, together. We re-invest in product innovation, and we empower our people to make a difference.

Where you'll work
You'll find our Gloucester site in a business park two minutes from junction 11A of the M5; the site allows easy access to our local customer base. Set on the outskirts of the Cotswolds, you are never far from a picturesque view or a lunchtime walk.

Why you should join us
We are one Roke. We believe we all have a responsibility to create an environment where we all have the time, trust, and freedom to succeed and where we are encouraged to bring our whole self to work. We are committed to a policy of Equal Opportunity, Diversity, and Inclusion. Our working environment is friendly, creative, and inclusive and supports a diverse workforce and those with additional needs.

The benefits and perks
• Flexi-time: Working hours to suit you and your life
• Annual bonus: Based on profit share and personal performance
• Private medical insurance: Includes cover for existing conditions
• Holiday: You'll receive competitive annual leave plus bank holidays. We also offer the opportunity to buy and sell annual leave
• Chemring Share Save: Monthly savings into a 3 or 5 year plan

Clearances
Due to the nature of this role, we require you to be eligible to achieve DV clearance. As a result, you should be a British Citizen and have resided in the U.K. for the last 10 years.

The next step
Click apply, submitting an up-to-date CV. We look forward to hearing from you.
Mar 07, 2025
Full time
Senior Compute Cloud Integration Lead, Senior Vice President
Citigroup Inc.
Job description: XiP is building a next-generation cross-asset calculation system for Citi trading desks and enterprise users in the largest global financial markets and exchanges in New York, London, and other major financial hubs. Our team owns multiple Java Spring Boot services that execute, partition, and track quantitative risk graphs/trades in a distributed environment. These graphs can fail due to their complexity, and our system must adapt quickly to these failures to provide a seamless experience for clients. XiP Compute Services are deployed onto OpenShift and Amazon's Elastic Kubernetes Service (EKS). An important initiative in 2025 will be onboarding Google's Kubernetes Engine to further expand our coverage. Our system scales on demand, and we can run up to tens of thousands of replicas of our services across all asset classes.
The role of the Senior Technical Lead is to lead a variety of engineering activities, including design decisions regarding the technical direction of the platform over short, medium, and long-term horizons, with a key focus on public cloud onboarding. The project requires constant review of the technologies, patterns and paradigms used to ensure the system is easy to understand, performant, scalable, testable, robust, and observable. The role combines technical and managerial duties, including line management, while giving technical direction to a growing team of developers globally. The platform is a greenfield build using standard modern technologies such as Java, Spring Boot, Kubernetes, Kafka, MongoDB, RabbitMQ, Solace, and Apache Ignite. The platform runs in a hybrid mode, both on-premise and in AWS, utilizing technologies such as EKS, S3, and FSx. The main purpose of this role is to lead continued platform onboarding to AWS as well as the new initiative to deploy into GCP. The project is in a scale-out phase, with a goal of expanding the user base and workloads towards running billions of financial calculations per day across hundreds of thousands of cores. The aim of the project is to run all finance calculations for Citi's Front Office Markets business globally.
Responsibilities:
Steering platform onboarding into AWS and Google Cloud, while collaborating with the Citi HPC team and AWS/Google partners.
Challenging proposed and provided solutions in terms of performance, robustness and cost effectiveness.
Making decisions regarding the technical direction of the platform, including evaluating new technologies and executing proof-of-concept implementations, with a good understanding of their limitations.
Identifying and defining necessary system enhancements to improve current processes and architecture.
Hands-on coding of fixes, features, and improvements.
Investigating reported or observed platform issues.
Reviewing pull requests from other team members and giving robust critique/feedback.
Identifying and proposing teamwork enhancements.
Reviewing requests for new features, balancing user requirements with defending the platform from complexity and low-value features.
Collaborating with key partners across the firm for extending the platform, such as the infrastructure provider group, the quant group, and upstream and downstream systems.
Mentoring/coaching junior developers on coding/architecture approaches and best practices.
Skills and Experience:
Expert knowledge of distributed systems, including event-driven architecture; at-least-once messaging; the CAP theorem; horizontal and vertical scaling strategies; massively distributed architectures.
Expert knowledge of Java, the JVM, memory management, and garbage collection.
Thorough understanding of multithreaded environment challenges.
Expert knowledge of the Spring and Spring Boot frameworks and associated technologies.
Expert knowledge of test frameworks such as JUnit and Mockito, and of writing easily testable code.
Expertise in Java debugging, including remote debugging of services deployed to K8s.
Expert knowledge of Kubernetes and associated technologies such as KEDA, Karpenter, Cluster Autoscaler, and CoreDNS.
Expert knowledge of SQL and/or NoSQL database technologies.
Expert knowledge of various messaging protocols and technologies such as REST, HTTP/S, AMQP, and WebSocket.
Expert knowledge of Confluent Kafka.
Experience and good understanding of core technologies provided by GCP/AWS, such as S3, FSx, EKS, SQS, SNS, Kinesis, AmazonMQ, DynamoDB, GKE, Cloud Storage, Pub/Sub, and Filestore.
Knowledge of modern observability technologies such as ELK, Splunk, Prometheus, Grafana, and Micrometer.
"What-if" thinking while designing or reviewing solutions, to foresee or catch potential problems as early in the development process as possible.
Nice to have:
Good knowledge of Python, Groovy, Bash.
Basic C++ knowledge/experience.
Good knowledge of the pub/sub model.
Good knowledge of finance, especially large-scale risk calculation.
Good knowledge of representing complex calculations as graphs of instructions which can be horizontally distributed (see the sketch after this description).
Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Citi is an equal opportunity and affirmative action employer. Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Citigroup Inc. and its subsidiaries ("Citi") invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View the "EEO is the Law" poster, the EEO is the Law Supplement, the EEO Policy Statement, and the Pay Transparency Posting.
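As a toy illustration of the last nice-to-have, representing a calculation as a graph of instructions and executing independent nodes in parallel: the graph shape, node names, and work (a println on a local thread pool) are invented for the example, not XiP's design.

```java
// A toy dependency-graph executor: each node starts once its prerequisites
// finish, so independent nodes ("curve", "volSurface") compute concurrently.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class GraphExecutor {
    public static void main(String[] args) {
        // Each node maps to the nodes it depends on (a DAG).
        Map<String, List<String>> deps = Map.of(
                "marketData", List.of(),
                "curve", List.of("marketData"),
                "volSurface", List.of("marketData"),
                "pv", List.of("curve", "volSurface"));

        ExecutorService pool = Executors.newFixedThreadPool(4);
        Map<String, CompletableFuture<Void>> done = new HashMap<>();

        // Wire futures in dependency order: a node runs when all of its
        // prerequisites complete, which maximises safe parallelism.
        for (String node : topoOrder(deps)) {
            CompletableFuture<?>[] prereqs = deps.get(node).stream()
                    .map(done::get).toArray(CompletableFuture[]::new);
            done.put(node, CompletableFuture.allOf(prereqs)
                    .thenRunAsync(() -> System.out.println("computed " + node), pool));
        }
        done.get("pv").join(); // wait for the terminal node
        pool.shutdown();
    }

    // Repeatedly emit any node whose dependencies have all been emitted;
    // terminates because the graph is acyclic.
    static List<String> topoOrder(Map<String, List<String>> deps) {
        List<String> order = new ArrayList<>();
        Set<String> emitted = new HashSet<>();
        while (order.size() < deps.size()) {
            for (Map.Entry<String, List<String>> e : deps.entrySet()) {
                if (!emitted.contains(e.getKey()) && emitted.containsAll(e.getValue())) {
                    order.add(e.getKey());
                    emitted.add(e.getKey());
                }
            }
        }
        return order;
    }
}
```

In a real distributed setting the node work would be dispatched to remote workers, tracked, and retried on failure rather than run on a local thread pool, but the dependency-ordering idea is the same.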
Feb 18, 2025
Full time
Technical Lead for OTC Financial Accounting, SVP
Citigroup Inc.
At Citi, we value engineering and foster an environment where our best engineers continue to code and grow their careers. Oasys Financial is a sub-ledger for Citi's OTC business and covers global trading books across all asset classes. The system handles millions of trades daily, conducts all sub-ledger activities, and is critical to the bank. We are undertaking an overhaul and re-write of this platform. We invite applications from experienced and well-rounded senior technologists for an SVP role who can work in a globally distributed team. The role requires strong technology and business knowledge of OTC products, the trade life cycle, and PnL.
The candidate will join a team in the early stages of transformation and will spend their time on both coding and business analysis. There will be regular interfacing with senior stakeholders across the bank, so the candidate will be expected to demonstrate an established track record of handling senior stakeholders and to lead such discussions independently. Candidates are expected to demonstrate a strong delivery background in the financial industry, preferably in OTC products. We expect depth of knowledge in the Java stack. The candidate should be willing to drive re-architecture and lead by example, should be able to work with Kubernetes-based deployments both on-prem and in external cloud, and should be comfortable with a variety of build technologies. The candidate will have limited line-management responsibilities, intentionally designed to keep the focus on coding and business analysis. Growth opportunities are excellent, and in the future the candidate will have the option of choosing between engineering, business analysis, or management tracks. The team you'll be working with is distributed across 3 countries.
The development position involves:
Strong analytical and problem-solving skills
Requirements analysis and capture, working closely with the business and business-aligned teams to define solutions
Documentation of requirement specifications and guiding junior developers on complex use cases
Defining application changes, then developing and scaling the existing team to drive change
Mentoring developers and BAs in a globally distributed team
Establishing testing practices in a team
Development of high-quality software, emphasizing simplicity, maintainability, and reuse
Participation in code and design reviews
Good communication with support, other development, and infrastructure teams
Required Skills:
OTC trade life cycle, trade modelling in FpML and variants, settlement and PnL knowledge (a toy sketch of lifecycle modelling follows this listing)
Programming skills, including concurrent, parallel and distributed systems programming
Strong knowledge of Java, Linux and SQL
Good understanding of the Spring Framework and Kafka
Strong understanding of test automation
Desirable Skills:
Experience with Apache Ignite/Redis
Working knowledge of a scripting language such as Groovy, Python, or JavaScript
Knowledge of HTTP, RESTful web services, and API design
Messaging technologies such as Camel and Conductor
Familiarity with databases, particularly NoSQL (e.g. MongoDB, Couchbase), as well as Snowflake and RDBMS
Experience with Kubernetes
Good understanding of the Linux OS
Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Citi is an equal opportunity and affirmative action employer. Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Citigroup Inc. and its subsidiaries ("Citi") invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View the "EEO is the Law" poster, the EEO is the Law Supplement, the EEO Policy Statement, and the Pay Transparency Posting.
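As a hedged, illustrative sketch of the trade-lifecycle modelling the required skills mention (in no way Oasys's actual design): lifecycle events as a sealed Java hierarchy, replayed into a per-book sub-ledger position. Event types, fields, and book names are invented, and it assumes Java 21 for pattern matching in switch.

```java
// Invented OTC trade lifecycle example: replaying the event stream always
// reproduces the same sub-ledger state (an idempotent rebuild).
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SubLedgerSketch {
    sealed interface TradeEvent permits NewTrade, Amendment, Cancellation {}
    record NewTrade(String tradeId, String book, BigDecimal notional) implements TradeEvent {}
    record Amendment(String tradeId, BigDecimal newNotional) implements TradeEvent {}
    record Cancellation(String tradeId) implements TradeEvent {}

    public static void main(String[] args) {
        List<TradeEvent> lifecycle = List.of(
                new NewTrade("T1", "RATES-LDN", new BigDecimal("10000000")),
                new Amendment("T1", new BigDecimal("12000000")),
                new NewTrade("T2", "RATES-LDN", new BigDecimal("5000000")),
                new Cancellation("T2"));

        // Fold the stream into current notional per trade.
        Map<String, BigDecimal> byTrade = new HashMap<>();
        Map<String, String> bookOf = new HashMap<>();
        for (TradeEvent e : lifecycle) {
            switch (e) { // exhaustive: the interface is sealed
                case NewTrade t -> {
                    byTrade.put(t.tradeId(), t.notional());
                    bookOf.put(t.tradeId(), t.book());
                }
                case Amendment a -> byTrade.put(a.tradeId(), a.newNotional());
                case Cancellation c -> byTrade.remove(c.tradeId());
            }
        }

        // Aggregate surviving trades into a per-book sub-ledger position.
        Map<String, BigDecimal> byBook = new HashMap<>();
        byTrade.forEach((id, n) -> byBook.merge(bookOf.get(id), n, BigDecimal::add));
        System.out.println(byBook); // {RATES-LDN=12000000}
    }
}
```

The property the example shows, that replaying the full event stream from the beginning yields identical positions, is what a sub-ledger rebuild or end-of-day reconciliation relies on.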
Feb 14, 2025
Full time
