Jan 02, 2026
Full time
Project description
Our client, a leading financial institution serving high-net-worth individuals, is seeking an experienced Senior Project Manager to lead strategic IT initiatives. This role involves managing complex, high-impact projects that enhance client experience, improve operational efficiency, and align with the organization's long-term digital transformation strategy.

Responsibilities
Project Leadership: Drive end-to-end delivery of strategic IT projects, ensuring alignment with business objectives and regulatory requirements.
Stakeholder Management: Collaborate with senior executives, product owners, and technology teams to define scope, priorities, and success metrics.
Planning & Execution: Develop detailed project plans, manage budgets, timelines, and resources, and ensure adherence to governance frameworks.
Risk & Compliance: Identify and mitigate risks, ensuring compliance with banking regulations and internal security standards.
Vendor Coordination: Manage relationships with external vendors and technology partners, ensuring quality and timely delivery.
Reporting: Provide transparent status updates, KPIs, and executive-level reporting to senior management.
Continuous Improvement: Promote best practices in project management and contribute to process optimization initiatives.

Skills
Must have
Bachelor's degree in IT, Business, Finance, or related field; PMP, PRINCE2, or Agile certification strongly preferred.
8+ years in project management, with at least 3 years in financial services or banking.
Proven track record of delivering large-scale IT projects in a regulated environment.
Strong knowledge of project management methodologies (Agile, Waterfall, Hybrid).
Excellent communication and stakeholder engagement skills, including executive-level interaction.
Ability to manage budgets, complex dependencies, and multi-vendor environments.
Strategic thinking with the ability to align technology initiatives to business goals.
Strong leadership and team management capabilities.
Familiarity with core banking systems, digital platforms, cybersecurity principles, and data privacy regulations.
Nice to have
n/a

Other
Languages
English: C1 Advanced
Seniority
Senior
Jan 01, 2026
Full time
A prominent technology service provider in the United Kingdom seeks qualified personnel for a staff augmentation role. You will design, develop, and maintain application software within a banking environment using Mainframe technologies, including COBOL, CICS, and Hogan. Ideal candidates will have 5-8 years of experience and a deep understanding of software development methodologies. This position offers a dynamic opportunity for those experienced in Agile environments.
Jan 01, 2026
Full time
Overview
We are seeking a highly experienced Data Scientist with deep expertise in Python and advanced machine learning techniques. You will need a strong background in statistical analysis, big data platforms, and cloud integration, and will be responsible for designing and deploying scalable data science solutions.

Responsibilities
Develop and deploy machine learning, deep learning, and predictive models.
Perform statistical analysis, data mining, and feature engineering on large datasets.
Build and optimize data pipelines and ETL workflows.
Collaborate with data engineers and business stakeholders to deliver actionable insights.
Create compelling data visualizations using tools like Tableau, Power BI, Matplotlib, or Plotly.
Implement MLOps practices, including CI/CD, model monitoring, and lifecycle management.
Mentor junior data scientists and contribute to team knowledge-sharing.
Stay current with trends in AI/ML and data science.

Skills
Must have
8+ years of hands-on experience in Data Science with strong expertise in Python and libraries such as Pandas, NumPy, SciPy, Scikit-learn, TensorFlow, or PyTorch.
Proven ability to design, develop, and deploy machine learning, deep learning, and predictive models to solve complex business problems.
Strong background in statistical analysis, data mining, and feature engineering for large-scale structured and unstructured datasets.
Experience working with big data platforms (Spark, Hadoop) and integrating with cloud environments (AWS, Azure, GCP).
Proficiency in building data pipelines and ETL workflows, and in collaborating with data engineers on scalable data solutions.
Expertise in data visualization and storytelling using Tableau, Power BI, Matplotlib, Seaborn, or Plotly to present insights effectively.
Strong knowledge of MLOps practices, including CI/CD pipelines, model deployment, monitoring, and lifecycle management.
Ability to engage with business stakeholders, gather requirements, and deliver actionable insights aligned with business goals.
Experience in mentoring junior data scientists/analysts, leading projects, and contributing to knowledge-sharing across teams.
Continuous learner with strong problem-solving, communication, and leadership skills, staying up to date with the latest trends in AI/ML and data science.
Nice to have
N/A
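To make the modelling expectations above concrete, here is a minimal, hypothetical sketch of a supervised-learning workflow in Python with scikit-learn on synthetic data. It is illustrative only and does not reflect any client dataset or the project's actual pipeline.

```python
# Minimal sketch: train and evaluate a model inside a scikit-learn pipeline,
# using synthetic data as a stand-in for a real feature table.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic dataset standing in for real features and labels.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Scaling and model in one pipeline, so preprocessing is applied
# consistently at training time and at inference time.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# Evaluate with a ranking metric; a real project would add cross-validation,
# feature engineering, and the monitoring practices the posting describes.
print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```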
Jan 01, 2026
Full time
Project description
Our client is a global medical technology company transforming digital manufacturing through advanced engineering and high-performance software. They are building the backend systems that convert complex 3D digital designs into physical products at scale. As a Senior Software Engineer, you will work on the client's core computational pipeline - a distributed, compute-heavy system that processes large 3D datasets and generates high-precision instructions for automated production hardware. The role involves solving deep challenges in distributed systems, data-intensive workflows, and algorithmic integration while working closely with cross-functional engineering teams.

Responsibilities
Design, build, and maintain scalable, high-availability backend services for the client's manufacturing pipeline.
Develop robust, event-driven data pipelines for processing large 3D files and generating high-accuracy outputs for automated hardware systems.
Own services end to end, including architecture, implementation, containerization, deployment, and cloud-native operations.
Collaborate with the client's R&D teams, operations engineers, and hardware/controls groups to integrate and deploy new components.
Ensure strong observability through structured logging, metrics, and performance monitoring.
Contribute to architectural decisions and maintain engineering excellence through code and design reviews.

Skills
Must have
5+ years of professional experience in backend or distributed systems engineering.
Strong proficiency in C++ and Python, with the ability to work across multiple languages (senior-level proficiency in one of the programming languages, mid-level in the other two).
At least basic knowledge of Golang.
Proven experience designing, building, and operating compute-heavy, data-intensive, and distributed backend services.
Solid understanding of core computer science principles and system-level engineering.
Nice to have
Experience deploying containerized applications to Kubernetes or other cloud orchestration platforms.
Background developing high-performance services that operate across multiple programming languages.
Familiarity with geometric, scientific, or numerical computation libraries.
Experience developing software integrated with automated hardware or robotics.
Advanced debugging and performance optimization skills for distributed, multi-threaded systems.
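As a rough illustration of the event-driven pipeline work described above, here is a minimal Python sketch of a queue-fed worker pool. The in-process queue and the placeholder processing step are stand-ins for whatever message broker and geometry tooling the actual project uses; neither is specified in the posting.

```python
# Minimal sketch of an event-driven worker: jobs describing 3D files arrive on
# a queue, a small worker pool processes them, and results are reported.
import queue
import threading

job_queue = queue.Queue()

def process_job(job: dict) -> dict:
    # Placeholder for the compute-heavy step (e.g. turning a 3D model into
    # machine instructions); here we just echo the job metadata.
    return {"file": job["file"], "status": "processed"}

def worker() -> None:
    while True:
        job = job_queue.get()
        if job is None:              # sentinel value: shut this worker down
            job_queue.task_done()
            break
        result = process_job(job)
        print(f"{threading.current_thread().name}: {result}")
        job_queue.task_done()

# Start a small worker pool and enqueue a few jobs.
threads = [threading.Thread(target=worker, name=f"worker-{i}") for i in range(2)]
for t in threads:
    t.start()
for name in ("part_a.stl", "part_b.stl", "part_c.stl"):
    job_queue.put({"file": name})
for _ in threads:                    # one sentinel per worker
    job_queue.put(None)
job_queue.join()                     # wait until every job is processed
```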
Jan 01, 2026
Full time
Project description
We are seeking highly skilled and motivated Generative AI Engineers to join our growing AI team. You will be responsible for designing, developing, and deploying cutting-edge Generative AI solutions using LLMs, Transformers, and Diffusion models. This role involves working on enterprise-grade applications such as intelligent chatbots, document summarization, code assistants, and more.

Responsibilities
Design, prototype, and deploy Generative AI models (LLMs, Transformers, Diffusion models) for real-world enterprise use cases.
Build and fine-tune LLM-based applications such as chatbots, document Q&A systems, report generators, code assistants, and summarization tools.
Apply prompt engineering, Retrieval-Augmented Generation (RAG), and context-aware pipelines to enhance model accuracy and relevance.
Integrate AI models with enterprise systems, APIs, and data stores using Python, Java, or Node.js.
Collaborate with architects to define scalable, secure, and cost-efficient AI service architectures.
Implement AI/ML pipelines for training, validation, and deployment using tools like MLflow, Vertex AI, or Azure ML.
Monitor model performance, detect drift, and drive continuous improvement.
Optimize inference performance and cost through model compression, quantization, and API optimization.
Ensure compliance with AI ethics, security, and governance standards.
Prepare and curate training datasets (structured/unstructured text, images, code).
Apply data preprocessing, tokenization, and embedding generation techniques.
Work with vector databases (e.g., Pinecone, Weaviate, FAISS, Chroma) for semantic search and retrieval.
Partner with business stakeholders to identify and shape impactful AI use cases.
Contribute to the development of a strategic AI adoption roadmap and reusable AI Workbench/platform components.
Support POCs, pilots, and full-scale implementations using agile methodologies.
Document and present solution designs, technical findings, and outcomes to leadership and clients.

Skills
Must have
Strong programming skills in Python (preferred), with experience in Java or Node.js.
Hands-on experience with LLMs (e.g., GPT, LLaMA, Claude, Mistral), Transformers, and Diffusion models.
Experience with Hugging Face Transformers, LangChain, LLM orchestration frameworks, and prompt tuning.
Familiarity with RAG pipelines, embedding models, and vector databases.
Experience with cloud platforms (AWS, GCP, Azure) and AI/ML services.
Knowledge of MLOps tools and practices (e.g., MLflow, Kubeflow, Vertex AI, Azure ML).
Strong understanding of data engineering, data pipelines, and ETL workflows.
Excellent problem-solving, communication, and stakeholder engagement skills.
Bachelor's or Master's degree in Computer Science, AI/ML, Data Science, or related field.
Nice to have
n/a
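To illustrate the RAG and vector-retrieval concepts listed above, here is a minimal, self-contained Python sketch of the retrieval step only. The embed() function is a hypothetical stand-in so the example runs without external services; a real pipeline would call an embedding model and a vector database (e.g. FAISS or Pinecone) instead.

```python
# Minimal sketch of RAG retrieval: embed documents and a query, rank by cosine
# similarity, and assemble a context-augmented prompt for an LLM.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Hypothetical embedding: a deterministic hash-seeded random unit vector,
    # used only so the example runs without an embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

documents = [
    "Customers can reset their password from the account settings page.",
    "Quarterly reports are generated on the first business day of each quarter.",
    "The code assistant supports Python, Java, and Node.js projects.",
]
doc_vectors = np.stack([embed(d) for d in documents])

query = "How do I reset my password?"
scores = doc_vectors @ embed(query)        # cosine similarity (unit vectors)
top_k = np.argsort(scores)[::-1][:2]       # retrieve the 2 most similar docs

context = "\n".join(documents[i] for i in top_k)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to an LLM for generation
```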
Jan 01, 2026
Full time
A top-tier investment bank is looking for a skilled developer to enhance their team. You will develop and maintain user interfaces using React JS and work closely with backend teams. Ideal candidates possess strong proficiency in JavaScript and Redux, and a solid understanding of responsive design. This role offers a collaborative work environment where creative thinking and professional growth are encouraged. Join us to be part of a success story, working with high-level financial instruments.
Jan 01, 2026
Full time
Overview
We have an ambitious goal to migrate a legacy system written in HLASM (High-Level Assembler) from the mainframe to a cloud-based Java environment for one of the largest banks in the USA.

Responsibilities
Analyze, understand, and enhance the existing mainframe codebase (COBOL, JCL, CICS, DB2, IMS).
Extract and transform business logic from mainframe systems for migration to a modern architecture.
Troubleshoot, debug, and resolve issues within mainframe applications during migration.
Collaborate with Java and client teams to ensure a smooth transition and accurate functionality replication.
Provide knowledge transfer and documentation of mainframe components and processes.
Mandatory work from the office 5 days per week.

Skills
Must have
Proficiency in COBOL development and JCL (5+ years).
Strong debugging, performance tuning, and problem-solving skills.
Experience with CICS transactions and DB2 database programming.
Solid understanding of mainframe batch and online processing.
Experience working in distributed, global teams with US customers.
Excellent communication skills for collaboration with client teams.
Nice to have
Experience with IMS, VSAM, or HLASM.
Familiarity with large-scale system migrations and modernization projects.
Understanding of integration between mainframe and cloud-based systems.
Prior experience in banking or financial services industry.
Knowledge of automation tools for mainframe testing and deployment.
Jan 01, 2026
Full time
Project description
DXC Luxoft will supply resources in Staff Augmentation mode to the client, at the client's direction and with the client's approval. The Service Personnel will possess sufficient general technical Hogan application and/or mainframe skills, and will be expected to develop and maintain additional knowledge and skills regarding the client-specific environment and customizations. All Services will be performed in a Staff Augmentation model wherein the client is responsible for providing the necessary instruction and information required to conduct Services. Service Personnel will adhere to the client's security policies and procedures in the delivery of Services.

Responsibilities
Hogan Development and Services: Designs, develops, installs, tests, and documents complex applications software at client product installations. Contributes to the design and delivery of technical architecture solution components.

Skills
Must have
Minimum 2 years of Hogan experience in CAMS or any application.
5+ years of banking product experience is mandatory.
7+ years with Mainframe/COBOL/CICS/JCL/IMS/DB2/MQ Series.
Master's/BE degree or equivalent combination of education and experience.
Experience with software design, the software development life cycle, and development methodologies and implementation.
Strong communication skills.
Nice to have
NA
Jan 01, 2026
Full time
A leading IT consultancy in the United Kingdom is seeking an experienced Data Scientist/Machine Learning Engineer to modernize an eCommerce platform for a major retail client. The role requires strong expertise in Java, Python, and information retrieval technologies like Lucene and Solr. Ideal candidates will have a deep understanding of NLP techniques and experience deploying ML systems in production environments.
Jan 01, 2026
Full time
Project description
The primary goal of the project is the modernization, maintenance, and development of an eCommerce platform for a large US-based retail company serving millions of omnichannel customers each week. Solutions are delivered by several Product Teams focused on different domains - Customer, Loyalty, Search and Browse, Data Integration, and Cart. Current overriding priorities are onboarding of new brands, re-architecture, database migrations, and migration of microservices to a unified cloud-native solution without any disruption to the business.

Responsibilities
Design, develop, and optimize semantic and vector-based search solutions leveraging Lucene/Solr and modern embeddings.
Apply machine learning, deep learning, and natural language processing techniques to improve search relevance and ranking.
Develop scalable data pipelines and APIs for indexing, retrieval, and model inference.
Integrate ML models and search capabilities into production systems.
Evaluate, fine-tune, and monitor search performance metrics.
Collaborate with software engineers, data engineers, and product teams to translate business needs into technical implementations.
Stay current with advancements in search technologies, LLMs, and semantic retrieval frameworks.

Must have
5+ years of experience in Data Science or Machine Learning Engineering, with a focus on Information Retrieval or Semantic Search.
Strong programming experience in both Java and Python (production-level code, not just prototyping).
Deep knowledge of Lucene, Apache Solr, or Elasticsearch (indexing, query tuning, analyzers, scoring models).
Experience with vector databases, embeddings, and semantic search techniques.
Strong understanding of NLP techniques (tokenization, embeddings, transformers, etc.).
Experience deploying and maintaining ML/search systems in production.
Solid understanding of software engineering best practices (CI/CD, testing, version control, code review).
Nice to have
Experience working in distributed teams with US customers.
Experience with LLMs, RAG pipelines, and vector retrieval frameworks.
Knowledge of Spring Boot, FastAPI, or similar backend frameworks.
Familiarity with Kubernetes, Docker, and cloud platforms (AWS/Azure/GCP).
Experience with MLOps and model monitoring tools.
Contributions to open-source search or ML projects.
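As a small illustration of the "evaluate, fine-tune, and monitor search performance metrics" responsibility above, here is a minimal Python sketch that computes NDCG@k from graded relevance judgments. The judgments and ranking are toy values chosen for the example, not project data.

```python
# Minimal sketch: NDCG@k, a standard metric for graded search relevance.
import numpy as np

def dcg_at_k(relevances: list[float], k: int) -> float:
    # Discounted cumulative gain: relevance discounted by log2(rank + 1).
    rel = np.asarray(relevances, dtype=float)[:k]
    positions = np.arange(2, rel.size + 2)      # ranks 1..k -> log2(2..k+1)
    return float(np.sum(rel / np.log2(positions)))

def ndcg_at_k(relevances: list[float], k: int) -> float:
    # Normalize by the DCG of the ideal (best possible) ordering.
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Graded relevance of the top results returned for one query (3 = perfect).
returned_relevance = [3, 2, 0, 1, 0]
print(f"NDCG@5 = {ndcg_at_k(returned_relevance, 5):.3f}")
```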