- Duration of assignment: 4 months
- Onsite days: 5
- Work arrangement: Hybrid
- Work location: Salford, Manchester, UK (M50 2UE)
Job description:
Key Responsibilities
- Design, develop, and maintain metadata-driven data pipelines using ADF and Databricks.
- Build and implement end-to-end metadata frameworks, ensuring scalability and reusability.
- Optimize data workflows leveraging SparkSQL and Pandas for large-scale data processing.
- Collaborate with cross-functional teams to integrate data solutions into enterprise architecture.
- Implement CI/CD pipelines for automated deployment and testing of data solutions.
- Ensure data quality, governance, and compliance with organizational standards.
- Provide technical leadership and take complete ownership of assigned projects.
Technical Skills Required
- Azure Data Factory (ADF): Expertise in building and orchestrating data pipelines.
- Databricks: Hands-on experience with notebooks, clusters, and job scheduling.
- Pandas: Advanced data manipulation and transformation skills.
- SparkSQL: Strong knowledge of distributed data processing and query optimization.
- CI/CD: Experience with tools like Azure DevOps, Git, or similar for automated deployments.
- Metadata-driven architecture: Proven experience in designing and implementing metadata frameworks.
- Programming: Proficiency in Python and/or Scala for data engineering tasks.