We're hiring a Data Engineer / Consultant Data Engineer to design and deliver scalable data pipelines and big data solutions for clients.
This is a client-facing role, combining hands-on engineering with stakeholder engagement and solution design.
Key Responsibilities
- Build and optimise data pipelines, ETL processes, and data platforms
- Develop solutions using Python, Spark, Kafka, Hadoop or similar
- Work in Agile teams to deliver production-ready systems
- Translate business requirements into technical data solutions
- Engage with stakeholders and communicate technical concepts clearly
- Deploy solutions using cloud (AWS/Azure/GCP), Docker, Kubernetes, CI/CD
Skills & Experience
- Experience as a Data Engineer / Big Data Engineer
- Strong coding skills in Python, Scala or Java
- Hands-on with Spark, Kafka, ETL / data pipelines
- Knowledge of cloud platforms (AWS, Azure or GCP)
- Familiar with Agile and software engineering best practices
- Strong communication and stakeholder-management skills
Nice to Have
- Consulting or client-facing experience
- Docker, Kubernetes, DevOps, CI/CD
- Streaming or real-time data experience