Job Purpose
We are seeking a technically versatile Data Process Engineer to maintain, modernise, and develop automated data workflows and reporting systems across the business. The successful candidate will work with Python, SQL, and emerging cloud tools — particularly Microsoft Fabric — to support financial reporting, monitor internal systems, and help drive the migration away from legacy KNIME workflows. This is a hybrid operational and developmental role: maintaining core business processes today while helping to evolve our architecture for the future. It offers significant autonomy and exposure to business-critical functions, with a growing emphasis on building scalable, cloud-native solutions (e.g., data lakehouse architectures).
Main Duties and Responsibilities
- Maintain and debug existing KNIME workflows related to finance functions (e.g., commissions, clawbacks, internal reporting) as part of a phased decommissioning plan
- Develop and maintain Python-based ETL pipelines to monitor internal systems and proactively flag issues
- Design, build, and manage cloud-based data warehouses and transformation pipelines using Microsoft Fabric — including Lakehouse, Pipelines, and Dataflows
- Translate business requirements into actionable workflows and reporting solutions in close collaboration with the Finance team
- Support the migration of legacy processes into modern Python- and SQL-based solutions
- Contribute to the design and optimisation of data lakehouse environments to enable analytics at scale (desirable)
- Support the development of automation and orchestration frameworks using tools such as Apache Airflow, Prefect, or Azure Data Factory (planned for future implementation, desirable)
Key Skills / Attributes
- Strong Python skills for data transformation (e.g., pandas, polars, SQLAlchemy)
- Solid SQL experience across varied data sources
- Proven experience building and managing end-to-end ETL/ELT workflows
- Strong understanding of data workflows in financial reporting contexts
- Ability to take proactive ownership of tasks in a fast-changing environment
- Hands-on experience with Microsoft Fabric — particularly Lakehouses and Pipelines — and exposure to data lake design (desirable)
- Familiarity with orchestration tools (e.g., Azure Data Factory, Dagster, Airflow, Prefect) (desirable)
- Exposure to containerisation or DevOps practices (e.g., Docker, Docker Swarm) (desirable)
- Exposure to event streaming platforms like Kafka (desirable)
- Experience with version control and CI/CD practices (e.g., Git, Bitbucket) (desirable)
- Experience building RESTful APIs (e.g., FastAPI, Flask, Django) (desirable)
- Experience migrating legacy tools (e.g., KNIME) to modern cloud-based systems (desirable)
- Proficiency in Power BI for reporting and dashboarding (desirable)
Personal Qualities
- Can-do attitude — prepared to "go the extra mile"
- Lots of energy — a team player who can also work on their own initiative
- Heightened sense of urgency
- Curious learner with an interest in professional development
- Actively demonstrates a passion for the business
- Eagerness and ability to quickly learn new concepts, tools, and languages
- Strong communication and collaboration skills, with the ability to explain technical ideas to non-technical stakeholders
- Adaptability to thrive in a fast-paced, changing environment
- Ambassador of corporate values — TRUST, PASSION, EXCELLENCE
Education & Experience
- Bachelor's or Master's degree in Engineering, Computer Science, or Mathematics/Statistics, or equivalent industry experience
- Proven experience (2+ years) as a process engineer or data analyst in a data-intensive or computationally intensive role