Tech-015: Introduction to data-ops, automating the beast
When I arrived at Zupa, my first assignment wasn’t a small bug fix or a safe starter project. I was thrown straight into the deep end: refactoring the very manual process of deploying a critical microservice called Data. It wasn’t just clunky; it was burning hours of the data team’s time, every single release.
The Data microservice was a mash-up of components: Azure Synapse, a Dedicated SQL Pool, and a Power BI workspace. Deploying them was manual, inconsistent, and stressful.
Every deployment was essentially a mini project, requiring coordination, tribal knowledge, and a lot of patience.
The data team were spending countless hours on deployments. That meant less time for actual engineering, slower delivery, and a higher risk of errors. Everyone agreed it was unsustainable, but until then nobody had taken the time to fix it.
I analysed the solution, taking my time to break it apart piece by piece and figure out how it really worked. Working closely with the data team, I mapped the pain points and went away to research automated approaches.
I designed two potential solutions, then off I went again, following my usual process of documenting each one with its estimated effort and the pros and cons of the approach. I then produced yet another due-diligence presentation to walk through both options.
Once a direction was agreed, I implemented the automated pipeline, integrating Synapse, the Dedicated SQL Pool, and Power BI into one consistent, consolidated deployment solution.
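To give a flavour of what that looks like in practice, here is a minimal sketch of the three deployment steps in Python, wrapping the standard tooling: the Azure CLI for Synapse artefacts, SqlPackage for the SQL Pool schema, and the Power BI REST imports API for reports. Every name, path and identifier in it is an illustrative assumption rather than Zupa’s real configuration, and the actual solution ran as stages in a pipeline rather than as one script.

```python
"""Sketch of the consolidated deployment in three steps.

All resource names, paths and identifiers below are illustrative
assumptions, not the real configuration, and authentication is
assumed to be handled out of band.
"""
import subprocess

import requests

# --- illustrative configuration (assumed values, not the real setup) ---
WORKSPACE = "zupa-synapse"                        # hypothetical Synapse workspace
SQL_SERVER = "zupa-synapse.sql.azuresynapse.net"  # hypothetical SQL endpoint
SQL_POOL = "DataPool"                             # hypothetical dedicated SQL pool
PBI_GROUP_ID = "<power-bi-workspace-guid>"        # hypothetical workspace id
PBI_TOKEN = "<aad-access-token>"                  # acquired out of band


def run(cmd: list[str]) -> None:
    """Run a CLI command, failing the whole deployment on a non-zero exit."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def deploy_synapse_pipeline(name: str, definition_path: str) -> None:
    # 1. Publish a pipeline definition to the Synapse workspace via the Azure CLI.
    run([
        "az", "synapse", "pipeline", "create",
        "--workspace-name", WORKSPACE,
        "--name", name,
        "--file", f"@{definition_path}",
    ])


def deploy_sql_pool(dacpac_path: str) -> None:
    # 2. Publish the database schema to the Dedicated SQL Pool from a DACPAC.
    #    (Authentication flags omitted; assumed AAD or integrated auth.)
    run([
        "SqlPackage", "/Action:Publish",
        f"/SourceFile:{dacpac_path}",
        f"/TargetServerName:{SQL_SERVER}",
        f"/TargetDatabaseName:{SQL_POOL}",
    ])


def deploy_powerbi_report(pbix_path: str, display_name: str) -> None:
    # 3. Upload (or overwrite) a PBIX report via the Power BI REST imports API.
    url = (
        f"https://api.powerbi.com/v1.0/myorg/groups/{PBI_GROUP_ID}/imports"
        f"?datasetDisplayName={display_name}&nameConflict=CreateOrOverwrite"
    )
    with open(pbix_path, "rb") as f:
        resp = requests.post(
            url,
            headers={"Authorization": f"Bearer {PBI_TOKEN}"},
            files={"file": f},
        )
    resp.raise_for_status()


if __name__ == "__main__":
    # Hypothetical artefact names, for illustration only.
    deploy_synapse_pipeline("IngestOrders", "synapse/IngestOrders.json")
    deploy_sql_pool("sql/DataPool.dacpac")
    deploy_powerbi_report("reports/Data.pbix", "Data")
```

In a pipeline, each of these steps naturally becomes its own stage, so a failure in any one of them stops the release before anything user-facing changes.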
The change was immediate!
The data team went from spending hours on manual deployment tasks (click-ops) to running a pipeline that did it all automatically. What had once been a dreaded job became a button click.
Reflection:
This project taught me the value of research + collaboration. It wasn’t just about dropping in some automation; it was about engaging the people closest to the pain, designing solutions they could trust, and delivering something that genuinely improved their day-to-day.