
Taking off with DataOps

Posted by Benjamin Peterson on 06 June 2017

New technologies and approaches in financial IT are always exciting – but often that excitement is tinged with reservations. Ground-breaking leaps forward don’t always work out as intended, and don’t always add as much value as they initially promise. That’s why it’s always good to see an exciting new development that’s actually a common-sense application of existing principles and tools, to achieve a known – as opposed to theoretical – result.

DataOps is that kind of new development. It promises to deliver considerable benefits to everyone who owns, processes or exploits data, but it’s simply a natural extension of where we were going already.

DataOps is the unification, integration and automation of the different functions in the data pipeline. Just as DevOps, on the software side, integrated such unglamorous functions as test management and release management with the software development lifecycle, so DataOps integrates functions such as profiling, metadata documentation, provenance and packaging with the data production lifecycle. The result isn’t just a saving in operational efficiency – it also makes sure that those quality-focused functions actually happen, which isn’t always the case today.

In DataOps, technology tools are used to break down functional silos within the data pipeline and create a unified, service-oriented process. These tools don’t have to be particularly high-tech – a lot of us are still busy absorbing the last generation of high-tech tools after all. But they do have to be deployed in such a way as to create a set of well-defined services within your data organisation, services that can be stacked to produce a highly automated pipeline whose outputs include not just data but quality metrics, exceptions, metadata, and analytics.
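To make that concrete, here is a minimal, purely illustrative Python sketch of one such service: a single pipeline stage whose output bundles the data itself with quality metrics, exceptions and metadata. The names used (StageResult, profile_prices, the validation rule) are our own invention for the example, not part of any particular DataOps tool.

# Illustrative sketch only: one pipeline stage exposed as a service whose
# output includes data, quality metrics, exceptions and metadata.
# All names here (StageResult, profile_prices, etc.) are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class StageResult:
    records: list            # the data itself
    quality_metrics: dict    # e.g. completeness, row counts
    exceptions: list         # records that failed validation
    metadata: dict           # provenance: stage name, run time, rules applied


def profile_prices(raw_records: list) -> StageResult:
    """Validate incoming price records and emit data plus quality outputs."""
    valid, rejected = [], []
    for rec in raw_records:
        if rec.get("price") is not None and rec["price"] > 0:
            valid.append(rec)
        else:
            rejected.append(rec)

    return StageResult(
        records=valid,
        quality_metrics={
            "input_count": len(raw_records),
            "output_count": len(valid),
            "completeness": len(valid) / len(raw_records) if raw_records else 1.0,
        },
        exceptions=rejected,
        metadata={
            "stage": "profile_prices",
            "run_at": datetime.now(timezone.utc).isoformat(),
            "rule": "price must be present and positive",
        },
    )

Because every stage returns the same shape of result, stages like this can be chained, and the quality metrics, exceptions and metadata accumulate alongside the data rather than being produced (or forgotten) in a separate manual step.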

It could be argued that DevOps came along and saved software development just at the time when we moved, rightly, from software systems to data sets as the unit of ownership and control. DataOps catches up with that trend and ensures that the ownership and management of data are front and centre.

Under DataOps, software purchases in the data organisation happen not in the name of a specific function, but in order to implement the comprehensive set of services you specify as you plan your DataOps-based pipeline. This means of course that a service specification, covering far more than just production of as-is data, has to exist, and other artefacts have to exist with it, such as an operating model, quality tolerances, data owners… in other words, the things every organisation should in theory have already, but which get pushed to the back of the queue by each new business need or regulatory challenge. With DataOps, there’s finally a methodology for making sure those artefacts come into being, and for embedding an end-to-end production process that keeps them relevant.
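By way of illustration – and again with names that are purely hypothetical rather than any standard – a fragment of such a service specification can live in code or configuration rather than in a document nobody maintains, declaring the data owner, the consumers and the quality tolerances the pipeline is held to:

# Hypothetical sketch of the artefacts a service specification might capture:
# owner, consumers, schedule and quality tolerances. Field names are
# illustrative, not drawn from any particular product or standard.
SERVICE_SPEC = {
    "dataset": "eod_equity_prices",
    "owner": "market-data-team@example.com",
    "consumers": ["risk", "finance", "regulatory-reporting"],
    "schedule": "daily, 18:00 UTC",
    "quality_tolerances": {
        "completeness": 0.99,   # at least 99% of expected records present
        "max_exceptions": 50,   # halt and alert above this many rejects
        "staleness_hours": 24,  # data older than this breaches the SLA
    },
    "outputs": ["data", "quality_metrics", "exceptions", "metadata"],
}


def within_tolerance(metrics: dict, exception_count: int,
                     spec: dict = SERVICE_SPEC) -> bool:
    """Check a stage's quality metrics against the declared tolerances."""
    tol = spec["quality_tolerances"]
    return (
        metrics.get("completeness", 0.0) >= tol["completeness"]
        and exception_count <= tol["max_exceptions"]
    )

Once the tolerances are declared in one place like this, the pipeline itself can enforce them on every run, which is exactly how DataOps keeps these artefacts relevant instead of letting them drift out of date.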

In other words, with the advent of DataOps, we’re happy to see the community giving a name to what people like us have been doing for years!