One of the big challenges within organisations is building collaboration between IT and the business. This challenge has grown over the years as businesses have become more adept at using IT tools. For example, knowing your customers and being able to offer them the best product at the right time through advanced CRM requires clever analytics coordinated with good data. Enterprise IT has sped up tremendously over the past few years, with the building blocks becoming quicker to integrate and extend. However, this is something of a double-edged sword: the faster the technology moves, the greater the competition, and therefore the faster solutions need to be implemented. This leads to friction between the relatively slow-moving world of IT process and the business's need for solutions and information.
To address this friction there needs to be healthy leverage of technology in the business, in collaboration with enterprise IT. Through tools like Python, MS Office, Tableau, and Qlik, the business is more empowered than ever to implement solutions. Many successful organisations use this ability to meet regulatory demands and to advance management information. Over time, these capabilities and solutions become more complicated because of the way they evolve. At some point the complexity reaches a critical stage and errors happen. This leads to regulatory fines and losses, and usually a knee-jerk reaction that exacerbates the issue rather than improving it. A proactive solution is a healthy flow from the fast-moving environment in the business into enterprise solutions from IT.
To make this work, it needs to be recognised that not all information or solutions in the business need to find their way into enterprise IT, because the scope of the data or solution may extend to only one user. For example, if a business user wants to see a set of sales orders bucketed into categories based on value, i.e., 0-1,000 | 1,000-5,000 | 5,000-15,000 | 15,000-50,000, it may not be relevant for any other job function to know these categories. Does IT have time to manage these requirements? Is it prudent to spend your budget implementing them inside the data lake, with the maintenance that goes with them? I would argue no. If no is your answer, then how do you manage this data that defines a particular business process? In my work with clients, I find it is often this small data that forms the barrier between IT and the business. Generally, this data is not understood or appreciated by the large processes in IT, yet it is also the reason data is extracted from the data lake and manipulated in the business before it is used. It is often the reason the business needs to create end-user computing applications (EUCs).
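To make the example concrete, here is a minimal sketch of the kind of logic such an EUC typically contains. The value bands are the ones from the example above; the function name, labels, and sample order values are illustrative assumptions, not part of any particular client system.

```python
from bisect import bisect_right

# Value bands from the example: 0-1,000 | 1,000-5,000 |
# 5,000-15,000 | 15,000-50,000 (upper bounds of each band).
BOUNDS = [1_000, 5_000, 15_000, 50_000]
LABELS = ["0-1,000", "1,000-5,000", "5,000-15,000", "15,000-50,000", "50,000+"]

def bucket(order_value: float) -> str:
    """Return the value band a single sales order falls into."""
    return LABELS[bisect_right(BOUNDS, order_value)]

# Hypothetical order values for illustration.
orders = [250, 7_499, 60_000]
print([bucket(v) for v in orders])  # → ['0-1,000', '5,000-15,000', '50,000+']
```

A dozen lines like these are easy for one user to maintain in a spreadsheet or script, which is precisely why this small data rarely makes it into the enterprise data lake.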
How does DataOps help? Firstly, DataOps recognises this data. Within its six pillars it discovers the data, categorises it, and architects it. The focus of DataOps is collaboration and extensibility, so the methodology identifies that the data items that undergo the most change need to be located as closely as possible to the change agent. Translating this into the example above, the small data needs to be organised, documented, and owned by the business through IT-enabled systems. This is achieved by defining the right metadata, managing that metadata, and then governing the small data in a way that democratises control: give someone some rope, but make sure they use it to build a bridge between enterprise IT and the business. In short, Kinaesis Acutect and DataOps provide a methodology and approach that allow you to look after the small data, so that the big data looks after itself.
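One way to picture "defining the right metadata" for business-owned small data is a simple record that names an accountable owner and a governance sign-off alongside the data itself. The field names and values below are purely illustrative assumptions, not a Kinaesis or DataOps standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SmallDataAsset:
    """Illustrative metadata record for a business-owned small-data item."""
    name: str
    owner: str           # the business change agent accountable for the data
    description: str
    values: list         # the small data itself, e.g. category bands
    last_reviewed: date
    approved_by: str     # IT/governance sign-off, closing the loop with enterprise IT

# Hypothetical record for the sales-order value bands discussed earlier.
bands = SmallDataAsset(
    name="sales_order_value_bands",
    owner="sales-ops@example.com",
    description="Order value buckets used by one user for sales reporting",
    values=["0-1,000", "1,000-5,000", "5,000-15,000", "15,000-50,000"],
    last_reviewed=date(2021, 6, 1),
    approved_by="data-governance@example.com",
)
```

The point is not the format but the split of responsibilities: the business owns and updates the values, while IT governs the review and approval fields, which is the "bridge" the rope is meant to build.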
This article follows on from the recent articles on DataOps by Kinaesis:
• Why you need DataOps?
• What is DataOps?
• How does DataOps make a difference?
• Get control of your glossaries to set yourself up for success
• Why DataOps should come before (and during) your Analytics and AI