News and Events

Kinaesis mentioned in SD Times

Posted by Emma McMahon on 10 April 2019

Did you spot us in SD Times’ latest article on DataOps?

We are delighted that our work on DataOps is being recognised, and we hope to continue adding value for our clients through our DataOps methodology, as well as giving back to the DataOps community. You can find the feature here: https://sdtimes.com/data/a-guide-to-dataops-tools/

Why DataOps should come before (and during) your Analytics and AI

Posted by Emma McMahon on 06 March 2019

We have all seen the flashy ads and promised benefits of enabling analytics and AI for our businesses. Analytics and associated AI solutions are integral to the future of business and can give you a real competitive edge; we would never dispute this. Yet before you run out and build your solutions, it’s time for a health check.

Why? Here is the nightmare. Imagine your fancy new dashboards aren’t showing you what’s really happening. Imagine making business decisions on projections that are false. Imagine your AI automatically driving your business out of control through poor or corrupted information. Or, worse still, imagine the data within your organisation slowly teaching your AI bad habits and corrupting its learning behaviours.

Implementing a DataOps approach correctly before, during and continually after an implementation is the answer to that nightmare. DataOps is the health check that examines what feeds into your analytics and AI solutions, to ensure what they are telling you is not harmful. For example, it can be used to assess your data sources, standardise your data into a common format, check that the right data is informing the right areas, and understand what data the business actually needs in order to grow and learn.
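To make those checks concrete, here is a minimal sketch of the kind of data health check described above: profiling records for missing required fields and standardising mixed date formats. The field names and formats are hypothetical, purely for illustration; real DataOps tooling would do far more.

```python
# Illustrative DataOps-style health checks (field names are hypothetical).
from datetime import datetime

def profile_records(records, required_fields):
    """Count records that are missing any required field."""
    incomplete = sum(
        1 for r in records
        if any(r.get(f) in (None, "") for f in required_fields)
    )
    return {"total": len(records), "incomplete": incomplete}

def standardise_date(value, formats=("%d/%m/%Y", "%Y-%m-%d")):
    """Normalise mixed date formats to ISO 8601; None if unparseable."""
    for fmt in formats:
        try:
            return datetime.strptime(value, fmt).date().isoformat()
        except ValueError:
            continue
    return None

records = [
    {"trade_id": "T1", "trade_date": "03/04/2019", "notional": 1_000_000},
    {"trade_id": "T2", "trade_date": "2019-04-03", "notional": None},
]
print(profile_records(records, ["trade_id", "trade_date", "notional"]))
# → {'total': 2, 'incomplete': 1}
print([standardise_date(r["trade_date"]) for r in records])
# → ['2019-04-03', '2019-04-03']
```

Checks like these, run continuously rather than once, are what keep bad data from reaching the dashboards and models downstream.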

This is why a Kinaesis partnership is invaluable reassurance for your AI and analytics endeavours: we can increase the success rate of projects simply by ensuring that the result truly reflects what the business needs.

Yes, you need analytics and AI within your business. But you must first ensure your data is healthy, correct and as detailed as possible, consistently throughout the AI process. This will enable you as a business to plan, project and optimise to the highest degree.

Effective DataOps is the way to make sure the fears of ineffective and misinformed data don’t seep into reality.

DataOps Pillar: (Collaborative) Analytics

Posted by Benjamin Peterson on 27 February 2019

Data is very valuable - and yet, it's often hard to find someone to step up and own it. We live in a moment in which the data analyst, the one who presents conclusions, is pre-eminent, while the upstream roles that own, steward, cleanse, define and provide data are currently less glamorous. In some ways this is a pity, because conclusions are only ever as reliable as the data that went into them.

DataOps addresses this challenge through the practice of “Collaborative Analytics” - analytics whose conclusions come from a collaboration between the analytics function and the other roles on which analytics depends. Collaborative Analytics (like everything else in the world) is about people, process and tools:

» People include the data owners, metadata providers, DataOps professionals and all the other roles whose actions affect the outputs of analytics - as well as the analysts and model owners themselves.

» Process includes an operating model that encourages collaboration between those roles and ensures that staff at different points in the analytics pipeline have the same understanding of terms, timestamps and quality.

» Tooling, in this case, is the easy part - any modern analytics tooling can provide sharing, annotation and metadata features that can make Collaborative Analytics a reality.

A fully DataOps-enabled pipeline would accompany analytics conclusions with metadata showing the people and processes behind those conclusions - all the way upstream to data origination.
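As a rough illustration of that idea, here is a hypothetical sketch of an analytics result that carries its own provenance trail: each upstream role records what it did, so the conclusion arrives with its lineage attached. The roles, names and structure are assumptions for the example, not a prescribed implementation.

```python
# Hypothetical sketch: an analytics conclusion that carries provenance
# metadata for the people and processes behind it.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProvenanceStep:
    role: str    # e.g. "data owner", "steward", "analyst"
    actor: str   # team or person responsible
    action: str  # what they did to the data

@dataclass
class AnalyticsResult:
    conclusion: str
    lineage: List[ProvenanceStep] = field(default_factory=list)

    def record(self, role: str, actor: str, action: str) -> "AnalyticsResult":
        """Append a provenance step; returns self so calls can chain."""
        self.lineage.append(ProvenanceStep(role, actor, action))
        return self

result = AnalyticsResult("Q1 exposure within limits")
result.record("data owner", "ops-team", "sourced trade feed") \
      .record("steward", "dq-team", "cleansed and standardised") \
      .record("analyst", "risk-analytics", "aggregated exposure")

for step in result.lineage:
    print(f"{step.role}: {step.actor} - {step.action}")
```

The point is not the mechanism but the contract: whoever consumes the conclusion can see, all the way upstream, who shaped the data it rests on.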

That's a long way in the future. But what most institutions can do right now is ensure that data providers and data interpreters speak the same language.