News and Events

Articles in News

Simon Trewin provides his input to DevOps online to explain why DataOps is essential to drive business value

Posted by Simon Trewin on 19 December 2020
Simon Trewin provides his input to DevOps online to explain why DataOps is essential to drive business value

Click here to read the article

Is small data holding back your ambitions to be more data driven?

Posted by Simon Trewin on 18 December 2020

Author: Simon Trewin, founder of the DataOps Thinktank, founder of the DataOps Academy, and author of The DataOps Revolution.

What is small data?
Small data is all of the information that you attach to your operational data or big data in order to make sense of it or transform it into business insight. Typically, it is owned very close to where the data is leveraged for operations or insight. It helps with cleansing, grouping, aggregating, filtering, and tagging, or it helps to drive a business process.

Why is it needed?
Small data is needed because the use cases for operations and data insight change rapidly. This cadence is generally too fast for enterprise IT to keep up with, and I would argue it is something they should not try to keep up with.

What are the organisational challenges for those wishing to be data driven and incorporate ML and AI?
The challenge is that the truth, in data terms, often only exists once data pipelines have passed through the small data filters and checks. The accuracy of machine learning and AI models is therefore hindered by the fact that they do not have access to the truth, and CDAO strategies are held back by the inability to leverage the right information.

Small data is also very easy to copy and to reuse making it hard to maintain and master. It exists within reporting systems and end user applications like Excel and is emailed around, linked, and reused. It is the source of reporting errors that can lead to regulatory fines, missed opportunities, and bad decisions. It can also lead to many versions of the truth preventing organisations from knowing the true state of things and preventing them from making decisions.

It often becomes complex, making it hard to unwind and building up organisational inertia that makes it hard to move forward with a digital strategy. It needs to be incorporated into the organisation's overall data strategy, but is often considered too hard and complicated to tackle.

What you need to do
The key to small data is to democratise it, incorporating data quality controls, and to master it so that it empowers your employees. This needs to be done incrementally in a system that provides secure transparency through lineage, usage statistics, and links to business terminology. This system should also provide an easy migration of assets to enterprise systems to enable the digital enterprise.

To deal with the complexity you need to automate the analysis of your existing estate efficiently and effectively, so that you can group your small data by complexity, importance, risk and dependencies. You can then prioritise the actions to take to make improvements. At this stage you should be able to track improvements through time to see the changes made and the changes still required.
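
As an illustration of the kind of automated triage described above, here is a minimal sketch that scores a hypothetical inventory of small data assets by complexity, importance, risk and dependencies so they can be prioritised. The asset fields, weightings and thresholds are assumptions for illustration only, not part of any Kinaesis tooling.

```python
from dataclasses import dataclass

@dataclass
class SmallDataAsset:
    name: str
    formula_count: int             # rough proxy for complexity
    linked_assets: int             # dependencies on other EUCs / reports
    feeds_regulatory_report: bool  # proxy for importance
    owner_known: bool              # proxy for risk

def priority_score(asset: SmallDataAsset) -> float:
    """Rank assets so the riskiest, most complex ones are tackled first."""
    complexity = min(asset.formula_count / 100, 1.0)
    dependencies = min(asset.linked_assets / 10, 1.0)
    importance = 1.0 if asset.feeds_regulatory_report else 0.4
    risk = 0.8 if not asset.owner_known else 0.2
    # Illustrative weightings only -- tune them to your own estate.
    return 0.3 * complexity + 0.2 * dependencies + 0.3 * importance + 0.2 * risk

estate = [
    SmallDataAsset("month_end_recs.xlsx", 450, 7, True, False),
    SmallDataAsset("sales_buckets.xlsx", 30, 1, False, True),
]
for asset in sorted(estate, key=priority_score, reverse=True):
    print(f"{asset.name}: {priority_score(asset):.2f}")
```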

It is OK for some small data to remain as small data; however, for completeness, it should be logged, monitored and kept up to date through time.

Conclusion
Small data is essential in any organisation to bridge the gap from operational data to business processes and knowledge. It moves quickly due to the nature of changing business requirements; it can quickly become complex and will introduce poor data quality through duplication and inconsistencies. As a CDO, you need to incorporate it into your overall strategy if you truly want to deliver the data-driven enterprise.

Edward Chu joins Kinaesis Innovation team

Posted by Simon Trewin on 17 December 2020
Edward Chu joins Kinaesis Innovation team

Kinaesis is very pleased to have Edward Chu join their growing Innovation team. Edward will be focusing his efforts on Acutect, delivering his expertise to our advanced EUC solutions.

Edward is a frontend developer with a degree in Statistics and Financial Mathematics. He previously worked at a start-up and a global IT consultancy firm. His expertise is in the development of web applications although he has also had experience with backend development.

DataOps Academy: Target Pillar Introduction Offer

Posted by Simon Trewin on 16 December 2020
DataOps Academy:  Target Pillar Introduction Offer

Great Bundle of DataOps Resources to start your DataOps journey!

We have bundled our DataOps Target Pillar course with our DataOps – Scaled Agile Framework (SAFe) video and included a free copy of the DataOps Revolution book, due out on 6 August, all for the same price as our standard DataOps Target Pillar training course.

To get hold of this bundle follow this link

DataOps Academy: How does the Six IMPACT pillars DataOps methodology integrate with SAFe?

Posted by Simon Trewin on 15 December 2020
DataOps Academy:  How does the Six IMPACT pillars DataOps methodology integrate with SAFe?

How do you leverage our DataOps six IMPACT pillars alongside your SAFe agile methodology?

We get asked this question a lot. Many organisations have rolled out the SAFe agile methodology for software development. Where does our DataOps Six Pillars methodology fit into this?

This new DataOps academy video provides you with the knowledge you need to understand how to apply SAFe effectively to a data and analytics project by leveraging the Six IMPACT pillars of DataOps.

Click on this link to access this new course.

DataOps Revolution

Posted by Simon Trewin on 14 December 2020
DataOps Revolution

Having access to reliable data is key to being able to make informed decisions and provide the service levels to your customers that the digital world demands. The first realisation of this has manifested itself in the recognition of the Chief Data Officer role. It has driven executives to employ strategists and consultants to move their organisations forward. The strategy has been described accurately; the gap is that it is hard to drive it through the organisation and deliver the results that are now expected.

DataOps Revolution describes a methodology and approach that is proven to work. It connects the strategy with the people on the ground who must implement the changes required. It is based on real-life scenarios and describes the keys to transformational success. The DataOps approach is told in a narrative style to appeal and resonate with as many people involved in the data revolution as possible. We hope that it leads you to improve your delivery and the value of your data and analytics projects, accelerating your data-driven initiatives.

DataOps Revolution is available to pre-order here

What to expect from us for the rest of 2021

Posted by Simon Trewin on 12 December 2020
What to expect from us for the rest of 2021

Kinaesis’ mission statement is to enable organisations to make better decisions through leveraging data and analytics efficiently and effectively. Over ten years of working with clients, our aims have been to add value and deliver solutions.

During that time, our methodology and software have been developed to enhance our clients’ data capabilities through training, innovative tooling and services. Our amazing engineers are focused on providing innovative solutions to today’s data and analytics challenges.

Our goal in 2021 is to make data capabilities accessible to all. To achieve this, we have enhanced our service offerings and made large Research and Development investments in our knowledge and technical capabilities.

DataOps Academy

In February 2021 we launched the Kinaesis DataOps Academy platform, which we will use to deliver content throughout 2021, specifically on our proprietary 6 DataOps Pillars. There is a free introductory course to complete; the Target pillar is coming soon, to be closely followed by further pillar breakdowns. To help you learn how to apply these skills within your existing delivery frameworks, we are producing a course on how they integrate with agile methodologies.

Register here

Know Your Register (KYR)

In March 2021 we launched version 2 of Kinaesis KYR for the Investment Management community to help bring better data insights to their Sales and Marketing efforts. You can see demonstrations of the platform and its capabilities on our dedicated training site:

Register here

Acutect

In May 2021 Kinaesis Acutect will be launched in its first release. Our aim is to surface “small data” from End User Computing tools (EUCs) and to empower users and IT to work more collaboratively around data solutions. We have plans to take this forward to new levels so that we can help financial organisations get on top of their EUC estates. Take a look at our product overview and continue to watch this space for more announcements.

Brief Product Video

DataOps Book

In the middle of 2021 we are releasing a DataOps book to go hand in hand with our training. Its purpose is to explain the benefits of following a data-driven approach to your data projects. Here we explain in narrative style how the Kinaesis DataOps methodology can be put into practice.

Pre-Order Here

It is going to be an exciting year at Kinaesis where we aim to accelerate your effectiveness with data and analytics through the application of DataOps tools and techniques.

Kinaesis is hiring!

Posted by Allan Eyears on 11 December 2020
Kinaesis is hiring!

We’re hiring several new data and software engineering roles for our team in London.

If you like to develop cloud-based applications, or are an expert in web, (micro)service or infrastructure management, we’d like to talk to you.

To learn more about us and take a look at our vacancies, click here

Kinaesis launch online demo of KYR

Posted by Allan Eyears on 10 December 2020
Kinaesis launch online demo of KYR

We have just launched an online demonstration of our successful Know Your Register (KYR) platform for Investment Managers. To see how this modern, innovative platform can revolutionise your Sales and Operations analytics, click here. At the end of the demonstration you will be able to request access to our live demonstrator.

This is what one Operations Director from a UK Fund Manager had to say after using the KYR platform:

"We have used Kinaesis KYR for over 4 years to provide comprehensive, detailed and complete breakdowns of intermediary activity across all registers. In that time it has helped us improve the efficiency of both our Operational Reporting and Sales and Marketing activities through its up-to-date and accurate data. We no longer have to rely on infrequent, manually intensive and error-prone spreadsheet solutions to generate MI reports and Sales data. I would recommend KYR to Fund Managers who struggle to understand their intermediary distribution."

Online DataOps Training from Kinaesis

Posted by Simon Trewin on 09 December 2020
Online DataOps Training from Kinaesis

We are really pleased to announce that our world class DataOps training is now online. Over the past few months we have been converting our content into consumable short videos, quizzes and certificates. The first instalment "introduction to DataOps" is FREE and provides an overview of DataOps and our proprietary 6 pillars methodology. In the coming months we will deliver more content on each of the 6 pillars in turn. In addition to "introduction to DataOps" we have provided a video of our recent webinar with DataKitchen: "Differentiation through DataOps in Financial Services".

If you are impatient for results, then why not book yourself and a number of your colleagues onto a DataOps training course with an industry-leading professional? Please send all enquiries via this link

Differentiation through DataOps - Webinar Recording

Posted by Simon Trewin on 08 December 2020
Differentiation through DataOps - Webinar Recording

Simon Trewin provides insight on differentiation through DataOps for financial services with Chris Bergh from DataKitchen. If you missed it, please sign up and watch via this link

Kinaesis is 10 Years Old

Posted by Simon Trewin on 07 December 2020
Kinaesis is 10 Years Old

10 years ago Kinaesis was incorporated. Over the last 10 years we have been very lucky to work with excellent clients, partners and employees.

A big thank you to everyone who has been involved. We are looking forward to adding even more value over the coming years.

Differentiation Through DataOps in Financial Services

Posted by Simon Trewin on 06 December 2020
Differentiation Through DataOps in Financial Services

Wed, Feb 10th | 11 am ET / 4 pm GMT

When financial institutions use data more efficiently and innovatively they can deliver the product and customer experiences that differentiate them from the competition. Although most financial services companies collect and store tremendous amounts of data, new analytics are delivered through incredibly complex pipelines. Furthermore, governance and security are not optional. Existing and emerging regulations require that financial institutions that collect customer data manage it carefully. Balancing speed, quality, and governance is critical.

In this webinar, Simon Trewin of Kinaesis joins Chris Bergh of DataKitchen to discuss how DataOps enables financial institutions to move fast without breaking things. They’ll cover how DataOps enables organizations to:

  1. Increase collaboration and transparency;
  2. Deliver new analytics fast via virtual environments and continuous deployment;
  3. Increase the quality of data by automating testing across end-to-end pipelines; and
  4. Balance agility with compliance and control.

They’ll also share real-life examples of how financial service companies have successfully implemented DataOps as the foundation for a digital transformation.

To register for this webinar please use this link

Kinaesis Acutect: Leveraging DataOps to organise and rationalise small data

Posted by Simon Trewin on 05 December 2020
Kinaesis Acutect: Leveraging DataOps to organise and rationalise small data

One of the big challenges within organisations is building collaboration between IT and the business. This challenge has increased over the years as businesses have become more adept at using IT tools. For example, knowing your customer and being able to offer them the best product at the right time through advanced CRM requires clever analytics coordinated with good data. Enterprise IT has sped up tremendously over the past few years, with the building blocks becoming quicker to integrate and extend. However, this is something of a double-edged sword: the faster it speeds up, the greater the competition, and therefore the faster solutions need to be implemented. What this leads to is friction between the relatively slow-moving world of IT process and the need for solutions and information in the business.

To address this friction there needs to be healthy leverage of technology in the business, in collaboration with enterprise IT. Through tools like Python, MS Office, Tableau and Qlik, the business is more empowered than ever to implement solutions. Many successful organisations leverage this ability to meet demands from regulation and to advance management information. Over time, these capabilities and solutions start to get more complicated due to the way in which they evolve. At some point this complexity reaches a critical stage and errors happen. This leads to regulatory fines and losses, and normally a knee-jerk reaction that exacerbates the issue rather than improving it. A proactive solution to this problem is to have a healthy flow from the fast-moving environment in the business into enterprise solutions from IT.

To make this work, what needs to be recognised is that not all information or solutions in the business need to find their way into enterprise IT. The reason for this is that the scope of the data or solution may only exist for one user. For example, if a business user wants to see a set of sales orders bucketed into categories based on value, i.e. 0-1,000 | 1,000-5,000 | 5,000-15,000 | 15,000-50,000, it may not be relevant for any other job function to know these categories. Has IT got time to manage these requirements? Is it prudent to spend your budget implementing them inside the data lake, with the maintenance that goes with them? I would argue no. If no is your answer, then how do you manage this data that defines a particular business process? Working with clients, I find it is often this small data that is the barrier between IT and the business. Generally, this data is not understood or appreciated by the large processes in IT, but it is also the reason why data is extracted from the data lake and manipulated in the business when it is used. It is often the reason that the business needs to create EUCs.
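
To make the example concrete, here is a minimal sketch of that kind of one-user bucketing using pandas. The bands follow the example above; the DataFrame and column names are assumptions for illustration only.

```python
import pandas as pd

orders = pd.DataFrame({"order_id": [1, 2, 3, 4],
                       "value": [650, 3200, 14000, 42000]})

# Business-specific bands that may matter to only one user or team.
bands = [0, 1_000, 5_000, 15_000, 50_000]
labels = ["0-1,000", "1,000-5,000", "5,000-15,000", "15,000-50,000"]

orders["value_band"] = pd.cut(orders["value"], bins=bands, labels=labels)
print(orders.groupby("value_band", observed=True)["order_id"].count())
```

Whether a categorisation like this deserves a place in the enterprise data lake, or stays close to the single user who needs it, is exactly the judgement described above.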

How does DataOps help? Firstly, DataOps recognises this data. Within the 6 pillars it discovers the data, categorises it, and architects it. The focus of DataOps is collaboration and extensibility, so the methodology identifies that the data items that undergo the most change need to be located as closely as possible to the change agent. Translating this into the example above, the small data needs to be organised, documented, and owned by the business through IT-enabled systems. This is achieved by defining the right metadata, managing that metadata and then governing the small data in a way that democratises control; i.e., give someone some rope, but make sure they use it to build a bridge between enterprise IT and the business. In short, Kinaesis Acutect and DataOps recognise and implement a methodology and approach that allow you to look after the small data, so that the big data looks after itself.
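
As a sketch of what "defining the right metadata" for a piece of small data might look like, the record below captures owner, business terms, lineage and downstream usage. The fields are illustrative assumptions, not the Kinaesis Acutect data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SmallDataRecord:
    name: str                        # e.g. "Sales order value bands"
    owner: str                       # business owner, not IT
    description: str
    business_terms: List[str] = field(default_factory=list)    # links to the glossary
    upstream_sources: List[str] = field(default_factory=list)  # lineage back to enterprise data
    used_by: List[str] = field(default_factory=list)           # downstream reports / processes

bands = SmallDataRecord(
    name="Sales order value bands",
    owner="Sales Operations",
    description="Buckets used to group orders by value for weekly MI.",
    business_terms=["Order Value"],
    upstream_sources=["data_lake.sales.orders"],
    used_by=["Weekly Sales MI pack"],
)
```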

This article follows on from the recent articles on DataOps by Kinaesis:
Why you need DataOps?
What is DataOps?
How does DataOps make a difference?
Get control of your glossaries to set yourself up for success
Why DataOps should come before (and during) your Analytics and AI

The six IMPACT pillars of DataOps
Instrument
Metadata
(Extensible) Platforms
(Collaborative) Analytics
Control
Target

Kinaesis KYR : Six great features for Investment Managers

Posted by Allan Eyears on 04 December 2020
Kinaesis KYR : Six great features for Investment Managers

Built specifically for the Investment Management sector, Kinaesis "Know Your Register" (KYR) provides Investment Managers with the ability to view all their investor holdings and transaction information, from all sources, in one single reporting and analytics solution. Your compliance team can review liquidity, concentration and exposure risk, whilst your sales, account management and marketing teams can use advanced analytics to target specific clients for focused sales activity.

If you’d like us to contact you to discuss further, or if you just want to ask a question about KYR, please click here to add your details.

The untapped potential of your EUC estate

Posted by Simon Trewin on 14 November 2020
The untapped potential of your EUC estate

End-user computing (EUC) risk, and in particular spreadsheet risk, has been well documented recently, with one of the most high-profile headlines concerning the test and trace system for COVID-19 in the UK. Other examples include formulas implemented incorrectly within a spreadsheet, in one case leading to billions of dollars of loss at JP Morgan due to the miscalculation of their VaR position. However, what is the other side of the coin? What is the lost opportunity of having your data and processes tied up inside many versions of a tool with no reconciliation, no data quality checks and no mastering of information? Industry is moving rapidly towards a data and analytics arms race, where the winners and losers are being decided by organisations’ ability to leverage their time and energy efficiently and effectively. This is enabled by advanced analytics that enables firms to:

  1. Market the right product to the right customer efficiently
  2. Manage risk and financials effectively, and
  3. Optimise departments to offer services faster and cheaper than their competitors

To get to the front, or even the middle, of the pack, you need to be sure of your data. You also need to have your data to hand and available to the analysts and data scientists who run the models. It is still a fact that 80% of a data scientist’s job is taken up with consolidating, cleansing and conforming data and training sets before their models can be deployed. Given the escalating salaries of these resources, it is extremely expensive to have them performing work outside of their paid-for skillset. If the real view of data is hidden inside spreadsheets, then it may not even be possible to pull together a valid data set without months of preparation work.

It is not uncommon for hard-working users, in answering business questions, to have formed a complex web of EUCs and manual processes over many years. In organisations where there has been heavy M&A activity, the data and business rules tied up in EUCs are often the only place where the truth resides. However, the truth of one EUC can quite often contradict a second EUC that is reporting or analysing a different business problem. This leads to inconsistencies and eventually a lack of confidence in the results, which means that organisations are not able to capitalise on their information and insight.

One example that comes to mind is where an analyst had diligently and very carefully organised a set of EUCs by date over several years. The total number of snapshots saved was around 90 and represented the reported truth for several key metrics that the business used. A request came in that required a trend of the KPIs to support business decisions. The task required the analyst to open each of the spreadsheets in turn, conform the data across all 90 instances and then extract the key metrics. The effort involved was around three weeks’ work. By the time the work had completed, the business opportunity had passed. What isn’t clear is whether this knowledge could have prevented a disaster or enabled the organisation to take advantage of an opportunity.
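
For contrast, here is a minimal sketch of how a trend like that could be assembled programmatically once the snapshots share a conformed layout. The file pattern, sheet name and column names are assumptions for illustration only.

```python
import glob
import pandas as pd

frames = []
for path in sorted(glob.glob("snapshots/kpi_*.xlsx")):  # e.g. kpi_2020-01-31.xlsx
    df = pd.read_excel(path, sheet_name="Summary")      # assumes a common sheet layout
    df["snapshot"] = path                                # keep the provenance of each file
    frames.append(df)

history = pd.concat(frames, ignore_index=True)
trend = history.groupby("snapshot")[["revenue", "margin"]].sum()
print(trend)
```

The point is not the few lines of code; it is that the three weeks were spent recovering structure that was never captured in the first place.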

It is clear from the striking headlines that there are risks in keeping your data and processes in ungoverned EUC processes, as well as the potential for regulatory fines. However, it is just as important for people to be aware of the limitations caused by a large EUC estate in the modern, fast-moving world.

Kinaesis wins R&D development grant

Posted by Simon Trewin on 06 November 2020
Kinaesis wins R&D development grant

In recognition of our expertise in migrating EUC estates and simplifying complex business processes, Kinaesis have been awarded a discretionary R&D grant from Innovate UK. This will be used to develop a new platform, Kinaesis Acutect, which will accelerate the migration of legacy manual processes thus improving the compliance and competitiveness of the financial services sector.

Innovate UK Executive Chair Dr Ian Campbell said:

“In these difficult times we have seen the best of British business innovation. The pandemic is not just a health emergency but one that impacts society and the economy.

“Kinaesis, along with every initiative Innovate UK has supported through this fund, is an important step forward in driving sustainable economic development. Each one is also helping to realise the ambitions of hard-working people.”

Allan Eyears, Founder at Kinaesis says "This is a very exciting opportunity for us to develop a genuinely unique product to complement our existing consulting capabilities."

Simon Trewin, Founder, CEO at Kinaesis says "We feel very proud that Kinaesis has been chosen for this grant. It is a great opportunity for us to leverage our DataOps skills and deliver huge value to our customers."

If you are interested in finding out more then please provide details by following this link

Kinaesis / Mitratech End-User Computing Roundtable

Posted by Simon Trewin on 14 September 2020
Kinaesis / Mitratech End-User Computing Roundtable

For those who missed the round table session with Mitratech, click here to request access to the video and a new Kinaesis case study.

Kinaesis is hiring!

Posted by Allan Eyears on 01 September 2020
Kinaesis is hiring!

We’re hiring several new data and software engineering roles for our team in London.

If you like to develop cloud-based applications, or are an expert in web, (micro)service or database application development, we’d like to talk to you.

To learn more about us and take a look at our vacancies, click here

Kinaesis End-User Computing Reform : Control and migrate your estate

Posted by Allan Eyears on 18 June 2020
Kinaesis End-User Computing Reform : Control and migrate your estate

In a recent poll conducted by our partner ClusterSeven, 54% of respondents indicated that they had no single inventory of their End-User Computing (EUC) artefacts and 74% said that their C-suite had limited understanding of the risks posed by EUCs. There are many well documented cases where spreadsheet error, in particular, has led to significant material loss - JP Morgan’s VaR modelling being a good example.

We know that existing IT infrastructure doesn’t always include the right data or timescales for delivering new functionality, so we leverage our DataOps methodology to blend governance with migration supported by our Kinaesis Clarity cloud hosted analysis and data extraction software. Furthermore, we partner with ClusterSeven and use their cloud hosted tooling to provide deep insights into the most complex EUCs.

To demonstrate our approach we would like to offer you a free assessment of how our cloud hosted tooling can be used on one of your existing Excel spreadsheets. This offer is limited to the first five respondents before 26th June.

Click here to register your interest

How does DataOps make a difference?

Posted by Simon Trewin on 17 June 2019
How does DataOps make a difference?

We often get asked by our clients to differentiate the Kinaesis DataOps methodology from a standard data management methodology. Many organisations are implementing standard methodologies and are not seeing the benefits. To me, the key differentiator between the methodologies is that DataOps is based around practical actions that over time add up to deliver results greater than the sum of the parts. Delivered correctly across the 6 pillars of DataOps, they help you transform the organisation to be data driven. The key to this is the integration of people and process to deliver real business outcomes, avoiding paper exercises.

On my consulting travels I find that many industry data management methodologies lay out the theory around implementing, for example, a Data Dictionary, and this is taken as a mandate to deliver a dictionary as a business outcome. Within DataOps a dictionary is not a business outcome; it is one of the deliverables of the methodology and an accelerator for delivering a business outcome. This is a subtle difference, but one which leads to the effort of the Data Dictionary being part of the business process and not an additional tax on strained budgets. Within the methodology it is produced as an asset within the project, and for the future, to make subsequent projects easier.

Another difference is that in standard data management approaches the methods are quite prescribed and consistent across all use cases. The nature of the DataOps methodology is that it fits the approach to the problem being solved. For example, some data management problems are highly model driven, like credit scoring, customer propensity and capital calculations, while others are more about reporting and analytics. Each of these requires a different focus, a different emphasis and sometimes a different operating model. Through the iterative approach there is freedom within the methodology to achieve this.

Many data management approaches try to encapsulate all of the data within an organisation. This is a noble cause, but it is a large impediment to making progress. Firstly, in many cases we have found that only 15-20% of the legacy data is ever required to meet existing business cases. Secondly, we find that the shape of data is highly dependent on the use case being implemented, and because you do not know all current and future use cases it is not pragmatic to do this. By measuring usage through instrumentation and driving delivery through use cases, data management problems can be simplified into achievable outcomes over short periods, which can form the foundations of the data management strategy for a business area.
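
As a rough illustration of measuring usage through instrumentation, the sketch below counts which legacy tables a set of report queries actually reference, so the minority of data that matters can be identified. The table names, queries and deliberately naive regex parsing are all assumptions for illustration.

```python
import re
from collections import Counter

legacy_tables = {"trades", "positions", "customers", "branch_costs", "old_gl"}

report_queries = [
    "SELECT * FROM trades t JOIN customers c ON t.cust_id = c.id",
    "SELECT sum(amount) FROM positions WHERE book = 'EMEA'",
]

usage = Counter()
for sql in report_queries:
    for table in re.findall(r"(?:FROM|JOIN)\s+(\w+)", sql, flags=re.IGNORECASE):
        if table.lower() in legacy_tables:
            usage[table.lower()] += 1

unused = legacy_tables - set(usage)
print(f"Referenced: {dict(usage)}")
print(f"Never referenced by these use cases: {sorted(unused)}")
```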

There are many other differentiators within the DataOps methodology; however, all of them start with the principle that anything you do needs to be implemented and pervasive. The methodology builds its strength from tying business benefit to the process and building from there. The goal is to deliver value early and often and to leverage the benefits to build momentum to deliver more benefits over time. Once integrated into the operating model, the approach builds and transforms the culture from the ground up. This delivers great benefits to the organisation, whereas many data management methodologies start with great promise and then struggle to gain support once the size and complexity of the task become apparent.

If you would like to find out more about the Differences of DataOps then please do not hesitate to contact me.

DataOps Requirements Process

Posted by Simon Trewin on 24 May 2019
DataOps Requirements Process

In recent meetings with clients we have come across many instances of the need to improve the art of capturing requirements and building the 'Solution Contract'. Typically, these are large data analytics and reporting projects where the data is spread across the organisation and needs to be pulled together and analysed in particular ways: a typical data science and data engineering task in today's data-centric world.

The problem that people are describing is that they are quite often asked to solve problems that the data does not support, or the requirements process does not extract the true definition of what a solution should be.

We are asked “how can you improve the process of requirements negotiation using the DataOps methodology?”

Kinaesis breaks down its DataOps methodology into the 6 IMPACT pillars. These are:
• Instrumentation
• Meta Data
• (Extensible) Platforms
• (Collaborative) Analytics
• Control
• Target

The requirements process is predominantly within the Target Pillar with leverage of the Instrumentation and Meta Data pillars. The Target pillar starts with establishing the correct questions to ask within the requirements process. These questions recognise the need to establish not only output, but people, process and data. You should ask a series of questions to capture this for the immediate requirement, but within the context of an overall Vision.

The second step is then Instrumenting the data and Meta Data. It is important to capture these efficiently and effectively using tools and techniques, but also to run profiling of the data to match it to the model and check feasibility. Through the results of this process you can then work Collaboratively with the sponsor and stakeholders to solve the data and process requirements. Using data prototyping methods to illustrate the solution further assists in communicating the agreed output and the identified wrinkles, which helps to build collaboration through a shared vision.
We find in our projects that following a structured approach to this part of the project yields results: building consensus, establishing gaps and building trust.
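
Here is a minimal sketch of that profiling step, checking whether source data can actually support a requirement before the solution contract is agreed. The column names, sample data and thresholds are assumptions for illustration only.

```python
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Basic feasibility profile: completeness and cardinality per column."""
    return pd.DataFrame({
        "null_pct": df.isna().mean().round(3),
        "distinct": df.nunique(),
        "dtype": df.dtypes.astype(str),
    })

source = pd.DataFrame({
    "trade_id": [1, 2, 3, 4],
    "country_of_risk": ["GB", None, "US", None],  # 50% missing
    "notional": [1e6, 2e6, None, 4e6],
})

report = profile(source)
print(report)

# Flag columns the requirement depends on but the data cannot yet support.
required = ["trade_id", "country_of_risk", "notional"]
gaps = report.loc[required].query("null_pct > 0.2")
print("Feasibility gaps:\n", gaps)
```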

In one client engagement this improved delivery velocity to a level that made the difference between success and failure. In another, we were able to deliver very large and complex problems within incredibly tight timescales.

The key point is that requirements definitions are there to build a shared contract that defines a solution that is achievable; therefore you need to include DataOps analysis in the process to achieve the results that you want.

Why you need DataOps

Posted by Simon Trewin on 16 May 2019
Why you need DataOps

An ex-colleague and I were talking a few days ago and he mentioned that people don't want to buy DataOps. I have given this some thought and can only conclude that I agree, in the same way that I don't want to buy running shoes, yet I do want to be fit and healthy. It is interesting to find so many people so comfortable with their problems that a solution seems less attractive.

The way to look at DataOps methodology and training is that it is an investment in yourself, or your organisation, that enables you to tackle the needs you struggle to make progress on. The needs that might resonate more with you are machine learning and AI, digitisation, legacy migration, optimisation and cost reduction, improving your customer experience, and improving the speed of delivery and compliance of your reporting and analytics.

The DataOps methodology provides you with a recipe book of tools and techniques that if followed enables you to deliver the data driven organisation.

One organisation that works with us found that the DataOps tools and techniques enabled them to deliver an important regulation in record time, rebuild collaboration between stakeholders and form a template for future projects. For more information, please feel free to reach out to me.

What is the extensible platforms pillar within the DataOps methodology?

Posted by Simon Trewin on 14 May 2019
What is the extensible platforms pillar within the DataOps methodology?

What is the extensible platforms pillar within the DataOps methodology? The purpose of the platform within DataOps is to enable the agility within the methodology and to recognise the fact that data science is evolving rapidly. Due to the constant innovation around tools, hardware and solutions, what is cutting edge today could well be out of date tomorrow. What you need to know from your data today may only be the tip of the iceberg once you have productionised the solution and the next requirement could completely change the solution you have proposed. To address this issue, DataOps requires an evolving and extendable platform.

Extensibility of data platforms is delivered in a number of different ways through:
• Infrastructure patterns
• A DataOps Development Approach
• Architecture Patterns
• Data Patterns

Infrastructure
In most large organisations, data centres and infrastructure teams have many competing priorities, and delivery times can be as much as 6-9 months for new hardware. With data projects this can be the difference between running through agile iterations and implementing waterfall, where you collect requirements to size the hardware up front. To manage the risks, project teams either over-order hardware, creating massive redundancy, or, to keep costs down, under-order and then face large project delays. An example of this is big data solutions that require large number-crunching capability to process metrics, stressing the system for a number of hours each day, after which the infrastructure sits idle until the next batch of data arrives. The cost to organisations of redundant hardware is significant. The developing answer to this is the cloud, where servers can be set up with data processes to generate results and then brought down again, reducing the redundancy significantly. Grids and internal clouds offer an on-premises option. To migrate and leverage this flexibility, organisations need to consider their strategy and approach for data migration: lift and shift would duplicate data, so incremental re-engineering makes more sense.

DataOps Development Approach
A DataOps development approach enables the integration of data science with engineering, leading to innovation reaching production quality more rapidly and at lower risk. Results with data projects are best when you can use tools and techniques directly on the data to prototype, profile, cleanse and build analytics on the fly. This agile approach requires you to build a bridge to the data engineers, who can take the data science and turn it into a repeatable, production-quality process. The key to this is a DataOps development approach that builds operating models and patterns to promote analytics into production quickly and efficiently.

Architecture Patterns
One of the challenges in driving innovation and agility in data platforms forward is architecting production-quality data with traceability and reusable components. Too small, and these components become a nightmare to join and use; too large, and too much is hardcoded, hampering reuse. Often data in production will need to be shared with the data scientists. This is difficult because the production processes can break a poorly formed process, and poor documentation can lead to numbers being used out of context. Complexity arises where outputs from processes become inputs to other processes, and sometimes the reverse, creating a tangle of dependencies. The key to solving this is building out architecture patterns that enable reuse of common data in a governed way, but with the ability to enrich the data with business-specific content within the architecture. Quality processes need to be embedded along the data path.
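
One way of picturing such a pattern is as small, reusable steps that each record what they did, so shared data stays traceable as it is enriched. This is a minimal sketch under those assumptions; the step names and lineage format are illustrative, not a prescribed DataOps component.

```python
from typing import Callable, List, Tuple
import pandas as pd

Step = Callable[[pd.DataFrame], pd.DataFrame]

def run_pipeline(df: pd.DataFrame, steps: List[Tuple[str, Step]]):
    """Apply governed, reusable steps in order, keeping a simple lineage log."""
    lineage = []
    for name, step in steps:
        df = step(df)
        lineage.append({"step": name, "rows": len(df)})
    return df, lineage

def drop_cancelled(df):  # shared, governed component
    return df[df["status"] != "CANCELLED"]

def add_region(df):      # business-specific enrichment
    return df.assign(region=df["country"].map({"GB": "EMEA", "US": "AMER"}))

trades = pd.DataFrame({"status": ["LIVE", "CANCELLED"], "country": ["GB", "US"]})
result, lineage = run_pipeline(trades, [("drop_cancelled", drop_cancelled),
                                        ("add_region", add_region)])
print(lineage)
```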

Data Patterns
The final challenge is to organise data within the system in logical patterns that allow it to be extended rapidly for individual use cases, while forming a structure from which to maintain governance and control. Historically, and with modern tools, analytical schemas enable slice and dice on known dimensions, which is great for known workloads. To deliver extensibility, DataOps requires a more flexible data pattern, able to generate one-off analytics or tailor analytics to individual use cases. The data pattern and organisation need to allow for trial and error, but with this there is a need for discipline. Metadata should be kept up to date and in line with the data itself. External or enrichment data needs to be integrated almost instantly and removed again, or promoted into a production-ready state. To do this you need patterns which allow for the federation of data schemas.
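
To give a flavour of that discipline, the toy registry below keeps metadata in step with the data as enrichment sets are added, promoted or removed. The design is an assumption for illustration, not a prescribed DataOps component.

```python
from datetime import date

class DatasetRegistry:
    """Toy registry: every dataset carries metadata and a lifecycle state."""

    def __init__(self):
        self._entries = {}

    def register(self, name, owner, description, state="experimental"):
        self._entries[name] = {"owner": owner, "description": description,
                               "state": state, "updated": date.today()}

    def promote(self, name):
        self._entries[name]["state"] = "production"
        self._entries[name]["updated"] = date.today()

    def remove(self, name):
        self._entries.pop(name, None)

    def catalogue(self):
        return dict(self._entries)

registry = DatasetRegistry()
registry.register("country_risk_weights", owner="Credit Risk",
                  description="External enrichment set for stress testing")
registry.promote("country_risk_weights")
print(registry.catalogue())
```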

The capabilities above combine to enable you to create an extensible platform as part of an overall DataOps approach. Marry this up with the other five pillars of DataOps and each new requirement should become an extension of your data organisation rather than a brand new system or capability.

Get control of your glossaries to set yourself up for success

Posted by Simon Trewin on 11 April 2019
Get control of your glossaries to set yourself up for success

By Simon Trewin

Are you amazed by how quickly business glossaries fill up and become hard to use? I have been involved with large, complex organisations with numerous departments whose teams have tried to document their data and reports without proper guidance. Typically, the results I have witnessed are glossaries 10,000 lines long, with different grains of information being entered, technical terms being uploaded alongside business terms and little consistency of level. What is the right way to implement a model to fill out a glossary and make it useful in this circumstance?

Many organisations have tried to implement a directed approach through the CDO, leveraging budgets for BCBS 239 and other regulatory compliance initiatives to build out their data glossaries. Attempts have been made to create both federated and centralised models for this initiative; however, I have yet to see an organisation succeed in building a resource that truly adds value. Every implementation seems to be a tax on the workforce, who show it little enthusiasm, care or attention.

If you want to avoid falling into a perpetual circle of disappointment and wasted time, here are some tips that I have picked up in my years working with data:

  1. Understand the scope of your terms. It is likely that there will be many representations of Country, for instance Country of Risk, Country of Issue, etc.; understand which one you have. Ask yourself why the term you are entering exists: was it because a regulator referred to it in a report, or is it a core term?

  2. Make terminology add value. Make it useful in the applications that surface data, e.g. context-sensitive help. If someone must keep seeing a bad term when they hover their mouse, they are more likely to fix it.

  3. Link it to technical terms. If a dictionary term does not represent something physical then it becomes a theoretical concept, which is good for providing food for debate for many years, but not very helpful to an organisation.

  4. Communicate using the terms. They should provide clarity of understanding across the organisation, but quite often they establish language barriers instead. Make sure that people can find the terms in the appropriate resource efficiently, so that they can use modern search to enhance their learning.

  5. Build relationships between terms. Language requires context to enable it to be understood. Context is provided through relationships.

  6. Set out your structure and your principles and rules before employing a glossary tool. Setting an organisation loose on glossary tools before setting them up correctly is a recipe for a lot of head scratching and wasted budget.

  7. Start Small and test your model for the glossary before you try to document the whole world.

I am not saying that this is easy, but following the rules above is likely to set you up for success.
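
As a small sketch of a glossary entry that follows the tips above, the structure below gives each term an explicit scope, a link to a physical field and relationships to other terms. The fields and example values are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GlossaryTerm:
    name: str
    scope: str                  # tip 1: why the term exists and where it applies
    definition: str
    physical_fields: List[str] = field(default_factory=list)  # tip 3: link to technical terms
    related_terms: List[str] = field(default_factory=list)    # tip 5: relationships give context

country_of_risk = GlossaryTerm(
    name="Country of Risk",
    scope="Credit risk reporting (regulatory)",
    definition="Country whose conditions most affect the obligor's ability to pay.",
    physical_fields=["risk_mart.exposures.cntry_of_risk"],
    related_terms=["Country of Issue", "Obligor"],
)
```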

Kinaesis mentioned in SD Times

Posted by Emma McMahon on 10 April 2019
Kinaesis mentioned in SD Times

Did you spot us in SD Times’ latest article on DataOps?

We are delighted that our work on DataOps is being picked up, and we hope to continue adding value to our clients using our DataOps methodology, as well as continuing to give back to the DataOps community. You can find the feature here: https://sdtimes.com/data/a-guide-to-dataops-tools/

Why DataOps should come before (and during) your Analytics and AI

Posted by Emma McMahon on 06 March 2019
Why DataOps should come before (and during) your Analytics and AI

We have all seen the flashy ads and promised benefits when it comes to enabling analytics and AI for our businesses. Analytics and associated AI solutions are integral to the future of business, gaining you that competitive edge, and we would never dispute this. Yet before you run out and build your solutions, it’s time for a health check.

Why? Here is the nightmare. Imagine your new fancy dashboards aren’t showing you what’s really happening. Imagine making business decisions on projections that are false. Imagine your AI is automatically driving your business out of control through poor or corrupted information. Or forget all that and imagine the data within your organisation slowly teaching your AI bad habits and corrupting its learning behaviours.

Implementing a DataOps approach correctly before, during and continually after an implementation is the perfect answer to that nightmare. DataOps is the health check on what is feeding into your analytics and AI solutions, ensuring that what they are telling you is not harmful. For example, it can be used to assess your data sources, standardise your data into a universal format, check that the right data is informing the right areas, and understand what data is actually needed for the business to grow and learn.
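
To give a flavour of that kind of health check, the sketch below runs a few simple checks on a training feed before it reaches a model. The rules and thresholds are assumptions and are far simpler than a real DataOps control would be.

```python
import pandas as pd

def health_check(df: pd.DataFrame) -> list:
    """Return a list of issues that should block training if non-empty."""
    issues = []
    if df.isna().mean().max() > 0.1:
        issues.append("more than 10% missing values in at least one column")
    if df.duplicated().any():
        issues.append("duplicate rows present")
    if "label" in df and df["label"].nunique() < 2:
        issues.append("label column contains a single class")
    return issues

feed = pd.DataFrame({"feature": [1.0, 2.0, None, 4.0],
                     "label": [1, 1, 1, 1]})
problems = health_check(feed)
if problems:
    print("Do not train:", problems)
```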

This is why a Kinaesis partnership is invaluable reassurance on your AI and analytics endeavours: we can increase the success rate of projects simply by ensuring that the result truly reflects what the business needs.

Yes, you need analytics and AI within your business. Yet you must first check that your data is healthy, correct and as detailed as possible, consistently throughout the AI process. This will enable you as a business to plan, project and optimise to the highest degree.

Effective DataOps is the way to make sure the fears of ineffective and misinformed data don’t seep into reality.

DataOps Pillar: (Collaborative) Analytics

Posted by Benjamin Peterson on 27 February 2019
DataOps Pillar: (Collaborative) Analytics

Data is very valuable - and yet, it's often hard to find someone to step up and own it. We live in a moment in which the data analyst, the one who presents conclusions, is pre-eminent. It’s the upstream roles that own, steward, cleanse, define and provide data that are currently less glamorous. In some ways this is a pity, because conclusions are only ever as reliable as the data that went into them.

DataOps addresses this challenge through the practice of “Collaborative Analytics” - analytics whose conclusions come from a collaboration between the analytics function and the other roles on which analytics depends. Collaborative Analytics (like everything else in the world) is about people, process and tools:

» People include the data owners, metadata providers, DataOps professionals and all the other roles whose actions affect the outputs of analytics. You also have to add to this the actual analysts and model owners themselves.

» Process includes an operating model that encourages collaboration between those roles and ensures that staff at different points in the analytics pipeline have the same understanding of terms, timestamps and quality.

» Tooling, in this case, is the easy part - any modern analytics tooling can provide sharing, annotation and metadata features that can make Collaborative Analytics a reality.

A fully DataOps-enabled pipeline would accompany analytics conclusions with metadata showing the people and processes behind those conclusions - all the way upstream to data origination.
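
A simple sketch of what accompanying a conclusion with that metadata could look like: the result object carries the people and processing steps behind it, all the way upstream. The roles and field names are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProvenanceStep:
    role: str    # e.g. "data owner", "steward", "analyst"
    actor: str
    action: str

@dataclass
class AnalyticsResult:
    conclusion: str
    provenance: List[ProvenanceStep] = field(default_factory=list)

result = AnalyticsResult(
    conclusion="EMEA churn rose 4% quarter on quarter",
    provenance=[
        ProvenanceStep("data owner", "CRM platform team", "published customer extract"),
        ProvenanceStep("steward", "Data Office", "applied churn definition v3"),
        ProvenanceStep("analyst", "Insight team", "ran quarterly churn model"),
    ],
)
print(result.conclusion, "| based on", len(result.provenance), "upstream steps")
```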

That's a long way in the future. But what most institutions can do right now is ensure that data providers and data interpreters speak the same language.

Launch: DataOps Courses

Posted by Emma McMahon on 14 February 2019
Launch: DataOps Courses

Want to enhance your own DataOps knowledge? Want to learn how to use it to drive change across all departments? We mix our knowledge and consultancy experience to empower you. DataOps is a relatively new concept but learning it and the potential within it will give you a competitive edge.

We are launching our DataOps Courses, tailored for businesses looking to increase the real knowledge available to decision makers, deliver the user experience needed to drive value and to address sources of latency, risk and complexity in their data and analytics pipeline.

Our DataOps training will help you in the following areas:
Regulatory Compliance: Address Regulatory compliance for BCBS 239, IFRS 9, CCAR, SFTR, GDPR, Dodd Frank MiFID II, Basel III / CRD, Solvency II, AIFMD.
Optimised IT Estate: Migrate EUDAs, adopt a data-centric SDLC, automate manual solutions, orchestrate your data pipeline to allow you to decommission legacy systems, provide a pragmatic and achievable path for Cloud/Hybrid migration.
Reporting and Analytics Delivery: Solve data issues blocking the building of Reporting, Analytics, AI and Machine Learning solutions. Solve change bottlenecks through enablement of federated delivery and iterative adoption.
Enterprise Wide Data Aggregation: Build enterprise views of core data such as Single Customer View without the dependency on building slow moving monolithic solutions. Maximise the value of your existing estate and enable clear path to simplification and consistency.
Data Governance and Control: Build pervasive data management and governance capabilities as opposed to ‘one-off’ fixes, through embedded, efficient and sustainable capability. Govern and control your data lakes whilst maintaining project agility, combine the governance and lineage into the project process and architecture of the solution.
Data Culture: Help employees understand how they can continually harness data to drive better decision-making and uncover untapped value.

2 hour workshop: walks you through the DataOps Methodology at a high level. Key takeaways: understand the six pillars of DataOps as a set of tools to measure your organisation’s maturity and plan for the future.

2-5 day course: complete with interactive exercises and case studies, the course is a definitive overview of all you need to know about DataOps. You can learn the trade secrets, pitfalls and most importantly how DataOps can benefit your progression and your organisation as a whole. This runs as either an introductory (2 day) or advanced (5 day) course depending on your level of maturity.

Provided by expert trainers with more than 60 years’ combined experience in delivering data initiatives using our DataOps methodology. Not sure what DataOps is? Watch our video to understand why DataOps is growing in popularity:

https://www.youtube.com/watch?v=HboW3BmdbSQ&t=8s

If you are interested in talking more about how this can work for you, let me know and I will arrange a chat! If you would like to see more content and tasters, you can sign up for DataOps Course updates here.

For more information please click here.

Announcement: First Kinaesis DataOps Course

Posted by Emma McMahon on 04 February 2019
Announcement: First Kinaesis DataOps Course

Kinaesis have delivered our first DataOps Course for RiskCare, a financial services consultancy. Following on from our creation of the DataOps Thinktank, this new training course represents our latest contribution to the DataOps movement.

DataOps is a new comprehensive methodology for managing data pipelines and ensuring compliance, data quality and quick time to market for analytics.

The Kinaesis DataOps Course is an engaging, pragmatic toolkit, breaking down DataOps tooling and processes into pillars that each solve a key data delivery challenge. Our course materials help enterprises to use DataOps to gain maximum value from the information they hold, taking full advantage of modern analytics while satisfying regulators, creating a culture of collaboration, improving control, helping delivery and reducing risk.

Sign up for DataOps Course updates here. Subscribe to be the first to receive updates on when the course is available, plus sneak previews of what is included!

We were delighted to receive great and helpful feedback, and we would like to thank the team at RiskCare for their expert comments and support. We look forward to taking this forward ahead of launching the course.

DataOps Pillar: Instrument

Posted by Simon Trewin on 03 January 2019
DataOps Pillar: Instrument

By Simon Trewin

What is Instrumentation all about? It is easiest to define through a question.

'Have you ever been on a project that has spent 3 months trying to implement a solution that is not possible due to the quality, availability, timeliness of data, or the capacity of your infrastructure?'

It is interesting, because when you start a project you don't know what you don't know. You are full of enthusiasm about this great vision but you can easily overlook the obvious barriers.

An example of a failure at a bank comes to mind. Thankfully this was not my project, but it serves as a reminder when I follow the DataOps methodology that there is good reason for the discipline it brings.

In this instance, the vision for a new risk system was proposed. The goal: to reduce processing time so that front office risk was available at 7am. This would enable traders to know their positions and risk without having to rely on spreadsheets, bringing massive competitive advantage. Teams started looking at processes, technology and infrastructure to handle the calculations and the flow of data. A new frontend application was designed, new calculation engines and protocols were established, and the project team held conversations with data sources. But everything was not as it seemed. After speaking to one of the sources, it was clear that the data required to deliver accurate results would not be available until 10am, which rendered the solution worthless.

The cost to the project included the disruption, budget and opportunity lost, not only by the project team but by the stakeholders.

Instrumenting your pipeline is about:
• Establishing data dependencies up front in the project.
• Understanding the parameters of the challenge before choosing a tool to help you.
• Defining and discussing the constraints around a solution before UAT.
• Avoiding costly dead ends that take years to unwind.
Instrumenting your data pipeline consists of collecting information about the pipeline either to support implementation of a change, or to manage the operations of the pipeline and process.
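
As an illustration, the sketch below wraps a pipeline step so that volume, a simple quality measure and processing time are captured on each run. The metrics chosen are assumptions, aligned loosely with the headings that follow.

```python
import time
import pandas as pd

def instrumented(step, name, metrics):
    """Wrap a pipeline step to record rows in/out, null rate and duration."""
    def wrapper(df: pd.DataFrame) -> pd.DataFrame:
        start = time.perf_counter()
        out = step(df)
        metrics.append({
            "step": name,
            "rows_in": len(df),
            "rows_out": len(out),
            "null_pct_out": float(out.isna().mean().mean()),
            "seconds": round(time.perf_counter() - start, 4),
        })
        return out
    return wrapper

metrics = []
clean = instrumented(lambda df: df.dropna(), "drop_nulls", metrics)
df = pd.DataFrame({"price": [10.0, None, 12.5], "qty": [1, 2, None]})
clean(df)
print(metrics)
```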

Traceability
Collecting information on how data gets from source into analytics is key to understanding the different dependencies that exist between data sources. Being able to write this down in a single plan empowers you to pre-empt potential bottlenecks and hold-ups, and to establish the critical path.

Quality
Data quality is key to the pipeline. Can you trust the information that is being sent through? In general, the answer to this question is to code defensively; however, the importance of accuracy in your end system will determine how defensive you need to be. Understanding the implications of incorrect data coming through is important. During operation it provides reassurance that the data flowing can be trusted. During implementation it can determine if the solution is viable and should commence.

Variety
The types and varieties of data have implications for your pipeline. Underneath these types there are different ways of transporting the data, and within each of these transports there are different formats that require different processing. Understanding this complexity is important because it has a large impact on the processing requirements. We have found on some projects that certain translations of data from one format to another cost nearly 60% of the processing power; fixing this enabled the project to be viable and to deliver.

Volume
Understanding the Volume along the data pipeline is key to understanding how you can optimise it. The Kinaesis DataOps methodology enables you to document this and then use it to establish a workable design. A good target operating model and platform enables you to manage this proactively to avoid production support issues, maintenance and rework.

Velocity or throughput
Coupled with the volume of data is the velocity. The interesting thing about velocity is that, when multiplied by volume, it generates pressure; understanding these pressure points enables you to navigate to a successful outcome. When implementing a project, you need to know the answer to this question at the beginning of the analysis phase to establish your infrastructure requirements. For daily operational use it is important to capacity manage the system and predict future demand.

Value
The final instrument is value. All implementations of data pipelines require some cost-benefit analysis. In many instances throughout my career I have had to persuade stakeholders of the value of delivering a use case. It is key to rank the value of your data items against the cost of implementation. Sometimes, when people understand the eventual cost, they will lower the need for the requirement in the first place. This is essential in moving to a collaborative analytics process, which is key to your delivery effectiveness.

Conclusion
Instrumentation is as important in a well-governed data pipeline as it is in any other production line or engineering process. Compare it to a factory producing baked beans: the production line has many points of quality measurement to make sure that the product being shipped is trusted and reliable, otherwise the beans will not reach the customer in a satisfactory state. Learning to instrument your data pipelines and projects reduces risk, improves efficiency and gives you the capacity to deliver trusted solutions.

Sign up for updates for our DataOps Course here.

DataKitchen Partnership

Posted by Emma McMahon on 03 December 2018
DataKitchen Partnership

DataKitchen and Kinaesis have formed an alliance to strengthen the DataOps movement in the UK and Europe.

We will especially be working together to provide content, advice and expert opinions within the ‘DataOps Thinktank' that Kinaesis has founded for Data Enthusiasts.

Kinaesis are also now a recognised consultancy supplier of the DataKitchen products, and we are a preferred partner to accompany implementations of their platform for European-based financial services businesses.

We are excited to start this new journey with DataKitchen and can’t wait to get started.

Any further enquiries please direct to info@kinaesis.com

News: Solidatus Partnership

Posted by Emma McMahon on 07 November 2018
News: Solidatus Partnership

We are delighted to announce that we are welcoming the award-winning data lineage solution, Solidatus, as our newest partner. Solidatus is a modern, specialised and powerful data lineage tool and we are looking forward to utilising their solution in several upcoming projects and propositions.

This marks what we hope to be the beginning of a great partnership as we look to further our work on intelligent data integration and instrumentation in the next year.

Simon Trewin, Director of Kinaesis, welcomed the new partnership by saying, “We are very pleased to have Solidatus join us as a partner. Their data lineage solution combined with Kinaesis services will help our clients better understand where and how data is being used in their organisations with a focus on improving quality and usage.”

Howard Travers, Chief Commercial Officer for Solidatus added “We’re delighted to be joining the Kinaesis partnership and look forward to working with them on some exciting projects. Solidatus was developed due to the genuine need for a sophisticated, scalable data lineage tool to help companies meet their regulatory & transformational change goals. We’re thrilled to be working with Kinaesis and believe this partnership adds another important piece of the puzzle to our overall proposition.”

We would like to take this opportunity to officially welcome them as our Partner this month.

Any further queries should be directed to info@kinaesis.com

DataOps Pillar: Metadata

Posted by Benjamin Peterson on 22 October 2018
DataOps Pillar: Metadata

Not long ago, 'metadata' was a fairly rare word, representing something exotic and a bit geeky that generally wasn't considered essential to business.

Times have changed. Regulation has forced business to build up metadata. Vendors are emphasising the metadata management capabilities of their systems. The word 'metadata' almost sums up the post-BCBS 239 era of data management - the era in which enterprises are expected to be able to show their working, rather than just present numbers.

Customers are frequently asking for better metadata, and more of it - looking to reduce costs and risk, please auditors and satisfy regulators.

The trouble with labels, though, is that they tend to hide the truth. 'Metadata' itself is a label and the more we discuss 'metadata' and how we'd like to have more of it, the more we start to wonder if 'metadata' actually means the same thing to everyone. In this article, I'd like to propose a strawman breakdown of what metadata actually consists of. That way, we'll have a concise, domain-appropriate definition to share when we refer to "global metadata" - good practice, to say the least!

So, when we gather and manage metadata, what do we gather and manage?

Terms: what data means

DataOps Metadata Terms

To become ‘information’ rather than just ‘data’, a number must be associated with some business meaning. Unfortunately, experience shows that simple words like 'arrears' or 'loan amount' do not, in fact, have a generally agreed business meaning, even within one enterprise. This is why we have glossary systems: to keep track of business terms and to relate them to physical data. Managing terms and showing how physical data relates to business terms is an important aspect of metadata. Much has been invested and achieved in this area over the last few years. Nevertheless, compiling glossaries that really represent the business and that can practically be applied to physical data remains a complex and challenging affair.

Lineage: where data comes from

DataOps Metadata Lineage

Lineage (not to be confused with provenance) is a description of how data is transformed, enriched and changed as it flows through the pipeline. It generally takes the form of a dependency graph. When I say 'the risk numbers submitted to the Fed flow through the following systems,' that's lineage. If it's fine-grained and correct, lineage is an incredibly valuable kind of metadata; it's also required, explicitly or implicitly, by many regulations.
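
As a rough illustration, lineage of this kind can be held as a dependency graph and walked to answer questions such as 'which systems feed this report?'. The system names below are hypothetical.

# A minimal sketch of lineage as a dependency graph: for one output, walk the
# graph to list every upstream system it depends on. System names are illustrative.
lineage = {
    "fed_risk_report": ["risk_engine"],
    "risk_engine": ["trade_store", "market_data"],
    "trade_store": ["booking_system"],
    "market_data": [],
    "booking_system": [],
}

def upstream_of(node, graph):
    seen = set()
    stack = list(graph.get(node, []))
    while stack:
        current = stack.pop()
        if current not in seen:
            seen.add(current)
            stack.extend(graph.get(current, []))
    return seen

print(sorted(upstream_of("fed_risk_report", lineage)))
# ['booking_system', 'market_data', 'risk_engine', 'trade_store']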

Provenance: what data is made of

DataOps Metadata Provenance

Provenance (not to be confused with lineage) is a description of where a particular set of data exiting the pipeline has come from: the filenames, software versions, manual adjustments and quality processes that are relevant to that particular physical batch of data. When I say 'the risk numbers submitted to the Fed in Q2 came from the following risk batches and reference data files,' that's provenance. Provenance is flat-out essential in many highly regulated areas, including stress testing, credit scoring models and many others.
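
One lightweight way to capture this, sketched below on the assumption that the pipeline writes such records itself, is a provenance record per physical batch listing the input files, software versions and manual adjustments; all of the values shown are illustrative.

# A minimal sketch of a provenance record attached to one physical batch of data.
# Field values are illustrative; real records would be written by the pipeline itself.
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class ProvenanceRecord:
    output_name: str
    run_date: date
    input_files: list = field(default_factory=list)
    software_versions: dict = field(default_factory=dict)
    manual_adjustments: list = field(default_factory=list)

record = ProvenanceRecord(
    output_name="fed_q2_risk_submission",
    run_date=date(2018, 6, 30),
    input_files=["risk_batch_20180630.csv", "counterparty_ref_v12.csv"],
    software_versions={"risk_engine": "4.2.1", "aggregator": "1.9"},
    manual_adjustments=["FX override applied to book 77 (ticket ADJ-1042)"],
)
print(asdict(record))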

Quality metrics: what data is like

DataOps Metadata Quality

Everyone has a data quality process. Not everyone can take the outputs and apply them to actual data delivery so that quality measures and profiling information are delivered alongside the data itself. It’s great that clued-in businesses are starting to ask for this kind of metadata more often. The other good news is that advances in DataOps approaches and in tooling are making it easier and easier to deliver.
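
As a minimal sketch of what 'delivered alongside the data' can mean, the example below computes a few simple profiling measures and ships them in the same package as the rows themselves; the profiled columns are an assumption for illustration.

# A minimal sketch of delivering quality measures alongside the data itself.
def profile(rows, columns=("notional", "currency")):
    metrics = {}
    for col in columns:
        values = [r.get(col) for r in rows]
        present = [v for v in values if v not in (None, "")]
        metrics[col] = {
            "row_count": len(values),
            "completeness": round(len(present) / len(values), 3) if values else None,
            "distinct_values": len(set(present)),
        }
    return metrics

rows = [{"notional": 100, "currency": "GBP"}, {"notional": None, "currency": "GBP"}]
delivery = {"data": rows, "quality_metrics": profile(rows)}   # shipped together
print(delivery["quality_metrics"])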

Usage metadata: how data may be used

DataOps Metadata Usage

'Usage metadata' is not a very commonly used term. Yet it's a very important type of metadata, in terms of the money and risk that could be saved by applying it pervasively and getting it right. Usage metadata describes how data should be used. One example is the identification of golden sources and golden redistributors; that metadata tells us which data should be re-used as a mart and which data should not be depended upon. But another extremely important type of metadata to maintain is sizing and capacity information, without which new use cases may require painful trial and error before reaching production.

There are other kinds of metadata as well; one organisation might have complex ontology information that goes beyond what's normally meant by 'terms' and another may describe file permissions and timestamps as 'metadata'. In the list above, I've tried to outline the types of metadata that should be considered as part of any discussion of how to improve an enterprise data estate... and I've also tried to sneak in a quick explanation of how 'lineage' is different from 'provenance'. Of all life's pleasures, well defined terms are perhaps the greatest.

Sign up for updates for our DataOps Course here.

Mango Solutions Partnership

Posted by Emma McMahon on 11 September 2018
Mango Solutions Partnership

We are delighted to announce that we have just signed a partnership agreement with Mango Solutions. This heralds some very exciting developments for us and we look forward to utilising their Data Science expertise in some upcoming projects so keep your eyes peeled for further developments!

Simon Trewin, Director of Kinaesis, welcomed in the new partnership by saying, “We are very pleased to welcome Mango Solutions as a partner. We are looking forward to working with them to enhance Kinaesis solutions delivery capability.”

A representative from Mango added that: “We are delighted to formalise the partnership with Kinaesis. As Financial Services companies continue to embrace a data-driven approach, working with Kinaesis, consultants with deep domain knowledge and trusted relationships, is a logical and complementary next step for Mango Solutions.”

We would like to take this opportunity to officially welcome them as our Partner.

Any further queries should be directed to info@kinaesis.com

Power BI Partnership Announcement

Posted by Emma McMahon on 06 August 2018
Power BI Partnership Announcement

We are delighted to announce that Kinaesis has been recognised by Microsoft as a Power BI Partner! You can now see our profile on the Power BI directory here. We have provided a Power BI cloud-based solution (Kinaesis® Clarity KYR) which has been servicing our clients in the Investment Management sector for over two years. It’s fantastic to see our hard work with Power BI technology being acknowledged by Microsoft.

We would like to thank everyone who made this recognition possible and especially our clients whose kind recommendations were instrumental in getting us recognised.

DataOps Pillar: Control

Posted by Benjamin Peterson on 19 July 2018
DataOps Pillar: Control

Control, in the sense of governance, repeatability and transparency, is currently much stronger in software development than in data delivery. Software delivery has recently been through the DevOps revolution - but even before DevOps became a buzzword, the best teams had already adopted strong version control, continuous integration, release management and other powerful techniques. During the last two decades, software delivery has moved from a cottage industry driven by individuals to a relatively well-automated process supported by standard tooling.

Data delivery brings many additional challenges. Software delivery, for instance, is done once and then shared between many users of the software; data must be delivered differently for each unique organisation. Data delivery involves handling a much greater bulk of data and co-ordinating parts that are rarely under the control of a single team; it’s a superset of software delivery, too, as it involves tracking the delivery of the software components that handle the data and relating their history to the data itself!

Given these challenges, it’s unsurprising that the tools and methodologies which have been in place for years in the software world are still relatively rare and underdeveloped in the data world.

Take version control, for example. Since time immemorial, version control has been ubiquitous in software development. Good version control permits tighter governance, fewer quality issues, greater transparency and thus greater collaboration and re-use.

You expect to be able to re-create the state of your business logic as it was at any point in the past.

You expect to be able to list and attribute all the changes that were made to it since then.

You expect to be able to branch and merge, leaving teams free to change their own branches until the powers that be unify the logic into a release.

On the data side, that's still the exception rather than the rule - but with the growing profile of DataOps, the demand is now there and increasingly the tooling is too. The will to change and reform the way data is delivered also seems to be there - perhaps driven by auditors and regulators who are increasingly interested in what's possible, rather than in what vendors have traditionally got away with in the past. We stand on the brink of great changes in the way data is delivered and it's going to get a fair bit more technical.

What isn't quite visible yet is a well-defined methodology, so as we start to incorporate proper governance and collaboration into our data pipeline, we face a choice of approaches. Here are a few of the considerations around version control, which of course is only a part of the Control pillar:
» Some vendors are already strong in this area and we have the option of leveraging their offerings - for example, Informatica Powercenter has had version control for some time and many Powercenter deployments are already run in a DevOps-like way.
» Some vendors offer a choice between visual and non-visual approaches - for example with some vendors you can stick to non-visual development and use most of the same techniques you might use in DevOps. If you want to take advantage of visual design features, however, you'll need to solve the problem of version and release control yourself.
» Some enterprises govern each software system in their pipeline separately, using whatever tools are provided by vendors and don't attempt a unified paper trail across the entire pipeline.
» Some enterprises that have a diverse vendor environment take a 'snapshot' approach to version and release control - freezing virtual environments in time to provide a snapshot of the pipeline that can be brought back to life to reproduce key regulatory outputs (see the sketch after this list). This helps ensure compliance, but does little to streamline development and delivery.
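
As a minimal sketch of what such a snapshot might record, assuming nothing about any particular vendor's tooling, the example below hashes the input datasets and notes the component versions so that a regulatory output can later be tied back to exactly what produced it; the file names and versions are illustrative.

# A minimal sketch of a 'snapshot' manifest: content hashes of the input datasets
# plus the versions of each pipeline component. Names and versions are illustrative.
import hashlib
from datetime import datetime, timezone

def file_digest(path):
    """Content hash of one dataset file, so the exact inputs can be verified later."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot_manifest(data_files, component_versions):
    return {
        "taken_at": datetime.now(timezone.utc).isoformat(),
        "datasets": {path: file_digest(path) for path in data_files},
        "components": component_versions,
    }

# Illustrative usage (file names and versions are invented):
# manifest = snapshot_manifest(
#     ["trades_20180630.csv", "ref_data_20180630.csv"],
#     {"etl_jobs": "git:3f2a9c1", "risk_engine": "4.2.1"},
# )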

It's no small matter to pick an approach when there are multiple vendors, each with varying levels of support for varying forms of governance, in your data estate. Yet the implications of the approach you choose are profound.

DevOps has helped to define a target and to demonstrate the benefits of good tooling and governance. To achieve that target, those who own or manage data pipelines need to consider widely different functions, from ETL to ML, with widely different vendor tooling. Navigating that complex set of functions and forming a policy that takes advantage of your investment in vendors while still covering your pipeline will require skill and experience.

Knowledge of DataOps and related methodologies, knowledge of data storage, distribution, profiling, quality and analytics functions, knowledge of regulatory and business needs and, above all, knowledge of the data itself, will be critical in making DataOps deliver.

Sign up for updates for our DataOps Course here.

DataOps Pillar: Target

Posted by Benjamin Peterson on 03 July 2018
DataOps Pillar: Target

By Benjamin Peterson

DataOps comes from a background of Agile, Lean and above all DevOps - so it's no surprise that it embodies a strong focus on governance, automation and collaborative delivery. The formulation of the Kinaesis® DataOps pillars isn't too different from others, although our interpretation reflects a background in financial sector data, rather than just software. However, I believe there's an extra pillar in DataOps that’s missing from the usual set.

Most versions of DataOps focus primarily on the idea of a supply chain, a data pipeline that leads to delivery via functions such as quality, analytics and lifecycle management. That's a good thing. However, supply chains exist for a purpose - to support a target state, a business vision. The creation and management of that vision, and the connecting of the target to actual data, are just as important as the delivery of data itself.

The importance of setting a Target

DataOps work shouldn’t be driven entirely by the current state and the available data; it should support a well-defined target. That target consists of a business vision describing the experience the business needs to have.

On the software development side, it's been accepted for a long time that User Experience (UX) is an important and somewhat separate branch of software delivery. Even before the 'digital channels' trend, whole companies focused on designing and building a user friendly experience.

Delivering an experience is different from working to a spec because a close interaction with actual users is required. Whole new approaches to testing and ways of measuring success are needed. UX development includes important methodologies such as iterative refinement - a flavour of Agile which involves delivering a whole solution at a certain level of detail and then drilling down to refine certain aspects of the experience, as necessary. Over time, UX has become a mature, recognised branch of software development.

Delivery of data has much to learn from UX approaches. If users are expected to make decisions based on the data - via dashboards, analytics, ML or free-form discovery - then essentially you are providing a user experience, a visual workflow in which users will interact with the data to explore and to achieve a result. That's far closer to UX development than to a traditional functional requirements document.

Design and DataOps: a match made in heaven

To achieve results that are truly transformative for the business, those UX principles can be applied to data delivery. 'User journeys' can provide a way to record and express the actual workflow of users across time as they exploit data. Rapid prototyping can be used to evaluate and refine dashboard ideas. Requirements can, and should, be driven from the user's desktop experience, not allowed to flow down from IT. All these artefacts are developed in a way that not only contributes to the target, but allows pragmatic assessment of the required effort.

Most of all, work should start with a vision expressing how the business should ideally be using information. That vision can be elicited through a design exercise, whose aim is not to specify data flows and metadata (that comes later) but to show how the information in question should be used, how it should be adding value if legacy features and old habits were not in the way. Some would even say this vision does not have to be, strictly speaking, feasible; I'm not sure I'd go that far, but certainly the aim is to show the art of the possible, an aspirational target state against which subsequent changes to data delivery can be measured. Without that vision, DataOps only really optimises what’s already there - it can improve quality but it can't turn data into better information and deeper knowledge.

Sometimes, the remit of DataOps is just to improve quality, to reduce defects or to satisfy auditors and this in itself is often an interesting and substantial challenge. But when the aim is to transform and empower business, to improve decisions, to discover opportunity, we need our Target pillar: a design process, along with an early investment in developing a vision. That way, our data delivery can truly become an agent of transformation.

Sign up for updates for our DataOps Course here.

What to expect from us in 2018.

Posted by Emma McMahon on 16 January 2018
What to expect from us in 2018.

We are anticipating a busy but exciting 2018 at Kinaesis! The industry’s priorities over the next year are mainly driven by regulatory change. Here are some quick highlights of what is filling up our calendar in the next 12 months to meet these challenges:

GDPR:

Kinaesis are working with our clients to define the scope of work needed to achieve GDPR compliance. It’s not just a race to meet the deadline of 25th May 2018 but about implementing maintainable solutions that will ensure continued compliance.

For an overview of our services and accelerators, click here or if you are interested in something more specialised, please email us here.

MiFID II:

Implementation of MiFID II continues through 2018 and our specialist tools for target market identification/design/validation provide solutions around product governance. In addition, we provide wider MiFID II consultancy for selected clients.

DataOps:

We are seeing a sharp rise in demand for our DataOps expertise and services. We have several new partners who enhance our DataOps capabilities. We look forward to introducing you to the work we are performing with our partners SAS, MicroStrategy and NORMAN & SONS. We are also excited to be working with Snowflake on a new project with their cutting edge, enterprise data warehouse, built for the Cloud.

FRTB:

As we move towards the first reporting date, in 2019, under the new standards developed as part of the Fundamental Review of the Trading Book (FRTB), the focus is on the implementation of a sustainable operational process to ensure effective ongoing compliance. The new standards demand daily monitoring, controls and reporting to ensure capital requirements are met on a continuous basis. This presents a complex data, reporting and operating model challenge for banks.

In 2018, Kinaesis will help clients to build high performing, sustainable solutions for FRTB. We will be using our accelerators, expert resources and agile delivery to help banks to address the risk modelling, data modelling, architecture and process challenges.

BCBS 239:

The implications of BCBS 239 for DSIBs and GSIBs remain wide-reaching and demanding for both organisations who have gained compliance and those who are aiming to achieve it. Existing BCBS solutions, in many cases, cause greater operational overheads and onerous change management processes. Expected gains and benefits in analytic and reporting capability also fail to materialise.

Kinaesis in 2018 will continue to help both DSIBs and GSIBs with their respective levels of BCBS 239 compliance. We know how to address the recurring challenge of gaining benefit from work already undertaken to meet the regulations and are changing the way our clients deal with Risk. Read more about BCBS 239+ here.

If you’re reading this and the above sounds horribly familiar, we would love to help you with your challenges - talk to us here.

NORMAN & SONS and Kinaesis Partnership Announcement.

Posted by Emma McMahon on 22 November 2017
NORMAN & SONS and Kinaesis Partnership Announcement.

We are delighted to announce that we have just signed a partnership agreement with digital design firm NORMAN & SONS. NORMAN & SONS is a forward-thinking business with revolutionary design concepts in the capital markets space.

In this partnership, we will be working together on advances in Risk management. We will be focussing on the development of new solutions, utilising the Kinaesis DataOps data management approach and NORMAN & SONS DesignOps human-centred approach.

Simon Trewin, Director of Kinaesis, welcomed in the new partnership by saying, “We’ve always prided ourselves on delivering what a business truly wants and NORMAN & SONS are experts in eliciting a vision. Connecting that vision to physical delivery is the perfect match for our philosophy. Together, we can provide fast innovation to move our clients’ business thinking forward.”

Graeme Harker, Managing Partner of NORMAN & SONS, added, "our clients know that delivering innovative solutions that really make a difference to the business is about combining great product design with great software and data architecture. Together we cover all of the bases.”

We would like to take this opportunity to officially welcome them as our Partner.

Any further queries should be directed to info@kinaesis.com or norman@normanandsons.com.

Don't be schooled. Learn your facts on how GDPR is actually affecting Credit Checks.

Posted by Benjamin Peterson on 08 November 2017
Don't be schooled. Learn your facts on how GDPR is actually affecting Credit Checks.

GDPR will force changes onto pre-loan credit check processes. Benjamin Peterson, our Head of Data Privacy, takes you through what to expect and how to solve the problems this will create.
 
Some banking processes are more GDPR-sensitive than others. Pre-loan credit checks that depend on modelling and analytics are very significant in GDPR terms: as well as consuming large amounts of personal data, they involve profiling and automated decision-making - two areas on which GDPR specifically focuses. Despite their importance, many have assumed that these processes won’t be hugely impacted by GDPR. After all, credit checking is so fundamental to what a bank does - surely it’ll turn out that credit checks are a justified use of whatever personal data we happen to need?
 
Recent guidance from the Article 29 Working Party – the committee that spends time clarifying GDPR, section by section – has demolished that hope, imposing more discipline than expected. October’s guidance on profiling and automated decision-making does three things: adjusts some definitions, clarifies some principles and discusses some key edge cases. It’s surprising how tweaking a few terms can make credit checking and modelling seem far more difficult, in privacy terms.
 
Yet, in many ways, the new guidance throws banks a lifeline. First, though, let’s map out the problematic tweaks at a high level:
 
- Credit checking is not deemed ‘necessary in order to enter a contract’. Lenders had hoped that credit checks might be considered as such and thus justified in GDPR terms.
- Automated decision-making is prohibited by default. Lenders had hoped automated decision-making would not attract significant extra restrictions.
- Credit refusal can be deemed ‘of similar significance’ to a ‘legal effect’. Lenders had hoped credit decisions would not be given the same status as legal effects – due to the restrictions and customer rights that accompany them.
 
So, these are small tweaks that could prove hard work for data and risk owners. Banks will have to make sure that their credit checking and modelling processes stick to GDPR principles. Principles such as data minimisation, and the various rights to challenge, correct and be informed, will prove tricky to follow when other regulators need to audit historical models!
 
But we can protect ourselves. One thing we can do is avoid full automation: fully automated decision-making carries stringent constraints, but adding a manual review sidesteps them. We also need to stick close to the general GDPR principles. Data minimisation, for example, can mean controlling data lifecycle and scope through clever desensitisation and anonymisation that still satisfies audit and model management requirements. This will keep you on the right side of GDPR.
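
To make the desensitisation idea a little more concrete, here is a minimal sketch of keyed pseudonymisation applied to direct identifiers before records reach a modelling or audit store; the key handling, field names and record layout are illustrative assumptions rather than a compliance recipe.

# A minimal sketch of desensitisation: replace direct identifiers with a keyed
# pseudonym before records enter the modelling or audit store. The key, field
# names and record layout are illustrative assumptions.
import hmac, hashlib

SECRET_KEY = b"rotate-and-store-this-in-a-vault"   # illustrative only

def pseudonymise(record, identifying_fields=("customer_id", "account_number")):
    out = dict(record)
    for name in identifying_fields:
        if name in out:
            digest = hmac.new(SECRET_KEY, str(out[name]).encode(), hashlib.sha256)
            out[name] = digest.hexdigest()[:16]
    return out

print(pseudonymise({"customer_id": "C-1001", "arrears_months": 2, "loan_amount": 25000}))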
 
Additionally, the recent guidance contains a very interesting set of clarifications around processing justifications. The best kind is the subject’s consent. Establishing justification through necessity or unreasonable cost is complex and subjective; the subject’s consent is an unassailable justification. The recent guidance reinforces the power of the subject’s consent and tells banks how to make that consent more powerful still – by keeping subjects informed. The flip side is, of course, that the consent of an uninformed subject is not really consent at all and could lead to serious breaches.
 
So, well informed customers are an essential part of our solution for running credit checks and building models in the post-GDPR world. Fortunately, the Article 29 Working Party released detailed and sensible guidance on just how to keep them informed – here’s a high level summary:
 
· The bank should find a 'simple way' to tell the subject about each process in which their personal data will be involved.
· For each piece of personal information used, the subject should be told the characteristics, source and relevance of that information. Good metadata and lineage would make this task very easy.
· The bank need not provide exhaustive technical detail – it’s about creating a realistic understanding of the subject, not about exposing every detail of the bank’s logic.
· The guidance suggests using visualisations, standard icons and step by step views to create an easily understood summary of data usage and processes affecting the subject.
 
So, if you want your banking business to experience minimum impact from GDPR, one message is clear – you need to provide transparency to your customers, as well as your internal officers and auditors. Just as you provide various perspectives on your data flows to your various stakeholders, you’ll benefit from providing a simplified perspective to your customers. The metadata, lineage and quality information you’ve accumulated now has an extra use case: keeping your customers informed, so you are able to keep running the modelling and checking processes that you depend on.
 
Want more from our GDPR experts? Check out our GDPR solution packages here and see more of our regulatory compliance projects here. Or you can reach us on 020 7347 5666.

Kinaesis announce new DataOps partnership.

Posted by Emma McMahon on 20 September 2017
Kinaesis announce new DataOps partnership.

Kinaesis are pleased to announce that we have now entered into an exciting partnership with MicroStrategy.

MicroStrategy are a US-based company who provide powerful software solutions and expert services that empower every individual with actionable intelligence, helping enterprises unleash the full potential of their people and investments. Their analytics and mobility platform delivers high-performance business applications that meet the needs of both business and IT.

This newest partnership will see Kinaesis working with MicroStrategy to develop new propositions utilising the Kinaesis DataOps data management approach and MicroStrategy’s analytical, mobile, and security solutions. These new propositions will focus on assisting their clients with enabling a data culture within the organisation to reduce costs, drive revenue, enrich the customer experience, and manage risk and regulatory requirements.

We would like to take this opportunity to officially welcome them as our Partner.

Any further queries should be directed to info@kinaesis.com.

MiFID II: What's the Problem?!

Posted on 13 July 2017
MiFID II: What's the Problem?!

As January 2018 looms heavy on the horizon, the investment community from the smallest IFA to the largest Fund Manager and every platform in between is trying to ensure that they are MiFID II compliant. The changes to their operating model are significant and wide ranging. Remember this is against the backdrop of another significant regulation, GDPR, which has to be in place by May 2018.

So is it boom time for business analysts, developers and testers as more requirements are identified, developed, tested and implemented?

Well yes and no.

There are certainly some requirements which will need functional changes to IT systems and the ability to capture additional data. More importantly, though, do you have this data? Operationally, how are you going to collect it? You could capture it at “point of sale”, but this would be onerous, require significant IT changes and could cause a transaction to fail. And what about ongoing monitoring - if you provide online information for investors, how are you going to ensure that they’ve seen it, for example?

This brings us back to the original question of “What’s the problem?” - there are changes which need to be made around how the whole business will operate including transparency and allocation of costs. This is really the tip of the MiFID II iceberg and it is becoming clear that the changes required are multi-faceted and need a more holistic approach than just throwing IT resources at the problem.

The solution lies with the current functional operating model for the organisation. This needs to be adapted to reach the final MiFID II state. Changes made in one area will, by definition, have a knock-on effect in other parts of the organisation and may generate new and exciting activities. The overall impact can then be mapped onto a functional heat map of the organisation which highlights pain points.

Only once this work has been completed, can decisions be made around whether an issue requires an operational, data or IT change or some combination of all three. A simple example of gathering information around an investor’s profile will require additional information to be captured (application forms updated etc), an IT change to capture it and a data model change to store it. By definition, if you capture information then you need to do something with it! So somewhere further down the line this information will be needed for some purpose. Again this goes back to the operating model which defines how this data will be used and whether it needs to be updated or validated regularly. Does the data have a shelf life?

The final decision also comes down to profitability. If a particular activity is too onerous then is the margin associated with this piece of business worth it? If only a few investors have specific reporting needs that are out of kilter to the norm then does this business generate sufficient profit to make it worthwhile to keep them? A sobering thought.

As we all prepare for these regulatory changes, remember that IT changes cannot solve everything and an update of your operating model will provide the best long term solution.

Kinaesis has worked with many high profile organisations to identify and tailor operating models to meet regulatory needs. Kinaesis also provides MiFID II compliant software solutions for the Product Governance requirements.

To read more about our Regulatory work, click here.

For more information or If you would like to discuss how we can help your organisation then please contact sales@kinaesis.com.

The best throw with the Dice is to throw them away

Posted on 04 July 2017
The best throw with the Dice is to throw them away

We’re a nation of gamblers. Whether it’s the National Lottery, Grand National, World Cup sweepstake, pub fruit machine or even the penny falls on Brighton pier. It’s not necessarily hardcore darkened rooms with swirling smoke and the sound of ice chinking in whisky glasses but more that we like a little flutter every so often. You just never know….

However, most of us gamble with huge sums of money every day without giving it a moment’s thought. Every day we go to the casino, put our money into a stranger’s hands and hope that they’ll do the best for us and we’ll come out on top. And depending on how well they gamble with our money dictates how our life will turn out. Scary huh?

Imagine, if you can, what it would be like to be starting work again with the knowledge that you’ve gained over the years. I’m sure there are many things that you’d do differently, but I’m pretty sure your finances would be pretty high up that list (if only I’d got into Microsoft, Apple, Facebook etc). However, I’m also pretty sure that if someone came to you now and said that they’d met a clever chap selling books out of his garage who would make millions but needed £25k, then you’d be unlikely to invest. And why wouldn’t you? Concerns about legitimacy, trust and the potential loss of £25k would probably be pretty high on your list.

So how does this relate to MiFID II? This amendment to the original Markets in Financial Instruments Directive (MiFID) has been introduced following the financial crisis to improve the functioning of the financial markets and improve investor protection.

Within the enhanced investor protection part of MiFID II, the concept of product governance is defined to ensure, in the simplest terms, that the right product is sold to the right person. Additionally, that the person or company selling a product to an investor understands their hopes and dreams but also the cold hard reality of their financial position. This attempts to ensure that the investor receives independent advice and is guided towards the most appropriate product or products.

It doesn’t stop there though. The product governance element also places a responsibility on the manufacturer (investment managers) to provide a target market for their products. This allows the distributor (seller) to understand for whom this product is suitable. The manufacturer can even define a negative target market which tells the distributor who shouldn’t buy this product.

An added complication is that the manufacturer has to keep an ongoing eye on sales of their products and ensure that they’re being targeted at the right people. So there is a whole new infrastructure of data exchange between the distributors and manufacturers. If you find yourself in the middle of that chain then you have to play pass the parcel.

There are many other parts to MiFID II but this piece alone creates significant IT challenges as it requires new solutions and additional data capture which all needs to be in place by January 2018.

And what happens if you choose to ignore the independent advice and gamble on something? Well Caveat Emptor of course!

For further information on MiFID II Product Governance or how Kinaesis can help you then please contact sales@kinaesis.com.

Take your partners for the GDPR tango.

Posted by Benjamin Peterson on 23 June 2017
Take your partners for the GDPR tango.

Just when we'd grown used to the idea that it matters how we handle our data, regulators have taken it to the next level. It’s not enough to have our own data management practices well-groomed – as we step onto the data privacy dance floor, we need to be intimately acquainted with our partner’s habits as well.

The GDPR’s strong words about data controllers and data processors make it clear that compliance is now a team effort, with financial institutions and their service providers expected to work together to meet the regulation’s goals. Financial institutions almost invariably have significant service provider relationships – from large banks, with their galaxy of data processing partners, to simple funds whose fund administrator is a single but crucially important partner in the personal data tango.

Fortunately, the GDPR does make it clear what it expects from data controller / data processor relationships. The Data Processing Agreement enshrines the data processor’s responsibilities to the data controller in some detail. Beyond that, both types of organization are held to the same standard and must support the same rights for the data subject. Our existing governance models, then, must be extended to cover:

• Our own internal data governance
• Our interfaces (technical and contractual) to our data processors
• Our data processor’s governance

The good news is that an effective data governance model can actually be extended quite naturally over this new dancefloor. For our internal data, we’d expect to already be identifying sensitive data (the GDPR gives us hints, rather than a fixed set of criteria, but it’s nothing we can’t manage), identifying the systems and processes that handle that data, and checking those systems for compatibility with GDPR. ‘Compatibility’ here is a concept that can be broken down into two areas: support for GDPR rights (such as the right to be forgotten), and support for GDPR principles (such as access control).

To sort out our data privacy social life, we could decide to form a governance model for partnerships analogous to the ones we apply to in-house systems. Just as we evaluate the maturity of a system, we can evaluate the GDPR maturity of a relationship with a data processor:

• Immature: A relationship that makes no specific provision for data management.
• Better: Formal, contractual coverage of data handling and privacy parameters. In-house metadata that describes the sensitivity, lifespan, and access rights of the data in question.
• Better still: A GDPR-compliant Data Processing Agreement.
• Bulletproof: A Data Processing Agreement, an independent DPO role with adequate visibility of the relationship on both sides, and metadata that covers both controller and provider.

Once we’ve enumerated relationships, evaluated their maturity, and put in place a change model that covers new relationships and contractual changes, the problem starts to look finite. That change model is imperative – in the future, will organizations even want to dance with a partner who doesn’t know the GDPR steps?

Read about the two different solutions to GDPR Kinaesis provide here: http://www.kinaesis.com/solution/017-new-practical-kinaesis-gdpr-solutions

Taking off with DataOps

Posted by Benjamin Peterson on 06 June 2017
Taking off with DataOps

New technologies and approaches in financial IT are always exciting – but often that excitement is tinged with reservations. Ground-breaking quantum leaps don’t always work out as intended, and don’t always add as much value as they initially promise. That’s why it’s always good to see an exciting new development that’s actually a common-sense application of existing principles and tools, to achieve a known – as opposed to theoretical – result.

DataOps is that kind of new development. It promises to deliver considerable benefits to everyone who owns, processes or exploits data, but it’s simply a natural extension of where we were going already.

DataOps is the unification, integration and automation of the different functions in the data pipeline. Just as DevOps, on the software side, integrated such unglamorous functions as test management and release management with the software development lifecycle, so DataOps integrates functions such as profiling, metadata documentation, provenance and packaging with the data production lifecycle. The result isn’t just savings in operational efficiency – it also makes sure that those quality-focused functions actually happen, which isn’t necessarily the case in every instance today.

In DataOps, technology tools are used to break down functional silos within the data pipeline and create a unified, service-oriented process. These tools don’t have to be particularly high-tech – a lot of us are still busy absorbing the last generation of high-tech tools after all. But they do have to be deployed in such a way as to create a set of well-defined services within your data organisation, services that can be stacked to produce a highly automated pipeline whose outputs include not just data but quality metrics, exceptions, metadata, and analytics.
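
As a rough sketch of what a 'stacked' service can look like in practice, the example below has a stage return quality metrics, exceptions and metadata alongside its data rather than the data alone; the stage and field names are invented for illustration.

# A minimal sketch of the 'stacked services' idea: each stage returns not just data
# but also quality metrics, exceptions and metadata, so the pipeline's outputs stay
# instrumented end to end. Stage and field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class StageResult:
    data: list
    quality_metrics: dict = field(default_factory=dict)
    exceptions: list = field(default_factory=list)
    metadata: dict = field(default_factory=dict)

def standardise_currency(records):
    good, bad = [], []
    for r in records:
        (good if r.get("currency") else bad).append(r)
    return StageResult(
        data=good,
        quality_metrics={"input": len(records), "passed": len(good)},
        exceptions=[{"record": r, "reason": "missing currency"} for r in bad],
        metadata={"stage": "standardise_currency", "version": "0.1"},
    )

result = standardise_currency([{"currency": "GBP"}, {"currency": None}])
print(result.quality_metrics, len(result.exceptions))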

It could be argued that DevOps came along and saved software development just at the time when we moved, rightly, from software systems to data sets as the unit of ownership and control. DataOps catches up with that trend and ensures that the ownership and management of data is front and center.

Under DataOps, software purchases in the data organisation happen not in the name of a specific function, but in order to implement the comprehensive set of services you specify as you plan your DataOps-based pipeline. This means of course that a service specification, covering far more than just production of as-is data, has to exist, and other artefacts have to exist with it, such as an operating model, quality tolerances, data owners… in other words, the things every organisation should in theory have already, but which get pushed to the back of the queue by each new business need or regulatory challenge. With DataOps, there’s finally a methodology for making sure those artefacts come into being, and for embedding an end-to-end production process that keeps them relevant.

In other words, with the advent of DataOps, we’re happy to see the community giving a name to what people like us have been doing for years!

The theory, reality and somewhere in between.

Posted on 23 May 2017
The theory, reality and somewhere in between.

Big, small, old, new, structured, unstructured, invaluable and downright annoying data all form part of our daily lives whether we like it or not. Organisations thrive on it and we generate heaps of it every day. (I’m doing it right now even as I stand on the platform looking at the apologetic data that is predicting a delayed train).

Data helps us make informed decisions and has become key to our daily lives. Organisations exist because of its creation, storage and accessibility. Financial organisations rely on it to make informed decisions about investments, risk, budgets, profitability and trends.

Control and quality of this data is becoming key to ensure that these decisions are based on reality and not some random element or “informed opinion”. As the importance of data grows, the governments of the world are waking up to the importance of data ownership and accuracy. Especially where that accuracy can impact their voters’ decision making!

As you’d expect, organisations who weathered the financial storm of the recession and the ensuing blizzard of regulation are now fully aware of the importance of accurate source data that generates risk metrics. Risk committees opine on these metrics to make informed and auditable decisions.

So all is well with the world and we can sleep easily in our beds knowing our savings have capital adequacy protection and liquidity rules which ensure we can withdraw our cash whenever we want.

The reality is somewhat different though. The theory of implementing controls and having appropriate risk metrics sounds just peachy; however, the metrics are only as good as the data behind them, and the business has to buy into this, not just nod wisely and agree.

All businesses (with a few exceptions) are in the business of making money for their shareholders. So additional controls around their ability to transact, especially ones that don’t appear to add tangible business value, are not embraced positively by the management and workforce. They’re an added overhead and an obstacle to their daily lives.

This means that we have two parties, business and risk, living under the same roof with the same paymaster but viewing the world through different lenses. How do we get them to work together so that they happily disappear over the horizon arm in arm?

Kinaesis works with many organisations to find a middle ground that satisfies both parties without destabilising what either is trying to achieve. Appropriate checks and balances are adopted which provide desired protection but also allow the business to operate freely and expand.

There is no “one size fits all” easy solution but implementing a structured methodology into the organisation ensures that constant improvement is delivered across the board.

To drive business adoption, it is important to balance the “push” and “pull” factors. Data governance and control are esoteric subjects to most people with cryptic justifications and benefits at best.

The initial “push” is to ensure data management controls are strongly embraced as a primary first line of defence responsibility. Practically this means leveraging existing control frameworks including adding specific data management principles and measures into risk appetite statements, risk culture statements and divisional objectives.

The “pull” of delivering real business value will gain traction by following some core do’s and don’ts:

• Don’t treat data governance as a stand-alone, specialist function ->
Do embed practices into day-to-day operations that bring you continual improvements

• Don’t target a mythical static end state ->
Do plan for things to change and put data governance capability at its core

• Don’t just focus on a set of simplistic key data terms and measures ->
Do measure the impact of data issues and use the results to drive priorities

• Don’t invest in tick-the-box static documentation that is past its sell by date on publication ->
Do build a sustainable, open data knowledge base that supports and accelerates change

We ensure that the business has bought into these “controls” and then we implement a data governance strategy that is part of their everyday life. This gives the risk committees comfort that their metrics are correct and current. The business accepts a degree of oversight whilst reaping the benefits of more effective data capability.

This brings the theory into practice and provides the comfort to organisations that their business of making money isn’t impeded but there is also a good risk and return barometer in place.

Why BCBS 239 Still Matters.

Posted on 23 March 2017
Why BCBS 239 Still Matters.

We asked our Kinaesis subject matter experts why those who are not swept up by the BCBS 239 regulations should be interested in implementing a data governance framework. Kinaesis are supporting DSIBs, who are currently facing oncoming deadlines from January 2018 onwards to become compliant with the regulations, as we did previously with GSIB clients.

Our Banking leadership specialist team consists of:

Simon Trewin, Director and Head of Architecture at Kinaesis, 23+ years of experience in the Financial Services industry, leading major change programmes to consolidate, manage and organise data to bring regulatory compliance, actionable trading insight, and financial optimisation.

Barney Walker, Head of Consulting at Kinaesis, Strategic leader with over 20 years of experience in leading global divisions, delivering projects and defining strategy in technology, operations and finance.

Benjamin Peterson, Senior Managing Consultant at Kinaesis, Financial Technology expert and Data Architect in the risk and trading markets. In-depth experience of data analytics and solution architecture with 22 years working with Financial Services Data initiatives.

So, why should everyone else start concerning themselves with complying with the BCBS 239 Guidelines sooner, rather than later?

"First, it is important to understand, the BCBS 239 regulation is not there to be difficult. It is there for protection and to encourage best practice. It allows you the chance to know and understand your data and even, by extension, the business better. Studies claim that businesses who understand and trust their data and use it for the day to day operations and decision making have brighter futures. This is no surprise: undertaking steps to comply with BCBS 239 regulations leads to a superior understanding of your data. Good governance and a common understanding of information, drives efficiencies through the reduction in reconciliation and poor decision making based on inaccurate data."

"Another major consideration is the competitor aspect. With larger corporations adhering to the regulations by sorting and analysing their data, testing the limits, this gives them a significant competitive advantage over the smaller players. Ultimately, BCBS 239 is not going away. Firms that have adopted the full spirit of the principles have created a platform for a modern data-driven enterprise with significant competitive advantages over their outdated competitors."

"Not making a move to comply with BCBS 239 regulations defines and limits your growth. It is worth thinking of your future and seeing compliance to BCBS 239 as an opportunity, an enabler and one not to be brushed aside."

"On a final note, the better you understand your data and have a data governance framework in place then the easier it will be to implement GDPR by 2018."

So, if you agree that BCBS 239 compliance is relevant now and beneficial to your organisation, here’s how we can help you.

We offer unrivalled consultancy expertise with the data management and analytics solutions to match. If you are interested in what we offer precisely to deal with BCBS 239 challenges, click here for more information.

We would be happy to present a free customised consultation as to what we do to make the process quick, focussed and above all beneficial. For more information, please contact Phil Marsden at sales@kinaesis.com.

Kinaesis Wins New Data Governance Deal.

Posted by Emma McMahon on 07 March 2017
Kinaesis Wins New Data Governance Deal.

Kinaesis are delighted to win another Data Governance project with a new Financial Services client. The deal was signed last week and brings our consultancy expertise to bear on not only audit, analysis and advice on best practices for their existing systems but also to design a sustainable operating model for their data. Kinaesis has a successful portfolio of Data Governance ventures, most recently having worked with a large European Bank to provide an ongoing governance model having previously helped them gain BCBS 239 accreditation. To view the case study in more detail, please look at the Data Governance page under the Services tab.

We offer our clients proven best practice governance, quality and lineage techniques with in-house developed methodologies gained through the practical implementation of numerous data projects in financial services. Our approach is to combine the governance and lineage into the project process and architecture of the solution and embed quality into the culture of an organisation to create lasting change.

Simon Trewin, co-founder of Kinaesis, commented “We are very pleased to add another client to our growing portfolio of clients rolling out our data governance practices. It is a testament to our pragmatic governance approach utilising in-house developed models, templates, accelerators and software tools. Our ability to provide a practical plan of action to generate quick wins, engage key business stakeholders and provide a clear delivery roadmap proved to be a key differentiator for our client. This win demonstrates how Kinaesis can add value quickly across a diverse range of clients with differing operating models and business lines.”

Any further queries, please direct to info@kinaesis.com.

Kinaesis wins SAS Special Partnership award for BCBS 239 project success

Posted by Gillian Gray on 01 February 2017
Kinaesis wins SAS Special Partnership award for BCBS 239 project success

Kinaesis, the industry leading, independent, data management, analytics, reporting and domain expert, was awarded a Special Partnership Award by SAS, the leader in analytics, at its recent Northern Europe Sales Kick-Off in London. The award was presented to Kinaesis in recognition of their market leading approach for tackling banking regulation BCBS 239, including close collaboration with SAS to ensure a global banking client gained regulatory compliance.

Simon Overton, Head of Financial Services at SAS UK & Ireland, commented: “We welcomed nominations for firms across Northern Europe, from very small niche practices to large corporations. Through their engagement with the client and SAS, Kinaesis demonstrated how their approach can provide solutions to our clients’ BCBS 239 compliance challenges. We very much look forward to working on similar projects with Kinaesis throughout 2017 and beyond.”

Simon Trewin, co-founder of Kinaesis, commented: “We are delighted to have been recognised for the successful delivery of BCBS 239 compliance for our client working alongside SAS. This project demonstrates that utilising a data centric approach and SAS best-in-class technology provides a cost effective and sustainable compliant solution to meet strict timescales and complex regulatory requirements.”

The Special Partnership Awards were developed to recognise and reward excellence, best practice and innovation. They are open to all partners operating and working in a wide range of sectors, including financial services, telecommunications, retail, manufacturing and the public sector.

Welcome Phil Marsden to Kinaesis!

Posted on 22 June 2016
Welcome Phil Marsden to Kinaesis!

We are delighted to announce that Phil Marsden joined Kinaesis on 16th May 2016 as Head of Business Development. He has over 18 years of experience in Account Management within Financial Services and previously worked at both Icon Solutions Ltd and Rule Financial Ltd. Welcome to the team, Phil!

Kinaesis sponsors the 5th Edition Risk Data Aggregation and Reporting Forum

Posted on 27 April 2016

Kinaesis are pleased to announce they are co-sponsoring this event with our partner, SAS. This event focusses on benchmarking progress on BCBS 239 across GSIBs and DSIBs and assessing the ongoing impact on business. Delegates who attend will discover practical steps to implementation and ongoing control, governance and the real added value of BCBS 239 data principles.

Kinaesis are one of the sponsors of this event, as we work with our clients on BCBS 239 solutions. We use our skills and frameworks for visualising and rationalising their metadata contributions, leading to increased insight and improved management of complex lineage and dependencies.

Come and join us:
Dates: 28th-29th April 2016.
Location: Hilton Canary Wharf, London, E14 9SH
To find out more about this event, please click here.

Advanced Analytics - Speeding up time to insight / compliance and reducing risk

Posted by Simon Trewin on 29 March 2016

Looking at the traditional lifecycle for a data development project, there are key constraints that drive all organisations into a waterfall model: data sourcing and hardware provision. Typically, it takes around 6 months or more in most organisations to identify and collect data from upstream systems, and even longer to procure hardware. This forces the project into a waterfall approach, where users need to define exactly what they want to analyse 6 months before the capability to analyse it can be created. The critical path on the project plan is dictated by the time taken to procure machines of the correct size to house the data for the business to analyse and the time taken to schedule feeds from upstream systems.

One thing I have learnt over my years in the industry is that this is not how users work. Typically, they want to analyse some data to learn some new insight and they want to do it now, while the subject is a priority. In fact, the BCBS 239 requirements and the regulatory demands dictate that this should be how solutions work. When you have a slow waterfall approach this is simply not possible. And what if the new data needed for an analysis takes you beyond the capacity that you set up, based on what you knew about requirements at the start of the project?

The upfront cost of a large data project includes hardware to provide the required capacity across 3-4 environments, such as Development, Test, Production and Backup. Costs include the team to build the requirements, map the data and specify the architecture; an implementation team to build the models, integrate and then present the data, and optimise for the hardware chosen; and finally, a test team to validate that the results are accurate.

This conundrum presents considerable challenges to organisations. On the one hand, the solution offered by IT can only really work in a mechanical way, through scoping, specification, design and build; on the other, business leaders require agile ad-hoc analysis, rapid turnaround and the flexibility to change their minds. The resulting gap creates a divide between business and IT which benefits neither party. The business builds its own ad-hoc data environments by saving down spreadsheets and datasets, whilst IT builds warehouse solutions that lack the agility to satisfy user needs. As a solution, many organisations are now looking to big data technologies. Innovation labs are springing up to load data into lakes and reduce the time to source, Hadoop clusters are being created to provide flexible processing capability, and advanced visual analytics are being used to pull the data together and produce rapid results.
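
Purely as an illustration of why the lake-based approach shortens time to insight, the sketch below shows how an analyst might run an ad-hoc aggregation directly over raw files landed in a lake. It is a minimal sketch only: it assumes a PySpark environment, and the file paths and column names are hypothetical rather than taken from any client platform.

```python
# Minimal, illustrative sketch only: assumes a PySpark environment;
# paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adhoc-risk-analysis").getOrCreate()

# Land raw extracts in the lake as-is; the schema is read at analysis time,
# so no upfront warehouse modelling is needed before work can start.
trades = spark.read.option("header", "true").csv("/lake/raw/trades/2016-03/*.csv")

# Ad-hoc aggregation produced directly from the raw zone: exposure by desk
# and counterparty, ready to be picked up by a visual analytics tool.
exposure = (
    trades
    .withColumn("notional", F.col("notional").cast("double"))
    .groupBy("desk", "counterparty")
    .agg(F.sum("notional").alias("total_notional"))
)

exposure.write.mode("overwrite").parquet("/lake/curated/exposure_by_desk/")
```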

To get this right, a number of frameworks need to be established to prevent the lake from turning into landfill:

  1. Strong governance driven by a well-defined operating model, business terminology, lineage and common understanding.

  2. A set of architectural principles defining the data processes, organisation and rules of engagement.

  3. A clear strategy and model for change control and quality control. This needs to enable rapid development whilst protecting the environment from the introduction of confusion, clearly observed in end-user environments where many versions of the truth are allowed to exist and confidence in underlying figures is low (a minimal sketch of such a quality gate follows this list).
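
As a simple illustration of the third point, the sketch below shows what a lightweight quality gate might look like: a small set of rules run over a dataset, with the outcome recorded as evidence that can feed lineage and governance reporting. It is a hedged, minimal example; the rule set, field names and file are hypothetical, not a description of any particular control framework.

```python
# Illustrative sketch only: a lightweight quality gate of the kind
# described above. The rules, dataset and field names are hypothetical.
import csv
from datetime import datetime, timezone

RULES = {
    "trade_id": lambda v: bool(v.strip()),             # mandatory field
    "notional": lambda v: float(v) >= 0,               # no negative notionals
    "currency": lambda v: v in {"GBP", "USD", "EUR"},  # controlled vocabulary
}

def quality_gate(path: str) -> dict:
    """Run every rule against every row and return a summary that can be
    logged alongside the dataset as evidence for governance and lineage."""
    rows = 0
    failures = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            rows += 1
            for field, rule in RULES.items():
                try:
                    ok = rule(row.get(field, ""))
                except ValueError:
                    ok = False
                if not ok:
                    failures += 1
    return {
        "dataset": path,
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "rows": rows,
        "rule_failures": failures,
        "passed": failures == 0,
    }

if __name__ == "__main__":
    print(quality_gate("trades_extract.csv"))
```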

Kinaesis has implemented solutions to satisfy all of the above in a number of financial organisations. We have a model for building maturity within your data environment; this consists of an initial assessment followed by a set of recommendations and a roadmap for success. Following on from this, we have a considerable number of accelerators to help progress your maturity, including:

• Kinaesis Clarity Control - A control framework designed to advance your end-user environments into a controlled, understood asset.

• Kinaesis Clarity Meta Data - Enables you to holistically visualise your lineage data and to make informed decisions on improving the quality and consistency of your analytics platform.

• Kinaesis Clarity Analytics - A cloud-hosted analytics environment delivering a best-practice solution, born out of years of experience, that puts analytics on the move into the hands of the key decision makers in the organisation.

In addition, and in combination with our partners, we can implement the latest in Dictionaries, Governance, MDM, Reference Data as well as advanced data architectures which will enable you to be at the forefront of the data revolution.

In conclusion, building data platforms can be expensive and high risk. To help reduce this risk there are a number of paths to success.

  1. Implement the project with best practice accelerators to keep on the correct track, reduce risk and improve time and cost to actionable insight.

  2. Implement the latest technologies to enable faster time to value and quicker iteration, making sure that you combine this with the latest control and governance structures.

  3. Use a prebuilt best practice cloud service to deliver the solution rapidly to users through any device anywhere.

If you are interested in finding out more about the above, or would like to receive a brochure, please contact simon.trewin@kinaesis.com or info@kinaesis.com.

Kinaesis Signs Strategic Partnership with SAS

Posted on 22 March 2016

Kinaesis offers SAS customers targeted financial services solutions to meet the increasing challenges of data management: to consolidate, aggregate, govern, accurately report and provide insight in a timely manner.

As a leading financial services data management consultancy offering software backed services within Banking, Insurance and Investment Management, today we announced a strategic partnership with global business analytics leader SAS. This partnership opens up opportunities for both firms to jointly deliver best practice SAS data management solutions. SAS products provide our financial services clients with an industry leading data platform, containing all of the building blocks to develop a complete strategy to enhance their existing tools. At Kinaesis, we have developed a strategy around a new approach to traditional data management that is cost effective and provides dynamism, self-service, scale and governance. We see the SAS toolset complementing this strategy alongside our methodology, operating model and governance.

Kinaesis has been working with global systemically important banks (GSIBs) to successfully implement their BCBS 239 solutions on SAS technology. Simon Trewin, Director, Kinaesis, commented: “We believe that the marriage of Kinaesis regulatory response solutions and SAS software offers clients an excellent roadmap to satisfy their regulatory needs efficiently and effectively. This has clearly been demonstrated at one of the top twenty banks in the world.”

“As we expand our Partner strategy to take advantage of our broad portfolio, it is imperative that we work closely with specialist companies which provide deep domain expertise and delivery capability. Kinaesis offer just this to the Financial Services industry and we foresee a bright future between our companies, as SAS technologies augment the skills and relationships Kinaesis brings to bear in this marketplace.”
Richard Bradbury, UKI Director of Alliances & Channels

Ambient Intelligence meets Business Intelligence in the Cloud

Posted by Allan Eyears on 09 November 2015

Most businesses today are, quite understandably, focused on solving their immediate reporting and analysis problems by leveraging business intelligence (BI) platforms within their organisational perimeters. Some businesses are prepared to push beyond these perimeters and look at Cloud hosting for their BI needs. All of this is against the backdrop of a data landscape that has changed immeasurably in the first half of this decade, with an explosion of ambient data ranging from remote sensor and social media data through to the analysis of voice and video. There is now a wealth of ambient intelligence available, with more and more ubiquitous computing (UC) devices contributing ever more data. So how do the worlds of BI, Cloud and UC come together? And what impact does UC have on the current state of Cloud-hosted BI?

For some time, the large technology vendors have been investing in BI solutions on their Cloud platforms. Some smaller players, such as Birst, have emerged recently to challenge the dominance of the large vendors (named as “Niche Challengers” in Gartner’s 2015 Magic Quadrant for Business Intelligence and Analytics Platforms). The majority of Cloud BI deployments to date have tended to be smaller, standalone solutions requiring only light integration with the enterprise BI environment. Recently, support for universal data consumption (desktop, tablet, smartphone) has further strengthened the case for the Cloud deployment model. Other historical drivers for adopting a Cloud-based BI strategy have been the need for functional agility (many PaaS/SaaS solutions are pre-packaged and available immediately, on demand), scalability (IaaS Cloud environments give us true elastic scalability) and the opportunity to decrease costs (usage-based subscription rather than recurrent, fixed licence fees). These drivers have been more than enough to justify a decision to use off-premises BI, resulting in thousands of deployments to date (at Kinaesis we run all our IT functions, BI included, off premises!).

So what impact, if any, does UC have on Cloud BI?

For some time, UC has been acknowledged as a driver for greater levels of Cloud BI adoption. As far back as 2011, a paper at the 17th Americas Conference on Information Systems discussed the impact of UC on Cloud BI. In this paper, there are three key observations linking UC to greater utilisation of Cloud BI:

  1. Solutions leveraging UC (particularly streaming) data lead to decentralized data architectures that are better suited to Cloud deployments.
  2. BI solutions based on UC applications are sensitive to changes in the operational business processes and require scalability. The elastic nature of the Cloud is therefore an ideal operating environment.
  3. In order to react to unexpected changes in the environment, BI solutions based on UC data need to be able to flexibly include specific analysis features. The feature richness of PaaS and some SaaS platforms allows additional analysis features to be added when required.

The core theme here is the ability of the platform and BI tool to handle the large and potentially fast-moving data volumes that characterise UC. The collection, integration and analysis of this data is the hinge point for BI, Cloud and UC. For the more visually minded, the following picture starts to emerge:

[Figure: Venn diagram of the overlap between BI, Cloud and UC]

Bringing all of this together requires a powerful BI solution built specifically to handle Cloud-to-device data transfer characteristics. That is where the value of the PaaS offerings comes in – amongst others, Tibco Spotfire has lately been adding support for the integration of UC streaming data. One other interesting recent move in the market has been the acquisition of Datazen Software by software giant Microsoft.
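
To make those data transfer characteristics a little more tangible, here is a minimal, purely illustrative sketch of the kind of rolling aggregation a platform has to perform over fast-arriving UC readings before a Cloud BI tool can sensibly visualise them. The device feed is simulated and the window size is arbitrary; it is not a description of any vendor's streaming API.

```python
# Illustrative sketch only: the device feed is simulated with a generator;
# in practice readings would arrive from a streaming endpoint and the
# aggregates would feed a hosted BI dashboard rather than print to screen.
import random
import time
from collections import defaultdict, deque

WINDOW = 60  # seconds of readings to keep per device

def simulated_device_feed():
    """Stand-in for a UC data stream: yields (device_id, timestamp, reading)."""
    while True:
        yield f"sensor-{random.randint(1, 5)}", time.time(), random.uniform(18.0, 26.0)
        time.sleep(0.1)

windows = defaultdict(deque)  # device_id -> recent (timestamp, reading) pairs

for device_id, ts, reading in simulated_device_feed():
    window = windows[device_id]
    window.append((ts, reading))
    # Evict readings that have fallen outside the rolling window.
    while window and ts - window[0][0] > WINDOW:
        window.popleft()
    # Rolling average per device: the kind of pre-aggregated measure a
    # Cloud BI tool would visualise rather than the raw event stream.
    avg = sum(r for _, r in window) / len(window)
    print(f"{device_id}: {avg:.2f} (over last {len(window)} readings)")
```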

Shortly after his appointment, incoming Microsoft CEO Satya Nadella was very vocal about "ubiquitous computing" and "ambient intelligence", as well as advocating a “Mobile First, Cloud First” strategy. Up to that point, Microsoft’s main player in Cloud BI was Power BI; however, with the Datazen acquisition and its subsequent integration into the Microsoft Azure (cloud) platform, it is evident that the Redmond giant sees a fully UC-integrated, Cloud-hosted mobile BI platform as an investment in the future.

So, in summary, it is time to start thinking outside the box with your organisation’s Business Intelligence strategy and to look at the value of UC integration in an off-premises solution. At first sight there may be insufficient drivers for off-premises BI given your current business model; however, with the increasing breadth and depth of UC data available, new drivers may appear that change that quicker than you think.

It’s not easy to condense such a large and quickly moving subject into a short article, so I’ll be following this piece up with a more expansive whitepaper looking at some specific use cases and integrated UC-Cloud BI architectures to draw out guidance points on strategy. Keep your eyes peeled!

Kinaesis' growth reflected in new and improved website.

Posted on 02 July 2015

Due to our continued growth and diversification, Kinaesis is launching a new and improved website.

Simon Trewin, Director at Kinaesis, comments: "Having taken on 11 new clients, we have had a period of major growth in the last 18 months. We have also substantially broadened our solution portfolio and need a website that more accurately represents the work that we now do. The new website not only reflects this but, more importantly, features many case studies and testimonials from some of the largest financial services organisations in the world, who view us as trusted partners and recommend all that Kinaesis do."

Kinaesis' growth continues in 2015, with yet another great first half of the year.

Posted on 28 June 2015

Another great start to the year for Kinaesis, as we continue to grow both by retaining existing clients and acquiring a number of new clients in both the Banking and Investment Management sectors.

Kinaesis continue to be the provider of choice for Risk and Finance Data and MI project delivery, with existing clients calling on our capabilities to assist in the delivery of BCBS 239 and Volcker programmes, and new clients engaging with us on Finance Data Management programmes.

Kinaesis are proud to say that we are now trusted delivery partners of 3 of the top 5 British owned banks. Kinaesis are also being heavily engaged in the Insurance sector as Solvency II, Data Governance and Data Science continue to drive client requirements.

Kinaesis repeats the success of the first half of 2014, with a great second half year performance.

Posted on 19 January 2015

Another great 6 months for Kinaesis in H2 2014, which saw a further 4 new clients joining the ever growing list of banks, insurers and investment managers, who are choosing Kinaesis as trusted partners for their Data and Analytics projects.

We saw very strong demand for our Independent Valuation and Risk Analytics service, which we offer in a joint venture with CLOUDRISK, with 3 new Asset Managers added to the service in H2 2014. We also added another large new client to the Insurance Practice in a Master Data Management and Data Quality project.

Kinaesis also moved to strengthen the management team, with the addition of Barney Walker as Head of Banking Practice, as we look to increase our capabilities to help resolve the numerous data driven challenges being thrown at the Banking market in 2015.

Kinaesis has great first half year performance.

Posted on 25 July 2014

A great first 6 months of 2014 for Kinaesis, with 5 new clients joining the ever-growing list of banks, insurers and investment managers, who are choosing Kinaesis as trusted partners.

The progress underlines the market view that Kinaesis are the partner of choice for Finance, Risk and Product Control Reporting and Analytics projects. We are also pleased to see growth in our HPC and Quantitative Research Function, with 2 new clients here, plus a very strong pipeline for H2 for our Independent Valuation and Risk Analytics Service.