
September 27, 2016

Is Traditional Data Management Still Relevant?

Noah Consulting blog series: When Oil & Gas Prices Recover storyline

Data management plays a key role in helping organizations create value from their data assets by enabling advanced analytics, reporting and traditional transactions. With lower oil and gas prices putting significant pressure on operators and oilfield service companies, gaining new insight from data to improve efficiency is no longer a luxury. But this comes at a cost - as the "information intensity" of oil and gas operations increases, new challenges are placed on the data management infrastructure, creating a data dilemma.

Historically, oil & gas exploration and production companies have been organized in disciplinary and operational silos aligned with the asset life-cycle. As a project "moves" through its life-cycle, it is transferred from department to department, and it is often assumed that the associated data moves with it - seems logical. These departments (exploration & appraisal, drilling & completions, production, etc.), their associated sub-groups (geophysics, reservoir management, facilities, production engineering, operations and maintenance) and the corporate departmental silos (finance, procurement, land/lease, etc.) result in a virtual duck soup!

Each group has its own set of tools - a fit-for-purpose application portfolio. Corporate departments tend towards a single large platform (e.g., SAP for ERP), but the other departments use diverse applications from a variety of vendors, making data integration a challenge. While the digital oilfield encompasses a great deal more data, actionable insights into specific elements of the value chain are easier to obtain inside a silo than across disciplines.

The ultimate goal should be to create an asset-centric focus by integrating silos of functional and operational information. When operators begin tying work processes together to take advantage of the information derived at each stage, they gain a holistic perspective on which to base larger efficiency initiatives and improve long-term profitability.

As the industry moves towards this information integration, it is also adding unprecedented volumes: some producers already collect up to 1 terabyte of data per well, per day. Large organizations have data collections that add up to hundreds of terabytes or even tens of petabytes in size. Even organizations that collect smaller amounts struggle to analyze much of the data they gather.

IT departments normally have mature processes for managing data which, at a minimum, include data retention and security policies for protecting the most critical data. Other elements of a comprehensive plan include defined systems of record for each function (a home for most data), a master data management system holding the "gold version" of each critical information object, ETL routines for moving data from source systems into the enterprise data warehouse, and standard reporting tools for routine reports. Additionally, there should be separate data management solutions for documents, transactions, models and sensor data.
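To make the systems-of-record / master-data / ETL pattern concrete, here is a minimal, self-contained sketch. Every table, column and well name is hypothetical and chosen for illustration only; a real environment would use a commercial warehouse and MDM platform rather than in-memory SQLite. The point is simply how an ETL routine conforms source records to the master "gold version" before loading them into a staging table.

```python
import sqlite3

# Hypothetical schema: a "system of record" table of wells, a master data
# table holding the "gold version" of each well name, and a warehouse
# staging table that the ETL routine loads.
src = sqlite3.connect(":memory:")
src.executescript("""
    CREATE TABLE wells_src (well_id TEXT, well_name TEXT, daily_bbl REAL);
    INSERT INTO wells_src VALUES
        ('W-001', 'eagle ford #1', 850.0),
        ('W-002', 'PERMIAN 7H',   1200.0);

    CREATE TABLE master_wells (well_id TEXT PRIMARY KEY, gold_name TEXT);
    INSERT INTO master_wells VALUES
        ('W-001', 'Eagle Ford #1'),
        ('W-002', 'Permian 7H');

    CREATE TABLE wells_stage (well_id TEXT, well_name TEXT, daily_bbl REAL);
""")

def etl_load(conn):
    """Extract from the system of record, replace the raw name with the
    master-data 'gold version', and load the warehouse staging table."""
    conn.execute("""
        INSERT INTO wells_stage (well_id, well_name, daily_bbl)
        SELECT s.well_id, m.gold_name, s.daily_bbl
        FROM wells_src s
        JOIN master_wells m ON m.well_id = s.well_id
    """)
    conn.commit()

etl_load(src)
rows = src.execute(
    "SELECT well_id, well_name FROM wells_stage ORDER BY well_id"
).fetchall()
print(rows)  # names now come from the master 'gold' record
```

The design choice worth noting is that the inconsistent source spellings never reach the warehouse - the join against the master table is what makes downstream reporting trustworthy.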

This environment was built to solve yesterday's business challenges, but how is it coping with today's world? With the Big Data three Vs (volume, variety and velocity) changing the playing field, can IT respond to higher expectations with a traditional architecture? How fast can internal IT respond to data loading, data mastering, data quality checks and data modeling when the volume is rising, the variety is growing, the pace is accelerating and trusted data, integrated into a more holistic view, is now the minimum requirement?

Sadly, I think traditional solutions are coming up short. That is not to say they don't have an important role still to play, but even as internal IT services improve, the gap is growing because the three Vs have become a powerful force in business. Many of the new demands are coming from operations - traditionally not a large consumer of IT resources - as a result of the integration of sensors and the industrial Internet of Things (IoT). When IT cannot keep up, it is not invited to the planning meetings where important business projects are discussed, and it is even left out of vendor presentations for new technology alternatives.

The business requirements that drove the traditional solutions are still there, especially from IT's traditional drivers: finance, HR and procurement. However, if a business reaches the point where it has given up on the expectation that IT can move at its accelerated pace, there are consequences - not just for IT but for the success of data initiatives. Despite the perceived slow pace of internal IT, it does maintain important controls for data quality, data governance, information protection and master (and meta) data management. Leave IT behind and, most likely, these processes are also left behind. Does data quality no longer matter when you have enough statistics to compensate? In some activities, relevant patterns can be identified even with poor-quality outliers in your data, but in others your data needs to be right for you to make the correct business decisions.

In a perfect world, the right way to solve this dilemma would be for IT to speed up, businesses to listen to IT's issues, vendors to partner and consultants to facilitate alliances. But will the urgent bypass the important, will the temptation to jump outweigh the need for caution, will the marketing hype leap over the proven?

There is no substitute for good data management, and it is everyone's job to manage data. That doesn't mean everyone is a DBA or a programmer, but an effective governance process assigns roles and responsibilities, and the management structure rewards each employee for doing their job. Analysts (and almost everyone is an analyst of some sort) will be able to trust, access and use standard sets of tools to perform their jobs accurately and efficiently. And if possible, companies should establish a center of analytics excellence where data scientists fish in their data lakes for innovative insights, with an architecture designed to keep the hybrid ecosystem performing.

So, to answer the question "is traditional data management still relevant?" - our answer is yes, but... Yes, the basic premise of good data management practices has not changed. Data governance, data quality management, and meta and master data management processes are as relevant in the new world as the old. But as changing business requirements present new challenges to the traditional data management team, emerging technologies are opening up opportunities to manage different types of data and greater volumes of data, and to bring near-real-time operational data to the engineers' and analysts' desktops. Data management organizations cannot stand still - bringing their traditional expertise and disciplines into new processes and technologies will solidify the value and relevance data management teams provide in evolving data-driven enterprises.

Here are a few thoughts to consider in evaluating data management relevance:

1) Keep the Existing Data Environment: Don't throw it away, but ensure it provides a flexible architecture that maintains the traditional solutions and grows new capabilities.

2) Focus on Data Governance: Who owns the data? Include a shift towards business ownership of critical data, which will result in greater value from data investments and change decision-making responsibilities, accountability, data flows and processes. Inevitably, it will change the way your business works - for the better.

3) Data Quality Matters: Bad data leads to bad decisions (though not every decision requires perfect data). Don't pollute your data lake or your data warehouse with poor-quality data. Protecting information - both individual privacy and intellectual property - is essential.

4) Experiment with New Technology: Advances are realized through experiments, but remember the company relies on consistent, accurate, and compliant data on which to base best practices, innovations and good decisions.

5) It's the People:  Good data management requires experienced and valued data professionals. Invest in the team that will take care of one of your most valuable assets - data.
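Point 3 above can be sketched as a small quality gate that screens records before they reach the warehouse or data lake. The field names and validation rules here are hypothetical - in practice the rules would come out of the governance process in point 2 - but the shape of the check is the same: validate each record, load the clean ones, and quarantine the rest with their reasons.

```python
# Illustrative data-quality gate. Field names ("well_id", "daily_bbl",
# "operator") and rules are assumptions for the sketch, not a standard.

def quality_gate(record):
    """Return a list of rule violations for one record (empty = clean)."""
    issues = []
    if not record.get("well_id"):
        issues.append("missing well_id")
    rate = record.get("daily_bbl")
    if rate is None or rate < 0:
        issues.append("daily_bbl missing or negative")
    if record.get("operator", "").strip() == "":
        issues.append("missing operator")
    return issues

incoming = [
    {"well_id": "W-001", "daily_bbl": 850.0, "operator": "ExampleCo"},
    {"well_id": "",      "daily_bbl": -5.0,  "operator": "ExampleCo"},
]

# Clean records go on to the warehouse; the rest are quarantined for
# correction at the source, not silently filtered out.
clean = [r for r in incoming if not quality_gate(r)]
quarantined = [(r, quality_gate(r)) for r in incoming if quality_gate(r)]
print(len(clean), len(quarantined))  # 1 1
```

Quarantining with reasons, rather than dropping bad rows, is the detail that keeps the "Experience Filter" from becoming the only line of defense: the bad data gets corrected instead of merely ignored.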



September 20, 2016

Suffering in Silence

As a consultant, it isn't very often I get to spend much time with end-users - the ultimate audience for most digital solutions projects. They are busy, often distracted and usually suspicious of why a third party is poking around their work processes. I might get a few time-limited interviews for specific questions and requirements gathering, but it rarely gets beyond that.

Usually, the sponsors are in the IT department and give a good picture of what is going on technically - the computing infrastructure, the applications and the support perspective. But that is only one side of the story.

One of the subjects I am always interested in is the adoption of technology. The long-term success rate in many industries is not very high, but the problem is rarely due to technical issues. New technologies work, so why don't we want to use them?

While there are well-documented challenges in the methods used to introduce new technologies and drive adoption, one silent-but-deadly killer is a lack of trusted data. One of the biggest and most obvious symptoms is discovering that critical data is missing or incorrect. Like a slow-growing cancer, the insidious data quality issues don't take long to surface, leading users to "self-treat" with spreadsheets.

But if data quality is that poor, why aren't the business users speaking up and demanding better support? From my experience, there are a few reasons companies are content to allow serious data quality problems to persist and why there isn't a user revolt.

To start, experience has taught users to have low expectations of enterprise systems (such as the ERP platform, the engineering standards database or other discipline-related systems), and at the first hiccup they revert to their work-arounds and spreadsheets. They know how to cope with the current state - regardless of how bad it is. Requests for IT to fix data issues generally take longer than the deadline for making a decision. Add in that the definition of good data often differs depending on your location in the business, and that clear ownership responsibility is lacking. The perspectives and approaches to resolution can be wildly different; therefore, strong communication is required to make any real progress.

Whenever a new application is deployed, standard practice is often for IT to perform a one-off spring cleaning of data along with migration of (some) historic data. Data quality improves for a while, but poor data management processes soon pollute the new system and trust declines over time. Since not all legacy systems are incorporated into new solutions, access to older systems often migrates into the shadow-IT world, along with personal spreadsheets.

Most users are good at scraping together the data they need and storing it in spreadsheets. They have their own version of the truth, so they don't worry much about the data quality in the official system of record. Depending on skill levels, it isn't surprising to find some basic SQL programming and simpler structured databases such as Microsoft Access. There are even those writing Visual Basic, building sophisticated algorithms as spreadsheet macros and manipulating data in applications like Tibco Spotfire.

When senior managers are asked about data quality, the usual response is that they don't have any data quality issues and that they have all the data needed to make decisions. Press a little further and the story becomes one of heroic efforts by a few individuals behind the scenes who gather data from diverse sources, clean the critical data elements in their spreadsheets and make sure that management is always presented with the most accurate and timely data. These staffers are eager to impress, so working late or over the weekend before a big meeting is just part of the job.

Often the last quality check uses the rarest of tools - the Experience Filter (otherwise known as the "I-know-bad-data-just-by-looking-at-it" filter). It is true that an experienced manager can quickly identify questionable entries; however, this gives the data cancer space to grow. Because the bad data is filtered out rather than corrected, the poor data stays in place. Experience can also narrow the data deemed relevant - searches focus on a few critical data elements, and as soon as those are found, a decision is made.

All these coping mechanisms help prop up the current data foundations. A top-down data clean-up is daunting and time- and resource-intensive, and is therefore rarely attempted. The attraction of new solutions seems irresistible: applications are added to the current inventory and complexity grows, without a serious review of existing standards or any strategic planning.

The official foundation should be the primary, best-quality data source, with spreadsheets playing a smaller role. Enterprises need tools to explore data collections, put together analyses, discover new patterns and gain new insights. For starters, users and IT will need to work together to agree on an approach to data quality. This can build trust - between teams and in the data sources. And if it doesn't, we will all be better off with more complaints and less suffering in silence.