Bad Data + Lack of Standards = Lousy Supply Chain Analytics

One of the truths often overlooked in the surge of press and hype around “Analytics” is the need for data integrity. You can’t have analytics without data that is reliable, timely, validated, and accurate. In a 2014 Deloitte survey of 239 chief procurement officers and directors from 25 countries, 67% of respondents stated that poor data quality was a key barrier to implementing systems. In another recent study by Procurement Leaders, 95% of procurement professionals identified data quality as extremely important or crucial to achieving procurement objectives. A 2017 Deloitte study likewise found that 49% of CPOs believe the quality of data is a major barrier, and that the lack of data integration was the number two barrier (42%).

Why is data integrity such a barrier? The primary reason is the lack of a standard business process for data governance. One of the biggest challenges faced by major companies seeking to build real-time integrated supply chains is keeping data synchronized across their execution systems. This is especially important for transactional data, to ensure that the entire system is aligned around the same “source of truth”. That only happens when there is a strong governance structure across the end-to-end supply chain. There is no point in having lead-time data if it is inconsistent and misleading in terms of the conclusions people will draw from it. Poor data integrity undermines the viability of the entire system, and the top supply chain executives I meet with note that this is the biggest challenge they face today.

Data governance is challenging because it is a “people problem”, not a data or technology problem. Bad data has a source and a root cause, and it is almost always human error in entering information into a system. It is easy for people to fall back into their old behaviors once a data synchronization standard is put into place. The fact is that data integrity flaws are a part of our lives. You can never get to 100% correct data; rather, the challenge for analytics is to find these flaws and mend the integrity issues so that systems stay synchronized and people can trust their execution, instead of having to question the integrity of the data underlying them.
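To make the synchronization point concrete, here is a minimal sketch of what a cross-system consistency check might look like in Python. The record structures are hypothetical (a part-to-lead-time mapping exported from an ERP system and from a planning system, names invented for illustration); the only point is that mismatches get surfaced so a data steward can trace them to their root cause rather than letting two “sources of truth” drift apart.

```python
# Minimal sketch of a cross-system reconciliation check: compare lead-time
# records from two hypothetical systems and flag any part whose values
# disagree, so a data steward can chase down the root cause of the mismatch.

def find_lead_time_mismatches(erp_records, planning_records, tolerance_days=0):
    """Return parts whose lead times differ between the two systems.

    Both arguments are dicts mapping part number -> lead time in days.
    """
    mismatches = []
    for part, erp_lead_time in erp_records.items():
        planning_lead_time = planning_records.get(part)
        if planning_lead_time is None:
            # Part exists in the ERP but is missing from the planning system.
            mismatches.append((part, erp_lead_time, None))
        elif abs(erp_lead_time - planning_lead_time) > tolerance_days:
            mismatches.append((part, erp_lead_time, planning_lead_time))
    return mismatches


if __name__ == "__main__":
    erp = {"A-100": 14, "B-200": 30, "C-300": 7}
    planning = {"A-100": 14, "B-200": 45}  # B-200 disagrees, C-300 is missing
    for part, erp_lt, plan_lt in find_lead_time_mismatches(erp, planning):
        print(f"{part}: ERP={erp_lt} days, planning={plan_lt}")
```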

It is important to have not only lead-time data, but the RIGHT lead-time data, and the correct execution-related data. Once collected, data also has to be filtered by analytical approaches to eliminate noise and prioritize what gets looked at, to render it truly exception-based. With current trends in data capture, the volume of data we are exposed to is growing exponentially. Most of that data says the supply chain is fine, so there is no need to look at it. Instead, people need to focus on the things that are delayed, and extract from the data what needs to be done to fix the problem. Filtering and prioritizing allows people to focus on the decisions that can be made faster and that have the greatest impact on performance.
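As an illustration of the idea, here is a minimal sketch of an exception-based filter in Python. The field names (promised_lead_time, actual_lead_time) and the threshold are illustrative assumptions, not a prescription; the point is simply that the output contains only the delayed orders, ranked by how badly they have slipped, rather than every record in the system.

```python
# Minimal sketch of exception-based filtering: most records say the supply
# chain is fine, so surface only the orders that are actually late and rank
# them by how far they have slipped.

from operator import itemgetter

def late_order_exceptions(orders, threshold_days=0):
    """Return only the orders whose actual lead time exceeds the promised
    lead time by more than threshold_days, worst offenders first."""
    exceptions = []
    for order in orders:
        delay = order["actual_lead_time"] - order["promised_lead_time"]
        if delay > threshold_days:
            exceptions.append({**order, "delay_days": delay})
    return sorted(exceptions, key=itemgetter("delay_days"), reverse=True)


if __name__ == "__main__":
    orders = [
        {"order_id": "PO-1", "promised_lead_time": 10, "actual_lead_time": 9},
        {"order_id": "PO-2", "promised_lead_time": 10, "actual_lead_time": 18},
        {"order_id": "PO-3", "promised_lead_time": 21, "actual_lead_time": 25},
    ]
    for ex in late_order_exceptions(orders, threshold_days=2):
        print(f"{ex['order_id']} is {ex['delay_days']} days late")
```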

On the downside, incorrect data can have massive repercussions. Think of the price of a single component that is off by only 10 cents. That error escalates across the millions of parts in the company’s product lines and bills of material, and leads to incorrect pricing and cost reporting across the customer base. It can create massive discrepancies. The same goes for lead-time data, which can impact working capital and free cash flow. Bad data is an easy excuse when people complain about their systems. It’s about time managers start doing something about it.
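To put a rough number on that escalation, here is a back-of-the-envelope calculation in Python. The unit error, bill-of-material usage, and annual volume are illustrative assumptions, not figures from the studies cited above; the takeaway is only how quickly a 10-cent error compounds at volume.

```python
# Back-of-the-envelope sketch of how a 10-cent pricing error escalates.
# All three inputs below are illustrative assumptions.

price_error_per_unit = 0.10       # dollars the component price is off by
units_per_product = 4             # times the part appears in the bill of material
annual_product_volume = 2_000_000 # products shipped per year

total_cost_error = price_error_per_unit * units_per_product * annual_product_volume
print(f"Annual cost reporting error: ${total_cost_error:,.0f}")  # $800,000
```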