The devil is in the detail
Many organisations are starting to realise that data analysis needs to be a strategic focus for their business if they are to successfully enter a new era of growth in a rapidly changing market. The past few years have seen the creation of ever-increasing volumes of data and a significant increase in the complexity of data sets. This has been hugely beneficial for businesses. Whether that data comes from social media, mobile or the cloud, it enables organisations to gather powerful customer insights, predict trends and bring products to market faster. However, this deluge of data also introduces an unprecedented margin for error.
Organisations often take it for granted that their data is accurate, blindly trusting the figures and relying on them to make both strategic and operational decisions. However, what happens when this data is wrong? Inaccurate or ‘bad data’ occurs when information is duplicated, outdated, imprecise or incomplete. As a result, most organisations will have some form of inaccurate data on their books, often simply due to human error. Information is frequently just entered incorrectly: if someone in HR, through a typo, lists a pilot’s age as ‘three’ rather than ‘38’, that record is wrong and could skew the overall picture of the company. Recent research has shown that only 38% of organisations use software to check data at the point of capture, and, remarkably, that 23% of companies rely solely on manual checks of their records.
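To make that concrete, the sketch below shows roughly what a point-of-capture check might look like. It is a minimal illustration only; the field names, valid ranges and the validate_employee_record helper are hypothetical, not drawn from the research cited above.

```python
# Minimal sketch of a point-of-capture validation check.
# Field names and valid ranges here are hypothetical examples.

def validate_employee_record(record: dict) -> list[str]:
    """Return a list of problems found in a single HR record."""
    problems = []

    age = record.get("age")
    if age is None:
        problems.append("age is missing")
    elif not isinstance(age, int):
        problems.append(f"age should be a number, got {age!r}")
    elif not 16 <= age <= 80:
        problems.append(f"age {age} is outside the plausible range 16-80")

    if not record.get("name"):
        problems.append("name is missing")

    return problems


# Catch the 'three' vs '38' typo at the moment of entry,
# instead of letting it skew later analysis.
print(validate_employee_record({"name": "A. Pilot", "age": "three"}))
# -> ["age should be a number, got 'three'"]
```

A check like this, run as the data is captured rather than long after, is the kind of safeguard the research suggests most organisations still lack.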
And yet, bad data can have serious consequences. In fact, the average company loses 12% of its revenue as a direct result of it. The problem is not simply that the data contains errors, but rather that no one is aware of them. By the time they’re flagged, entire chains of decisions could have been made, sequences of events could have played out and investments could already be lost. These types of mistakes happen in any industry and can have serious implications: imagine a venture capital firm wrongly estimating projections and profitability, or a healthcare organisation wrongly predicting its staffing needs.
The problem lies in the fact that these errors are extremely difficult to spot. Attempting to check thousands of rows and columns of data for errors is both time-consuming and unproductive. With companies producing such a mass of information, it’s far too easy for errors to slip through the net. But viewing complex data in a visual format can help businesses to uncover discrepancies and see errors that are hidden by more traditional data analysis methods.
Visualisation is sometimes thought of as simply an exercise in design, a way of adding colour and form to what would otherwise be a bland series of tables and numbers. In reality, however, visualisation is much more than this. Expressing quantitative data visually has benefits far beyond the aesthetic: many features of quantitative data are best perceived and understood when values are displayed graphically. Visualisation enables us to see the big picture and provides an overview of a data set, making it easy to identify patterns and to see where individual values depart from them.
All of this means users can spot outliers and correlations in complex data, enabling businesses to understand the quality of the data they hold in a way they never could before.
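As a rough illustration of that point, plotting a column often exposes a stray value that hides in a table of raw numbers. The sketch below is a minimal, hypothetical example using pandas and matplotlib; the figures in it are invented purely to show the idea.

```python
# Minimal sketch: spotting a bad value visually rather than by scanning rows.
# The data below is invented for illustration.
import pandas as pd
import matplotlib.pyplot as plt

ages = pd.Series([34, 41, 38, 29, 3, 45, 52, 37, 44, 40], name="pilot_age")

fig, (ax_table, ax_plot) = plt.subplots(1, 2, figsize=(8, 3))

# In a long table of rows, the mistyped '3' is easy to miss...
ax_table.axis("off")
ax_table.table(cellText=[[v] for v in ages], loc="center")

# ...but plotted, it stands apart from every other point immediately.
ax_plot.scatter(range(len(ages)), ages)
ax_plot.set_xlabel("record")
ax_plot.set_ylabel("age")

plt.tight_layout()
plt.show()
```

Even a simple chart like this turns an invisible typo into an obvious outlier, which is exactly the kind of discrepancy traditional row-by-row checking tends to miss.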
It’s an age-old adage that ‘seeing is believing’, but in the case of bad data, seeing is actually ‘realising’: realising that the inaccuracies in the data you were taking at face value are clouding your view, and that putting them right gives you a better, more precise understanding of your company. When it comes to visualisation and bad data, the devil really is in the detail.
This was posted in Bdaily's Members' News section by Patrik Lundblad.