
In an Era of Big Data: How Can We Ensure Data Integrity?
[June 18, 2019]



By Special Guest
Neil Barton, CTO of WhereScape
 
Multi-dimensional is something of an understatement when it comes to the beast of big data. However, the dimension that outweighs all others - so much so that it's even in the name - is volume. With the enormous potential this data holds, the challenge becomes applying the usual methodologies and technologies at scale. This is particularly important in a world where roughly 2.5 quintillion bytes of data are created every single day, and the rate of data growth is only increasing. In addition, an increasingly large portion of this data is unstructured, which is harder to categorize and sort than its structured counterpart; IDC has estimated that as much as 90% of all big data is unstructured.



Compounding the problem, most businesses expect that data-driven decisions will be more effective and successful in the long run. However, with big data often comes big noise - after all, the more information you have, the greater the chance that some of it is incorrect, duplicated, outdated or otherwise flawed. This is a challenge most data analysts are prepared for, but one that IT teams also need to factor into their downstream processing and decision making to ensure that bad data does not skew the resulting insights.

This is why overarching big data analytics solutions alone are not enough to ensure data integrity in the era of big data. While new technologies like AI and machine learning can help make sense of data en masse, they often rely on a certain amount of cleaning and condensing behind the scenes to be effective and to run at scale. Accounting for some errors in the data is fine, but being able to find and eliminate mistakes where possible is a valuable capability - particularly when a configuration error or a problem with a single data source creates a stream of bad data, which can derail effective analysis and delay time to value. Without the right tools, these kinds of errors can produce unexpected results and leave data professionals with an unwieldy mass of data to sort through to find the culprit.
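To make that concrete, the short Python sketch below shows one way a single misbehaving source might be spotted: compute an error rate per source and flag any source that exceeds a threshold. It is an illustration only - the record structure, field names, validation rule and threshold are all hypothetical, not taken from any particular product.

```python
from collections import Counter
from typing import Callable, Iterable

def flag_bad_sources(records: Iterable[dict],
                     is_valid: Callable[[dict], bool],
                     error_threshold: float = 0.05) -> dict:
    """Compute per-source error rates and flag any source above the threshold."""
    totals, errors = Counter(), Counter()
    for rec in records:
        src = rec.get("source", "unknown")
        totals[src] += 1
        if not is_valid(rec):
            errors[src] += 1
    return {src: {"records": n,
                  "error_rate": errors[src] / n,
                  "flagged": errors[src] / n > error_threshold}
            for src, n in totals.items()}

# Hypothetical rule: every record needs a customer_id and a positive amount.
records = [
    {"source": "crm",      "customer_id": "c1", "amount": 10.0},
    {"source": "web_feed", "customer_id": "",   "amount": -1.0},
    {"source": "web_feed", "customer_id": None, "amount": 5.0},
]
rule = lambda r: bool(r.get("customer_id")) and (r.get("amount") or 0) > 0
print(flag_bad_sources(records, rule, error_threshold=0.1))
```

In this toy example, the "web_feed" source would be flagged immediately, pointing analysts at the likely culprit instead of leaving them to sift through the whole dataset.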


This problem is compounded when data is ingested from multiple sources and systems, each of which may have treated the data in a different way. The sheer complexity of big data architecture can turn the challenge from finding a single needle in a haystack into one more akin to finding a single needle in a whole barn.

Meanwhile, this problem no longer affects only the IT function and business decision making; overcoming it has become a legal requirement. Legislation like the GDPR mandates that businesses manage and track all of their personal data, no matter how complicated the infrastructure or how unstructured the information. In addition, upon receiving a valid request, organizations need to be able to delete information pertaining to an individual, or to collect and share it as part of an individual's right to data portability.

So, what's the solution? One of the best approaches to managing the beast of big data is also one that builds in data integrity: establishing full data lineage by automating data ingestion. This creates a clear record of where data originated and how it has been used over time. Because the process is automated, it is also easier and more reliable. However, it is important that lineage is captured down to a fine level of detail.
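As a rough illustration of the idea - and not a description of WhereScape's implementation - the Python sketch below records lineage metadata as part of an automated ingestion step, so every loaded record can later be traced back to its source and load batch. The function, table and field names are assumptions made for the example.

```python
import uuid
from datetime import datetime, timezone

def ingest(records, source_name, target_table, lineage_log):
    """Load a batch of records while recording where it came from and when."""
    batch_id = str(uuid.uuid4())
    lineage_log.append({
        "batch_id": batch_id,
        "source": source_name,
        "target": target_table,
        "loaded_at": datetime.now(timezone.utc).isoformat(),
        "record_count": len(records),
    })
    # Tag each record with its batch so it can be traced back later.
    return [{**rec, "_lineage_batch": batch_id} for rec in records]

lineage_log = []
customers = ingest(
    [{"customer_id": "c1", "email": "a@example.com"}],
    source_name="crm_export",
    target_table="dw.customers",
    lineage_log=lineage_log,
)
```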

With the right data lineage tools, ensuring data integrity in a big data environment becomes far easier. Data scientists can trace data back through the process to explain what data was used, from where, and why. Meanwhile, businesses can track down the data of a single individual, sorting through the noise to fulfil subject access requests without disrupting the big data pipeline as a whole or diverting significant business resources. As a result, analysis of big data can deliver more insight, and thus more value, faster - despite its multidimensional complexity.
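Building on the hypothetical sketch above, locating one individual's records - whether to export them for data portability or to remove them under the right to erasure - can then be a simple query rather than a manual hunt. Again, the table layout and identifiers below are illustrative assumptions, not a specific product's API.

```python
def subject_access(tables: dict, customer_id: str) -> dict:
    """Collect every record belonging to one individual, keyed by table name."""
    return {name: [r for r in rows if r.get("customer_id") == customer_id]
            for name, rows in tables.items()}

def erase_subject(tables: dict, customer_id: str) -> dict:
    """Return copies of the tables with that individual's records removed."""
    return {name: [r for r in rows if r.get("customer_id") != customer_id]
            for name, rows in tables.items()}

# Illustrative warehouse contents, tagged with lineage batches at ingestion time.
tables = {"dw.customers": [
    {"customer_id": "c1", "email": "a@example.com", "_lineage_batch": "b-001"},
    {"customer_id": "c2", "email": "b@example.com", "_lineage_batch": "b-001"},
]}
export = subject_access(tables, "c1")   # data portability request
tables = erase_subject(tables, "c1")    # right-to-erasure request
```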
