Data management then and now


Always-on technology isn’t new. In fact, most of us grew up with a version of this tech in our homes – the telephone dial tone. In 1908, German engineer August Kruckow created the first dial tone as a signal to users that their phone was off-hook and ready for them to enter digits for the automated switchboard to relay. This technology was adapted by the US in 1940 as the availability of in-home telephones increased and the demands on the existing telephone infrastructure made the workload for live operators untenable. In modern parlance, the system of the day wasn’t scalable and so an automated system was created to ease the burden on the network.

Today, always-on technology takes a host of different forms that can be vital to modern business, and of those forms master data management is perhaps the fastest-growing field. Businesses increasingly need data management solutions that differentiate between always-on data and almost always-on data. Telephone landlines are almost an anachronism nowadays. With the ubiquitous availability of cellular phone technology, people carry their numbers and a host of other data features with them at all times. This is a good example of almost always-on technology: your mobile device connects to a wireless network when it is needed, rather than staying connected at all times.
The advantage of almost always-on is greater efficiency and a more scalable infrastructure. By optimising data management you can increase access speed to vital data while increasing your ability to manage massive data sets. From a more practical standpoint, this demarcation between always-on and almost always-on means that a business can improve its efficiency on the back end while improving the user experience on the front end.
In the enterprise landscape, always-on is most crucial in relational databases. When you insert a record or read one, you expect it to be available on demand. Enter a customer name and all the pertinent details, and there’s an expectation that you can recall that information instantly with only a few keystrokes. The drawback to this type of data management is that it does not scale easily. Over time, the database becomes unwieldy with its endless array of records, demographic information and customer notes. The larger the organisation and the greater the number of customers, employees or vendors, the less efficient a static database becomes.
In contrast, an almost always-on database is more dynamic and therefore more scalable. This data set will tend to include information that isn’t necessary for a purchase or return but may need to be referenced under special and typically irregular circumstances. The key is to consider how many of your use cases actually need immediate consistency and how many can be managed with eventual consistency. Once we’ve made this determination we can begin to segment the data and pair it with the right capabilities. We can increase efficiency by splitting always-on tasks from eventual-consistency tasks. This process is meant for scaling and for processing large amounts of data – the bulk of which is not required for day-to-day transactions. The data goes into the queue, but it is not immediately available.
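The split described above can be sketched in a few lines of code. This is a toy illustration, not a real product API: the `SegmentedStore` class, its method names and the sample keys are all invented for the example. Writes that need immediate consistency go straight into a hot store, while everything else sits in a queue until a batch flush.

```python
from collections import deque

class SegmentedStore:
    """Toy sketch of routing writes by consistency need.
    All names here are illustrative, not a real product API."""

    def __init__(self):
        self.hot = {}         # always-on: read-your-writes immediately
        self.cold = {}        # almost always-on: populated by flush()
        self.queue = deque()  # pending eventual-consistency writes

    def write(self, key, value, immediate=False):
        if immediate:
            self.hot[key] = value            # visible at once
        else:
            self.queue.append((key, value))  # visible after the next flush

    def flush(self):
        """Batch-apply queued writes, e.g. on a schedule overnight."""
        while self.queue:
            key, value = self.queue.popleft()
            self.cold[key] = value

    def read(self, key):
        # Hot data first; cold data may lag until the next flush.
        if key in self.hot:
            return self.hot[key]
        return self.cold.get(key)

store = SegmentedStore()
store.write("order:1", "paid", immediate=True)
store.write("note:1", "prefers email")
assert store.read("order:1") == "paid"   # available on demand
assert store.read("note:1") is None      # queued, not yet visible
store.flush()
assert store.read("note:1") == "prefers email"
```

The design choice this illustrates is exactly the one in the text: the transactional record is always on, while the customer note is only eventually consistent, which keeps the hot path small and scalable.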
Facebook, LinkedIn and Google have all implemented this sort of segmented approach to data management, most evident in Google’s search suggestions. When you enter a search string you will immediately see a series of search suggestions. Google is attempting to finish your sentence, as it were, pre-emptively offering search results before you’ve even completed your thought. Even more astounding is that the instant you hit “Enter” you are served with a selection of relevant links and maps. On the surface this might seem like immediate consistency: Google has your location and is serving up suggestions for things that are local to you, drawing instantly from its large database and serving you the specific data you need. The fact is that this is not readily available data; it is stored data that happens to fit the most common searches. The real miracle of this search is that Google is playing a game of probabilities. It builds a list pre-emptively and serves it up to you so that it looks like the search engine is thinking for you. In this case Google is using always-on technologies to serve you results from an almost always-on database.
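The principle behind that game of probabilities can be shown with a tiny sketch. This is not Google’s actual mechanism, just a minimal illustration of the idea the text describes: suggestion lists are built ahead of time from the most common stored queries, so serving a suggestion at typing time is a plain dictionary read, not a live search. The query list is invented sample data.

```python
from collections import defaultdict

# Hypothetical "most common queries", precomputed offline from stored data.
common_queries = [
    "coffee shop near me",
    "coffee grinder",
    "code review",
    "weather today",
]

# Build a prefix -> suggestions map ahead of time.
suggestions = defaultdict(list)
for query in common_queries:
    for i in range(1, len(query) + 1):
        prefix = query[:i]
        if len(suggestions[prefix]) < 3:  # cap each list
            suggestions[prefix].append(query)

def suggest(typed):
    """At typing time, serving suggestions is a single lookup."""
    return suggestions.get(typed, [])

print(suggest("cof"))  # ['coffee shop near me', 'coffee grinder']
```

All the expensive work happens before the user types anything, which is why the result appears instantaneous even though the underlying data is almost always-on.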
In general, a business has a set of data that it must manage. This is master data – static and unchanging. Then there are data sets and subsets that do change over time – cost, markdowns, sale prices, limited quantities and more. This is dynamic data, subject to fluctuations and changes over an unspecified period. Drawing a demarcation between these two data sets gives us the flexibility to handle the information that could define a product, and it enriches the experience for the consumer as well as for partners.
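The demarcation between master data and dynamic data can be sketched as two separate record types joined at read time. The class and field names below are hypothetical, chosen to mirror the shirt example that follows; they do not reflect any particular MDM product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)       # frozen: master data is static and unchanging
class MasterRecord:
    sku: str
    name: str
    fabric: str

@dataclass                    # mutable: dynamic data fluctuates over time
class DynamicRecord:
    price: float
    on_sale: bool = False
    stock: int = 0

master = {"SHIRT-01": MasterRecord("SHIRT-01", "Oxford shirt", "cotton")}
dynamic = {"SHIRT-01": DynamicRecord(price=49.90, stock=120)}

def product_view(sku):
    """Join the two sets at read time so every channel sees one record."""
    m, d = master[sku], dynamic[sku]
    return {
        "name": m.name,
        "fabric": m.fabric,
        "price": d.price,
        "in_stock": d.stock > 0,
    }

# A markdown touches only the dynamic side; master data never changes.
dynamic["SHIRT-01"].price = 39.90
assert product_view("SHIRT-01")["price"] == 39.90
assert product_view("SHIRT-01")["fabric"] == "cotton"
```

Because the two sets are separate, a price change never risks corrupting the defining attributes of the product, and the static side can be cached or replicated far more aggressively than the dynamic side.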
First and foremost, consistency and accuracy are key. Second is how you build upon that consistency to enhance the experience for the consumer. Take a shirt: go online and you see the same descriptions and options as you would encounter in a brick-and-mortar storefront. Size, colour, style, fabric – all of this static, unchanging data is present and consistent across the database, giving the customer a reliable experience. Meanwhile the more dynamic information – such as price, length of a sale and availability – is updated as well, so the customer sees the most relevant and current data that could influence his or her behaviour.
Now fold in the social aspects of data – how do people feel about this product? How is the competition doing with this product? What are the current trends? Within a purely always-on model this data doesn’t fit; we can’t process, understand or use it. But if it becomes more dynamic we can better understand the “grey areas” of our data. Even better, we can tap into those grey areas and infer relationships and trends that can impact our business decisions. Is the product hot? If so, why is it hot? Is the competition doing better with the same product? Why? Do they get more customer reviews? More points of contact from SEO? This sort of dynamic data empowers us to take action and move toward business goals. More granular data makes it easier for us to scale. The key is to understand up front the outcomes we are trying to achieve. Consistency and accuracy are great, but what do they drive? Are we striving for increased revenue? Better inventory prediction and management? The current master data management space is focused on three concepts: accuracy, consistency and good data quality. What’s missing is the connection with actual business outcomes.
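A small worked example shows how a trend can matter more than a raw total. The review counts below are invented sample data; the point is simply that dynamic, time-stamped data lets us compare momentum, not just magnitude.

```python
# Hypothetical weekly review counts for our product and a competitor's.
ours = [12, 15, 22, 31]
theirs = [40, 41, 39, 42]

def growth(series):
    """Week-over-week growth rate of the most recent week."""
    return (series[-1] - series[-2]) / series[-2]

# The raw totals favour the competitor, but the trend favours us.
assert sum(theirs) > sum(ours)
assert growth(ours) > growth(theirs)
```

A static snapshot would only show the competitor ahead; it takes dynamic data to reveal which product is actually hot.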

This article was published in The Produktkulturmagazin, issue Q3 2016. Picture credit © Peter Dazeley / Getty Images

