How to Keep from Drowning in an Ocean of Data

To keep from drowning in an ocean of data, companies now have to use machine learning and predictive modeling.

Making Data Actionable

Imagine a world where you can turn 1.4 trillion data points into hundreds or thousands of ready, actionable attributes, allowing you to engage your customers in ways that were previously impossible.

Most modern companies collect billions of data points from customers every year. Oftentimes, this data sits unused, coalescing into vast data lakes and, in some cases, data oceans. In the modern enterprise, we easily capture application data, customer data, behavioral data, analytics data, and more. The challenge today lies in turning these deep reservoirs of data into actionable insights. It’s important to stay ahead of the data’s growth or continue drowning in an ocean of data.

This raises the question: why is it so difficult to operationalize our data? The two biggest factors preventing us from mining valuable insights are the disparate locations of our data and the difficulty of accurate, dependable attribution.

Typically, data is spread across various platforms: traditional relational databases, non-relational stores, flat files, storage buckets, third parties, and more. Conventional methodology dictates building vast warehouses that integrate the data into fully normalized models. Few companies have the resources or time to do so, because the data landscape is constantly evolving and new data sources are added at a breakneck pace.

Even with the proper resources to attempt normalizing the data, there is still the challenge of attributing the data to the right customer. To make decisions based on the data, you must have confidence that you have pinned it to the right person.

Analyzing the Data

At Marlette Funding, we have over 1.4 trillion data points, a scale that forced us to rethink how we engage with and leverage our data. We have invested in developing systems and practices that resolve these two blockers to operationalizing our data.

By developing a proprietary algorithm, we can uniquely identify our customers with a handful of data attributes. We assume a customer is unknown until we have enough data to match them to an existing customer or are certain they are new. This happens in real time as the customer engages with our products and services. This unique pin allows us to confidently treat the customer in a manner informed by all past interactions.
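
The matching algorithm itself is proprietary, but the general pattern can be sketched. In the example below, the attribute names and the agreement threshold are assumptions for illustration; a real system would weigh many more signals probabilistically.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: these fields and this threshold are assumptions,
# not Marlette's proprietary algorithm.
MATCH_THRESHOLD = 3  # attributes that must agree before we pin an identity

@dataclass
class CustomerRecord:
    customer_id: str
    email: Optional[str] = None
    phone: Optional[str] = None
    postal_code: Optional[str] = None
    last_name: Optional[str] = None

def resolve_identity(incoming: dict, known: list[CustomerRecord]) -> Optional[str]:
    """Return a customer_id once enough attributes agree; None means unknown."""
    best_id, best_score = None, 0
    for record in known:
        score = sum(
            1
            for field in ("email", "phone", "postal_code", "last_name")
            if incoming.get(field) and incoming[field] == getattr(record, field)
        )
        if score > best_score:
            best_id, best_score = record.customer_id, score
    # Stay "unknown" until the evidence clears the threshold.
    return best_id if best_score >= MATCH_THRESHOLD else None
```

Each new interaction re-runs the match, so a customer who starts out unknown is pinned the moment enough attributes line up.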

With the identity issue resolved, we can begin turning our ocean of data into actionable attributes. To accomplish this, we built a data streaming platform that calculates business-defined data attributes in real time as the underlying data changes. Any customer- or system-initiated event can trigger a recalculation of a defined attribute, ensuring the most accurate version is always available. This lets us make real-time decisions based on important attributes with complete confidence in those decisions.
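
A minimal sketch of that trigger-and-recalculate pattern might look like the following. The event names, the decorator-style registration, and the in-memory store are all assumptions for illustration, not our platform’s actual API; a production system would sit on a durable event stream and a real data store.

```python
from collections import defaultdict
from typing import Callable

# In-memory stand-ins for the platform's attribute store and trigger registry.
attribute_store: dict[str, dict] = defaultdict(dict)
registry: dict[str, list] = defaultdict(list)  # event type -> [(name, calc), ...]

def attribute(name: str, triggers: list[str]):
    """Register an attribute calculation to re-run when a triggering event arrives."""
    def wrap(calc: Callable):
        for event_type in triggers:
            registry[event_type].append((name, calc))
        return calc
    return wrap

@attribute("login_count", triggers=["user_login"])
def login_count(customer_id: str, event: dict) -> int:
    # Incremental update here; a real platform might recompute from raw data.
    return attribute_store[customer_id].get("login_count", 0) + 1

def handle_event(event: dict) -> None:
    """Recalculate every attribute registered against this event type."""
    for name, calc in registry.get(event["type"], []):
        attribute_store[event["customer_id"]][name] = calc(event["customer_id"], event)

handle_event({"type": "user_login", "customer_id": "c-123"})
print(attribute_store["c-123"])  # {'login_count': 1}
```

Because attributes are registered against event types, coding and activating a new one amounts to deploying one more small calculation, which is what makes the quick turnaround described below possible.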

We intentionally do not expose all our data in this way. We use advanced machine learning and human experience to find meaningful attributes, then code those attributes and deploy them into our Data Streaming Platform. We can develop a new attribute in a few hours and activate it for both new interactions and existing historical ones. This is only possible because we leverage the power and scale of the public cloud.

By building data services and practices that focus on surfacing meaningful attributes, we can innovate constantly, finding better ways to serve our customers. Combining analytical expertise, our proprietary services, and the public cloud has allowed us to do things that were not possible only five years ago.

Keeping One Step Ahead

During a meeting at Marlette, a team member posed a hypothesis: “I wonder if there is a correlation between a customer checking their current balance and their likelihood of paying off their loan early.” This data point is captured in our dataset but, at the time, had not yet been activated in our Data Streaming Platform. Within a few hours of work, one of our data engineers was able to build and activate this attribute, back-scoring all known customer interactions. Our analysts now had access to an attribute that tracked how often a customer checked their balance. A quick analysis showed there was indeed a correlation between customers checking their balance and paying off their loans early. You may be asking yourself, “So what?”
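
As a rough sketch of the two steps involved, assuming hypothetical event and loan record fields and a simple Pearson correlation (statistics.correlation, Python 3.10+): first replay history to back-score the attribute, then test the hypothesis.

```python
from statistics import correlation  # Python 3.10+; Pearson's r

def back_score(events: list[dict]) -> dict[str, int]:
    """Replay historical events to compute balance-check counts per customer."""
    counts: dict[str, int] = {}
    for event in events:
        if event["type"] == "balance_viewed":
            counts[event["customer_id"]] = counts.get(event["customer_id"], 0) + 1
    return counts

def early_payoff_correlation(events: list[dict], loans: list[dict]) -> float:
    """Correlate balance-check counts with early payoff (1 = paid early, 0 = not)."""
    counts = back_score(events)
    checks = [counts.get(loan["customer_id"], 0) for loan in loans]
    paid_early = [1 if loan["paid_early"] else 0 for loan in loans]
    return correlation(checks, paid_early)
```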

The real power is in having this attribute accurately tracked per customer, with confidence that it will always be updated as the underlying data points change. We can create personalized, event-driven engagement with customers: we could offer a proactive discount via email, or open a chat session asking whether they would consider refinancing their current loan. This same pattern of curiosity, discovery, activation, and engagement can and will happen hundreds of times per year, consistently producing wins for both our customers and our business.
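
A minimal, purely illustrative engagement rule layered on an attribute store like the one sketched earlier might look like this; the threshold and action string are assumptions, not our production logic.

```python
# Illustrative assumption: frequent balance checks may signal refinance interest.
REFI_CHECK_THRESHOLD = 5

def next_engagement(customer_id: str, attributes: dict) -> str | None:
    """Return a personalized action when an attribute crosses its threshold."""
    if attributes.get("balance_check_count", 0) >= REFI_CHECK_THRESHOLD:
        return f"offer_refinance_email:{customer_id}"
    return None
```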

Brian Conneen
Brian Conneen has had a journeyman’s career in Software Development and Information Technology spanning multiple decades, two centuries, and at least one millennium. He brings a critical and enthusiastic approach to solving problems at the code, system, and organizational levels. He has worked in varied roles within IT: customer-facing troubleshooter, critical systems architect, and dynamic team leader. His most recent journey includes successfully launching and then shepherding Best Egg, as CIO/CTO, from a business plan to over 11 billion dollars in loan originations.