To Make the Splunk Acquisition Successful, a New Approach to Storage is Needed


Cisco’s acquisition of Splunk in September generated a lot of commentary, most of which unsurprisingly focused on how the two companies complement each other and what this means for their respective customers. 

As Cisco CEO Chuck Robbins stated when announcing the purchase, “our combined capabilities will drive the next generation of AI-enabled security and observability. From threat detection and response to threat prediction and prevention, we will help make organizations of all sizes more secure and resilient.”

From the product perspective, it is clear that the synergies are substantial. Cisco sells hardware that generates massive amounts of data, and Splunk is the category leader for data-intensive observability and security information and event management (SIEM) products.

Viewed from the industry perspective, Splunk’s acquisition fits a distinct pattern. It is the fifth change of control this year for an observability platform, following Moogsoft, OpsRamp, Sumo Logic, and New Relic.

In all cases, including PE firm Francisco Partners’ takeover of both New Relic and Sumo Logic, the aim is clear: use the data these companies collect to fuel the next big wave of AI-powered operations and security tools.

However, this next generation of AI-enabled tools faces a significant challenge: AI is data-hungry and requires always-hot storage, which is likely to be prohibitively expensive on current platforms.

This fundamental economic challenge confronts not just Cisco, but also HPE (OpsRamp), Dell (Moogsoft), and Francisco Partners as they attempt to make good on this AI-driven vision. Unless architectures change, the high cost of storing and using data in these platforms, and the tradeoffs those costs impose, may well impede the development of AI-enabled products.

AI is Data Hungry

With a few caveats, it is safe to say that more data makes for better AI models and, by extension, better AI-enabled products. Larger training sets translate into greater accuracy, the ability to detect subtle patterns, and, most importantly for the use cases Cisco envisions, better generalization. Generalization describes how well a model makes accurate predictions on data it has never seen before. For security use cases, this can mean the difference between detecting and missing a cyber threat.

But it’s not enough just to have a lot of data at hand. That data needs to be easy to access repeatedly and on an ad hoc basis. That’s because the process of building and training models is experimental and iterative.

In data storage terms, AI use cases require hot data. And when it comes to platforms like Splunk, that’s a problem.

In AI, All Data Must Be Hot

To minimize costs, data on today’s leading SIEM and observability platforms is stored in hot and cold tiers.  

Hot storage is for data that must be accessed frequently and requires fast or low-latency query responses. This could be anything from customer databases to Kubernetes logs. It is data used in the daily operation of an application. 

Cold storage, on the other hand, serves as a low-cost archive. But those savings come at the expense of performance: cold data is slow to access and difficult to query. To be usable, it must be transferred back to the hot storage tier, which can take hours or even days. Cold storage simply won’t work for AI use cases.
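To make that latency gap concrete, here is a minimal sketch using AWS S3’s Glacier storage class as a generic stand-in for a cold tier (Splunk’s own archival mechanics differ, and the bucket and object names below are hypothetical). The point is that retrieving cold data is an asynchronous job measured in hours, while querying hot data is measured in milliseconds.

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 to restore an archived (Glacier-class) object back to a readable state.
# This only *starts* a restore job; standard retrievals typically take hours,
# and the data cannot be queried until the job completes.
s3.restore_object(
    Bucket="example-telemetry-archive",        # hypothetical bucket
    Key="logs/2023/10/01/app-events.json.gz",  # hypothetical object
    RestoreRequest={
        "Days": 7,  # keep the restored copy available for 7 days
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)
```

Any AI workflow that needs to touch that data must wait for the restore to finish before a single query can run.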

Data science teams use data in three phases: exploratory analysis, feature engineering and training, and maintenance of deployed models. Each phase is driven by constant experimentation and refinement, and the process as a whole is highly iterative.

Anything that slows down these iterations, increases costs, or otherwise creates operational friction – and restoring data from cold storage does all three – will negatively impact the quality of AI-enabled products. 

The High Cost of Storage Forces Tradeoffs

It is no surprise to anyone paying attention to the industry that Splunk, like its competitors, is perceived as expensive. It was a top concern of customers before the acquisition, and it remains the number one concern in surveys taken since. It is easy to see why. Though its pricing is somewhat opaque, estimates put the cost of keeping a GB of data hot for a month at $1,800. Compare that with AWS S3, where storage starts at $0.023 per GB per month (essentially cold storage).

Of course, there’s a lot of value added to the data stored in observability platforms, such as the compute and storage resources required to build the indexes that make that data searchable. But understanding where the costs come from doesn’t change the fact that storing data in these platforms is expensive. According to Honeycomb and other sources, companies on average spend an astounding 20 to 30 percent of their overall cloud budget on observability.
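A quick back-of-the-envelope comparison, using the rough estimates cited above and a hypothetical 1 TB of retained data, shows just how wide the gap is:

```python
# Back-of-the-envelope comparison using the rough estimates cited above;
# actual pricing varies by vendor, region, and contract.
HOT_PLATFORM_PER_GB_MONTH = 1800.00  # estimated cost to keep 1 GB hot in an observability platform
S3_STANDARD_PER_GB_MONTH = 0.023     # AWS S3 standard list price per GB-month

retained_gb = 1_000  # hypothetical retention: 1 TB

hot_cost = retained_gb * HOT_PLATFORM_PER_GB_MONTH
object_cost = retained_gb * S3_STANDARD_PER_GB_MONTH

print(f"Observability platform (hot): ${hot_cost:,.0f}/month")
print(f"Raw object storage:           ${object_cost:,.2f}/month")
print(f"Difference:                   ~{hot_cost / object_cost:,.0f}x")
```

Even if the platform estimate is off by an order of magnitude, the gap remains enormous.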

The solution Splunk and others adopted to help manage these massive costs – and the crux of the problem for Cisco’s AI ambitions – is an aggressive retention policy that keeps only thirty to ninety days of data in hot storage. After that, data is either deleted or moved to the cold tier, from which, according to Splunk’s own documentation, it takes 24 hours to restore.
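To see what that retention window means for model builders, consider a hypothetical deployment ingesting 500 GB of telemetry per day (an illustrative figure, not drawn from Splunk’s documentation):

```python
# Illustrative only: how much history stays immediately queryable under a
# typical 90-day hot-retention policy versus a full year of telemetry.
daily_ingest_gb = 500      # hypothetical ingest rate
hot_retention_days = 90    # upper end of the 30-90 day window described above

hot_window_gb = daily_ingest_gb * hot_retention_days  # data an AI team can query today
one_year_gb = daily_ingest_gb * 365                   # data generated over a year

print(f"Hot, queryable data:  {hot_window_gb:,} GB")
print(f"Generated per year:   {one_year_gb:,} GB")
print(f"Share that stays hot: {hot_window_gb / one_year_gb:.0%}")
```

Roughly three quarters of a year’s telemetry is either gone or stranded behind a day-long restore, and that is exactly the data an iterative training workflow needs on demand.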

A New Model is Needed

Observability and SIEM are here to stay. The service that platforms like Splunk provide is valuable enough for companies to dedicate a significant percentage of their budget to it. But the costs of delivering these services today will impede the products they deliver tomorrow unless the fundamental economics of hot data storage are overturned. Hot storage costs need to be much closer to those of raw object storage to serve the AI ambitions of companies like Cisco, Dell, and HPE. Architectures are emerging that decouple storage from compute, allowing the two to scale independently, and that index data so it can be searched quickly. This approach provides solid-state-drive-like query performance at near-object-storage prices.

The biggest hurdle may not be a strictly technical one, though. The incumbent observability and SIEM vendors must recognize that they face a significant economic barrier to executing on their AI-enabled product roadmaps. Once they do, they can proceed to the solution: integrating next-generation data storage technologies optimized for machine-generated data into their underlying infrastructure. Only then can vendors like Cisco, Dell, and HPE transform the economics of big data and deliver on the promise of AI-enabled security and observability.

About the Author

Marty Kagan is the CEO and co-founder of Hydrolix, an Oregon-based maker of cloud data processing software. He was previously founder and CEO of Cedexis (acquired by Citrix) and held executive engineering positions at Akamai Technologies, Fastly, and Jive Software.
