Cloud computing has taken hold in most global enterprises as a solution to issues that have long plagued over-taxed IT departments. Lured by the attractive cost and performance benefits that cloud vendor services and products offer, enterprises are looking to invest to both compete and differentiate, and vendors have responded to that trend: Gartner predicts that the number of cloud managed service providers will triple by 2020. Yet as forward-thinking executives embark on this new frontier with high hopes, there are concerns.
Lock-In: How Bad Is It?
Few statements strike fear into an IT leader’s heart like this one, because in most cases you’re not aware you’re locked in until you encounter a problem big enough to force you to reconsider your original decision. It’s at that point that one of your architects drops the bomb: changing that code, system, or vendor is going to be a year-long, multi-million-dollar project.
The emerging risks that executives are weighing are real: although extreme financial and reputational damage has yet to occur at the hands of a cloud computing failure, the possibility worries them. The usual suspects are unauthorized access to sensitive or restricted information and disruption from the provider itself. But in 2019, the most formidable risk is lock-in to your public cloud choice. Many things create lock-in, and all of them keep you from staying future-proof and flexible enough to migrate, seize opportunities, or sidestep problems.
Potentially the biggest cause is the code you write and maintain to power your analytics pipeline. Hardwiring your data consumers to your back-end data stores and writing bespoke ETL means that both the cognitive load of moving the data and the engineering cost of rewriting the ETL pipelines are extremely high.
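To make that concrete, here is a minimal, hypothetical sketch of what a hardwired pipeline often looks like; the cluster host, credentials, table, and bucket names are placeholders, not anything from a real firm. The connection, the SQL dialect, and the credential model are all tied to one vendor, which is exactly the rewrite cost described above.

```python
# Hypothetical sketch: a bespoke ETL job hardwired to one cloud warehouse.
# Host, credentials, table, and bucket below are placeholders.
import psycopg2  # Postgres-protocol driver commonly used for Redshift


def nightly_revenue_extract():
    # The connection itself is vendor-specific: this only talks to Redshift.
    conn = psycopg2.connect(
        host="analytics-cluster.example.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="analytics", user="etl_user", password="***")
    try:
        with conn.cursor() as cur:
            # So is the SQL: UNLOAD-to-S3 is Redshift dialect, and the IAM
            # role ties the job to one AWS account. Moving to another
            # warehouse means rewriting the statement, the staging
            # location, and the credential model.
            cur.execute("""
                UNLOAD ('SELECT order_id, amount, settled_at FROM revenue')
                TO 's3://finance-staging/revenue/'
                IAM_ROLE 'arn:aws:iam::123456789012:role/etl-unload'
                FORMAT PARQUET
            """)
        conn.commit()
    finally:
        conn.close()
```

Multiply this by hundreds of jobs and consumers that expect exactly this staging layout, and the scale of a migration becomes clear.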
The ability to work seamlessly across multiple clouds, an essential part of any public cloud strategy, means you can’t afford to be locked in. With lock-in, your ability to optimize across the dimensions of security, performance, and cost is not just limited; it is removed.
What Exactly Is at Stake?
JPMorgan Chase spent 16 percent of its budget on technology last year, more than $10B, with $3B of that allotted to “new initiatives,” which is where the public cloud lives. The company has more than 40,000 technologists, roughly 18,000 of them developers creating intellectual property. Jamie Dimon once said that Silicon Valley is coming to eat Wall Street’s lunch, and that he needs to invest, innovate, and frankly out-hire and out-spend Silicon Valley to compete. The biggest, best-run banks think of themselves as information technology companies.
“JPM really is like a large tech company in some respects, basically, if you name a process the banks do, JPM is likely trying to automate that process and also grow market share.” —Brian Kleinhanzl, Keefe, Bruyette & Woods.
Future-Proofing the Enterprise: Data Warehouse Virtualization
As Wall Street grows more comfortable with the public cloud, many firms are considering how to split work across the three main providers. Most banks would prefer to be cloud agnostic, maintaining the ability to move seamlessly between cloud environments, but doing so is no easy task. The biggest hurdle for most firms is the applications that require significant amounts of data, a common occurrence in finance.
In such cases, firms are forced to pick one provider or else face steep costs maintaining data spread across multiple cloud environments. Moreover, most run a hybrid of on-premise and cloud environments (spoiler alert: they do; they all do). The bottom line: Wall Street is finally willing to go to Amazon’s, Google’s, or Microsoft’s cloud, but nobody can agree on the best way to do it. And if, as an IT leader, you pick the wrong provider, you’re fired.
But there’s hope. At the heart of enterprise database modernization is data warehouse virtualization. Without it, banks can’t manage large data sets across multiple cloud platforms or leverage the benefits of automated data engineering. With data warehouse virtualization, there is no reason to pick a winner; pick all three. Done right, virtualization lets you interface directly with Azure SQL Data Warehouse, Google BigQuery, Amazon Redshift, Snowflake, and your on-premise Teradata, Oracle, and DB2, and use machine-learned optimization to manage the complexity of figuring out what’s working and where to save on processing costs. Data and queries will naturally migrate to the right platform and be served from there.
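As a rough illustration of the routing idea behind virtualization, consider the hypothetical sketch below (not AtScale’s implementation; the class names and backends are invented): a single logical interface places each query on the cheapest back end that holds the data, with a simple moving-average cost model standing in for machine-learned optimization.

```python
# Illustrative sketch only: a toy query router over several warehouse
# back ends. The names are hypothetical and the "learning" is just a
# moving average; a real optimizer would also weigh latency,
# concurrency, security policy, and data egress.
from dataclasses import dataclass


@dataclass
class Backend:
    name: str          # e.g. "redshift", "bigquery", "teradata"
    datasets: set      # datasets currently materialized on this back end
    avg_cost: float = 1.0  # learned cost per query (moving average)

    def observe(self, cost: float, alpha: float = 0.2) -> None:
        # Update the learned cost estimate after each query runs.
        self.avg_cost = (1 - alpha) * self.avg_cost + alpha * cost


class VirtualWarehouse:
    """One logical interface; physical placement is an optimizer decision."""

    def __init__(self, backends):
        self.backends = backends

    def route(self, dataset: str) -> Backend:
        # Prefer the cheapest back end that already holds the dataset.
        candidates = [b for b in self.backends if dataset in b.datasets]
        if not candidates:
            raise LookupError(f"dataset {dataset!r} is not materialized anywhere")
        return min(candidates, key=lambda b: b.avg_cost)


warehouse = VirtualWarehouse([
    Backend("redshift", {"revenue", "trades"}, avg_cost=0.8),
    Backend("bigquery", {"revenue", "risk"}, avg_cost=0.5),
    Backend("teradata", {"trades"}, avg_cost=1.4),
])
print(warehouse.route("revenue").name)  # -> "bigquery", the cheaper copy
```

Because consumers only ever talk to the virtual layer, data can be rebalanced across providers without rewriting the queries that depend on it.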
The Data Warehouse Virtualization Journey for Wall Street Leads to Performance, Scale, Concurrency, Security, and Cost Savings
Data warehouse virtualization removes the need to choose just one cloud provider and risk vendor lock-in. Autonomously and seamlessly managing three cloud environments alongside on-premise platforms becomes a realistic goal. Firms position themselves for future profitability, viability, and competitive advantage when they have the flexibility to move work between multiple cloud platforms. The advantage of eliminating that risk while optimizing for cost, without the expensive and often incorrect analysis and pricey software projects required to actually move the data, can’t be ignored. A common, cloud-built virtual data warehouse platform that is not tied to any specific database is the answer.
About the Author
Matthew Baird is Co-Founder and Chief Technology Officer of AtScale. He holds a double major in Statistics and Computer Science from Queen’s University and has built software and managed teams at companies such as PeopleSoft, Siebel Systems, and Oracle. He loves the open source movement and building scalable, innovative enterprise software. Prior to AtScale, Matt was Vice President of Engineering at Ticketfly, which was acquired by Pandora for $450M, and CTO at Inflection, an enterprise trust and safety software platform, where his team developed Archives, a leading genealogy site acquired by Ancestry.com in 2012.