The predominance of cloud computing, the advent of edge computing, the prevalence of the Internet of Things, and the pervasiveness of cognitive computing have had a profound impact on the data sphere—and on data governance.
While each of these developments has enhanced the array of data available to organizations, the speed at which they’re delivered, and the effectiveness of analytics derived from them, they have produced the converse for data governance.
They’ve made this critical data management requisite harder to implement, increased the amount of risk for it to mitigate, and broadened areas of concern like data privacy and regulatory compliance that effective governance redresses. Factor in an unresolved public health calamity reinforcing the direness of distributed, remote work (and data), and it becomes clear that data governance itself must rapidly evolve to meet a challenge unlike any other in data management.
To become as decentralized and heterogeneous as the data landscape is today, data governance requires “central administration, but local enforcement,” summarized Privacera CEO Balaji Ganesan. “This really means that the actual enforcement is done by databases and applications as close to the data as possible, not putting in another layer which becomes a single point of failure.”
To maintain the centralization of policies, processes, and people upon which data governance is predicated, yet administer it within source systems, several fundamental changes in data governance must occur. These include a paradigmatic shift in its architecture, an improvement (not mere reliance) on cloud computing, and a rigorous adherence to standards to underpin data privacy, security, and regulatory compliance.
Success is an enabler for provident employment of all data; failure results in compliance penalties, data breaches, and impaired operations.
Traditional Governance Architecture
Centralization has always been seminal to data governance, which is but the output of policies devised by governance councils, Chief Data Officers, and enterprise stakeholders. One of its core functions is to eradicate silos and the proverbial ‘data free-for-all’ in which users access and manipulate data however they want to suit their needs at the time. Traditional on-premises governance solutions delivered centralized access to data in which policies were enforced in a hub akin to Master Data Management platforms.
However, implementing this approach in the contemporary world of the cloud, edge computing, and the Internet of Things is markedly impractical. Ganesan referenced a ubiquitous retailer that previously employed a governance platform “that sat in the middle between the sources and the point of access. As the underlying systems grew they hit a bottleneck impacting the experience of the end users, which defeats the purpose of serving the end users.”
Modern Governance Architecture
The latency and performance issues of this architecture are readily replaced with a decentralized one in which policies are still centrally managed, yet pushed to the sources for distributed data governance. This nimble architecture is implemented in two ways, the first of which involves plug-ins “or small agent-like software that runs in the database or application,” Ganesan explained. “They take policies from the central engine and locally enforce them.” The other approach relies on Application Programming Interfaces (APIs), which Ganesan characterized as “the most prevalent way to push policies into databases.”
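The push-based model Ganesan describes can be sketched in a few lines. This is a minimal, illustrative simulation of “central administration, local enforcement,” not Privacera’s actual API: the class and method names are hypothetical, and the `publish` call stands in for what would be an API request or agent sync in a real deployment.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    """An access rule authored once in the central engine."""
    resource: str            # e.g. "sales.orders" (illustrative name)
    allowed_roles: frozenset

class LocalEnforcer:
    """Stands in for a plug-in/agent running inside a database or application.
    It caches policies pushed from the center and decides access locally,
    so no central hop sits on the data-access path."""
    def __init__(self, source_name: str):
        self.source_name = source_name
        self._policies: dict[str, Policy] = {}

    def receive(self, policy: Policy) -> None:
        self._policies[policy.resource] = policy

    def allows(self, role: str, resource: str) -> bool:
        policy = self._policies.get(resource)
        return policy is not None and role in policy.allowed_roles

class CentralPolicyEngine:
    """Central administration: one place to author policies,
    which are then pushed out to every registered enforcer."""
    def __init__(self):
        self._enforcers: list[LocalEnforcer] = []

    def register(self, enforcer: LocalEnforcer) -> None:
        self._enforcers.append(enforcer)

    def publish(self, policy: Policy) -> None:
        # In practice this push is an API call or an agent pulling updates.
        for enforcer in self._enforcers:
            enforcer.receive(policy)

# One policy authored centrally, enforced at each source independently.
engine = CentralPolicyEngine()
warehouse = LocalEnforcer("warehouse")
cloud_lake = LocalEnforcer("cloud_lake")
engine.register(warehouse)
engine.register(cloud_lake)
engine.publish(Policy("sales.orders", frozenset({"analyst"})))
```

The key property is that after `publish`, each enforcer answers `allows(...)` from its own cache: the central engine is never on the query path, which is why there is no single point of failure or bottleneck between users and data.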
Ergo, no matter where sources are or how distributed they are, organizations can still govern them as though they were all in a single repository. In fact, this architecture and its governance underpinnings don’t simply rely on the cloud, but make it safer, better governed, and less an extension of enterprise IT systems and more a central component of them.
Not What the Cloud Can Do For You…
The implications of this approach in today’s hybrid and multi-cloud world are nothing short of revolutionary. Firms can leverage as many clouds or on-premises, geographically dispersed locations as they like, yet still deliver timely data access in a well-governed manner. “This is the modern way of doing things and more scalable,” Ganesan reflected. “The older way of doing security in the older world is still being carried forward, but it’s untenable in the modern world where customers expect fast access to data.” Moreover, the timely delivery of centrally authenticated governance, security, and privacy policies to remote sources actually fulfills the cloud’s promise of ubiquitous accessibility in an enterprise-worthy manner.
This distributed architecture’s issuance of data governance from a single pane of glass in the cloud improves this medium, which, for astute organizations, is rapidly becoming synonymous with data management. Surely, some variation of this approach will impact the way all data governance is carried out in the near future, for the simple fact it offers a “decentralized architecture for those that don’t want something in the middle, that don’t want to impact the user experience, but that want governance so the right people access the right data,” Ganesan commented.
A Single Pane of Glass
Today, the quality of governance isn’t solely determined by the various policies created, particularly when using cloud data governance frameworks as third parties for fulfilling this vital data management function. The most credible of these platforms utilize a SaaS model for distributed governance with certification from established standards like the System and Organization Control (SOC) 2 Type 2. Furthermore, they enable organizations to streamline everything from onboarding new datasets to implementing policies in an automated manner that encompasses all sources, regardless of where they are.
Ganesan articulated a use case in which the foregoing retailer uses this method “for a single pane of glass for policies where they can think about building policies in one place and not have to manage it in 10 different places.” This capability goes beyond mere convenience, reducing time to value so a spectrum of users—from data analysts to supply chain personnel—securely obtain the data required to do their jobs. “It used to take them two weeks to onboard new datasets and new users,” Ganesan remarked. “With this system they’ve automated everything to shrink that two-week process of onboarding to a few hours.”
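One common way such onboarding automation works is tag-based policy inheritance: policies are authored once against data classifications, so registering a new dataset is simply a matter of tagging it. The sketch below is an assumption about the general technique, not a description of any vendor’s product; all names (`GovernanceCatalog`, the `pii` tag) are hypothetical.

```python
class GovernanceCatalog:
    """Single pane of glass: policies are keyed by classification tags,
    so onboarding a new dataset means tagging it once -- the existing
    policies take effect immediately, with no per-source rework."""
    def __init__(self):
        self._tag_policies: dict[str, set[str]] = {}  # tag -> roles allowed
        self._datasets: dict[str, set[str]] = {}      # dataset -> its tags

    def define_policy(self, tag: str, allowed_roles: set[str]) -> None:
        """Authored once, centrally, per classification."""
        self._tag_policies[tag] = set(allowed_roles)

    def onboard(self, dataset: str, tags: set[str]) -> None:
        """The formerly weeks-long manual step collapses into one
        registration; inherited policies apply from this moment."""
        self._datasets[dataset] = set(tags)

    def allows(self, role: str, dataset: str) -> bool:
        tags = self._datasets.get(dataset, set())
        return any(role in self._tag_policies.get(tag, set())
                   for tag in tags)

catalog = GovernanceCatalog()
catalog.define_policy("pii", {"privacy_officer"})
# Onboarding a new dataset: no new policy work, just classification.
catalog.onboard("customers_2024", {"pii"})
```

Because access is resolved through tags rather than per-dataset rules, every future dataset carrying the `pii` tag is governed the instant it is registered, which is the mechanism behind shrinking onboarding from weeks to hours.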
Going Forward
The reality is that the realms of data governance, data privacy, data access, and data security are converging as swiftly as new external data sources are emerging. Organizations must consolidate their means of governing these sources into a centralized mechanism while enforcing it at the sources themselves to avoid the latency, performance issues, and access problems that otherwise arise.
Contemporary data governance architecture with a cloud emphasis provisions this necessity at the pace of business while reinforcing the capacity for regulatory compliance, security, and controlled access. Enterprises “can no longer be very open where anybody can access any data,” Ganesan cautioned. “We don’t live in that world anymore. We live in a world where you have to make sure that data scientists and analysts or any user of data only accesses the data that they’re supposed to.”
Jelani Harper is an editorial consultant servicing the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance and analytics.