Perhaps the single greatest force shaping—if not reshaping—the contemporary data sphere is the pervasive presence of foundation models. Manifest most acutely in deployments of generative Artificial Intelligence, these models are impacting everything from external customer interactions to internal employee interfaces with data systems.
Consequently, new paradigms for storing and retrieving data, applying and generating value from foundation models, and emphasizing mainstays of data-driven processes (such as data security and data privacy) will solidify in 2024. As advanced machine learning deployments continue to color and inform our lives, foundational elements of protecting data and ensuring regulatory compliance will keep pace, so that the growth of the one is tempered by, and governed by, the other.
Natural Language Generation from intelligent bots is only the beginning. An entire ecosystem of imperatives is arising to support these AI capabilities and lead them into 2025. According to Talentica Software Principal Data Scientist Abhishek Gupta, these developments will “provide a more comprehensive and immersive grasp of our world, deepening the way we interact with and perceive information through AI.”
Multimodal Generative Models
Foundation models are so adept at generating text that it's easy to forget that, by definition, they can be adapted to a wide range of other tasks. As such, organizations will begin to fully avail themselves of these capabilities over the next several months, boosting their ROI from generative AI investments.
“GPT-4 can seamlessly integrate image and text, and this trajectory will soon expand into additional modes, including voice, video, music and other…inputs like sensor data,” Gupta commented. Savvy organizations will begin exploring and piloting use cases for multimodal generative AI, which is primed to positively impact aspects of marketing, digital assets, customer service, and more.
Vector Databases Triumph
Due in no small part to enterprises' embrace of foundation models for generative AI applications involving Retrieval Augmented Generation (RAG) and semantic search, vector databases are projected to see both their value and their adoption rates redouble. These similarity search engines may be best conceived of as AI retrieval systems: an optimal means of storing the wealth of unstructured data organizations have, and of querying those data with language models.
“Vector databases have swiftly gained prominence due to their prowess in handling high-dimensionality data and facilitating complex similarity searches,” observed Ratnesh Singh Parihar, Principal Architect at Talentica Software. Once organizations determine how to circumvent potential cost inhibitors of maintaining vector database indices in memory, these repositories will enhance any number of use cases, including “recommendation systems, image recognition, Natural Language Processing, financial forecasting, or other AI-driven ventures,” Parihar noted.
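To make the mechanics concrete, here is a minimal, self-contained sketch of the similarity search these engines perform: documents are converted into unit-length vectors, and a query is ranked against them by cosine similarity. The embed() function is a hypothetical stand-in for a real embedding model (its vectors carry no semantic signal), and the in-memory array stands in for a production vector database index.

```python
# Minimal sketch of vector similarity search. embed() is a toy stand-in for an
# embedding model, and the in-memory array stands in for a vector database index.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding: a pseudo-random unit vector seeded by the text.
    A real system would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.standard_normal(dim)
    return vec / np.linalg.norm(vec)

# Unstructured "documents" an organization might store as vectors.
corpus = [
    "Quarterly revenue report for the retail division",
    "Customer support transcript about a billing error",
    "Product manual for the recommendation engine",
]
index = np.stack([embed(doc) for doc in corpus])  # rows are document vectors

query = "Which document discusses billing issues?"
scores = index @ embed(query)       # dot product of unit vectors = cosine similarity
ranked = np.argsort(scores)[::-1]   # most similar documents first

for i in ranked:
    print(f"{scores[i]:.3f}  {corpus[i]}")
```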
Generative AI Prioritizes Personalization
The reams of unstructured data (previously considered dark data) regularly accessed by generative AI models in RAG implementations and vector similarity search are heightening ever-present concerns about data security and regulatory compliance.
According to Gupta, another dominant trend in 2024 will entail organizations seeing “generative AI zero in on the development of domain-specific chatbots, while ensuring safeguards for data privacy at the organizational level.” RAG can assist in this endeavor by ensuring chatbots powered by generative AI models access data that’s been vetted for, and includes controls for, data privacy, regulatory compliance, and data security.
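As a rough sketch of how RAG can route a domain-specific chatbot through vetted data only, consider the outline below. The retrieve_top_k() and generate_answer() helpers are hypothetical placeholders for a vector database query and a foundation-model call; the governance filter in answer_with_rag() is the point the passage describes.

```python
# Minimal sketch of Retrieval Augmented Generation over a vetted corpus.
# retrieve_top_k() and generate_answer() are hypothetical placeholders for a
# vector database query and a generative-model call.
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    approved_for_chatbot: bool  # set by data governance / privacy review
    contains_pii: bool

def retrieve_top_k(query: str, corpus: list[Document], k: int = 3) -> list[Document]:
    """Placeholder retrieval: a real system would rank by vector similarity."""
    return corpus[:k]

def generate_answer(prompt: str) -> str:
    """Placeholder for a call to a generative model."""
    return f"[model response grounded in: {prompt[:80]}...]"

def answer_with_rag(query: str, corpus: list[Document]) -> str:
    # Only surface documents that governance has vetted for chatbot use
    # and that carry no personally identifiable information.
    candidates = retrieve_top_k(query, corpus)
    vetted = [d for d in candidates if d.approved_for_chatbot and not d.contains_pii]
    context = "\n".join(d.text for d in vetted)
    return generate_answer(f"Answer using only this context:\n{context}\n\nQuestion: {query}")

corpus = [
    Document("Public FAQ on return policies.", approved_for_chatbot=True, contains_pii=False),
    Document("Internal HR record with employee details.", approved_for_chatbot=False, contains_pii=True),
]
print(answer_with_rag("What is the return policy?", corpus))
```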
Confidential Computing Adoption Increases
Depending on how it’s implemented, the confidential computing construct can greatly aid the data protection that the personalization of generative AI models demands. This computing model involves sequestering confidential data in a secure CPU enclave for processing in the cloud. Those data, and the methods used to process them, can be accessed only by code that’s authorized for the enclave.
“In the coming year, we can expect an increase in the integration of hardware-based confidential computing as cloud solutions strategically employ it to entice applications with heightened privacy and security demands,” remarked Pankaj Mendki, Head of Emerging Technology at Talentica Software. Mendki’s point is reinforced by the reality that nothing else, save for the authorized programming code, will even know what’s in the aforementioned enclave. “This [confidential computing] trend will be especially prevalent in specialized domains such as machine learning, financial services, and genomics,” Mendki added.
A New Day
The changes wrought by foundation models encompass, yet ultimately extend beyond, the data landscape in which they’re so impactful. In reality, they’re affecting both professional and private spheres of life in ways small and large. Multimodal deployments, vector databases, personalization, and confidential computing will be some of the many ways in which these AI applications are facilitated for the greater good of the enterprise and, perhaps, of society at large.
About the Author
Jelani Harper is an editorial consultant serving the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance, and analytics.