In the nine months since ChatGPT’s debut dazzled the public and news media, the technology has yet to establish much of a beachhead in business. Chatbots aren’t new to business, and significant new business applications built on the back of the impressive chatbot are conspicuously missing.
Against this backdrop, OpenAI, the organization that built ChatGPT, a generative AI that depends on large language models to create human-sounding conversation, released an enterprise-grade version of ChatGPT on August 28. It was the company’s biggest release since the lightning-strike introduction of ChatGPT, and it clearly aims to make the technology more appealing to business. But what’s become clear to many working in AI is that one of the more promising ways to turn generative AI models into the juggernaut business tools many predicted they would become is to pair them with their more time-tested cousin: predictive AI.
Predictive AI Has Produced Value
It makes perfect sense. Predictive AI, also known as traditional AI, has for years been the go-to AI for business. It helps Uber offer real-time dynamic pricing, assists Netflix with recommending movies to subscribers, and enables Progressive to improve risk analysis and offer lower insurance rates to safe drivers.
Both generative AI and predictive AI use machine learning, but they solve very different classes of problems. Predictive AI relies on statistical algorithms to analyze data, identify patterns and make predictions about future events; it doesn’t create anything it hasn’t been programmed to create. Generative AI, in contrast, finds patterns in its training data and learns to reproduce their structure or style across a wide variety of content, including video, text and spoken language. In short, generative AI is trained on existing data and generates new content based on what it has learned.
Boosting Chatbot Performance
During the past decade, companies spent billions on predictive AI research, building engineering teams, and refining tools. All the infrastructure, services and know-how that generative AI needs to make a splash in the business world already exists within predictive AI’s ecosystem. The number of innovative apps that might be created by uniting the best capabilities of these two forms of AI seems endless.
For starters, predictive AI tools and techniques could help raise the quality of the prompts that direct language model applications, such as ChatGPT. Predictive AI enables the integration of real-time data into consumer-facing and personalized applications. An app could receive prompts from human users in addition to prompts derived from real-time data sources, leading to more informed and illuminating responses.
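As a minimal sketch of the idea, the snippet below merges machine-derived, real-time signals into the prompt alongside the human’s question before it reaches an LLM. The template wording and the signal names are hypothetical, not from any particular product:

```python
# Sketch: enriching a user prompt with real-time data before sending it
# to an LLM. The context fields below are invented for illustration and
# would normally come from a predictive pipeline or feature store.

def build_prompt(user_question: str, realtime_context: dict) -> str:
    """Combine a human prompt with machine-derived, real-time context."""
    context_lines = [
        f"- {name}: {value}" for name, value in sorted(realtime_context.items())
    ]
    return (
        "Answer the customer's question using the live data below.\n"
        "Live data:\n" + "\n".join(context_lines) + "\n\n"
        f"Question: {user_question}"
    )

# Example: context a predictive system might supply at request time.
prompt = build_prompt(
    "Will my delivery arrive on time?",
    {"warehouse_backlog": 12, "carrier_delay_minutes": 35},
)
print(prompt)
```

The LLM now answers with current operational facts in view rather than from its static training data alone.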
Consider the possibility of training a chatbot to gauge and react to the changes in customer sentiment. Merchants have learned that understanding a customer’s satisfaction level can help them influence buying decisions. Say, for example, that a retail customer grows increasingly frustrated during an exchange with a chatbot. The bot could alert a human support agent and then the agent might save the customer relationship or sale. If the customer’s mood brightens while interacting with a bot, the company and the bot could learn more about pleasing customers and adopt more impactful policies.
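The escalation step described above can be sketched with a simple rule. The threshold, window size, and hard-coded sentiment scores here are assumptions for illustration; in practice the scores would come from a predictive sentiment model evaluating each customer message:

```python
# Sketch: escalating a chat session to a human agent when customer
# sentiment deteriorates. Scores range from -1.0 (angry) to 1.0 (happy)
# and would be produced by a predictive sentiment model.

ESCALATION_THRESHOLD = -0.5  # assumed cutoff; tune per business

def should_escalate(sentiment_history: list, window: int = 3) -> bool:
    """Hand off to a human if recent average sentiment drops too low."""
    recent = sentiment_history[-window:]
    return sum(recent) / len(recent) < ESCALATION_THRESHOLD

# A customer growing increasingly frustrated over a conversation:
scores = [0.4, 0.1, -0.3, -0.7, -0.9]
print(should_escalate(scores))  # recent average falls below the cutoff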
Personalized Apps Are the Future and Predictive AI Knows the Way
For generative AI to make further strides into real-time, customer-facing applications, it will need to make use of new and established tools and practices — LangChain and feature stores among them.
LangChain, one of the year’s most popular development frameworks, simplifies the building of applications powered by large language models (LLMs), generative AI models designed to produce human-like language. LangChain connects those models to key data sources, including feature stores, and enables the creation of templates for LLM prompts. At request time, LangChain populates the templates with external data, retrieving the latest values.
In machine learning, a feature is a measurable piece of data or property of an event or phenomenon. This data may include names, ages, the number of purchases, or visits to an online store. A simple way to picture a feature is as a column in a dataset.
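The column analogy is easy to see in code. In this sketch (field names are invented), pulling one named value out of every row yields the feature as a column of data:

```python
# Sketch: features as named columns over rows of raw events.
# The field names and values are illustrative only.

rows = [
    {"user": "a", "age": 31, "purchases": 4},
    {"user": "b", "age": 45, "purchases": 9},
]

# Extracting one "column" yields the values of a single feature:
purchases_feature = [row["purchases"] for row in rows]
print(purchases_feature)  # [4, 9]
```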
Feature Stores Help LLMs Make Better Decisions
Feature stores centralize the storing and organizing of features. These pre-engineered features either help train models or are used to make real-time predictions. Data-science teams re-use features to save themselves the hassle and cost involved with engineering features from scratch.
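A toy version of that centralization might look like the class below: features are written once and re-used by key at prediction time. This is a deliberately minimal sketch; production feature stores add versioning, freshness guarantees, and offline/online synchronization that this example omits:

```python
# A minimal in-memory feature store sketch. Real feature stores layer on
# versioning, freshness tracking, and offline/online sync.

class FeatureStore:
    def __init__(self):
        self._features = {}  # (entity_id, feature_name) -> value

    def put(self, entity_id: str, name: str, value):
        """Register a pre-engineered feature value for an entity."""
        self._features[(entity_id, name)] = value

    def get(self, entity_id: str, names: list) -> dict:
        """Fetch a feature vector for one entity, e.g. to enrich a prompt."""
        return {n: self._features.get((entity_id, n)) for n in names}

store = FeatureStore()
store.put("user_42", "avg_order_value", 56.10)
store.put("user_42", "days_since_last_visit", 3)

vector = store.get("user_42", ["avg_order_value", "days_since_last_visit"])
print(vector)
```

Because the features are computed once and shared, every team and every application reads the same, consistent values.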
The importance of feeding real-time data into AI models can’t be overstated. Just as with humans, LLMs make better decisions when they have access to the most up-to-date and accurate data.
When estimating the time required to drive from Los Angeles to San Francisco, a person or an LLM is more likely to be accurate if they can consider up-to-the-minute traffic patterns and weather forecasts. An LLM connected to a feature store can gather all of this data and make predictions far more rapidly than a human ever could.
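The drive-time example can be made concrete with a toy calculation. The base duration and the condition factors below are invented numbers standing in for the outputs of real predictive models fed by live traffic and weather features:

```python
# Sketch: a toy drive-time estimate that scales a base duration by
# live condition factors (1.0 = normal). All numbers are assumptions,
# stand-ins for what traffic and weather models would supply.

BASE_HOURS = 5.5  # assumed LA -> SF drive time with no delays

def estimate_hours(traffic_factor: float, weather_factor: float) -> float:
    """Adjust the base duration for current traffic and weather."""
    return round(BASE_HOURS * traffic_factor * weather_factor, 2)

print(estimate_hours(1.0, 1.0))  # clear roads, clear skies
print(estimate_hours(1.3, 1.1))  # heavy traffic plus light rain
```

An LLM given these freshly computed factors as prompt context can answer "how long will the drive take right now?" instead of reciting an average from its training data.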
High-Quality Data Makes All AI Possible
Certainly, AI is still in its infancy as a business application. It’s important to remember that no matter what transpires between generative AI and predictive AI, there is no road forward without making available to both models the highest-quality data.
Data makes all forms of AI possible.
It’s also important to note that the many media stories that followed the introduction of ChatGPT, the ones that featured headlines with some version of “Predictive AI vs. Generative AI,” missed the mark. It’s not a competition or zero-sum game.
On the contrary: these are different types of machine learning, but fused together in imaginative ways, they have the potential to create exciting and innovative applications.
About the Author
Gaetan Castelein is the VP of Marketing at Tecton, the leading machine learning feature platform company. Prior to this, he served as the VP of Product Marketing at Confluent, where he launched the Confluent Cloud SaaS product. He also served as the Head of Product Marketing at Cohesity, where he helped customers take back control of all their secondary data with Cohesity’s distributed data platform. Before Cohesity, he spent 8 years at VMware running Product Management and Product Marketing for the company’s Software Defined Storage products. He has an MBA from the Stanford University Graduate School of Business and an MSEE from Université catholique de Louvain.