Generative AI Report – 1/11/2024


Welcome to the Generative AI Report round-up feature here on insideAI News with a special focus on all the new applications and integrations tied to generative AI technologies. We’ve been receiving so many cool news items relating to applications and deployments centered on large language models (LLMs) that we thought it would be a timely service for readers to start a new channel along these lines. An LLM fine-tuned on proprietary data is, in effect, an AI application, and that is what these innovative companies are creating. The field of AI is accelerating at such a fast rate that we want to help our loyal global audience keep pace.

Agiloft Launches Generative AI Capability to Streamline Contract Negotiation, Review, and Redlining

Agiloft, the trusted global leader in data-first contract lifecycle management (CLM), announced a new generative AI (GenAI) capability that streamlines negotiations and empowers users to significantly increase the speed with which they’re able to redline, negotiate, and ultimately agree on contract terms.

Users can leverage language from their clause library to power Agiloft’s new GenAI capabilities or create new content with GenAI that matches their needed terms and closely conforms to the text of the contract under negotiation. Agiloft is working directly with early adopters to build and improve this and other features that enable connected, intelligent, and autonomous contracting processes so companies can unlock the value of contract data and accelerate business.

“We are thrilled today to announce new generative AI capabilities for 2024 that build on our existing AI Platform,” says Andy Wishart, Chief Product Officer at Agiloft. “With this release we are addressing the all-too-common problem legal and contracting professionals see when negotiating contracts: excessive and endless redlining between parties. This new capability employs generative AI to understand approved clauses, review third-party contract language for areas of misalignment, and then compose redlines that marry third-party contract language with legal’s preferred phrasing. This greatly reduces the back-and-forth negotiations between parties, providing an express route to contract execution.”

Kinetica Launches Quick Start for SQL-GPT

Kinetica, the real-time database for analytics and generative AI, announced the availability of a Quick Start for deploying natural language to SQL on enterprise data. This Quick Start is for organizations that want to experience ad-hoc data analysis on real-time, structured data using an LLM that accurately and securely converts natural language to SQL and returns quick, conversational answers. This offering makes it fast and easy to load structured data, optimize the SQL-GPT Large Language Model (LLM), and begin asking questions of the data using natural language. This announcement follows a series of GenAI innovations that began last May with Kinetica becoming the first analytic database to incorporate natural language into SQL.

The Kinetica database converts natural language queries to SQL, and returns answers within seconds, even for complex and unknown questions. Further, Kinetica converges multiple modes of analytics such as time series, spatial, graph, and machine learning, broadening the types of questions that can be answered. What makes it possible for Kinetica to deliver on conversational query is its use of native vectorization, which leverages NVIDIA GPUs and modern CPUs. NVIDIA GPUs have powered many of the major AI breakthroughs of this century and are now extending into data management and ad-hoc analytics. In a vectorized query engine, data is stored in fixed-size blocks called vectors, and query operations are performed on these vectors in parallel, rather than on individual data elements. This allows the query engine to process multiple data elements simultaneously, resulting in radically faster query execution on a smaller compute footprint.
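The block-at-a-time execution model described above can be sketched in a few lines of plain Python. This is an illustrative toy, not Kinetica's engine: a real vectorized engine hands each block to SIMD or GPU kernels, but the control flow is the same, operating on fixed-size vectors of column values rather than one row at a time.

```python
# Minimal sketch of block-at-a-time ("vectorized") query evaluation.
# A real engine dispatches each block to a SIMD/GPU kernel; here the
# per-block operation is plain Python, but the structure is the same.

BLOCK_SIZE = 4  # fixed-size blocks ("vectors") of column values

def to_blocks(column):
    """Split a column into fixed-size blocks."""
    return [column[i:i + BLOCK_SIZE] for i in range(0, len(column), BLOCK_SIZE)]

def filter_gt(column, threshold):
    """Evaluate `value > threshold` one block at a time."""
    out = []
    for block in to_blocks(column):
        # One operation per block, not one per row; this whole block
        # would be processed in parallel on vectorized hardware.
        out.extend(v for v in block if v > threshold)
    return out

fares = [12.5, 3.0, 48.0, 7.25, 19.9, 55.1, 2.4, 30.0]
print(filter_gt(fares, 20.0))  # → [48.0, 55.1, 30.0]
```

The payoff in a real engine comes from the per-block step being a single data-parallel kernel launch instead of millions of per-row function calls.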

“We’re thrilled to introduce Kinetica’s groundbreaking Quick Start for SQL-GPT, enabling organizations to seamlessly harness the power of Language to SQL on their enterprise data in just one hour,” said Phil Darringer, VP of Product, Kinetica. “With our fine-tuned LLM tailored to each customer’s data and our commitment to guaranteed accuracy and speed, we’re revolutionizing enterprise data analytics with generative AI.”

Casper Labs and IBM Consulting Partner on New Solution to Help Enterprises Manage Training Data for Generative AI Systems Across Organizations

Casper Labs and IBM (NYSE: IBM) Consulting announced they will work to help clients leverage blockchain to gain greater transparency and auditability in their AI systems. Together, Casper Labs and IBM Consulting plan to develop a new Casper Labs solution, designed with blockchain and built leveraging IBM watsonx.governance, that establishes an additional analytics and policy enforcement layer for governing AI training data across organizations. 

The process of training, developing and deploying generative AI models happens across multiple organizations, from the original model creator to the end user organization. As different organizations integrate new data sets or modify the models, their outputs change accordingly, and many organizations need to be able to track and audit those changes as well as accurately diagnose and remediate issues. Blockchain can help organizations share their trusted context information via metadata in the ledger documenting that the models have changed while mitigating the risk of intellectual property crossover or unnecessary data sharing across organizational lines. 
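The cross-organization audit trail described above can be illustrated with a toy hash-chained ledger. This is plain Python, not the Casper blockchain or watsonx.governance: each record of a model or data-set change carries the hash of the previous record, so any later tampering breaks the chain and is detectable.

```python
# Toy hash-chained ledger of AI model/data changes (illustrative only).
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 of a record's contents."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_change(ledger: list, org: str, change: str) -> None:
    """Append a change record linked to the previous record's hash."""
    prev = ledger[-1]["hash"] if ledger else "genesis"
    entry = {"org": org, "change": change, "prev": prev}
    entry["hash"] = record_hash({k: entry[k] for k in ("org", "change", "prev")})
    ledger.append(entry)

def verify(ledger: list) -> bool:
    """Recompute every hash; any edited record invalidates the chain."""
    prev = "genesis"
    for e in ledger:
        expected = record_hash({"org": e["org"], "change": e["change"], "prev": prev})
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

ledger = []
append_change(ledger, "model-creator", "base model v1 released")
append_change(ledger, "integrator", "fine-tuned on support tickets")
print(verify(ledger))  # → True
ledger[0]["change"] = "base model v2 released"  # tamper with history
print(verify(ledger))  # → False
```

Sharing only such metadata, rather than the models or data themselves, is what lets organizations audit changes without the intellectual-property crossover the article mentions.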

Casper Labs’ solution is planned to be built on Casper, a tamper-resistant and highly serialized ledger, and leverage IBM watsonx.governance and watsonx.ai to monitor and measure highly serialized input and output data for training generative AI systems across organizations. Thanks to the Casper Blockchain’s hybrid nature and permissioning system, organizations can expect to be able to better protect sensitive data stored in the solution from being accessible to external actors; they have control over who can access what data. The solution will also be built to support version control using the serialization capabilities of blockchain, so organizations can efficiently revert to previous iterations of an AI system if performance issues or biased outputs occur.

“While generative AI has justifiably excited organizations for its transformative potential, its practical applications have been severely limited by an inability to monitor and react to the data feeding AI systems,” said Mrinal Manohar, CEO at Casper Labs. “With IBM’s help, we’re committed to delivering a better way to not only understand why AI systems behave the way that they do but also a clearer path to remediate behavior if hallucinations or performance issues occur. AI’s long-term potential will be dictated by how effectively and efficiently organizations can understand, govern and react to increasingly massive AI training data sets.”

Dappier Launches to Create Branded AI Chat for Media Brands & Marketplaces, With New Data Monetization Opportunities

Dappier, an AI platform helping media organizations go to market with natural language chat and real-time search, is launching out of stealth to deliver branded AI chat experiences for media brands and marketplaces, beginning with leading web3 data intelligence platform EdgeIn. Dappier equips media brands, data organizations, and their end users with next-gen natural language chat that can generate new insights, close data gaps, and enable new revenue opportunities.

The launch of Dappier comes shortly after industry leader OpenAI introduced landmark content deals with news publishers like Axel Springer and the Associated Press, indicating the need for data licensing models for AI chat. With the coming widespread adoption of LLM-driven natural language interfaces, Dappier is building new monetization opportunities for brands that monetize data access: marketplaces, directories, and media publishers.

Beyond branded AI chat across business endpoints, the startup also makes it easy for marketplaces, media brands, and database directories to integrate themselves into other AI chat tools, introducing an industry-first licensing-fee model through its upcoming GPT Developer Market.

“As AI tools proliferate, users will increasingly expect to be able to do just about anything from their chat experiences – without needing to launch a new browser or separate app,” said Dan Goikhman, CEO of Dappier. “Dappier is offering monetization opportunities for an AI-first world, letting media brands monetize data access in new ways while improving end user experiences with natural language conversations.”

Thompson Street Capital Partners selects Alkymi to accelerate data analysis

Alkymi, a leading business system for unstructured data, has been selected by Thompson Street Capital Partners (TSCP), a private equity firm based in St. Louis, to expedite their data review processes utilizing Alkymi’s new generative AI product, Alpha.

TSCP sought to accelerate their review of industry and market information, which includes analyzing large amounts of unstructured data. TSCP reviews hundreds of reports and information sources annually to better understand markets, spot industry trends, and identify potential opportunities for their firm, and doing so quickly, efficiently, and accurately is essential to TSCP’s success.

TSCP selected Alkymi and its Alpha product, a generative AI tool built for financial services teams and powered by cutting-edge large language models, for its ability to expedite data review and processing, as well as extend the types of information the firm is able to capture.

“We designed Alpha for firms like TSCP to embrace large language models to not only speed up manual review processes, but to expand the level of data they are able to get,” says Harald Collet, CEO of Alkymi. “Firms that are implementing LLMs will be able to access deeper levels of information, faster and will lead the way in the new era of data-enabled decision making.”

Franz Unveils Allegro CL v11 with Unique Neuro-Symbolic AI Programming Capabilities

Franz Inc., an early innovator in Artificial Intelligence (AI) and a leading supplier of Common Lisp (CL) development tools and Knowledge Graph solutions, announced Allegro CL v11, which includes key performance enhancements now available within the most effective system for developing and deploying applications to solve complex problems in the field of Artificial Intelligence.

The combination of Allegro CL and AllegroGraph offers a unique and powerful, dynamic Artificial Intelligence development system that is especially well-suited for enterprise-wide Neuro-Symbolic AI applications. Merging classic AI symbolic reasoning, Large Language Models (LLMs), and Knowledge Graphs empowers Franz’s customers to deliver the next wave of advanced AI applications. 

“The rapid adoption of Generative AI (LLMs) is fueling heightened demand for guided, fact-based applications which is significantly impacting applications in traditional AI industries like national defense, as well as in life sciences, manufacturing and financial analytics,” said Dr. Jans Aasman, CEO of Franz Inc. “The complexity of today’s software applications coupled with the explosion of data size requires a highly versatile and robust programming language. With Allegro CL v11, machine intelligence developers now have a high-performance tool to scale their applications and deliver innovative products to market.”

Synthetic Acumen, Inc. Launches Generative AI Qualitative Research Platform ResearchGOAT

Synthetic Acumen, Inc. announced the availability of its generative Artificial Intelligence (AI) platform ResearchGOAT for in-depth qualitative customer and market research. The company appointed Business-to-Consumer and Business-to-Business research expert Ross Mitchell, PhD, as its new Chief Executive Officer.

ResearchGOAT leverages cutting-edge generative AI technology to conduct comprehensive qualitative interviews and analyze research results. The company’s unique approach combines advanced algorithms with human-like understanding and expert prompt engineering. ResearchGOAT is not a survey tool but instead enables facilitated conversations, assessment and analysis of customer preferences, market trends and industry insights.

Dr. Mitchell leverages decades of experience, including leading innovation and research teams at Ford Motor Co., AT&T, Inc., and most recently Accenture plc, along with earlier roles across innovation, design research and executive leadership at independent research firms. He expressed enthusiasm about his new role, stating, “I am thrilled to launch ResearchGOAT. The potential for generative AI in qualitative research is immense, and we have the talent, track record and team to become a standout leader in this space through revelatory results.”

NEC launches new AI business strategy with the enhancement and expansion of generative AI

NEC Corporation (NEC; TSE: 6701) has enhanced and expanded the performance of its lightweight large language model (LLM) and is scheduled to launch it in the spring of 2024. With this development, NEC is aiming to provide an optimal environment for the use of generative artificial intelligence (AI) that is customized for each customer’s business and centered on a specialized model that is based on NEC’s industry and business know-how.

These services are expected to dramatically expand the environment for transforming operations across a wide range of industries, including healthcare, finance, local governments and manufacturing. Moreover, NEC will focus on developing specialized models for driving the transformation of business and promoting the use of generative AI from individual companies to entire industries through managed application programming interface (API) services.

NEC has enhanced its LLM by doubling the amount of high-quality training data and has confirmed that it outperformed a group of top-class LLMs from Japan and abroad in a comparative evaluation of Japanese dialogue skills (Rakuda). Furthermore, the LLM can handle inputs of up to 300,000 Japanese characters, up to 150 times more than comparable third-party LLMs, enabling it to be used for a wide range of operations involving huge volumes of documents, such as internal and external business manuals.

NEC is also developing a “new architecture” that will create new AI models by flexibly combining models according to input data and tasks. Using this architecture, NEC aims to establish a scalable foundation model that can expand the number of parameters and extend functionality. Specifically, the model can scale from small to large without performance degradation, and it can flexibly link with a variety of AI models, including specialized AI for legal or medical purposes and models from other companies and partners. Additionally, its small size and low power consumption enable it to be installed in edge devices. Furthermore, by combining NEC’s world-class image recognition, audio processing, and sensing technologies, the LLMs can process a variety of real-world events with high accuracy and autonomy.

Coveo Enterprise Customers See Impressive Results from Generative Answering – Now Generally Available

Coveo (TSX:CVO), a leading provider of enterprise AI platforms that enable individualized, connected, and trusted digital experiences at scale with semantic search, AI recommendations, and GenAI answering, announced that Coveo Relevance Generative Answering™ will be generally available starting December 15th, after months of beta testing with several enterprises. The company continues to add to its roster of customers signing order forms for Coveo’s enterprise-ready Relevance Generative Answering™, including large enterprises like SAP Concur.

Deployed in as little as 90 minutes on top of the Coveo AI Search Platform, Coveo Relevance Generative Answering effortlessly generates answers to complex user queries within digital experiences by leveraging Large Language Models (LLMs) on top of the leading unified indexing and relevance functionality of Coveo’s platform. An enterprise-ready solution, Coveo Relevance Generative Answering is content-agnostic, scalable, secure, and traceable, and can provide accurate, relevant answers and composite abstracts drawn from multiple internal and external sources of content – meaning it is not limited to the content or knowledge base within existing systems. Coveo Relevance Generative Answering is an addition to the suite of Coveo AI models and can be injected to improve any touchpoint across the customer or employee digital journey. Relevance Generative Answering can be used across multiple interfaces, from standalone search pages and in-product experiences to self-service portals and communities, service management consoles, and more.

“We are in a new era, where technology is not only about meeting expectations; it’s setting the stage for the future of digital interaction,” said Laurent Simoneau, President, CTO and Founder at Coveo. “We’ve been working with forward-thinking global enterprises on their AI strategy for more than a decade. It’s exciting to be a part of the quantum leap generative answering has created and to witness the exponential business value our customers are already achieving with our platform. As more enterprises roll out generative answering across commerce, service, workplace, and website applications, we’re looking forward to driving business value and impacting the bottom-line for our customers.”

Folloze Announces GeneratorAI to Unlock the Personalized Buyer Experience at Scale for Marketers

Folloze, creator of the no-code B2B Buyer Experience Platform (BX 3.0), announced the release of Folloze GeneratorAI, the content engine that enables marketers to accelerate the go-to-market (GTM) process by creating targeted and personalized campaign experiences at scale. GeneratorAI unlocks next-level productivity and performance for marketers across the entire organization by improving the speed and quality of digital campaign creation that enhances the customer’s experience wherever they are in their buying journey. The Folloze GeneratorAI preview adds to the company’s suite of AI tools available within Folloze’s easy-to-use no-code buyer experience creator.

“We are witnessing a transformational change powered by AI that influences every aspect of the B2B GTM Process. After the initial excitement around generative AI, now businesses are looking to embed these capabilities in a way that drives clear business value,” said David Brutman, Chief Product Officer & Co-founder at Folloze. “We are excited to add generative AI capabilities with the release of Folloze GeneratorAI, to drive higher performance and quality in customer engagement. This, together with our content recommendation and content classification capabilities, provides marketers with powerful insights and the ability to create smart, agile and relevant experiences at scale.”

Blue Yonder Launches Generative AI Capability To Dramatically Simplify Supply Chain Management and Orchestration

With supply chain challenges and disruptions becoming more prevalent, companies need to make informed decisions faster and more accurately. However, with an overabundance of often-disparate data and an imperative to transfer knowledge due to workforce retirement, companies need assistance making sense of this data to ensure their supply chain can manage disruptions and stay ahead. That’s why Blue Yonder, a leading supply chain solutions provider, launched Blue Yonder Orchestrator, a generative AI capability that allows companies to fuel more intelligent decision-making and faster supply chain orchestration.

Blue Yonder Orchestrator synthesizes the natural language capabilities of large language models (LLMs) and the depth of the company’s supply chain IP to accelerate data-driven decision-making. Integrated within Blue Yonder’s Luminate® Cognitive Platform, Blue Yonder Orchestrator is available to customers using Blue Yonder’s cognitive solutions suite.

“Blue Yonder Orchestrator helps companies bring value to their data, which is where many companies struggle,” said Duncan Angove, CEO, Blue Yonder. “It allows business users to quickly access recommendations, predictive insights, and intelligent decisions to ensure they generate the best outcomes to impact their supply chain positively. In today’s supply chain environment, in which many professionals are nearing retirement age and it’s challenging to retain that institutional knowledge, companies can use Blue Yonder Orchestrator as a trusty supply chain assistant that can augment intuition – using the value of the data – to make better and faster decisions.”

bitHuman Introduces Game-Changing Interactive AI Platform for Enterprise Customers 

Imagine full-size service agents appearing on any screen, providing you with instant information. That’s what bitHuman, a generative AI platform, is doing: literally bringing a sci-fi future to today’s business scenarios. As the sophisticated, engineering-driven company emerges from stealth, it is delivering the most advanced, customizable, and lifelike agents for the enterprise.

Focusing on creating comprehensive advanced AI solutions through multi-modality interaction (i.e., via chat, text, and voice), bitHuman’s technology powers real-time, photo-realistic, human-like AI solutions for businesses in hospitality, fashion, retail, healthcare, and more.

“We are developing interactive AI that can address the needs of any particular business, demonstrating swift problem solving and, critically, incorporating a delightful human touch,” said Steve Gu, bitHuman CEO and co-founder. “The next frontier of generative AI is interactive AI, and making real-time interactive AI work to solve business problems.”

Franz Unveils AllegroGraph 8.0, the First Neuro-Symbolic AI Platform Merging Knowledge Graphs, Generative AI and Vector Storage

Franz Inc., an early innovator in Artificial Intelligence (AI) and leading supplier of Graph Database technology for Entity-Event Knowledge Graph Solutions, announced AllegroGraph 8.0, a groundbreaking Neuro-Symbolic AI Platform that incorporates Large Language Model (LLM) components directly into SPARQL along with vector generation and vector storage for a comprehensive AI Knowledge Graph solution. AllegroGraph 8.0 redefines how Knowledge Graphs are created and expands the boundaries of what AI can achieve within the most secure triplestore database on the market.

“While general-purpose LLMs excel at straightforward tasks that do not necessitate background or changing knowledge, addressing more complex, knowledge-intensive queries demands the capabilities provided with a Knowledge Graph to avoid generating ‘hallucinations,’” said Dr. Jans Aasman, CEO of Franz Inc. “We designed AllegroGraph 8.0 with Retrieval Augmented Generation (RAG) capabilities to provide users with seamless Generative AI capabilities within a Knowledge Graph platform, while dynamically fact-checking LLM outputs to ensure that they are grounded in fact-based knowledge.”
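The vector-storage side of such a platform reduces to nearest-neighbor search over embeddings: stored facts carry vectors, and a query vector retrieves the closest stored items to ground the LLM's answer. The toy below uses hand-made 3-dimensional vectors and is not AllegroGraph's SPARQL integration; a real system would produce the vectors with an embedding model.

```python
# Toy vector store with cosine-similarity retrieval (illustrative only;
# real embeddings have hundreds of dimensions and come from a model).
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Stored facts and their (hand-made) embedding vectors.
store = {
    "Paris is the capital of France": [0.9, 0.1, 0.0],
    "GDP of France grew 0.9% in 2023": [0.2, 0.9, 0.1],
}

def nearest(query_vec, k=1):
    """Return the k stored facts most similar to the query vector."""
    ranked = sorted(store, key=lambda t: cosine(store[t], query_vec), reverse=True)
    return ranked[:k]

print(nearest([0.85, 0.15, 0.0]))  # → ['Paris is the capital of France']
```

Retrieved facts like these are what a Knowledge Graph platform can check an LLM's output against to suppress hallucinations.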

Tenyx Launches Fine-tuning Platform to Fix Catastrophic Forgetting in Large Language Models

Tenyx, a leader in voice AI systems that automate customer service functions for the enterprise, announced a novel solution to one of the most significant challenges in AI: catastrophic forgetting during fine-tuning of large language models (LLMs). With this groundbreaking methodology, Tenyx helps businesses adapt LLMs to their unique requirements without compromising foundational knowledge and protective safeguards.

The conventional approach to fine-tuning LLMs poses inherent risks. Training models with new data to perform better in certain areas can cause unintentional loss or degradation of previously learned capabilities. The complexity of these models makes it exceedingly challenging to pinpoint and rectify these distortions. Current fine-tuning solutions rely primarily on Low-Rank Adaptation, or LoRA, a technique that lacks the ability to mitigate forgetting effects. Additionally, conventional schemes used for fine-tuning risk eroding the safety measures established by RLHF (reinforcement learning from human feedback). This mechanism, vital for preventing harmful model outputs, can be inadvertently weakened or retracted during fine-tuning using traditional methods.
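For readers unfamiliar with LoRA, the low-rank update it trains can be sketched in a few lines. This shows the standard LoRA arithmetic only, not Tenyx's method; the dimensions and values below are made up for illustration.

```python
# LoRA in miniature: leave the pretrained weight matrix W (d x d)
# frozen and train only a low-rank update B @ A with rank r << d,
# so far fewer parameters change during fine-tuning.

def matmul(X, Y):
    """Plain-Python matrix multiply."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

d, r = 4, 1  # model dimension and LoRA rank (toy sizes)
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen weights
A = [[0.1, 0.2, 0.0, 0.0]]          # r x d, trainable
B = [[0.5], [0.0], [0.0], [0.0]]    # d x r, trainable

delta = matmul(B, A)                # d x d low-rank update
W_eff = [[w + dw for w, dw in zip(rw, rd)] for rw, rd in zip(W, delta)]

full_params = d * d                 # parameters touched by full fine-tuning
lora_params = d * r + r * d         # parameters trained under LoRA
print(lora_params, "of", full_params)  # → 8 of 16
```

At realistic sizes (d in the thousands, r around 8–64) the savings are dramatic, but as the article notes, constraining the update to low rank does not by itself prevent the effective weights from drifting away from previously learned behavior.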

By leveraging a novel mathematical interpretation of the geometric representations formed during the initial LLM training, Tenyx’s methodology alleviates the aforementioned drawbacks and ensures that models can be customized to a specific customer domain without significant loss of prior capabilities. This approach not only improves the retention of prior knowledge and reasoning abilities, but also retains the RLHF protection, providing an unparalleled boost in enterprise use of LLMs. Moreover, safer fine-tuning is aligned with changes to the regulatory environment, specifically as they relate to the recent White House executive order on Safe, Secure, and Trustworthy AI.

“In the rapidly evolving landscape of AI, our commitment has always been to address its inherent challenges head-on. With this novel methodology, we’re not just pioneering an advanced solution; we’re revolutionizing the way enterprises utilize LLMs. Our innovation ensures that businesses no longer have to choose between customization and core capabilities. They can confidently enjoy the best of both worlds,” said Itamar Arel, CEO and founder of Tenyx. 

AnswerRocket Unveils Skill Studio to Empower Enterprises with Custom AI Analysts for Enhanced Business Outcomes

AnswerRocket, an innovator in GenAI-powered analytics, announced the launch of Skill Studio, which empowers enterprises to develop custom AI analysts that apply the business’ unique approach to data analysis.

AI copilots have emerged as a powerful tool for enterprises to access their data and streamline operations, but existing solutions fail to meet the unique data analysis needs of each organization or job role. Skill Studio addresses this gap by providing organizations with the ability to personalize their AI assistants to their specific business, department, and role, which enables users to more easily access relevant, highly specialized insights. Skill Studio elevates the existing capabilities of Max, AnswerRocket’s AI assistant, by conducting domain-specific analyses, such as running cohort and brand analyses.

“AI copilots have revolutionized the way organizations access their data, but current solutions on the market are general-use and not personalized to specific use cases,” said Alon Goren, CEO of AnswerRocket. “Skill Studio puts the power of AI analysts back in the hands of our customers by powering Max to analyze their data in a way that helps them achieve their specific business outcomes.”

BlueCloud Launches Integrated ML and Generative AI Solutions for Marketers, Built on the Snowflake Data Cloud

BlueCloud, a digital transformation company and global leader in data-driven solutions, announced the launch of BlueCausal and BlueInsights, both Powered by Snowflake, that will enable non-technical business users to measure media campaigns with the sophistication of a data scientist. The solutions include causal inference machine learning (ML) and a generative AI (LLM) chat feature, providing easily digestible data to marketers and other non-technical users.

According to Gartner®, “nearly 63 percent of marketing leaders plan to invest in GenAI in the next 24 months … Complexity of the current ecosystem, customer data challenges and inflexible governance were identified by survey respondents as the most common impediments to greater utilization of their martech stack.” The modern martech stack is helping to change this by leveraging advanced yet streamlined solutions such as Snowflake as its foundation, delivering a single source of truth within a secure environment.

“The vast majority of marketers aren’t using data and analytics to determine campaign ROI because advanced insights are still interpreted by engineering and analyst teams,” said Kerem Koca, CEO and Co-founder of BlueCloud. “BlueCausal and BlueInsights streamline the entire process for non-technical users and allow IT teams to deliver greater returns to their stakeholders at scale.” 

Auquan Launches Prompt Intelligence to Deliver Real-Time Insights on Any Company at Any Time

Auquan, an AI innovator for financial services, announced the first and only capability in financial services that can generate equity, credit, risk or impact intelligence on any company worldwide, regardless of whether prior coverage exists. Financial services professionals can access Prompt Intelligence within the Auquan Intelligence Engine to produce material insights for conducting investment pre-screening, due diligence, know your business (KYB), researching ESG risks and impacts, and uncovering hidden controversies — within seconds.

Financial professionals continuously rely on vast amounts of unstructured text — company reports, regulatory documents, broker research, and news coverage — and sourcing and summarizing this data is time-consuming and costly. While vendors and consultants can help, their data is limited to companies included in their existing coverage, and their reports lack the immediacy, comprehensiveness and adaptability that today’s financial services firms need.

Auquan’s Prompt Intelligence solves this by generating material insights on any company or issuer — public or private — instantaneously and tailored for the user and use case. This eliminates wait times of days or weeks and frees up professionals to focus on analysis and making more informed decisions before markets react — a strategic advantage in the fast-paced world of finance. Some of the largest asset managers, investment banks, and private equity funds in the U.S. and Europe have already deployed Auquan.

The Auquan Intelligence Engine is the only solution for financial services that leverages retrieval augmented generation, a cutting-edge AI technique designed to handle the kinds of knowledge-intensive use cases that are found throughout the financial services industry. RAG combines the power of retrieval-based models and their ability to access real-time and external data to find relevant information, with generative models and their ability to create responses in natural language.
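The retrieval-plus-generation flow described above can be sketched generically. The toy below uses keyword overlap as a stand-in for a real retriever such as a vector search, and only assembles the augmented prompt; it is not Auquan's implementation, and the documents are invented for illustration.

```python
# Generic RAG skeleton: retrieve the most relevant passages for a
# query, then pack them into the prompt the generative model answers
# from (the LLM call itself is omitted here).

DOCS = [
    "Acme Corp reported a supply-chain disruption in Q3 filings.",
    "Regulators fined Beta Ltd for disclosure failures in 2023.",
    "Acme Corp appointed a new CFO after an accounting review.",
]

def retrieve(query: str, k: int = 2):
    """Rank passages by word overlap with the query (toy stand-in
    for a real retriever, e.g. embedding similarity search)."""
    q = set(query.lower().split())
    scored = sorted(DOCS, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble the retrieval-augmented prompt for the generator."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What risks has Acme Corp disclosed?")
print("Acme Corp" in prompt)  # → True
```

Because the context is fetched at query time, the same pipeline can cover a company the moment relevant documents exist, which is the "no prior coverage required" property the article describes.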

Using RAG technology, Auquan’s Prompt Intelligence capability quickly accesses niche industry datasets and open source information to generate insights on any company worldwide that are comprehensive, trustworthy and accurate.

“With Prompt Intelligence, Auquan is completely revolutionizing how our customers experience the world’s information by delivering insights the moment they need them, effortlessly and efficiently,” said Chandini Jain, co-founder and CEO of Auquan. “This has been made possible with retrieval augmented generation, or RAG, which means no more reliance on teams of humans to build coverage and populate data.”

Generative AI Startup DataCebo Launches to Bring Synthetic Data to All Enterprises

DataCebo emerged with SDV Enterprise, a commercial offering of the popular open source product, Synthetic Data Vault (SDV). With SDV Enterprise, developers can easily build, deploy and manage sophisticated generative AI models for enterprise-grade applications when real data is limited or unavailable. SDV Enterprise’s models create higher-quality synthetic data that is statistically similar to the original data, so developers can effectively test applications and train robust ML models. SDV Enterprise is currently in beta with the Global 2000. Today, Global 2000 organizations have 500 to 2,000 applications for which they need to create synthetic data 12 times a year.

DataCebo co-founders Kalyan Veeramachaneni (CEO) and Neha Patki (vice president of product) created SDV while at MIT’s Data to AI Lab. SDV lets developers build a proof-of-concept generative AI model for small tabular and relational datasets with simple schemas and create synthetic data. SDV has been downloaded more than a million times and supports the largest community focused on synthetic data. DataCebo was founded in 2020 to revolutionize developer productivity at enterprises by leveraging generative AI.
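The core idea — fit a model to real tabular data, then sample statistically similar rows — can be illustrated with a stdlib-only toy. This is not SDV's actual API: SDV's real synthesizers (e.g. copula-based models) also capture correlations between columns, whereas this sketch treats each column independently.

```python
import random
import statistics

# Toy synthetic-data sketch: learn per-column mean/stdev from real rows,
# then sample new rows from those fitted Gaussians.

def fit(rows: list[dict]) -> dict:
    """Learn (mean, stdev) for each numeric column of the real data."""
    model = {}
    for col in rows[0]:
        values = [r[col] for r in rows]
        model[col] = (statistics.mean(values), statistics.stdev(values))
    return model

def sample(model: dict, n: int, seed: int = 0) -> list[dict]:
    """Draw n synthetic rows from the fitted per-column distributions."""
    rng = random.Random(seed)
    return [
        {col: rng.gauss(mu, sd) for col, (mu, sd) in model.items()}
        for _ in range(n)
    ]

real = [
    {"age": 34, "income": 52_000},
    {"age": 41, "income": 61_000},
    {"age": 29, "income": 48_000},
]
synthetic = sample(fit(real), n=100)
```

The synthetic rows share the original columns and roughly match the original distributions, so they can stand in for real data in testing — the property the article attributes to SDV Enterprise, achieved there with far more sophisticated models.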

Veeramachaneni said: “The ability to build generative models on-prem is critical for enterprises. Their data is proprietary and is very specific. In our first year, we quickly learned that this unique capability that SDV Enterprise provides is a massive enabler for them. Our customers often ask whether they need massive hardware or specific hardware requirements to use SDV Enterprise. They are often surprised that with SDV Enterprise, they can train generative models on a single machine. This opens up a new horizon of possibilities for training and using these models and applying them to a variety of use cases. As one customer said, if we have to spend $100,000 to train a model, it simply reduces the number of use cases we can use it for.”

DISCO Publicly Launches Cecilia, an AI-Powered Platform for Legal Professionals to Transform their Workflows and Accelerate Fact Finding 

DISCO (NYSE: LAW), a leader in AI-enabled legal technology, announced the general availability of its Cecilia AI platform, a comprehensive suite of features that includes Cecilia Q&A and Cecilia Timelines. Cecilia leverages the power of generative AI technology in a scalable and secure way, and is designed to give legal professionals an advanced solution to access the facts of their case faster, spend less time on cumbersome manual tasks, and enhance their ability to deliver better results.  

As the legal world continues to feel the transformational impact of generative AI, organizations are starting to become more comfortable with the idea of integrating these AI-driven tools into their tech stacks. Cecilia was built on a variety of underlying large language model technologies and advanced search techniques and gives organizations a competitive edge for identifying key factual insights, creating case strategy, and managing large-scale document reviews.   

“Businesses are now starting to understand the immense potential AI can have in disrupting numerous workflows and use cases. As generative AI matures, we continue to equip lawyers with more powerful capabilities than they have ever had, such as comprehensive document parsing, case building, and information extraction,” said Kevin Smith, DISCO’s Chief Product Officer. “Similar to smart home assistants, we envision Cecilia to be continuously adding new skills into the platform, and this is another step towards our goal of creating a truly end-to-end platform that can handle the world’s most complex legal matters.” 

Kinetica Unveils First SQL-GPT for Telecom, Transforming Natural Language into SQL Fine-Tuned for the Telco Industry

Kinetica announced the availability of Kinetica SQL-GPT for Telecom, the industry’s only real-time solution that leverages generative AI and vectorized processing to enable telco professionals to have an interactive conversation with their data using natural language, simplifying data exploration and analysis to make informed decisions faster. The Large Language Model (LLM) utilized is native to Kinetica, ensuring robust security measures that address concerns often associated with public LLMs, such as those from OpenAI.

Kinetica’s origin as a real-time GPU database, purpose-built for spatial and time-series workloads, makes it well suited for the demands of the telecommunications industry. Telcos rely heavily on spatial and time-series data to optimize network performance, track coverage, and ensure reliability. Kinetica stands out by offering telecommunications companies the unique capability to visualize and interact effortlessly with billions of data points on a map, enabling unparalleled insights and rapid decision-making. SQL-GPT for Telecom makes it easy for anyone to ask complex and novel questions that previously required assistance from highly specialized development resources.
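A typical text-to-SQL system grounds the model in the database schema before asking it to translate a question. The sketch below shows one common way to assemble such a prompt; the table, columns, and prompt wording are hypothetical, and Kinetica's actual fine-tuned model and prompt format are not public.

```python
# Hedged sketch of schema-aware text-to-SQL prompt assembly: embed the
# table definitions so the model generates SQL against real columns.

def build_sql_prompt(question: str, schema: dict[str, list[str]]) -> str:
    """Render the schema as pseudo-DDL and append the user's question."""
    ddl = "\n".join(
        f"TABLE {table} ({', '.join(columns)})"
        for table, columns in schema.items()
    )
    return (
        "You translate telecom analytics questions into SQL.\n"
        f"Schema:\n{ddl}\n\n"
        f"Question: {question}\nSQL:"
    )

# Hypothetical telco table for illustration.
schema = {"cell_towers": ["tower_id", "lat", "lon", "uptime_pct"]}
prompt = build_sql_prompt(
    "Which towers had uptime below 99% last week?", schema
)
```

The completed prompt would be sent to the fine-tuned LLM, whose output is then executed against the database — the step where industry-specific fine-tuning, as described in the quote below, determines how well the generated SQL matches telco vernacular.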

“Kinetica’s SQL-GPT for Telco has undergone rigorous fine-tuning to understand and respond to the unique data sets and industry-specific vernacular used in the telecommunications sector,” said Nima Negahban, Cofounder and CEO, Kinetica. “This ensures that telco professionals can easily extract insights from their data without the need for extensive SQL expertise, reducing operational bottlenecks and accelerating decision-making.”

Sign up for the free insideAI News newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideAI NewsNOW
