In this contributed article, Varun Singh, President and co-founder of Moveworks, sees rockets as a fitting analogy for AI language models. While the core engines impress, he explains, it is the smaller Vernier thrusters that provide stability and precise steering for the larger engine. Likewise, large language models need the addition of smaller, specialized models to enable oversight and real-world grounding. With the right thrusters in place, enterprises can steer high-powered language models in the right direction.
Unveiling Jamba: AI21’s Groundbreaking Hybrid SSM-Transformer Open-Source Model
AI21, a leader in AI systems for the enterprise, unveiled Jamba, a production-grade Mamba-style model that integrates Mamba structured state space model (SSM) technology with elements of the traditional Transformer architecture. Jamba marks a significant advancement in large language model (LLM) development, offering unparalleled efficiency, throughput, and performance.
Video Highlights: Gemini Ultra — How to Release an AI Product for Billions of Users — with Google’s Lisa Cohen
In this video presentation, our good friend Jon Krohn, Co-Founder and Chief Data Scientist at the machine learning company Nebula, is joined by Lisa Cohen, Google’s Director of Data Science and Engineering, to discuss the launch of Gemini Ultra. Discover the capabilities of this cutting-edge large language model and how it stands toe-to-toe with GPT-4.
How Can Companies Protect their Data from Misuse by LLMs?
In this contributed article, Jan Chorowski, CTO at AI firm Pathway, highlights why LLM safety begins at the model build and input stage rather than the output stage, and what this means in practice; how LLMs can be engineered with safety at the forefront, and the role that a structured LLM Ops model plays; and the role of the data chosen to train models, and how businesses can select the right data to feed into LLMs.
Opaque Systems Extends Confidential Computing to Augmented Language Model Implementations
In this contributed article, editorial consultant Jelani Harper discusses how Opaque Systems recently unveiled Opaque Gateway, a software offering that broadens the utility of confidential computing to include augmented prompt applications of language models. One of the chief use cases of the gateway technology is to protect the privacy, sovereignty, and security of the enterprise data that organizations frequently use to augment language model prompts.
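For context on what "augmenting language model prompts with enterprise data sources" typically involves, here is a minimal, generic sketch of prompt augmentation. The retrieve() helper, prompt wording, and placeholder snippet are hypothetical illustrations, not Opaque Gateway's interface, and the confidential-computing protections the gateway adds around this flow are not modeled here.

```python
# Generic illustration of prompt augmentation (retrieval + prompt assembly).
# The retrieve() helper and all strings are hypothetical placeholders.

def retrieve(question: str) -> list[str]:
    # Stand-in for a lookup against an internal enterprise data source
    # (knowledge base, wiki, document store, etc.).
    return ["<placeholder snippet pulled from an internal document>"]

def build_augmented_prompt(question: str) -> str:
    # Splice the retrieved enterprise context into the prompt so the
    # language model answers from it rather than from memory alone.
    context = "\n".join(retrieve(question))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
    )

print(build_augmented_prompt("What does our internal policy say about data retention?"))
```

Because the retrieved context often contains sensitive enterprise data, it is exactly this augmented prompt that a gateway like the one described above aims to keep private.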
When Algorithms Wander: The Impact of AI Model Drift on Customer Experience
In this contributed article, Christoph Börner, Senior Director of Digital at Cyara, discusses the risks AI model drift poses to customer experience (CX) and how organizations can navigate the balance between leveraging AI advancements and maintaining exceptional CX standards.
Beyond Tech Hype: A Practical Guide to Harnessing LLMs for Positive Change
In this contributed article, Dr. Ivan Yamshchikov, who leads the Data Advocates team at Toloka, believes that whether it's breaking down language silos, aiding education in underserved regions, or facilitating cross-cultural communication, LLMs have altered the way we interact with information and can enhance human well-being by improving healthcare, education, and social services.
Video Highlights: Open-Source LLM Libraries and Techniques — with Dr. Sebastian Raschka
In this video presentation, our good friend Jon Krohn, Co-Founder and Chief Data Scientist at the machine learning company Nebula, sits down with industry luminary Sebastian Raschka to discuss his latest book, Machine Learning Q and AI, the open-source libraries developed by Lightning AI, how to exploit the greatest opportunities for LLM development, and what’s on the horizon for LLMs.
The Essential Role of Clean Data in Unleashing the Power of AI
In this contributed article, Stephanie Wong, Director of Data and Technology Consulting at DataGPT, highlights how, in the fast-paced world of business, the pursuit of immediate growth can often overshadow the essential task of maintaining clean, consolidated data sets. With AI technology, the importance of data hygiene becomes even more apparent, as language models rely heavily on clean, well-structured data.
Kinetica Delivers Real-Time Vector Similarity Search
Kinetica, the real-time GPU-accelerated database for analytics and generative AI, unveiled at NVIDIA GTC its real-time vector similarity search engine that can ingest vector embeddings 5X faster than the previous market leader, based on the popular VectorDBBench benchmark.
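As a reminder of what the underlying operation looks like, the sketch below shows vector similarity search in its simplest form: ranking stored embeddings by cosine similarity to a query vector. It is a plain NumPy illustration with made-up dimensions and random data, not Kinetica's engine, its ingestion path, or the VectorDBBench methodology.

```python
# Minimal, brute-force vector similarity search over a matrix of embeddings.
# Purely illustrative; production engines add indexing, batching, and real-time ingest.
import numpy as np

def cosine_top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 5) -> np.ndarray:
    """Return the indices of the k stored vectors most similar to query_vec."""
    # Normalize so that a dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(-scores)[:k]

# Random 768-dimensional embeddings standing in for real ones.
rng = np.random.default_rng(0)
stored = rng.normal(size=(10_000, 768))
query = rng.normal(size=768)
print(cosine_top_k(query, stored, k=5))
```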