InsideAI News on the Move: Join Us at the Premier AI Industry Events!
At InsideAI News, we believe that the best way to stay at the cutting edge of artificial intelligence is by engaging directly with the innovators, thinkers, and leaders driving the industry forward. That’s why we’re thrilled to announce our in-person attendance at some of the most prestigious AI industry conferences this year.
deepset Launches Studio for Architecting LLM Applications with Native Integrations to deepset Cloud and NVIDIA AI Enterprise
deepset, the mission-critical AI company, today announced an expansion of its offerings with deepset Studio, an interactive tool that empowers product, engineering, and data teams to visually architect custom AI pipelines that power agentic and advanced RAG applications. AI teams are now able to more easily build top-tier composable AI systems and immediately deploy them in cloud and on-premises environments using deepset Cloud and NVIDIA AI Enterprise software.
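To give a sense of what a "composable" pipeline looks like in code, here is a minimal retrieval-augmented generation sketch built with deepset's open-source Haystack framework. It is illustrative only, not a deepset Studio export or a deepset Cloud deployment; it assumes Haystack 2.x (`pip install haystack-ai`), an in-memory document store, and an OpenAI API key, and the model name is our own choice.

```python
# Minimal composable RAG pipeline sketch using deepset's open-source Haystack 2.x.
# Illustrative only: not a deepset Studio export or deepset Cloud deployment.
from haystack import Document, Pipeline
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator

# Index a few toy documents.
store = InMemoryDocumentStore()
store.write_documents([
    Document(content="deepset Studio lets teams visually compose AI pipelines."),
    Document(content="Pipelines can be deployed to deepset Cloud or on premises."),
])

prompt_template = """
Answer the question using only the context below.
Context:
{% for doc in documents %}{{ doc.content }}
{% endfor %}
Question: {{ question }}
Answer:
"""

# Wire retriever -> prompt builder -> generator into one composable pipeline.
rag = Pipeline()
rag.add_component("retriever", InMemoryBM25Retriever(document_store=store))
rag.add_component("prompt_builder", PromptBuilder(template=prompt_template))
rag.add_component("llm", OpenAIGenerator(model="gpt-4o-mini"))  # example model choice
rag.connect("retriever.documents", "prompt_builder.documents")
rag.connect("prompt_builder.prompt", "llm.prompt")

question = "Where can pipelines built in deepset Studio be deployed?"
result = rag.run({"retriever": {"query": question},
                  "prompt_builder": {"question": question}})
print(result["llm"]["replies"][0])
```

Each component can be swapped for another retriever, prompt, or generator without rewriting the rest of the pipeline, which is the composability the visual studio is built around.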
BigID Becomes the First DSPM to Pioneer Data Security and AI Innovation with AI Vector Database Scanning Solution
BigID, a leader in data security, compliance, privacy, and AI data management, announced a groundbreaking innovation that is set to transform the AI landscape. Today, BigID becomes the first Data Security Posture Management (DSPM) solution to offer scanning and securing of sensitive data stored in vector databases: a critical advancement for AI adoption and enterprise security.
New Study Puts Claude 3 and GPT-4 up Against a Medical Knowledge Pressure Test
Kahun, the evidence-based clinical AI engine for healthcare providers, shares the findings from a new study on the medical capabilities of readily available large language models (LLMs). The study compared the medical accuracy of OpenAI’s GPT-4 and Anthropic’s Claude 3 Opus against each other and against human medical experts, using questions based on objective medical knowledge drawn from Kahun’s Knowledge Graph.
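For readers curious how such a head-to-head comparison looks in practice, the sketch below poses the same question to both models via their public Python SDKs. This is not Kahun's methodology; it assumes `pip install openai anthropic`, API keys in the environment, and a made-up sample question.

```python
# Minimal sketch of posing the same question to GPT-4 and Claude 3 Opus.
# Not Kahun's methodology; the question is a made-up sample.
from openai import OpenAI
from anthropic import Anthropic

QUESTION = ("Which electrolyte abnormality most commonly causes "
            "peaked T waves on an ECG? Answer with one word.")

def ask_gpt4(question: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def ask_claude3_opus(question: str) -> str:
    client = Anthropic()
    resp = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=50,
        messages=[{"role": "user", "content": question}],
    )
    return resp.content[0].text

if __name__ == "__main__":
    print("GPT-4:        ", ask_gpt4(QUESTION))
    print("Claude 3 Opus:", ask_claude3_opus(QUESTION))
```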
AI Appreciation Day
Artificial Intelligence Appreciation Day is celebrated on July 16 each year. With discoveries in science, tech, and healthcare, AI offers the possibility of a more evolved future. AI tools already dominate the market, making human life much easier. In this special round-up, we’ve collected a number of commentaries from our friends in the AI industry ecosystem. We hope you enjoy reading them!
Embracing the Future: Generative AI for Executives
In this feature article, Daniel D. Gutierrez, InsideAI News Editor-in-Chief & Resident Data Scientist, argues that as generative AI continues to evolve, its potential applications across industries are boundless. For executives, understanding the foundational concepts of transformers, LLMs, self-attention, multi-modal models, and retrieval-augmented generation is crucial.
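As a concrete anchor for one of those concepts, the scaled dot-product self-attention at the heart of transformers fits in a few lines of NumPy. This is a minimal illustrative sketch, not code from the article, with randomly generated toy inputs.

```python
# Minimal NumPy sketch of scaled dot-product self-attention, the core
# transformer operation referenced in the article. Illustrative only.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over each row
    return weights @ V                                   # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # 4 tokens, 8-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (4, 8): one updated vector per token
```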
Webinar: Getting Started with Llama 3 on AMD Radeon and Instinct GPUs
[Sponsored Post] This webinar: “Getting Started with Llama 3 on AMD Radeon and Instinct GPUs” provides a guide to installing Hugging Face transformers, Meta’s Llama 3 weights, and the necessary dependencies for running Llama locally on AMD systems with ROCm™ 6.0.
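For a feel of the end state, the sketch below shows a typical way to load a Llama 3 model with Hugging Face transformers once a ROCm build of PyTorch is installed (on ROCm, AMD GPUs are exposed through the usual torch.cuda device API). The model ID and generation settings are our own assumptions, access to Meta's weights must be granted on Hugging Face, and the webinar remains the authoritative source for the AMD-specific installation steps.

```python
# Illustrative sketch of running Llama 3 locally via Hugging Face transformers.
# Assumes a ROCm build of PyTorch (AMD GPUs appear as "cuda" devices),
# `pip install transformers accelerate`, and approved access to the
# meta-llama/Meta-Llama-3-8B-Instruct weights. Not the webinar's exact steps.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
device = "cuda" if torch.cuda.is_available() else "cpu"  # ROCm GPUs show up as "cuda"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16
).to(device)

prompt = "Explain in one sentence what ROCm is."
inputs = tokenizer(prompt, return_tensors="pt").to(device)
output = model.generate(**inputs, max_new_tokens=60, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```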
AI Startup Jivi’s LLM Beats OpenAI’s GPT-4 & Google’s Med-PaLM 2 in Answering Medical Questions
A purpose-built medical LLM developed by Jivi, an Indian startup co-founded by former BharatPe Chief Product Officer Ankur Jain, has claimed the number one slot on the Open Medical LLM Leaderboard.
Nutanix Accelerates Enterprise Adoption of Generative AI
Nutanix (NASDAQ: NTNX), a leader in hybrid multicloud computing, announced new functionality for Nutanix GPT-in-a-Box, including integrations with NVIDIA NIM inference microservices and the Hugging Face Large Language Models (LLMs) library.
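NVIDIA NIM microservices expose an OpenAI-compatible endpoint, so an application pointed at a GPT-in-a-Box-style deployment can often be queried with the standard OpenAI Python client. The sketch below is a hedged illustration: the base URL and model name are placeholders, not Nutanix-specific values, and your deployment may differ.

```python
# Hedged sketch: querying an LLM served by an NVIDIA NIM inference microservice
# through its OpenAI-compatible API. The base_url and model name are placeholders
# for whatever the local GPT-in-a-Box / NIM deployment actually exposes.
from openai import OpenAI

client = OpenAI(
    base_url="http://nim.example.internal:8000/v1",  # hypothetical NIM endpoint
    api_key="not-needed-for-local-deployments",      # many local deployments ignore the key
)

resp = client.chat.completions.create(
    model="meta/llama3-8b-instruct",   # example NIM model id; check your deployment
    messages=[{"role": "user", "content": "Summarize what GPT-in-a-Box provides."}],
    max_tokens=120,
)
print(resp.choices[0].message.content)
```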
Big AIs in Small Devices
In this contributed article, Luc Andrea, Engineering Director at Multiverse Computing, discusses the challenge of integrating increasingly complex AI systems, particularly Large Language Models, into resource-limited edge devices in the IoT era. He proposes quantum-inspired algorithms and tensor networks as potential solutions for compressing these large AI models, making them suitable for edge computing without compromising performance.
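As a toy illustration of the compression idea, a single weight matrix can be approximated by a truncated SVD, trading some approximation error for far fewer parameters; tensor-network methods generalize this to higher-order structures. This is a minimal sketch of the concept, not Multiverse Computing's quantum-inspired algorithm, and the matrix here is random rather than a real model weight.

```python
# Toy illustration of model compression via low-rank factorization (truncated SVD).
# NOT Multiverse Computing's quantum-inspired method; tensor networks generalize
# this idea. Just a minimal sketch of shrinking one (random) weight matrix.
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(1024, 1024))           # stand-in for a dense layer's weights

rank = 64                                   # compression target
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * S[:rank]                  # (1024, 64)
B = Vt[:rank, :]                            # (64, 1024)

original_params = W.size
compressed_params = A.size + B.size
rel_error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)

print(f"params: {original_params} -> {compressed_params} "
      f"({compressed_params / original_params:.1%} of original), "
      f"relative error {rel_error:.3f}")
```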