Heard on the Street – 2/8/2024

Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.

Use science and math when making infrastructure design decisions for LLMs. Commentary by Colleen Tartow, Ph.D., Field CTO and Head of Strategy, VAST Data

When designing an AI system for training and fine-tuning LLMs, there is a lot of vendor-driven conversation that confounds the landscape. However, relying on common sense along with some math and science can get organizations far in a situation where small mistakes can be incredibly costly.

For example, let’s look at calculating the bandwidth necessary for checkpoint operations in a recoverable model. There are multiple modes of parallelism to consider in an LLM training environment, each of which makes recoverability more streamlined. Data parallelism takes advantage of many processors by splitting data into chunks, so any individual GPU is only training on a portion of the full dataset. Model parallelism is similar – the algorithm itself is sharded into discrete layers, or tensors, and then distributed across multiple GPUs or CPUs. Finally, pipeline parallelism splits the model training process into smaller steps and executes them independently on different processors. Combining these modes of parallelism ensures that recoverability is possible with a much smaller checkpoint overall. In fact, since the model and data are copied in full to each octet (group of 8 DGXs) and parallelized within the octet, only one checkpoint is needed per octet, which drastically reduces the bandwidth needed to write a checkpoint.

This is an example of how understanding the intricate details of parallelism and LLM training can help organizations design a system that is well-built for checkpoint and recoverability operations. Given the scale of infrastructure required here, it is paramount to neither over- nor under-size resources, to avoid overpaying for hardware (wasting money) or underprovisioning the model architecture (negatively affecting deliverables). Simply put, organizations need to rely on real technical knowledge and calculations when designing an AI ecosystem.
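To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch (an illustration, not VAST’s sizing methodology). It estimates the sustained write bandwidth needed to land one full checkpoint within a small slice of each checkpoint interval; the 14 bytes per parameter (fp16 weights plus fp32 gradients and Adam optimizer state) and the 5% write window are assumed constants, so substitute figures from your own training stack.

```python
# Back-of-the-envelope checkpoint bandwidth estimate (all constants are
# assumptions; see the note above).

def required_write_gbps(params_billions: float,
                        bytes_per_param: float = 14.0,  # assumed: fp16 weights + fp32 grads/Adam state
                        interval_s: float = 3600.0,     # assumed: checkpoint once per hour
                        write_fraction: float = 0.05) -> float:
    """Sustained write bandwidth (GB/s) needed to finish one full
    checkpoint within `write_fraction` of each checkpoint interval."""
    ckpt_gb = params_billions * bytes_per_param  # 1e9 params x bytes/param, expressed in GB
    return ckpt_gb / (interval_s * write_fraction)

# A 70B-parameter model checkpointed hourly: ~980 GB per checkpoint,
# or roughly 5.4 GB/s of sustained write bandwidth.
print(f"{required_write_gbps(70):.1f} GB/s")
```

Note how the parallelism argument enters: as the commentary observes, only one checkpoint is needed per octet, so the aggregate figure above is divided among octet-level writers rather than multiplied across every GPU in the cluster.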

Generative transformation is the future of the enterprise. Commentary by Kevin Cochrane, CMO, Vultr

“Gartner’s annual IT Infrastructure, Operations & Cloud Strategies Conference explored the future of infrastructure operations, highlighting the biggest challenges and opportunities heading into 2024. One of the big themes for 2024? Generative transformation. 

Enterprises across industries – from healthcare and life sciences to financial services and media & entertainment – are racing to embrace AI transformation. However, the past year has highlighted the need for enterprises to actively pursue generative change to flourish in the evolving landscape of AI-driven business.  

Generative transformation involves implementing both technological and organizational shifts to integrate generative AI into the fundamental aspects of business operations. The three foundational steps needed to achieve this include: formulating a strategy around how your enterprise will use generative AI, planning for the organizational change needed to fully roll out generative transformation, and building and deploying a platform engineering solution to empower the IT, operations and data science teams supporting your generative transformation journey. 

With these three steps, enterprises will be well on their way to diving head first into generative transformation, and ultimately thriving in today’s dynamic business landscape.”

GPT Store allows for a better ChatGPT user experience. Commentary by Toby Coulthard, CPO at Phrasee

“There’s been interesting discourse in the AI ‘hypefluencer’ space (namely on X) that paints GPTs as glorified instruction prompts that provide no more capability or functionality than before, and therefore no utility. They’re missing the point; this is a user experience change, not a capability one. A large part of what OpenAI has been struggling with is that most people beyond the early adopters of ChatGPT don’t know what to do with it. There’s nothing scarier than a flashing cursor in a blank field. ChatGPT’s ‘chasm’ in the product adoption curve is that these early-majority users want to use ChatGPT but don’t know how. No one was sharing prompts before; now they’re sharing GPTs, and the GPT Store facilitates that. The GPT Store opens the door to the next 100m+ weekly active users.

With the recent launch, I expect there to be a few very profitable GPTs, with a very large long tail of GPTs that are free to use. Plugins will allow for further monetization through third-party services via APIs, but that’s further down the line.

The comparison with an ‘app store’ is misguided; OpenAI isn’t facilitating the dissemination of applications, they’re facilitating the dissemination of workflows – principally from the most advanced ChatGPT users with experience in prompt engineering to the least experienced. It’s improving the usability and accessibility of ChatGPT, and its intention is increased adoption and improved retention – in that regard it will improve OpenAI’s competitiveness. GPTs also act as ‘agents lite’. OpenAI has even admitted this is a first step towards autonomous agents. They have a reputation for releasing early versions to democratize the ideation of use cases – both beneficial and harmful – to inform where the product should go. OpenAI is aware that even they don’t know all the use cases for their models – the GPT Store enables them to see what people produce and what’s useful, popular, and potentially dangerous before they build more capability and autonomy into these GPTs. The challenges lie in the edge cases that OpenAI is yet to think of.”

The Information Commissioner’s Office (ICO) in the UK is investigating the legality of web scraping for collecting data used in training generative AI models. Commentary by Michael Rinehart, VP of AI, Securiti

“Taking effect on May 25, 2018, the EU General Data Protection Regulation (GDPR) is having a profound impact on the enforcement of global privacy regulations. Emphasizing the importance of obtaining consent before processing information, GDPR is even more relevant today than it was six years ago when it first came into force. In this evolving landscape, the surge of AI has fundamentally altered how data is handled, further reinforcing the significance of adhering to these laws. The recent investigation by the UK’s Information Commissioner’s Office (ICO) into the legality of web scraping for training generative AI models is therefore unsurprising. These models generate text or images based on large datasets, raising privacy concerns due to automated data collection.

The ICO’s focus on data protection standards aligns with GDPR principles. Responsible AI implementation requires robust data governance practices. As AI systems continue to be integrated with sensitive data, organizations must establish strong controls, including strict access control, anonymization, and governance frameworks, to optimally balance AI potential with data privacy and security.”
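As a concrete, if simplified, illustration of the anonymization control Rinehart mentions: the sketch below masks common PII patterns before text leaves a governed boundary. The regex patterns and placeholder labels are our own assumptions for the example; production pipelines typically combine dedicated PII classifiers with access controls rather than relying on regexes alone.

```python
import re

# Toy PII-masking pass: replace common identifier patterns with typed
# placeholders before the text is shared with an external AI service.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Substitute each recognized PII pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(anonymize("Contact Jane at jane.doe@example.com or +1 (555) 010-2030."))
# -> Contact Jane at [EMAIL] or [PHONE].
```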

Lacking Proper Data Governance, ‘Black Box’ AI Could Be Disastrous for the Enterprise. Commentary by John Ottman, Executive Chairman of Solix Technologies, Inc.

“Over the past year OpenAI’s ChatGPT has dominated the press and business discussions, and has influenced every company’s stance and guidelines on the use of generative AI. The impact has been profound, as the entire world has now largely seen how generative AI automates repetitive or difficult tasks, reduces workloads and simplifies processes. But more recently, concerns have arisen over enterprise use of ChatGPT, noting the data governance challenges caused by exposing enterprise data to ‘black box’ AI and ‘being too reliant on one company’s AI tech.’

Enterprise AI users are concerned that the risks posed by ‘black box’ AI models are just too great, and some have already banned their use. Chief amongst the list of concerns are data security, data privacy and compliance with laws regulating the handling of sensitive data and personally identifiable information (PII). Others even worry that ChatGPT would become so integral to their business that a failure at OpenAI would lead to a failure at their business as well.

It’s an unavoidable conclusion that training an external, proprietary ‘black box’ AI model with your private enterprise data is dangerous and may expose your company to data breach, legal risk and compliance findings. ‘Black box’ training inputs and operations aren’t visible for peer review, and responses may arrive at conclusions or decisions without providing any explanation as to how they were reached. ChatGPT introduced the world to generative AI, but so far data governance concerns rule out a central role in the enterprise.

In the wake of ChatGPT, private LLMs have emerged as a leading alternative for enterprise use because they are safe, secure, affordable and solve the operational challenge of training public LLMs with private enterprise data. Private AI models reduce the risk of data exfiltration and adverse security and compliance findings because the data never leaves the enterprise. Several of the world’s most powerful LLMs are available as free and open source solutions, providing improved transparency and control over security and compliance. Most importantly, private LLMs may be safely and securely trained and fine-tuned with enterprise data.”
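For a sense of the mechanics behind a private deployment, here is a minimal sketch using the Hugging Face transformers library to run an open-weights model entirely in-house. The model identifier is illustrative (any permissively licensed checkpoint mirrored inside the enterprise works the same way), and the snippet assumes the weights are available locally and the accelerate package is installed for device placement.

```python
# Minimal sketch: query an open-weights LLM inside the enterprise
# perimeter, so prompts and data never leave your infrastructure.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # illustrative; any internally mirrored checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")  # needs `accelerate`

prompt = "Summarize our incident-response policy in three bullet points."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```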

Open vs. Closed AI Heats Up with New AI Alliance. Commentary by Mike Finley, CTO and Co-founder, AnswerRocket

“After Meta, IBM and Intel recently launched the AI Alliance, the battle between open and closed AI has begun to heat up. Closed AI took a big lead out of the gate, with technologies like ChatGPT (ironically from OpenAI) and Bard delivering a practical, powerful chatbot experience that many businesses are already leveraging. Meanwhile, open AI offerings like Llama are rough around the edges, suffer from inferior performance and have been sparsely adopted within the enterprise.

However, the AI Alliance shows that open AI may start competing sooner than most expected – and growing community and vendor support is the reason why. The three partners in the alliance are playing catch-up after being sidelined in AI (Watson “lost” to GPT, Intel “lost” to NVIDIA, and Meta’s Llama 2 is about six months behind OpenAI in terms of creativity, token sizes, task complexity, ease of use, and pretty much everything except cost). So the three are well resourced, capable, and motivated.

Ultimately, the race between the two will bear similarities to the iPhone vs. Android battle. Closed AI technologies will provide premium, highly polished, easily usable products. Open AI tech will offer great value, flexibility and support for niche applications.”

Data sovereignty has gone viral. Commentary by Miles Ward, CTO of SADA

“Imagine you are building your business, making promises to customers – and you get hacked! Not your fault, but your provider doesn’t see it that way and ignores your request for support, so you must take legal action. But wait: your provider operates under a different legal system and different contract rules – if you think local lawyers are expensive, consider the cost of international ones.

Companies want the relationships they have with customers governed under the same laws as they are, meaning they’ll want the data protected by those laws, which in most cases means it needs to be stored in the same country where they’re doing business. The challenge there is that there are 192 countries and there are far from 192 cloud providers.” 
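As one concrete mechanism, the major clouds let you pin data to a jurisdiction when storage is provisioned. The sketch below uses Google Cloud Storage purely as an example; the bucket name and region are illustrative, default credentials are assumed to be configured, and other providers expose equivalent location controls.

```python
# Minimal sketch: pin customer data to a specific jurisdiction so the laws
# governing storage match the laws governing the customer relationship.
from google.cloud import storage

client = storage.Client()  # assumes application default credentials
bucket = client.create_bucket(
    "example-customer-records-eu",  # hypothetical bucket name
    location="europe-west3",        # Frankfurt: objects stay in Germany
)
print(bucket.name, bucket.location)
```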

Why PostgreSQL continues to be the fastest-growing DBMS. Extensibility. Commentary by Charly Batista, PostgreSQL Technical Lead at Percona

“In the past year, PostgreSQL has not merely maintained but strengthened its position as one of the fastest-growing database management systems in the world — continuing to enjoy rapid, sustained growth in overall adoption, as well as being named 2023’s DBMS of the Year by DB-Engines for the fourth time in just the past six years. (DB-Engines uses a variety of metrics, such as growth in web citations and professional profile entries, to determine which DBMS saw the greatest increase in popularity over the past year.)

While the DBMS has a variety of advantageous qualities, one of the main driving forces behind PostgreSQL’s enduring success lies in its unparalleled extensibility. Extensibility, in this case, refers to the ability of the database management system to be easily extended or customized to accommodate new features, data types, functions, and behaviors. Through that extensibility, PostgreSQL gives developers the flexibility to continually re-tool and expand upon the functionality of the DBMS as their needs and those of the market change. As a result, PostgreSQL promises what other DBMSs can’t — seamless adaptation to diverse user requirements, and the ability to keep pace with an ever-evolving technological landscape; both of which have proven particularly advantageous as of late in relation to the booming fields of machine learning and AI.

Combined with this extensibility, PostgreSQL’s open-source nature implicitly provides immense opportunity. With countless extensions already freely available and permissive licensing that encourages experimentation, PostgreSQL has become a hotbed for innovation. Developers are able to consistently push the boundaries of what a single database management system can do. This open ethos invites engagement, participation, and contributions from a multitude of sources, organizations and individuals alike, leading to a rich, diverse pool of talent and expertise.

The synergy between PostgreSQL’s extensibility and open-source foundation has played a central role in propelling it to become one of the fastest-growing DBMSs on the market, and one of the most beloved. In a way, it has evolved beyond a mere database management system, instead manifesting into a platform where developers can adapt and create a DBMS that fits their specific needs. Thanks to PostgreSQL’s convergence of open-source principles and extensibility, developers have a platform for the continuous exchange of ideas, extensions, and shared indexes. PostgreSQL, thus, stands not just as a database of choice but as a testament to the power of collaboration, adaptability, and innovation in the ever-expanding realm of database technology.”
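To make the extensibility Batista describes tangible: a single SQL statement can bolt new functions and operators onto a running PostgreSQL server. The sketch below enables pg_trgm, a trigram-similarity extension that ships with PostgreSQL, from Python; the connection string is illustrative and the psycopg2 driver is assumed to be installed.

```python
# Minimal sketch of PostgreSQL extensibility: one statement installs the
# pg_trgm extension, which adds a similarity() function and new operators.
import psycopg2

conn = psycopg2.connect("dbname=demo user=postgres")  # hypothetical DSN
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm;")
    cur.execute("SELECT similarity('postgres', 'postgresql');")
    print(cur.fetchone()[0])  # trigram similarity score between 0 and 1
conn.close()
```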

Balancing the Environmental Impact of Data Mining. Commentary by Neil Sahota, Chief Executive Officer of ACSILabs Inc and United Nations AI Advisor

“Like the two sides of a coin, data mining has both positive and negative environmental impacts. It’s a balancing act: resource optimization and environmental monitoring on one side, and significant energy consumption and the potential for data misuse on the other.

On the positive side, data mining optimizes resource management. For example, it aids in predictive maintenance of infrastructure, reducing the need for excessive raw materials. In agriculture, precision farming techniques (reliant on data mining) optimize the use of water and fertilizers, enhancing sustainability. A study by the USDA showed that precision agriculture could reduce fertilizer usage by up to 40%, significantly lowering environmental impact. Second, data mining plays a crucial role in environmental conservation. By monitoring and analyzing large data sets, researchers track changes in climate patterns, biodiversity, and pollution levels. The Global Forest Watch, for instance, has leveraged data mining to provide insights into forest loss.

Conversely, there are three crucial negative impacts. First is the high energy consumption. Data centers are essential for data mining but consume vast amounts of energy. According to a report by the International Energy Agency, data centers worldwide consumed about 200 TWh in 2020, which is roughly 1% of global electricity use. This consumption contributes to greenhouse gas emissions, particularly if the energy is sourced from fossil fuels. 

Second is e-waste and resource depletion. Hardware used in data mining (e.g., servers) has a finite lifespan, leading to electronic waste. Moreover, manufacturing these devices also contributes to the depletion of rare earth minerals. The United Nations University estimates that global e-waste reached 53.6 million metric tons in 2019, a figure that continues to rise because of increasing demand for data processing infrastructure.

Third is the potential for data misuse. While not a direct environmental impact, data misuse can lead to misguided policies or the exploitation of natural resources. Ensuring ethical and sustainable use of data is crucial to prevent negative environmental consequences.

While data mining offers significant environmental and resource optimization benefits, its environmental footprint cannot be overlooked. Balancing the advantages with sustainable practices while minimizing the negative environmental impacts is essential for producing true net-positive value.”
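A quick sanity check of the consumption figure quoted above, with the caveat that the global denominator (roughly 23,000 TWh of electricity consumed worldwide in 2020) is our assumed round number, not part of the commentary:

```python
# Sanity-checking "200 TWh is roughly 1% of global electricity use".
data_center_twh = 200            # IEA estimate cited above
global_electricity_twh = 23_000  # assumed approximate world consumption, 2020
print(f"{data_center_twh / global_electricity_twh:.1%}")  # -> 0.9%, i.e. about 1%
```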

How AI hallucinations are unpredictable, yet avoidable. Commentary by Ram Menon, Founder and CEO of Avaamo

“The phenomenon of hallucinations in large language models (LLMs) stems primarily from the limitations imposed by their training datasets. If an LLM is trained on data lacking the necessary knowledge to address a given question, it may resort to generating responses based on incomplete or incorrect information, leading to hallucinations. However, this is just one facet of the challenge. Complications arise due to the LLMs’ inability to verify the factual accuracy of its responses, often delivering convincing yet erroneous information. Additionally, the training datasets may contain a mix of fictional content and subjective elements like opinions and beliefs, further contributing to the complexity. The absence of a robust mechanism for admitting insufficient information can aggravate the issue, as the LLM tends to generate responses that are merely probable, not necessarily true, resulting in hallucinations.

To mitigate hallucinations in enterprise settings, companies have found success with an approach known as Dynamic Grounding, which incorporates retrieval augmented generation (RAG): the LLM’s knowledge from its training dataset is supplemented with information retrieved from secure and trusted enterprise data sources.

By tapping into additional, up-to-date data within the enterprise’s repository, these new approaches significantly reduce hallucinations. This increase in information enhances user trust in conversational enterprise solutions, paving the way for secure and expedited deployment of generative AI across a diverse range of use cases.”
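The core RAG pattern Menon describes is easy to sketch. The toy example below retrieves from two hard-coded stand-in documents with TF-IDF and prints the grounded prompt that would be sent to an LLM; production systems (including, presumably, Avaamo’s proprietary Dynamic Grounding) use vector embeddings, access controls, and a real model behind an API.

```python
# Minimal RAG sketch: retrieve trusted enterprise text, then constrain the
# LLM to answer only from it. TF-IDF retrieval is used purely for brevity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

DOCUMENTS = [  # stand-ins for a secure enterprise data source
    "Refunds are processed within 5 business days of manager approval.",
    "VPN access requires a hardware token issued by corporate IT.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    vectorizer = TfidfVectorizer().fit(DOCUMENTS + [query])
    doc_vecs = vectorizer.transform(DOCUMENTS)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vecs)[0]
    return [DOCUMENTS[i] for i in scores.argsort()[::-1][:k]]

query = "How long do refunds take?"
context = "\n".join(retrieve(query))
prompt = (f"Answer using ONLY the context below. If it is insufficient, "
          f"say so.\n\nContext:\n{context}\n\nQuestion: {query}")
print(prompt)  # this grounded prompt is what the LLM actually receives
```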

Sign up for the free insideAI News newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideAINewsNOW
