Heard on the Street – 2/29/2024

Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI, and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.

On the Gemini meltdown. Commentary by Mehdi Esmail, co-founder and Chief Product Officer at ValidMind 

“Google’s intent was to prevent biased answers, ensuring Gemini did not produce responses where racial or gender bias was present. That effort instead left Gemini ‘biased’ toward racially and gender-diverse responses: asked to produce a depiction of Nazi soldiers from 1943, for example, it overcorrected by showing pictures of Black and female Nazi soldiers, which was historically inaccurate. Gemini produced the incorrect output because it was trying too hard to adhere to the ‘racially/gender diverse’ output view that Google tried to ‘teach’ it.”

AI + speed cameras in NYC. Commentary by Dean Drako, CEO of Eagle Eye Networks 

“Speed cameras are just the start of automated AI-based law enforcement. Cameras are getting more prevalent and are likely to be utilized to enforce many types of laws. AI technologies such as LPR (license plate recognition) make these cameras more valuable and capable of detecting violations. Their use to enforce laws of all types is an obvious extension. Unless legislation is passed forbidding it, cameras and AI will inevitably be used for many types of law enforcement.”

Considerations When Building a Generative AI Team. Commentary by Tiago Yuzo Miyaoka, Manager, Data & Cloud Specializations, Andela 

“With the recent advances in Large Language Models (LLMs), many companies are looking for experienced technologists in this particular field but are failing to find talent due to skills gaps in the workforce. In the constantly evolving field of Generative AI, it is hard to evaluate skills with the traditional approach of measuring years of experience or, in this case, even months. To properly vet a potential candidate, one would first want to see that they have the foundational knowledge and expertise needed to understand how LLMs work, and that they have the ability to learn and adapt on the fly. Deeper experience with recent techniques such as LoRA (Low-Rank Adaptation) and PEFT (Parameter-Efficient Fine-Tuning) reflects highly valuable skills, but those techniques would not be hard to learn for someone who has years of experience fine-tuning BERT (Bidirectional Encoder Representations from Transformers) and similar models.
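
As a concrete illustration of the techniques named above, here is a minimal sketch of LoRA-based parameter-efficient fine-tuning using the Hugging Face transformers and peft libraries; the checkpoint and hyperparameters are illustrative assumptions, not recommendations from the commentary.

```python
# Minimal sketch: attach LoRA adapters to a BERT-style model with the
# `peft` library. Checkpoint and hyperparameters are illustrative.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # sequence classification task
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=16,               # scaling factor for the adapter updates
    lora_dropout=0.1,
)

model = get_peft_model(base_model, lora_config)
# Only the small adapter matrices are trainable -- typically well
# under 1% of the full parameter count.
model.print_trainable_parameters()
```

From there, training proceeds much as it would for full fine-tuning, which is why prior BERT experience transfers so readily.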

Another option a company could employ is to hire a Prompt Engineer without any previous Data Science or Machine Learning Engineering experience. A Prompt Engineer is focused on writing prompts and doesn’t need a Data Science or Machine Learning background to perform well. This strategy, however, could limit how far Generative AI applications can be extended, eventually creating the need to replace the talent, since upskilling in this case would be very costly.

Overall, a more holistic approach that evaluates foundational knowledge and expertise, including courses, certifications, and personal projects rather than only years of experience, favors companies that want to build Generative AI teams and are struggling with availability in the market. Data Scientists and ML Engineers with prior experience in foundational models are more likely to succeed in a Generative AI-related role and are also more likely to upskill themselves with the recent advances in the field. Talent with both skill sets would still be the best scenario; however, such candidates may be scarce in the market, not impossible to find, but definitely more expensive. Another option for building a team would be to mix different profiles, combining senior talent with more theoretical knowledge of AI and a few Prompt Engineers who could learn from them or at least handle less demanding daily tasks.”

Enterprise AI Adoption will require reckoning with multi-cloud reality. Commentary by Ramesh Prabagaran, CEO and co-founder of Prosimo 

“The intersection of cloud networking and AI has become a critical discussion among enterprises seeking to deploy AI to solve complex problems. As they build AI applications, companies are encountering challenges with applications, data, and workloads operating in multiple public clouds. For compute- and data-intensive operations like AI, that makes the networking critically important. Intel’s spin-off of a business unit, now called Articul8, to address this issue shows the challenges (and opportunities) created by data stored in multi-cloud environments.

The preferred approach to training an AI model with cloud-based data has been to place both the model and the data in the same cloud.

However, any model or broader AI deployment needs to work in the real world, not just the lab, and in the real world enterprises have data and applications spread over several cloud deployments. The model may need to ingest data from one cloud while the applications the model powers call different instances for inference workloads. Establishing connectivity between these elements is a challenge, as customers prefer streamlined, private connections over public data highways. The solution lies in simplified cross-cloud pipelines with security, observability, and effective troubleshooting mechanisms integrated at the base level.
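
To make the ingestion challenge concrete, here is a minimal sketch of a cross-cloud staging step in Python; the bucket and object names are hypothetical, and a production pipeline would route this traffic over private connectivity with security and observability built in, as the commentary recommends.

```python
# Minimal sketch: stage training data from AWS S3 into Google Cloud
# Storage, where the model runs. All names are hypothetical.
import boto3
from google.cloud import storage

s3 = boto3.client("s3")
gcs = storage.Client()

# Read the object from the source cloud.
obj = s3.get_object(Bucket="training-data-aws", Key="events.parquet")
payload = obj["Body"].read()

# Write it to the destination cloud for training or inference.
gcs.bucket("model-staging-gcp").blob("staged/events.parquet") \
   .upload_from_string(payload)
```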

Enterprise data security is also an issue, as companies demand strict controls on how their data is used to train third-party models.

Companies should review their agreements with available commercial and open-source models and ensure that enterprise data is isolated from the vendor’s training dataset, or that the data used for a custom model is likewise cordoned off.

The cost dynamics of AI implementation involve considerations for both the model itself and the connectivity between different sources. Locating AI models closer to their data source while maintaining security lowers egress and data transport fees. Cloud network optimization can support this dynamic network construction. 
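
A back-of-the-envelope calculation shows why colocation matters; the egress rate, dataset size, and read count below are assumptions chosen purely for illustration.

```python
# Assumed figures for illustration; real rates vary by provider.
EGRESS_RATE_USD_PER_GB = 0.09   # hypothetical cross-cloud egress rate
DATASET_GB = 5_000              # a 5 TB training set
CROSS_CLOUD_READS = 10          # times the data crosses clouds when the
                                # model lives in a different cloud

remote_cost = DATASET_GB * CROSS_CLOUD_READS * EGRESS_RATE_USD_PER_GB
colocated_cost = DATASET_GB * 1 * EGRESS_RATE_USD_PER_GB  # one-time move

print(f"remote model:    ${remote_cost:,.0f}")     # $4,500
print(f"colocated model: ${colocated_cost:,.0f}")  # $450
```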

As with any production-grade enterprise solution, AI models and applications should adhere to strict authentication and authorization policies. The cost and technology complexities mean a strict security regime should be implemented. Cross-cloud connection of AI applications adds to this complexity and should be treated as a core strategic IT initiative.”

What Does Software 2.0 Mean for Development? Commentary by Shiva Nathan, founder of app development startup Onymos

“The (over)promise of AI will provoke significant spending increases on experiments without well-defined use cases. Tech executives will rush to make sure they can include ‘AI’ in their marketing copy at the expense of commercial viability. New tech companies that are just wrappers over technology like ChatGPT will rise and fall quickly. All of this is already happening to an extent. I encourage every organization to take a more thoughtful approach.”

Will GenAI Disrupt Industries? Commentary by Chon Tang, Founder and General Partner, Berkeley SkyDeck Fund

“AI is hugely influential in every industry and role, with potential for huge value creation but also for abuse. Speaking as both an investor and a member of society, I believe the government needs to play a constructive role in managing the implications here.

As an investor, I’m excited because the right set of regulations will absolutely boost adoption of AI within the enterprise. By clarifying guardrails around sensitive issues like data privacy and discrimination, regulation will let buyers and users at enterprises understand and manage the risks of adopting these new tools. There are real concerns, though, about the compliance costs these regulations could impose.

There are two components to this conversation:

The first is that we should make sure the cost of compliance isn’t so high that ‘big AI’ begins to resemble ‘big pharma,’ with innovation monopolized by a small set of players that can afford the massive investments needed to satisfy regulators.

The second is that some of the policies around reporting seem focused on geopolitical considerations, and there is a real risk that some of the best open-source projects will choose to locate offshore and avoid US regulation entirely. A number of the best open-source LLMs trained over the past six months include offerings from the UAE, France, and China.”

Why sharding angst accelerates shift to distributed SQL databases. Commentary by Sunny Bains, Software Architect at PingCAP

“The classic technique for solving scalability problems is divide and conquer. In the database world this translates to sharding or partitioning data. Many companies now have databases that are terabytes in size. Resharding a 10TB table is a nightmare fraught with complexity. The ideal database is one that scales in both dimensions — compute and storage — independently, and doesn’t need to be explicitly sharded by the application developers.
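
To see why, consider a minimal sketch of application-level hash sharding; with simple modulo placement, changing the shard count remaps most keys, which is exactly the resharding pain described above. The key names are hypothetical.

```python
# Minimal sketch of manual hash sharding. With modulo placement,
# growing from 4 to 5 shards moves roughly 80% of keys, and every
# moved row in a multi-terabyte table must be physically relocated.
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Route a row to a shard by hashing its key."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

keys = [f"user-{i}" for i in range(10_000)]
moved = sum(shard_for(k, 4) != shard_for(k, 5) for k in keys)
print(f"{moved / len(keys):.0%} of rows move going from 4 to 5 shards")
```

Distributed SQL databases handle this placement internally and rebalance data incrementally, rather than forcing an application-level rewrite.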

As IT departments get squeezed by shrinking budgets, they will want to move to database architectures that avoid explicit or manual sharding. The need to cut costs will accelerate the shift toward technologies like distributed SQL databases that eliminate the need for explicit sharding and provide multi-tenancy capabilities. These capabilities will help them consolidate their database instances, reducing cost and complexity.”

Generative AI in Retail. Commentary by Bernadette Nixon, CEO, Algolia

“Retail leaders need to consider the resources and support needed to execute generative AI successfully. Most retailers don’t have the in-house expertise to build generative AI capabilities themselves. Hiring a team of data scientists and figuring out how to train the data is complicated and expensive. If that’s not an option, retailers must do the research and carefully select a technology partner with the AI expertise and proven track record that they don’t have in-house. With the right backing, AI will create a quantum leap in the e-commerce shopping experience. Now is the time for retailers to start working with the technology partners that will help them through the journey. Generative AI will be a huge competitive differentiator for retailers that embrace the technology.”

Unchecked Dangers of ChatGPT Hallucinations. Commentary by Liran Hason, CEO of Aporia

“The fact that ChatGPT is hallucinating isn’t surprising; it’s a dire warning come to life about the risks of unchecked artificial intelligence. The real issue isn’t why ChatGPT is hallucinating, but rather how we can address and prevent such issues. AI has the potential to change our world for the better, yet there’s a very concerning lack of urgency in safeguarding against these very scenarios, to prevent AI from developing autonomous, unchecked impulses. The spread of misinformation, however trivial it may seem, poses a real danger by desensitizing the public to AI errors, blurring the lines between bias and fact. For AI to remain a safe tool, companies deploying LLMs absolutely need to establish guardrails and regulations to protect everyone involved.”

Striking a delicate balance between privacy and AI. Commentary by Lakshmikant Gundavarapu, Chief Innovation Officer, Tredence

“In an era dominated by big data, businesses are increasingly harnessing the power of AI models such as ChatGPT to revolutionize efficiency and elevate customer service standards. However, this surge in AI adoption comes hand in hand with substantial data privacy concerns, particularly prevalent in data-intensive sectors like banking and consumer goods. The pivotal challenge lies in effectively leveraging these advanced AI tools without compromising the confidentiality of sensitive information or violating stringent privacy regulations. 

Enterprises must embrace robust data privacy strategies to navigate this complex landscape successfully. This involves meticulous data classification to identify and safeguard sensitive information, minimizing the data input into AI models, and implementing advanced techniques like data masking and encryption. Equally essential are stringent access controls and secure data-sharing practices to thwart unauthorized access attempts.
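
As one concrete example of the masking technique mentioned above, here is a minimal sketch that redacts two common PII patterns before text reaches a model; the regexes are simplistic assumptions, and a production system would use dedicated PII-detection tooling.

```python
# Minimal sketch: regex-based masking of emails and SSN-like tokens.
# These patterns are intentionally simple and purely illustrative.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask(text: str) -> str:
    """Replace sensitive substrings with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return SSN.sub("[SSN]", text)

print(mask("Contact jane.doe@bank.com, SSN 123-45-6789."))
# -> Contact [EMAIL], SSN [SSN].
```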

A standout solution in this intricate ecosystem is synthetic data. By crafting data that mirrors authentic patterns yet contains no sensitive information, businesses can confidently train and test AI models without risking privacy breaches. This innovative approach presents a dual advantage: It not only fortifies privacy safeguards but also preserves the utility of data for diverse AI applications. 
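
For illustration, here is a minimal sketch of generating synthetic records that mimic the shape of real customer data while containing no real values; the field names and distributions are assumptions chosen for the example.

```python
# Minimal sketch: fabricate customer records with realistic-looking
# structure but no real data. Fields and distributions are assumed.
import random

random.seed(0)  # reproducible example

def synthetic_customer(i: int) -> dict:
    return {
        "customer_id": f"CUST-{i:06d}",                    # fabricated ID
        "age": random.randint(18, 90),                     # plausible range
        "balance": round(random.lognormvariate(8, 1), 2),  # skewed, like real balances
        "churned": random.random() < 0.2,                  # assumed 20% churn rate
    }

dataset = [synthetic_customer(i) for i in range(1_000)]
print(dataset[0])
```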

In essence, businesses must strike a delicate balance—capitalizing on the vast potential of AI while safeguarding data privacy. The incorporation of synthetic data emerges as a prudent step in this direction. In our digitally driven world, responsible AI usage is not just a strategic choice but a technical necessity. It forms the bedrock for upholding customer trust and maintaining industry reputation in an increasingly interconnected and privacy-conscious landscape.”

Sign up for the free insideAI News newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideBIGDATANOW
