Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favorite technology topic areas: big data, data science, machine learning, AI and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.
Generative AI Model Hallucinations. Commentary by Vikram Chatterji, CEO of Galileo
“With more enterprise teams deploying Generative AI (GenAI) products in 2024, GenAI model hallucinations remain a fear.
Hallucinations exist on a spectrum ranging from acceptable to harmful. An acceptable hallucination would be a misformatted output, such as a date being spelled out rather than written in MM/DD/YYYY format, whereas a harmful hallucination would be a vulgar or lewd response from a children’s application when asked, “What is happy hour?”. So why do these occur? Hallucinations are quite nuanced, and their likelihood is influenced by a variety of factors. For instance, the choice of model, prompt, query, embedding model, context, and data all influence whether a GenAI system hallucinates. In addition, a model may be incapable of memorizing all of the information it was fed, may contain errors in its training data, or may have been trained on outdated data. If you’re using Retrieval Augmented Generation (RAG) solutions, hallucinations could be a result of your context data or your chunking strategy, and if you’re fine-tuning, they could be due to biased data used to train your model.
While it is not possible to completely eliminate hallucinations, there are a number of steps teams can take to reduce them. Teams need a framework in place to monitor and evaluate system components closely, and must constantly iterate to find a mix that yields highly accurate results.”
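As an illustration of the kind of monitoring framework described above, here is a minimal, hypothetical grounding check for a RAG system: it scores how much of a model’s response is actually supported by the retrieved context, flagging low-scoring responses for review. The token-overlap heuristic and threshold are illustrative assumptions, not a production evaluation method.

```python
def grounding_score(response, context_chunks):
    """Fraction of substantive response tokens that appear in the
    retrieved context. A low score flags a likely hallucination
    for human review before the response reaches the user."""
    context_tokens = set()
    for chunk in context_chunks:
        context_tokens.update(chunk.lower().split())
    # Ignore very short tokens (articles, prepositions) when scoring.
    response_tokens = [t for t in response.lower().split() if len(t) > 3]
    if not response_tokens:
        return 0.0
    hits = sum(1 for t in response_tokens if t in context_tokens)
    return hits / len(response_tokens)
```

In practice, teams would run a check like this (or a model-based variant) on every response and log scores over time, iterating on the retriever, chunking, and prompts whenever the score distribution degrades.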
The Shift Towards Customized AI Solutions in the Workplace. Commentary by Dmitry Shapiro, founder & CEO of MindStudio
“As organizations enable their employees to use generative AI solutions like ChatGPT, they are discovering that AI productivity gains are hard to find. While generative AI can do powerful things like write, analyze, and summarize, getting just the right output from an AI model requires employees to spend too much time crafting and fine-tuning prompts — more time than it would have taken them to just do the work themselves.
Many early enterprise adopters of ChatGPT are also reporting that their employees are wasting time with AI, distracted by capabilities that are not work-related. Enterprises that have built custom business applications on top of AI models are showing the most productivity gains: their employees benefit from AI but aren’t responsible for engaging with the models directly. They simply use the custom apps made available to them by their enterprise. Additionally, because these custom applications are tightly integrated with other enterprise systems, they can be tightly managed, enabling logging, compliance, and business intelligence on how each person, team, and the enterprise as a whole use AI, and on what can be improved.
Bespoke platforms allow enterprises to rapidly create custom AI applications — made specifically to fit their unique operational processes.”
Combatting the National Digital Skills Gap through AI-Driven, Real-Time On-the-Job Training. Commentary by Karan Sood, Chief Product and Technology Officer at SupportLogic
“As technology advances, businesses are finding that their technological growth is outpacing the rate at which their teams can be trained or upskilled. The result is a major skills gap across industries and companies of all sizes. This skills gap is particularly acute in the customer support industry, where agents are constantly adapting to evolving technologies and methodologies while still being expected to deliver exceptional customer service. Putting the skills epidemic in context, 87% of companies report skill gaps within their organizations, and 46% have no strategy to address the shortage of skilled workers. This highlights the urgent need for companies to find new strategies to bridge the gap while proactively tackling their talent development challenges.
Leveraging AI tools, businesses now have a promising solution to address both upskilling and staff onboarding in customer support and beyond. Assistive AI, encompassing predictive and generative capabilities, offers a myriad of benefits compared to traditional learning and development. For example, AI analyzes and provides advice tailored to specific issues, empowering support agents to handle inquiries effectively while continuously providing on-the-job learning and development. Real-time data insights and AI-generated contextual guidance further refine communication and problem-solving abilities.
Imagine a support team at a large tech company facing a high volume of support cases and complex inquiries. With the assistance of AI tools, human agents have the backup needed to manage caseloads more efficiently – improving operations and reducing escalation rates. This AI/Human collaboration allows employees not only to enhance customer satisfaction but also introduces an element of continuous learning within the daily workflow. GenAI provides agents with instant contextual advice tailored to specific customer issues, improving their ability to deliver empathetic and efficient support while offering personalized learning opportunities to identify which skills are strong and which need improvement.
The goal of deploying AI is to enhance each support agent’s innate human ability. By supporting support teams, businesses can scale their teams’ skills in tandem with tech advancements – simultaneously strengthening exceptional customer experiences.”
CPG Manufacturing Looks to Purpose-Built AI for Future Proofing the Industry. Commentary by Saar Yoskovitz, CEO and co-founder of Augury
“A majority (69%) of CPG manufacturers said they planned to increase AI investments over the last year, according to a survey of 500 global manufacturers. The same report also found that respondents believe AI can help them address workforce, capacity, and yield/throughput issues. When applied to manufacturing process health, purpose-built AI and machine learning algorithms collect and analyze unique variables of a production line, not just the data. As a result, teams can better target and meet goals around capacity, yield, throughput, and other KPIs.
CPG companies, take note: purpose-built AI solutions outperform traditional manufacturing optimization while also upskilling talent for today’s dynamic business needs. Teams become empowered by AI co-pilots, able to act faster and smarter and to focus on optimizing production lines by moving beyond mundane, reactive tasks. CPG companies should be pursuing this kind of AI investment as they look to rebalance market share and become resilient.
Addressing Developer Concerns with GenAI. Commentary by Jay Allardyce, General Manager, Data and Analytics, insightsoftware
“Users expect a vision of the future from their analytics software. Developers are aware of this and have focused on advanced analytics features like predictive and generative artificial intelligence (AI).
Unfortunately, Gartner reports that only 10-20% of IT and customer-facing teams have actually adopted or intend to adopt generative AI. With AI, organizations need to start with a clear outcome in mind: what business problem or KPI do I intend to impact? Once that is understood, and once the workflow and the point where the result will reach users with significant adoption have been identified, organizations should consider re-imagining an existing application experience rather than defining a brand-new AI application. Organizational productivity with AI will come via embedded intelligence, not from shifting an organization to become an AI-first org with a new set of applications. This approach allows product and developer teams to create data-driven projects leveraging the best embedded intelligence powered by augmented analytics.
Not only can this accelerate previously sticky data-driven experiences, but as more enterprises and employees are exposed to data-driven applications, they will increasingly accept these AI and automation experiences as part of their normal workflow. Absent a clear outcome, workflow, and existing application, most organizations will spend countless hours and dollars building low-adoption proofs of concept.”
How prompt engineering can save hours of workforce training. Commentary by Daniel Fallmann, CEO of Mindbreeze
“By carefully crafting prompts to be clear and concise, trainees can quickly understand the information being presented without the need for lengthy explanations. This reduces the time spent on reiterating or clarifying instructions during training sessions. Clear and well-defined prompts leave little room for interpretation, reducing ambiguity and minimizing the likelihood of misunderstandings. This eliminates the need for additional clarification or correction, saving time that would otherwise be spent addressing confusion.
Prompt templates can be predefined for specific departments and enterprise types, then filled with context from AI-powered systems like an insight engine. Requiring every user to write prompts on their own should be avoided; templates make information more actionable because the prompt reflects a typical action a user performs. This leads to predictable outputs rather than per-user trial and error with prompts.
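A department-level template system of the kind described above can be sketched in a few lines. The department names, template wording, and field names below are illustrative assumptions, not a specific product’s API: the point is that users supply structured fields, never raw prompts.

```python
from string import Template

# Hypothetical department-level prompt templates; the names and
# wording here are illustrative, not from any particular product.
TEMPLATES = {
    "support": Template(
        "You are a support assistant for $company. "
        "Using only the context below, answer the customer's question.\n"
        "Context: $context\n"
        "Question: $question"
    ),
    "sales": Template(
        "Draft a follow-up email for $company summarizing this call:\n"
        "$context"
    ),
}

def build_prompt(department, **fields):
    """Fill a predefined template so users never write raw prompts.
    Raises KeyError if the department has no template, and ValueError
    (via substitute) if a required field is missing."""
    return TEMPLATES[department].substitute(**fields)
```

A context source such as an insight engine would supply the `context` field automatically, so the user only ever provides the question.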
Standardized prompts ensure consistency across different training sessions and instructors. This reduces variability in learning outcomes and eliminates the need for repeated explanations of the same concepts, saving valuable training time. By simplifying complex concepts into easily digestible prompts, prompt engineering reduces the cognitive load on trainees. This allows them to focus more effectively on learning and applying new information, ultimately accelerating the training process.
Adaptive prompts can be tailored to the individual learning needs and preferences of trainees. By adjusting the difficulty or complexity of prompts based on learner performance or feedback, training programs can optimize knowledge retention and minimize the time spent on unnecessary review or repetition. Providing trainers with standardized prompts streamlines the preparation process and reduces the time spent developing training materials. This allows trainers to focus on facilitating learning rather than creating instructional content from scratch.
Clear prompts make it easier for trainers to assess trainee comprehension and progress accurately. This allows for more efficient monitoring of learning outcomes and the identification of areas that may require additional focus or support.”
Skilling and reskilling required with the emerging Gen AI technology stack. Commentary by Ashish Kakran, Principal at Thomvest Ventures
“One of the topics that is top of mind for people who are just entering the workforce, as well as those in senior positions, is how generative AI will impact their jobs. Is my job going to be automated away? What can I do now to future-proof myself as generative AI products get good? Will my training and degree still be as valuable in the future? There is both optimism in certain sections of the workforce and skepticism in others. Whenever big tech disruptions happen, such emotional reactions are normal. I think there are reasons to be positive, as there are new opportunities to consider for those in this boat. While there is no denying the fact that some jobs will get automated away, the future might belong to those who know how to use GenAI tools effectively. Imagine that, as a software engineer, you don’t have to constantly look at code samples or reinvent the wheel writing simple logic. You can focus on your craft and core deliverables with assistance from code copilots that automate away the boring parts. Similar efficiency gains are there to be had in multiple job functions across different industries. Keeping a close eye on emerging startups and product announcements from AI leaders in your area of interest can help you gain an edge over peers who choose not to take advantage of such tools.
In addition to current jobs, many new roles will need to be created. From ingesting data in a variety of formats, to cleaning it, running experiments, evaluating, and productionizing, these tools demand a new way of thinking. The entire stack, from hardware chips all the way to middleware and applications, needs to be rethought. This is happening right now. Beyond operations and core infrastructure, there will be a need for security and compliance roles to ensure that models comply with regulations like the EU AI Act, GDPR, and CCPA. The future is here and is being defined right now. With such a massive change, there will certainly be skilling and reskilling required to help the workforce be successful in an AI-native world. For the reasons outlined above, I think there is hope, but policymakers, governments, and members of the workforce will need to put in the effort to adapt to this change.”
The major considerations retailers must look into before implementing generative AI solutions. Commentary by Laura Ritchey, CEO of Radial
“Today’s retailers are competing for customer attention to capture and build their brand market share while customers are looking for a simple purchasing experience. Yet as personalization efforts persist, they often miss the mark, leaving customers to navigate experiences based on unrelatable and impersonal brand personas. Generative AI promises to open avenues for precise personalization, offering tailored product recommendations, related purchases, and forecasts for replenishment based on actual usage. These personalized journeys foster deeper loyalty, keeping customers engaged and spending more.
As retail leaders step into the future, there is much to consider: evaluating the various forms of GenAI available, the level of human intervention needed, and how the technology will fit into their frameworks and complement the brand voice. For instance, retailers must determine whether to train a model on the brand’s own unique, secure data or to access larger data sets with more historical knowledge that may come with serious security risks and lack privacy safeguards. As the world becomes more accustomed to these technologies, retailers must continue investing to understand their customers’ expectations and begin piloting solutions to keep pace while keeping data privacy and security top of mind.”
Biden Admin announces $6B to help clean up manufacturing. Commentary by Alp Kucukelbir, Ph.D., Chief Scientist and Co-Founder of Fero Labs
“We’re excited to see this support for industrial decarbonization and revitalization of domestic manufacturing from the Biden Administration. This transition will require a combination of solutions, and this is a great first step. AI can benefit all of these projects by accelerating decarbonization. AI will help reduce the amount of industrial heat wasted, minimize out-of-spec production while transitioning technologies, and speed up the technology transition itself to get high-efficiency plants running as quickly as possible.
These projects are critical, as they replace dirty fuel sources with cleaner ones. This is something AI cannot do on its own, but it can provide greater insight and explainability to the transition. Although AI doesn’t ‘move molecules around,’ it can tell you how to move the molecules around and speed up the transition, further saving costs, and reducing emissions along the way.”
Why Data Quality Fuels Enterprise Generative AI Success. Commentary by Chandini Jain, Chief Executive Officer, Auquan
“As enterprises advance their adoption of generative AI, the importance of getting the retrieval model right has become a priority, particularly where retrieval augmented generation (RAG) techniques are employed for knowledge-intensive use cases. And a retrieval model is only as good as the data you feed it. Like any information system, it’s junk in, junk out with AI.
In order to maintain data quality to feed AI, enterprises must invest in well-tested and robust data pipelines and focus on input data model standardization. This includes implementing mandatory data validation to ensure the data you feed your generative model is of high quality. It’s essential to put in the effort to select high-quality, relevant datasets at the top of the funnel that AI systems can use effectively for your use case.
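The mandatory validation step described above can be sketched as a simple filter stage in a data pipeline. The field names and thresholds below are illustrative assumptions, not a specific product’s schema: the point is that nothing reaches the retrieval model without passing explicit checks.

```python
def validate_record(record, required_fields=("id", "text", "source")):
    """Return a list of validation errors for one input record;
    an empty list means the record is safe to feed downstream."""
    errors = []
    for field in required_fields:
        if not record.get(field):
            errors.append(f"missing or empty field: {field}")
    text = record.get("text", "")
    # Very short snippets rarely make useful retrieval context.
    if len(text.split()) < 5:
        errors.append("text too short to be a useful context chunk")
    return errors

def filter_pipeline(records):
    """Keep only records that pass validation; set the rest aside
    for review rather than silently feeding them to the model."""
    clean, rejected = [], []
    for record in records:
        (clean if not validate_record(record) else rejected).append(record)
    return clean, rejected
```

In a real pipeline the rejected records would be logged and audited, so data-quality regressions surface before they show up as model inaccuracies.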
A lack of focus on data quality is a major source of enterprise AI deployment failure. The challenges of generative AI involving a lack of accuracy, comprehensiveness and trustworthiness are not new, but they are universal challenges when it comes to the kind of knowledge-intensive AI use cases found throughout the enterprise.
Establishing and maintaining quality datasets and continuously validating data helps preserve robustness across newer versions of AI. It also empowers teams to execute efficiently toward newer goals without needing to constantly validate the accuracy of prior models. Careful dataset selection ensures your models are trained on the data they expect to see in the production instance and usually leads to an increase in accuracy across all models.
Put simply, the better the input, the better the output when it comes to generative AI and RAG-based systems. For instance, a seemingly simple problem like summarization of a news article often suffers because input data was not adequately cleaned of ads and other superfluous content or the article itself is incomplete. It’s important to not lose sight of the need to feed AI with clean, relevant data.
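The ad-cleaning problem mentioned above can be illustrated with a minimal pre-processing pass. The patterns below are hypothetical examples of boilerplate markers; real ad and navigation text varies by source and would need source-specific rules or a trained classifier.

```python
import re

# Illustrative boilerplate patterns; real sources need their own rules.
AD_PATTERNS = [
    re.compile(r"(?i)^advertisement$"),
    re.compile(r"(?i)^sponsored content"),
    re.compile(r"(?i)^subscribe to our newsletter"),
]

def clean_article(text):
    """Drop lines matching known ad/boilerplate patterns so the
    summarizer only sees the article body."""
    kept = []
    for line in text.splitlines():
        if any(p.match(line.strip()) for p in AD_PATTERNS):
            continue
        kept.append(line)
    return "\n".join(kept).strip()
```

Even a crude filter like this prevents the summarizer from treating ad copy as part of the story, which is one of the cheapest data-quality wins available.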
IT leaders often assume that data parsing matters more than investing up front in correct, high-quality datasets. In fact, most of the infamous hallucination issues can be effectively addressed with quality input data. By investing in data quality and a robust retrieval model, IT teams can overcome these challenges and begin to realize the potential of generative AI solutions in the enterprise.”
Addressing Gender Bias in AI. Commentary by Cindi Howson, Chief Data Strategy Officer at ThoughtSpot
“The underrepresentation of women in tech is not a new problem, and it has woven its way into the very fabric of technology itself. Gender bias in generative AI exemplifies how pervasive stereotypes lurk within the data used to train these models. And this isn’t just a women’s issue; it affects all marginalised groups.
These biases may be unintentional, but the harm is still real. Neglecting them risks perpetuating harmful stereotypes, limiting opportunities for underrepresented groups, and ultimately hindering the technology’s potential to improve daily business operations and humanity itself. To create truly inclusive AI, improving diversity in the tech industry is one critical approach. It’s not a revolutionary concept, but one that still requires focus and action.
Empowering girls and women to pursue STEM careers from a young age through education and mentorship programmes will be a key step towards building a more inclusive workforce. As more women join the ranks of developers, researchers, and AI leaders, they will bring a wider range of perspectives to the table, which will be vital in developing models that reflect the full spectrum of human experiences.
But this alone is not a complete solution. As the tech industry continues to make slow progress in developing the next generation of data and AI experts, organisations working on these models should leverage outside stakeholders and groups to identify biases in training data and the potential for disparate impact. We also need broad frameworks in place that include safeguards like explainability and data transparency to enable innovation while also mitigating bias. This should serve as a baseline, given regulation is too slow to address evolving issues. The dynamic nature of AI will require a collaborative effort between tech companies, researchers, and policymakers alike.
We must concentrate our efforts on working together to develop ethical frameworks and best practices that ensure AI serves as an inclusive tool for all of humanity.”
Sign up for the free insideAI News newsletter.
Join us on Twitter: https://twitter.com/InsideBigData1
Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/
Join us on Facebook: https://www.facebook.com/insideAI NewsNOW