Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.
Where is the better tooling for prompt engineering? Commentary by Naré Vardanyan, CEO and Co-Founder of Ntropy
“There’s a fallacy in building with LLMs where people believe that a model performs many real-world tasks much better than it actually does. Because the model has a semblance of reasoning and you can coach it with natural language commands, chains of commands, and various combinations of the two, any failure gets treated as a prompt issue, not a model issue.
Because model testing is not standardized and you can evaluate on your own test set, it’s easy to believe you’ve found the perfect solution. Then comes pushing to production, where you run into thousands of edge cases. Every time, the perfect solution seems a prompt or two away. After all, the model is infinitely programmable: it only takes a couple of new English commands to improve on a given use case.
But retraining a model on edge cases versus telling the model in English what to do is a massive paradigm shift, and tooling needs to improve to usher in this shift. The tooling around injecting knowledge and expertise into these models, particularly domain-specific expertise, is super barebones. It’s like the Internet in the pre-browser days. I have seen people build prompt tooling for engineers around testing and version control, but this is not enough. You need a robust user interface and workflows to make domain experts maximally productive at prompting models. I’m sure the major LLM companies are investing heavily in this, but I’m optimistic that another company could also build these capabilities, perhaps as a standalone product.
Optimal prompting is key for LLMs in production, both for proprietary and in-house models. Not only are bigger prompts more expensive and slower to run, but they are also more likely to be out-of-distribution relative to a model’s training data. There is a fine line between adding “domain knowledge” into the prompt and killing the reasoning ability of the model. The more our world relies on probabilistic decision-makers, the more design we will need to cover the fallacies.”
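The testing-and-version-control tooling described above can be sketched in a few lines: treat each prompt as a versioned artifact and re-run a fixed edge-case suite whenever it changes. This is a minimal illustration, not any real product; the names (`PromptVersion`, `evaluate`, `toy_model`) and the toy classifier are all hypothetical stand-ins for a real LLM call.

```python
# Minimal sketch of prompt regression testing: version prompts like code and
# re-run a fixed edge-case suite whenever a prompt changes. All names here
# are hypothetical, not a real library.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PromptVersion:
    version: str
    template: str  # e.g. "Classify this transaction: {text}"

def evaluate(prompt: PromptVersion,
             model: Callable[[str], str],
             test_cases: list[tuple[str, str]]) -> float:
    """Return accuracy of a prompt version over a labeled edge-case suite."""
    hits = 0
    for text, expected in test_cases:
        output = model(prompt.template.format(text=text))
        hits += (output.strip().lower() == expected.lower())
    return hits / len(test_cases)

# Stand-in for a real LLM call; in practice this would hit a model API.
def toy_model(full_prompt: str) -> str:
    return "refund" if "refund" in full_prompt.lower() else "purchase"

cases = [("Refund for order #12", "refund"), ("Bought two coffees", "purchase")]
v1 = PromptVersion("v1", "Classify this transaction: {text}")
print(evaluate(v1, toy_model, cases))  # 1.0 on this toy suite
```

A domain-expert-facing tool of the kind the commentary calls for would wrap this loop in a UI, but the core workflow — pin a prompt version, score it against accumulated edge cases, compare versions — stays the same.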
The Cost of LLMs Driving Interest and Demand for Readily Available Small Language Models. Commentary by Maxime Vermeir, Senior Director, AI Strategy, at ABBYY
“The marvels of large language models (LLMs) advancements have been nothing short of revolutionary. But, as with all great things, they come with their own set of challenges, notably the hefty price tag on resources. This situation is a win for tech giants like Nvidia, powering the tech behind the scenes, yet it’s top of mind for businesses trying to stay competitive. It’s all about balancing the scales between the cost of leveraging LLMs and the tangible benefits they bring to the business.
In the second half of 2023, companies started to get that wake-up call. The spotlight is on two major buzzkills: the dollar signs attached to running these computational goliaths and their appetite for energy, which isn’t doing our planet any favors. Enter small language models (SLMs) – the new kids on the block, or should I say the old kids on the block?
Trained for specific tasks, these streamlined versions of their bulkier brethren are showing us that sometimes, smaller can be smarter. They’re quick on their feet, lean on energy, and still pack a punch in accuracy and performance. Thanks to the science of knowledge distillation (think of it as putting an LLM on a diet, trimming away the excess to keep only what’s essential), SLMs are proving that you don’t need to go big to go home with impressive results.”
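The knowledge distillation mentioned above has a simple core: the small student model is trained to match the temperature-softened output distribution of the large teacher, not just the hard labels. A pure-Python sketch of that loss term (real pipelines would use a framework like PyTorch; the logits here are made up for illustration):

```python
# Toy sketch of the knowledge-distillation objective: KL divergence between
# temperature-softened teacher and student output distributions.
import math

def softmax(logits, temperature=1.0):
    scaled = [z / temperature for z in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions; the student is
    trained to drive this toward zero."""
    p = softmax(teacher_logits, temperature)   # soft teacher targets
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]
aligned = distillation_loss(teacher, [4.0, 1.0, 0.5])  # 0: student matches
drifted = distillation_loss(teacher, [0.5, 1.0, 4.0])  # > 0: student disagrees
print(aligned, drifted)
```

The temperature is the “diet” knob: higher values expose more of the teacher’s relative preferences across classes, which is the “essential” knowledge the SLM keeps.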
Reskilling workers with AI is easier than replacing them. Commentary by Paul Marca, VP of Learning and Dean of Ikigai Labs Academy
“In the current era of AI, prevailing attitudes have many traditional white-collar workers worrying about job security while employees with strong tech and AI expertise see increasing demand. It’s assumed that the former will be replaced by the latter. But this assumption is short-sighted as reskilling workers is easier than replacing them. That’s why we’ll instead see enterprises begin to invest aggressively in AI education for their existing employees.
LLMs (and more recently, LGMs) have already shown us that all professionals – regardless of existing technical expertise – can effectively leverage AI to increase their speed and effectiveness on the job. Rather than compete over a small pool of AI experts, major companies will re-train workers to leverage AI. For the next few years, AI education focused on upskilling existing employees will become a C-level priority.”
The fallout from the “learn to code” movement will land in 2024. Commentary by Suchinth Kumar, CRO of ISSQUARED
“For many years, there has been a push for more people to solve their career woes by learning to code. The surge of Americans who changed careers during the pandemic surely fueled even greater acceptance of this strategy. In fact, 15% of developers have spent less than four years coding, marking a significant increase in the number of people entering careers that require coding in recent years — but also calling out a lack of direct experience.
The steady push to encourage people to code, paired with the technology boom catalyzed by COVID-19, has resulted in unprepared individuals climbing into higher positions despite lacking the specialized skills and experience these roles demand. The growing adoption of AI technology has also amped up the need for deeper coding expertise. This employee skills gap can’t close at the rate needed to support innovation, nor are these individuals learning fast enough to fend off the threats posed by hackers leveraging AI tools. Education systems, government programs, internships and other training avenues need to shift to specialized tracks for coding, particularly in the cybersecurity world, to equip professionals with the advanced knowledge and differentiated competencies they need to fill increasingly vital and niche roles.
The year ahead will likely push this requirement even further, and tech companies need to be prepared to focus on upskilling for their current and new employees.”
The Power of AI for Local Governments. Commentary by Dustin Hudson, CivicPlus Vice President of Product Engineering
“While AI technology holds significant promise in revolutionizing government services, it also introduces complex challenges that demand careful attention. Chatbots represent just one example of AI’s ability to streamline resident services, offering swift responses to inquiries and enhancing accessibility to essential resources. However, as governments increasingly embrace AI-driven solutions, it’s imperative to recognize the need for heightened governance, security, and privacy measures, particularly in the context of the inherent ‘learning’ nature of AI. This brings a lot of potential, but underscores the importance of continuously monitoring responses for quality and not just performance.
In parallel with optimizing operations through AI, governments must prioritize the establishment of robust data governance frameworks. These frameworks should delineate clear protocols for data collection, storage, and usage, ensuring compliance with regulatory standards and safeguarding resident privacy. Furthermore, continuous monitoring and evaluation mechanisms must be in place to swiftly identify and mitigate any cybersecurity vulnerabilities or threats arising from AI implementation.”
DOJ Appoints First Chief AI Officer. Commentary by Alon Yamin, co-founder and CEO of Copyleaks
“The recent announcement from the US Justice Department naming the first official AI officer reflects the rapid impact AI has made on our world in the last year. It’s a necessary step in regulation to prevent technology misuse and provide transparency around AI for the public, especially pertaining to politics and the upcoming election. It is essential that processes continue to be implemented to help ensure responsible AI use that balances regulation with innovation as we continue to learn more about the scope and capabilities of AI.”
AWS removes egress fees. Commentary by Mark Boost, CEO of Civo
“The cloud is broken. The hyperscalers have not delivered on their lofty promises of low costs at scale that they set out more than a decade ago, complicating their offerings with convoluted pricing they are now backtracking on only due to regulatory pressure. Removing egress fees is a step in the right direction but is only one part of the equation. Our research found that 64% of users of the ‘Big Three’ observed an increase in cloud costs in the last 12 months. This situation can be hugely damaging, especially for smaller businesses, making it very difficult for them to build a bespoke, affordable approach to cloud that suits their needs.
Our focus should be on fairness, transparency, and the customer’s experience at every turn – not the shareholders’ bottom line. Cloud should be about empowering IT teams to experiment and innovate using the technology, finding the services they need, and paying a fair price for them. With this new approach, the cloud can become what it always had the potential to be: an incredible engine of equity in technology, levelling the playing field and ensuring anyone can access cutting-edge tech to innovate and build a successful business.”
The Pragmatic Path to AI Adoption. Commentary by Brett Hansen, Chief Growth Officer at Semarchy
“For it to be truly transformative — or even effective — AI must be approached thoughtfully. First, the data used for training AI models should be considered. Incomplete and poor-quality data can compromise the AI model’s effectiveness and, more importantly, its outcomes. Leaders should prioritize a data cleansing and management strategy that ensures AI solutions are operating on the correct assumptions and information. Conducting this audit before implementing AI is the only way to secure the organizational benefits of promising technologies like GenAI. Of equal importance is considering how AI will impact existing culture and workflows. Leaders should start the adoption process by communicating clear expectations to all employees, especially those whose workflows will be impacted. Then, they should initiate a cultural shift by implementing AI in small, predictable use cases, laying the groundwork for employees to understand the high-level initiatives they likely have planned for down the road.
It’s critical to take small steps when deploying AI. Data and business teams need to work intimately to define incremental steps when adopting AI capabilities. Leaders should select initial projects with narrow scopes and thoroughly evaluate AI effectiveness. Building upon success with more sophisticated projects will ensure AI is seamlessly integrated into organizational workflow. In the realm of AI, planning and patience is key. By encouraging a paradigm shift that is inclusive of employees, processes and technology, leaders optimize the likelihood of success. AI offers tremendous opportunity, but a pragmatic, thoughtful approach is necessary.”
The Growing Importance of Database Observability. Commentary by Kevin Kline, SolarWinds
“Today’s IT infrastructure has become increasingly complex, with databases serving as the backbone of numerous applications and services. As businesses rely heavily on data-driven decision-making, the importance of database observability for IT professionals has never been more critical. However, databases also pose some of the most complex challenges that IT teams face due to their complicated, business-critical nature and difficult-to-diagnose issues.
Without complete and precise database monitoring and observability, IT and DevOps teams struggle to diagnose the root cause of performance issues accurately. This can lead to costly downtime, decreased quality of service delivery, and other critical threats to the health and growth potential of the entire enterprise. In fact, without an effective database observability solution in place, most IT and DevOps teams become stuck in the rut of constant firefighting.
This is where observability plays a crucial role. By ensuring the reliability and performance of database systems, observability solutions allow IT professionals to stay ahead of potential issues, optimize resource utilization, and deliver a seamless user experience. Advanced capabilities, such as automated root cause analysis and machine learning-driven diagnostics, have revolutionized how IT professionals approach problem-solving by speeding up issue resolution and empowering teams to predict and prevent potential disruptions. Constant firefighting becomes a thing of the past, and IT and DevOps teams can instead spend their time supporting and adding value to the mission of the business.
Additionally, as AI and ML become more sophisticated, they bring powerful new tools to observability solutions including proactive AI-driven analytics, automated anomaly detection, and real-time remediation. These AI-powered improvements help IT organizations to accomplish more without growing staffing or expertise.
As businesses continue to embrace digital transformation, the need for robust database observability will remain paramount.”
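The automated anomaly detection described above often starts from something as simple as flagging metrics that drift far from their recent baseline. A minimal sketch on query latency, with an illustrative rolling z-score test — the window size, threshold, and sample data are assumptions for the example, not values from any particular observability product:

```python
# Minimal sketch of metric anomaly detection: flag query latencies more than
# k standard deviations above a trailing window of recent samples.
from collections import deque
import statistics

def detect_anomalies(latencies_ms, window=5, k=3.0):
    """Return indices of samples that sit > k sigma above the trailing window."""
    history = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(latencies_ms):
        if len(history) == window:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            if stdev > 0 and (x - mean) / stdev > k:
                anomalies.append(i)
        history.append(x)
    return anomalies

# Steady ~10 ms latencies with one spike at index 7.
samples = [10, 11, 9, 10, 12, 10, 11, 250, 10, 11]
print(detect_anomalies(samples))  # [7]
```

Production tools layer seasonality models and learned baselines on top, but the design point is the same: compare each observation to its own recent history rather than to a fixed threshold.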
Global Transformations: The Impact of AI Trends Across Industries. Commentary by Shailesh Dhuri, CEO, Decimal Point Analytics
“The latest trends in AI are revolutionizing various sectors globally. In defense, AI empowers autonomous systems, intelligence analysis, and predictive maintenance for enhanced security and resource efficiency. Agriculture benefits from AI-driven precision farming, disease detection, and yield prediction, leading to increased productivity and reduced waste. Similarly, healthcare leverages AI for diagnostics, virtual assistance, robotic surgery, and drug discovery, improving accessibility and quality of care. Smart cities, infrastructure maintenance, and disaster management are being transformed by AI, promoting sustainability, resilience, and public well-being. Additionally, the education sector is embracing AI for personalized learning, intelligent feedback, early intervention, and language learning, paving the way for a more inclusive and effective educational experience.
However, for AI to reach its full potential, addressing ethical considerations like transparency and bias is crucial. Furthermore, ensuring affordability, accessibility, and skill development are essential for equitable and sustainable AI adoption across diverse sectors and the broader population globally.”
Sign up for the free insideAI News newsletter.
Join us on Twitter: https://twitter.com/InsideBigData1
Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/
Join us on Facebook: https://www.facebook.com/insideAI NewsNOW