Heard on the Street – 2/1/2024


Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.

AI’s impact on human language. Commentary by Amin Ahmad, Co-Founder and Chief Technology Officer at Vectara.

“The last 60 years of progress in computer technology have hurt our ability to communicate because computers, while increasingly useful, could not understand our language beyond keyword salads. As a result, we’ve had to simplify our use of English to communicate with them, which has impacted how we communicate with each other. Now that we have computers that truly understand language and can tease out the nuance in a particular word choice, that trend will reverse. We will have the opportunity for a renaissance in expression through language, and the sophistication of our English will go up, aided by AI. Anthropologists and linguists will also discover that LLMs are a wonderful way to preserve cultures and languages headed for extinction. They will serve as time capsules and allow people in the future to interact with, say, the average Egyptian from 2023. This was definitely not something I could have foreseen even ten years ago when I entered the field.”

Generative AI requires a new approach to network management. Commentary by Bill Long, Chief Product & Strategy Officer at Zayo

“As businesses adopt generative AI to drive innovation, they’re overlooking one key element: their network infrastructure. AI demands a massive amount of capacity to handle the huge volumes of data required for model training and inference, and business networks must be ready to support these rapidly increasing demands before organizations can take advantage of generative AI and its benefits. As it stands now, 42% of IT leaders aren’t confident their current infrastructure can accommodate the rising use of AI.

Companies need to start planning now to ensure that their networks are ready to support these next-generation technologies. We’re already seeing top companies and hyperscalers building out data center campuses at an unprecedented scale and stocking up on capacity to prepare for future bandwidth needs. Those who fail to get ahead of network infrastructure needs now could be left behind on the AI wave.”

Bad actors will leverage hyper-realistic deepfakes to breach organizations. Commentary by Carl Froggett, Chief Information Officer at Deep Instinct

“In 2023, threat actors manipulated AI to deploy more sophisticated malware and ransomware, as well as hyper-realistic deepfake and phishing campaigns. We saw this in action with the MGM breach, which relied on social engineering, and with high-profile figures such as Kelly Clarkson being mimicked through deepfake technology. In 2024, bad actors will take these attack techniques to a new level, running more holistic end-to-end campaigns through AI and automation, leading to much more realistic, integrated, and sophisticated attacks. As a result, traditional cybersecurity approaches to defending against these attacks, including information security training and awareness, will need to be significantly updated, refreshed, or totally revamped. Existing mechanisms for authentication and trust establishment may ultimately be removed completely because they are susceptible to abuse by AI.”

Why the U.S. will take meaningful action on data privacy in 2024. Commentary from Vishal Gupta, co-founder & CEO at Seclore

“While countries like India and the U.K. continue to strengthen data security standards with the passage of the recent DPDP Act and the upcoming compliance deadline for the Product Security and Telecommunications Infrastructure Act, the U.S. has remained stagnant on data security and privacy. In 2024, we will finally see the U.S. move from piecemeal data privacy regulation specific to certain states and sectors to federal-level data privacy legislation.

One element that will rapidly elevate the urgency for greater data privacy this coming year is the continued adoption of AI technology by enterprises. While AI is an important tool for the future of business, the more companies incorporate it into their business models, the more vulnerable their corporate data becomes. For example, in feeding generative AI chatbots with business-specific prompts and context, employees may unknowingly be handing over sensitive corporate data, putting the business (not to mention its partners, customers, and investors) at great risk. Legislators will recognize this connection and implement policies that force protection for sensitive personal data. It will be imperative that businesses comply with these measures by bolstering their data security practices, including ensuring data stays secure both within the enterprise and within third-party AI tools.”
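To make that last point concrete, here is a minimal, hypothetical sketch of the kind of outbound filter that can mask sensitive tokens in a prompt before it ever reaches a third-party chatbot. The patterns and function names are illustrative assumptions, not any vendor’s API; a real deployment would use a proper DLP or classification service, but the flow (scan, redact, then send) is the same.

```python
import re

# Illustrative patterns only -- a real deployment would use a proper
# DLP/classification service rather than a handful of regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Mask sensitive tokens in a prompt before it leaves the enterprise."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Summarize the contract for jane.doe@acme.com, account key sk-abc123def456ghi789."
    safe, flagged = redact_prompt(raw)
    print(safe)     # prompt with sensitive tokens masked
    print(flagged)  # ['email', 'api_key']
```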

Manufacturing infrastructure doesn’t just need a digital twin – it needs the entire digital family. Commentary by Lior Susan, Co-Founder and Executive Chairman of Bright Machines, and Founding Partner of Eclipse 

“Manufacturing is the lifeblood of many essential industries — from electronics to commerce to healthcare. In recent years, the U.S. has recognized the critical need to reshore our manufacturing capacity, but it is impossible to simply replicate a manufacturing facility and its associated supply chain from China and expect things to work the same way. The only way to create a thriving domestic manufacturing ecosystem is through a marriage of skill and scale – training factory workers to use cutting-edge digital technologies.

How would this work? Take artificial intelligence as an example. The rapid progress of large language models has led to skyrocketing demand for compute power. While it’s exciting to witness the AI/ML developer community building the next generation of AI applications, they face a massive hurdle: cloud compute providers are struggling to meet the surging demand for compute, data storage, and related network capabilities — what we call the “AI backbone.” However, by using sophisticated digital manufacturing platforms, some companies today can take the manufacture of a server from months to minutes and enable cloud compute providers to rapidly meet customer demand. With intelligent systems that span the entire value chain, manufacturers and operators gain greater transparency, better adherence to standards and regulations, and the ability to create products faster — improving their overall competitive edge and market standing. We are at a turning point in the manufacturing industry. Heightened hardware and electronics demand, labor shortages, supply chain issues, and the surge in AI all require us to reevaluate how and where we build the products we need. Let’s bring manufacturing back to the U.S., but bring it back better.”

Data-Driven Approach to Manufacturing of Materials-Based Products. Commentary by Ori Yudilevich, CTO of MaterialsZone

“In the world of materials-based products, data complexity, scarcity, and scatter are three key challenges companies face, compounded by the usual time and money constraints. Placing a data-driven approach at the forefront of a company’s strategic roadmap is critical for survival and success in a competitive landscape where the adoption of modern data techniques, while still in the early stages, is progressing rapidly.

R&D of materials-based products is complex due to intricate multi-layered materials formulations, long and non-linear preparation processes, and involved measurement and analysis techniques. This is further complicated by ever-growing regulatory and environmental requirements, which are often geography- and sector-dependent.

At the same time, experimental data produced in the R&D process is scarce due to its relatively high cost, especially considering the number of products a typical company has in its product line and the need to keep these products up to date with continuously changing market needs, regulatory requirements, and tightening environmental targets.

On top of this, the data collected throughout the company pipeline is usually scattered and siloed across different systems, such as ERP, LIMS, MES, and CRM, to name a few, as well as in Excel files, PDF documents, and handwritten notes. Integrating the data along the full process to obtain meaningful correlations and insights is difficult and sometimes impossible.

A data-first approach combined with a holistic end-to-end restructuring of an organization’s data network will result in a fast track to “lean R&D and manufacturing.” In practical terms, this means less trial and error-type experimentation, shorter time to market, and higher profitability. 

Data engineering, cloud infrastructure, machine learning, and now generative AI are enablers of this transformation. These technologies are becoming increasingly widespread via both open source and commercial solutions, some specializing in materials and addressing their special needs. Adopting such solutions requires forward thinking, change management, and time, and getting an early start is essential.”
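As a rough illustration of the integration problem described above, here is a minimal sketch, under the assumption that each siloed system (ERP, LIMS, MES, spreadsheets) can export a table keyed by a shared sample ID, of joining formulation, process, and measurement records so correlations become visible. The column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical exports from siloed systems, each keyed by a shared sample_id.
formulations = pd.DataFrame({
    "sample_id": ["S1", "S2", "S3"],
    "binder_pct": [12.0, 15.0, 12.0],
    "filler_pct": [30.0, 28.0, 35.0],
})
process = pd.DataFrame({
    "sample_id": ["S1", "S2", "S3"],
    "cure_temp_c": [140, 160, 140],
    "cure_time_min": [30, 25, 40],
})
measurements = pd.DataFrame({
    "sample_id": ["S1", "S2", "S3"],
    "tensile_mpa": [41.2, 47.8, 39.5],
})

# Join the silos into one experiment table, then look for correlations
# between formulation/process parameters and measured properties.
experiments = formulations.merge(process, on="sample_id").merge(measurements, on="sample_id")
print(experiments.drop(columns="sample_id").corr()["tensile_mpa"].sort_values(ascending=False))
```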

Increasing data volumes make humans the weakest link in a digital business. Commentary by Jeremy Burton, CEO of Observe

“Cloud-native computing has created an unprecedented level of complexity between humans and computers when troubleshooting modern distributed applications. Enterprises typically make use of anywhere between five and 15 tools, which have evolved organically over the years, operate in silos, and rely on superhuman skills to correlate data points and determine root cause. To make matters worse, telemetry data volumes — the digital exhaust modern applications and infrastructure emit — are growing over 40% per year, making it practically impossible for humans to keep up. For years, technology leaders have struggled to hire the requisite DevOps and SRE skills. As a result, those in the role struggle with burnout amid increasing mean-time-to-resolution (MTTR) metrics. A fresh approach is clearly needed. 

The good news is that a fresh approach is here and it’s called Observability. Observability starts — quite simply — with streaming all the telemetry data into one place and letting computers do what they do best. Modern observability tooling can tear through hundreds of terabytes of data in an instant, analyzing and connecting the dots between the various types of data in the system. This makes it much easier for users to quickly triangulate disparate data points and determine the root cause of problems they are seeing. The impact? For the first time in years, DevOps and SRE teams see MTTR metrics improving, which in turn improves the interactions their customers have with the digital experience their business offers. This reduces the chance of customer churn, drives revenue growth and keeps the business humming.” 
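For readers who want a feel for what “connecting the dots” can look like in practice, here is a minimal, hypothetical sketch (toy records, not any particular observability product’s API) that groups error logs and slow trace spans by a shared trace ID to surface likely root-cause services.

```python
from collections import Counter

# Hypothetical unified telemetry: error logs and trace spans share a trace_id.
error_logs = [
    {"trace_id": "t1", "service": "checkout", "message": "timeout calling payments"},
    {"trace_id": "t2", "service": "checkout", "message": "timeout calling payments"},
    {"trace_id": "t3", "service": "search", "message": "cache miss storm"},
]
spans = [
    {"trace_id": "t1", "service": "payments", "duration_ms": 4900},
    {"trace_id": "t1", "service": "checkout", "duration_ms": 5100},
    {"trace_id": "t2", "service": "payments", "duration_ms": 5300},
    {"trace_id": "t3", "service": "search", "duration_ms": 800},
]

SLOW_MS = 2000  # assumed latency threshold for "slow"

# For every trace that produced an error, count which services were slow.
error_traces = {log["trace_id"] for log in error_logs}
suspects = Counter(
    span["service"]
    for span in spans
    if span["trace_id"] in error_traces and span["duration_ms"] > SLOW_MS
)
print(suspects.most_common())  # e.g. [('payments', 2), ('checkout', 1)]
```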

Microsoft recently announced its Q2 results, with revenue growth driven by its AI offerings. Commentary by Mark Boost, Civo CEO

“Microsoft’s Q2 results miss the big picture about the tech sector in 2024. Its AI-first strategy, built on Azure, is founded on shaky ground. Big cloud providers have leveraged their dominance to overcharge and under-deliver for customers. They provide customers with solutions that are complex, expensive to run, and therefore a huge burden for smaller businesses.

Too many business leaders are forced to settle for using these so-called ‘hyperscalers’, making choices based on a dominant brand and having to price in all the burdens that come with hyperscaler cloud computing.  

As we enter this AI-first era, we need a new approach. Providers must focus on giving businesses accessible solutions and support – not on putting the interests of shareholders first. Cloud providers need to focus on transparent and affordable services, creating a level playing field where any business – no matter its size – has everything it needs for cutting-edge innovation.”

As AI Speeds Up Software Creation, Enterprises Need to Eliminate Bottlenecks in the Toolchain. Commentary by Wing To, General Manager of Intelligent DevOps at Digital.ai

“AI capabilities are changing the game for businesses, as developers embrace generative AI, AI code assist, and large language models (LLMs) to significantly increase productivity, helping to drive more and faster innovation. However, these gains will only be realized if the ecosystem surrounding the developer moves at the same cadence, and if organizations are willing to release into production the software and services being developed.

The current focus of AI tools has been on increasing developer productivity, but writing the software is just part of the software delivery lifecycle. There is a surrounding ecosystem of teams and processes such as quality assurance, security scanning, deployment, and staging. These are already tension points and potential bottlenecks that will only worsen as more software is created, unless the rest of the delivery process flows at the same pace. The application of modern delivery methodologies leveraging automation, as well as reusable templates, is a good step toward driving productivity across the delivery pipeline. And what better way to match the AI-increased productivity of the developer than with AI itself: tools are now emerging that apply AI across the wider software delivery lifecycle to increase productivity, for example AI-assisted test creation and execution and AI-generated configuration files for deployment.

Even if the software can be delivered at the pace it is being developed, many organizations are still hesitant to release it into production and make it available to their end users. Organizations are concerned about the risk of code introduced by AI, which may contain bugs, sensitive personal information, and security vulnerabilities. This is further exacerbated by the difficulty of detecting what is human-developed versus AI-generated code, particularly when it is a combination of the two, as is often the case.

However, in many ways, the concerns about AI-generated code are similar to those around earlier methods for increasing developer productivity, such as relying on a large outsourced organization or many junior but well-resourced developers. In those cases, robust governance is usually applied: ensuring all code is reviewed, that sufficient test coverage has been achieved, that scans are conducted for sensitive personal information and security vulnerabilities, and so on. The same can be applied to AI-assisted development environments. By leveraging automation and embedding these checks into the release process, organizations can manage the level of risk for any services going into production. Visibility can be further enhanced with AI itself, by leveraging all the data in these systems to provide more granular insights into risk.”
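As an illustration of the governance idea above, here is a minimal sketch of an automated release gate. The check names and thresholds are hypothetical assumptions, not Digital.ai’s product, but they mirror the reviewed/tested/scanned criteria described in the commentary.

```python
from dataclasses import dataclass

@dataclass
class ReleaseCandidate:
    """Signals collected by the pipeline for a build, regardless of whether
    the code was written by humans, AI assistants, or both."""
    reviewed: bool
    test_coverage: float           # 0.0 - 1.0
    pii_findings: int              # hits from a sensitive-data scan
    critical_vulnerabilities: int  # hits from a security scan

# Hypothetical policy threshold; a real organization would tune this.
MIN_COVERAGE = 0.80

def release_gate(rc: ReleaseCandidate) -> tuple[bool, list[str]]:
    """Return (allowed, reasons) so the pipeline can block or promote the build."""
    reasons = []
    if not rc.reviewed:
        reasons.append("code review missing")
    if rc.test_coverage < MIN_COVERAGE:
        reasons.append(f"coverage {rc.test_coverage:.0%} below {MIN_COVERAGE:.0%}")
    if rc.pii_findings > 0:
        reasons.append("sensitive data detected")
    if rc.critical_vulnerabilities > 0:
        reasons.append("critical vulnerabilities open")
    return (not reasons, reasons)

if __name__ == "__main__":
    ok, why = release_gate(ReleaseCandidate(True, 0.72, 0, 1))
    print(ok, why)  # False ['coverage 72% below 80%', 'critical vulnerabilities open']
```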

Managing enterprise AI spend with the platform shift to AI. Commentary by Jody Shapiro, CEO and co-founder of Productiv 

“Periodically there are significant platform shifts in technology: on-prem to Cloud, traditional phones to smartphones, paper maps to navigation apps, physical money to digital payments. With the rise in generative AI over the past year, we’re seeing a platform shift with more organizations introducing AI into their tech stacks and day-to-day operations. And ChatGPT isn’t the only AI software being adopted by the enterprise. To ensure business leaders aren’t blindsided by AI-related costs, it’s important to know how to properly budget, understand, and assess AI spending.

With new AI tools introduced regularly, business leaders should not feel the need to implement the hottest tools for the sake of it. Discernment is crucial—what outcomes can be improved using AI? What processes are best optimized by AI? Which AI tools are relevant? Can we justify the cost of the functionality? Is there a true, valuable gain? To prepare for the costs associated with AI, business leaders first need to prepare for a change in how software is sold: AI tools often adopt usage-based pricing. Further, knowing the total cost of ownership before adopting an AI tool is a factor often overlooked. Some things to keep in mind, for example, are whether the models need to be trained, and the cost of continual data and safeguard management. With unique pricing models and an abundance of AI tooling options, there are many considerations to ensure that businesses can maximize their AI budgets while confidently delivering value.”
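To show how usage-based pricing and total cost of ownership interact, here is a back-of-the-envelope sketch with purely hypothetical prices and volumes; the point is the shape of the calculation, not the numbers.

```python
# Hypothetical figures for illustration only; plug in a vendor's real pricing.
price_per_1k_tokens = 0.002        # usage-based inference pricing (USD)
tokens_per_user_per_month = 400_000
active_users = 250

monthly_usage_cost = price_per_1k_tokens * (tokens_per_user_per_month / 1000) * active_users

# Total cost of ownership adds the less visible line items.
one_time_fine_tuning = 15_000      # model training / customization
monthly_data_management = 3_000    # pipelines, labeling, retention
monthly_safeguards = 2_000         # monitoring, access controls, audits

annual_tco = (
    one_time_fine_tuning
    + 12 * (monthly_usage_cost + monthly_data_management + monthly_safeguards)
)
print(f"Monthly usage cost: ${monthly_usage_cost:,.0f}")
print(f"Estimated first-year TCO: ${annual_tco:,.0f}")
```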

Sign up for the free insideAI News newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideBIGDATANOW
