Machine learning (ML)—the ability for machines to perceive, learn from, abstract, and act on data—has been a catalyst for innovation and advancement across sectors, and national security is no exception. In the last year alone, there have been several prime examples of the enormous opportunity ML, as a core component of artificial intelligence (AI), offers the defense and intelligence communities. The U.S. Department of Defense (DoD) is continuing efforts to scale AI and celebrating new achievements, like using AI to help control a U-2 “Dragon Lady” reconnaissance aircraft, the first time AI has been put in command of a U.S. military system.
The possibilities for advancement are endless: by helping with tasks related to data collection, processing, and analysis, ML can catch cyber breaches and hacks before humans can, speed up responses to electronic warfare attacks, and, through continual updating and learning, more precisely target responses to kinetic fire. Warfighters can also use ML to look across domains and resources, from ships to artillery, to match targets to resources.
As we settle into 2021, there’s one aspect of AI/ML that should not be overlooked: how to effectively get it into the hands of warfighters at the tactical edge, where fast decisions are at a premium and compute power and connectivity are often scarce. It is critical that these edge use cases characterize and shape planning for AI and ML-driven investment as digitization continues to accelerate the pace of war.
Current AI/ML constraints and challenges
Many AI/ML innovations today involve “boutique algorithms” operating in the cloud, built with the goal of making them the biggest and the best. Deployment in a commercial cloud means effectively unlimited resources for training: high-powered compute, abundant bandwidth, and massive datasets. Algorithms train and operate in fully resourced silos—not the field—with many measures protecting them from outside influences like spoofing and jamming.
In short, it’s great if a model delivers amazing results “in the lab,” but this same algorithm is useless if it’s not operational and adaptable when and where warfighters need it most. Deploying AI/ML at the edge requires strategic investment in four key areas:
- Compute and storage capacity. Models running in a commercial cloud may access terabytes of data. Meanwhile, warfighter field equipment can accommodate far less data for any one algorithm, typically tens of gigabytes.
- Power consumption and bandwidth. To function correctly, an algorithm must operate without draining the device’s battery. ML engineers will need to invest time in ensuring data caches locally when a device is disconnected and that processing picks up where it left off once the device is back on a network.
- Mass deployability. AI capabilities must be scalable, able to reach the 3,000-5,000 soldiers in an infantry brigade. Many algorithms today deploy through just a single server, so this will require more advance planning.
- More efficient, reliable data maintenance. By design, AI/ML algorithms continually learn and improve. When algorithms are siloed, however, training isn’t shared across the fleet, meaning that an individual algorithm can easily drift and degrade—jeopardizing both performance and trust in the system.
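The drift concern in the last point can be illustrated with a minimal sketch: track a model’s rolling accuracy on labeled field data and flag when it falls too far below the baseline established at deployment. The class name, window size, and tolerance here are illustrative assumptions, not anything specified in the article.

```python
from collections import deque

class DriftMonitor:
    """Track a model's rolling accuracy and flag degradation
    relative to a baseline established at deployment time."""

    def __init__(self, baseline_accuracy, window=100, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, label):
        self.recent.append(1 if prediction == label else 0)

    def rolling_accuracy(self):
        return sum(self.recent) / len(self.recent) if self.recent else None

    def has_drifted(self):
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.baseline - self.tolerance

# Toy data: a model that was 92% accurate in the lab, now missing half its calls
monitor = DriftMonitor(baseline_accuracy=0.92)
for pred, label in [(1, 1), (0, 1), (1, 0), (0, 0)]:
    monitor.record(pred, label)
print(monitor.rolling_accuracy())  # 0.5
print(monitor.has_drifted())       # True
```

In a siloed deployment, each fielded copy of the model would need its own monitor like this; sharing the drift signal back to a central training environment is exactly the feedback loop MLOps is meant to provide.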
MLOps standardizes, scales, and operationalizes algorithms
Another key investment for overcoming these challenges is training and development specific to Machine Learning Operations (MLOps), a set of practices that standardizes collaboration between data scientists and field operations. MLOps enables organizations to manage the full lifecycle of ML, from engineering through deployment, including tracking and monitoring performance to identify and correct errors and building security into every phase.
MLOps breaks ML out of boutique algorithm silos, turning “analyst ML” into operational intelligence capabilities that work at the tactical edge.
Specifically, MLOps provides the following operational benefits:
- Increased ML Model Deployment Frequency: Decouples ML model training and software application build pipelines, which allows for ML models to be deployed independently at the frequencies needed to sustain performance on soldier devices.
- Improved ML Model Deployment in Challenged Environments: Provides ML engineers with pre-built ML pipelines designed to optimize ML models for deployment across cellular and/or tactical edge networks.
- System Level ML Test & Evaluation: Provides the capabilities needed to quantitatively – and, in many cases, automatically – measure both ML model and system performance using deep learning criteria like NVIDIA’s PLASTER framework (Programmability, Latency, Accuracy, Size of Model, Throughput, Energy Efficiency, and Rate of Learning). This also provides a framework to assess ML model compliance with DoD AI Ethical Principles.
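To make the test-and-evaluation point concrete, a few PLASTER-style metrics (latency, throughput, size of model) can be collected automatically with a small harness like the sketch below. The function name, the no-op stand-in model, and the dummy weights file are all illustrative assumptions, not part of NVIDIA’s framework or the article.

```python
import os
import time

def measure_model(predict_fn, inputs, model_path):
    """Collect a few PLASTER-style metrics for a deployed model:
    average per-inference latency, throughput, and on-disk size."""
    start = time.perf_counter()
    for x in inputs:
        predict_fn(x)
    elapsed = time.perf_counter() - start
    return {
        "latency_ms": 1000 * elapsed / len(inputs),    # Latency, per inference
        "throughput_per_s": len(inputs) / elapsed,     # Throughput
        "size_mb": os.path.getsize(model_path) / 1e6,  # Size of Model
    }

# Toy stand-in: a trivial "model" and a dummy 1 MB weights file
with open("model.bin", "wb") as f:
    f.write(b"\0" * 1_000_000)
metrics = measure_model(lambda x: x * 2, list(range(1000)), "model.bin")
print(metrics["size_mb"])  # 1.0
```

Running such a harness in the deployment pipeline, rather than by hand, is what lets performance be measured “automatically in many cases” before a model ever reaches a soldier’s device.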
If an organization building ML for defense and intelligence is aligned on these principles, it will be easier to mitigate challenges as they arise. Building an open architecture is also particularly important. With an open system, defense organizations benefit from a feedback loop in which data gets pushed back to environments with greater computing resources. An open system also provides the ability to swap existing models out for newer, better ones from public- and private-sector sources, without engineers needing to change the underlying system to accommodate different model types. MLOps puts a system on top of the algorithms, making an ML solution scalable and giving it the supply chain it needs to work.
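The model-swapping idea can be sketched as a thin interface layer: the host system codes against one common predict contract, so a newer model replaces an older one without any change to the surrounding system. All names here (EdgeModel, ModelRegistry, the two threshold models) are hypothetical stand-ins for illustration.

```python
from typing import Optional, Protocol

class EdgeModel(Protocol):
    """The one contract the host system depends on; any model
    implementing predict() can be dropped in."""
    def predict(self, features: list) -> int: ...

class ModelRegistry:
    """Thin layer between the system and whatever model is active."""
    def __init__(self):
        self._active: Optional[EdgeModel] = None

    def swap(self, model: EdgeModel) -> None:
        # Replace the active model; calling code never changes.
        self._active = model

    def predict(self, features: list) -> int:
        if self._active is None:
            raise RuntimeError("no model deployed")
        return self._active.predict(features)

class ThresholdV1:
    """Stand-in first-generation model: fires when the feature sum exceeds 1.0."""
    def predict(self, features):
        return 1 if sum(features) > 1.0 else 0

class ThresholdV2:
    """A 'newer, better' model swapped in with no system changes."""
    def predict(self, features):
        return 1 if max(features) > 0.9 else 0

registry = ModelRegistry()
registry.swap(ThresholdV1())
print(registry.predict([0.6, 0.7]))  # 1 (sum exceeds 1.0)
registry.swap(ThresholdV2())         # upgrade in place
print(registry.predict([0.6, 0.7]))  # 0 (no single feature exceeds 0.9)
```

Because the registry only depends on the shared contract, a model sourced from a public-sector lab and one from a commercial vendor are interchangeable at this layer, which is the practical payoff of the open architecture described above.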
AI, particularly ML, is a powerful solution to help America’s warfighters respond when every millisecond counts. By making strategic investments in areas like data maintenance and storage capacity, ML will become more pliable, repeatable, and updatable. MLOps helps defense organizations get valuable capabilities out into the field—so investment is necessary for warfighters to use, benefit from, and ultimately build trust in this technology.
About the Authors
Joel Dillon is a Booz Allen vice president and technical leader in the firm’s Army portfolio, leading the Digital Warrior Solutions practice. Prior, Joel worked at Amazon Web Services (AWS) after serving in the U.S. Army for more than 20 years as an infantry and acquisition officer, including in the Army’s Program Executive Office Soldier.
Eric Syphard is a principal in Booz Allen’s Strategic Innovation Group (SIG) and leads the firm’s support for the Army Intelligence Digital Transformation Engineering Services (AIDTES) Government-Sponsored Research and Development Task Force. Prior to joining Booz Allen, Eric was a biostatistician at Johns Hopkins University where he implemented statistical algorithms and analytical technologies to automate cancer research operations.