DoD: Making AI Real. Making AI Work.

On June 27, 2018, U.S. Deputy Secretary of Defense Patrick Shanahan issued a memo officially establishing the Department of Defense (DoD) Joint Artificial Intelligence Center (JAIC). The JAIC is charged with the “. . . overarching goal of accelerating the delivery of AI-enabled capabilities, scaling the Department-wide impact of AI, and synchronizing DoD AI activities to expand Joint Force advantages.”

The hope is to enable “teams across (the) DoD to swiftly deliver new AI-enabled capabilities and effectively experiment with new operating concepts in support of DoD’s military missions and business functions,” according to DoD spokeswoman Lt. Col. Michelle Baldanza.

“This effort is a Department priority. Speed and security are of the essence,” Shanahan wrote. “I expect all offices and personnel to provide all reasonable support necessary to make rapid enterprise-wide AI adoption a reality.”

As DoD stakeholders move expeditiously to establish an AI framework and put many AI initiatives in play, there are a number of steps that can be taken to optimize the efficiency and success of the process. Below, we highlight six of the most critical.

  • Identify relatively small, discrete tasks that AI can address. Too often, AI is viewed as a sweeping force that brings about colossal, far-reaching change when, in fact, it is often most effective when aimed at solving a specific problem or accomplishing an individual undertaking. On this note, Dr. Lee Howells, an automation and AI expert, advises: “think big, start small and scale fast.”
  • When studying and extrapolating AI use cases from the commercial sector, be willing to entertain applications that are non-traditional and seemingly off the beaten path. For example, earlier this month, technology maverick NVIDIA launched its GeForce RTX™ graphics cards for the global gaming community. Before dismissing a video game graphics card as having nothing to do with advancing Defense aims, consider that the RTX cards support real-time ray-tracing, a technique that achieves an unprecedented level of lighting simulation by mirroring the actual physical behavior of light. The result is that shadows, reflections and refractions occur as they do in reality. This is the kind of technology that could be used to accurately simulate an urban warfare environment for initiatives like DARPA’s recently unveiled Urban Reconnaissance through Supervised Autonomy (URSA) program.



Images illustrate how the ray-tracing of RTX can simulate real-time light behavior.
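To make the ray-tracing idea concrete, the simplest case is a shadow test: a surface point is lit only if the ray from it toward the light source is not blocked by scene geometry. The plain-Python sketch below uses a hypothetical one-sphere scene of our own devising; it illustrates the principle only, and is not NVIDIA’s implementation (RTX hardware evaluates this kind of math across millions of rays per frame).

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None.

    Solves the quadratic |origin + t*direction - center|^2 = radius^2.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 1e-6 else None  # ignore hits behind the origin

def in_shadow(point, light, center, radius):
    """True if the sphere blocks the ray from `point` to the light."""
    to_light = tuple(l - p for l, p in zip(light, point))
    dist = math.sqrt(sum(d * d for d in to_light))
    direction = tuple(d / dist for d in to_light)
    t = ray_sphere_intersect(point, direction, center, radius)
    # Only an obstruction *between* the point and the light casts a shadow.
    return t is not None and t < dist
```

Per-pixel color, reflections and refractions follow the same pattern: spawn secondary rays at each hit point and test them against the scene.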

  • Incorporate the latest data science regimen into your protocol. Even the most sophisticated AI application cannot overcome utilization barriers posed by data that is not clean, authoritative and trustworthy. Assuring data integrity at the outset can make all of the difference in a successful outcome. In the era of AI, data governance will become an increasingly important consideration.
  • Don’t try to stretch an AI, machine learning or deep learning application to do more than it is intended (or tested) to do. Applications aimed at shoring up cyber operations should not be repurposed for intelligence and surveillance functions. Trying to extend code and hardware parameters can invite bugs into the system and, worse still, open vulnerabilities to cybersecurity breaches.
  • Insist that the AI applications you procure are readily scalable – and scale seamlessly without causing disruption. The concern is not limited to hardware that grows outdated and untenable, but extends to software as well. You will want the capability to manage your workload as its scope increases over time. The workload may change for a wide variety of reasons, including more users or more simultaneous users; greater storage capacity demand; mandated increases in functionality; or an uptick in the number of transactions. Scalable solutions also allow for the unknown and the unforeseeable – new weapons, new tactical environments, new cyber threats, even new adversaries. Scalable AI lets you morph and evolve right alongside changing circumstances, and it protects the original technology investment by allowing you to build on it.
  • Spend requisite time at the outset to assess your baseline AI infrastructure and explore alternatives as necessary. You don’t want to hamstring your applications because your infrastructure cannot keep pace. AI applications demand networks featuring high performance, high bandwidth, low latency and ease of scalability. And as the variety and volume of data mushroom, infrastructures will likewise need to be flexible, agile, efficient and accommodating. With these drivers in mind, global data storage leader Western Digital recently introduced a software composable infrastructure (SCI) solution called OpenFlex that achieves resource utilization of 70% or higher, compared with traditional hyperscale utilization levels in the neighborhood of 45%.
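As a concrete illustration of the data-integrity point above, the short Python sketch below screens incoming records for missing fields, duplicate keys and out-of-range values before they ever reach a model. The field names and bounds here are illustrative assumptions, not any particular DoD schema; real pipelines would layer on provenance and governance checks as well.

```python
def validate_records(records, required_fields, value_range):
    """Partition records into clean rows and rejects, with reasons.

    records: list of dicts; required_fields: names that must be present
    and non-empty; value_range: (lo, hi) bounds for a numeric 'value'
    field. All names here are hypothetical, for illustration only.
    """
    clean, rejects = [], []
    seen_ids = set()
    lo, hi = value_range
    for rec in records:
        reasons = []
        for field in required_fields:
            if rec.get(field) in (None, ""):
                reasons.append(f"missing {field}")
        if rec.get("id") in seen_ids:
            reasons.append("duplicate id")
        v = rec.get("value")
        if isinstance(v, (int, float)) and not (lo <= v <= hi):
            reasons.append("value out of range")
        if reasons:
            rejects.append((rec, reasons))  # quarantine with an audit trail
        else:
            seen_ids.add(rec.get("id"))
            clean.append(rec)
    return clean, rejects
```

Keeping rejected records alongside the reasons they failed, rather than silently dropping them, is what makes the resulting dataset auditable and trustworthy.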


Diverse Data Types

AI implementation can be a complicated and layered process. Questions are welcomed by all of us at Advanced HPC. If we do not know the answer to a question, the AI ecosystem of which we are a part will point us in the right direction to find the information you need.

Reach out. We’re eager to hear from you.