A Q&A on Federal AI

Scaling AI | Catie Grasso

While the AI wave is still going strong, many organizations are just starting to figure out how to ride it. Like their private-sector counterparts, federal entities understand that they must leverage AI to continue to deliver their missions effectively, but they struggle to scale and evangelize AI throughout their organizations.

To illustrate this firsthand, with a focus on the federal defense sector, we put together some of the top questions from our June AI Forum, hosted by Dataiku evangelist Christina Hsiao and special guest Ron Keesing from Leidos, a Fortune 500 science and technology leader that runs a significant number of large-scale, data-centric projects within the U.S. government. Please note that the answers have been shortened or paraphrased for brevity.

Q: What would you recommend for a company that has been investing in AI and machine learning for a while now, but is still struggling to generate value and deploy into production?

A: We see this a lot. One of the key challenges organizations face when scaling AI and machine learning is that, even when they can consistently execute projects, it's hard to get those projects to deliver business value. To reach higher levels of AI maturity, they need a sophisticated MLOps practice.

Further, organizations need to promote the use of common platforms and processes for model development, but they also need, over time, to become more sophisticated about deploying models into production. They can expedite that process by establishing repeatable processes for AI and machine learning, promoting the reuse of model components, and ensuring each project is value-driven and addresses a specific business problem.
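
As a minimal sketch of what such a reusable component can look like (assuming scikit-learn; the function name and parameters here are purely illustrative), the idea is to package preprocessing and the estimator into one importable pipeline, so every project retrains the same steps instead of rebuilding them ad hoc:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

def build_churn_pipeline(C=1.0):
    """A reusable model component: preprocessing and the estimator travel
    together, so development and production run the exact same steps."""
    return Pipeline([
        ("scale", StandardScaler()),                       # shared preprocessing
        ("model", LogisticRegression(C=C, max_iter=1000)), # shared estimator
    ])

# Any project can then reuse the component instead of rebuilding it:
# pipe = build_churn_pipeline()
# pipe.fit(X_train, y_train)
```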

Q: It seems like there’s a range of AI adoption across the federal government space. For the agencies that are at the higher end, are they consistently embedding these types of sophisticated analytics across the entire organization?

A: Many federal government agencies are not consistent across the organization, and frequently one business unit is significantly more capable than others within the same agency. For example, NASA has labs putting autonomous spacecraft on the surface of Mars that can operate for years at a time. Toward the middle of the maturity spectrum, its Security Operations Center (SOC) is working to increase automation to help deal with scale and emerging threats. Lastly, other business units (HR, for example) are much further behind, still working their way up to basic modern digitization.

It's not always easy for the federal government to spread higher levels of capability across an entire organization; it's a journey that takes time and experience. It is critical to understand that you cannot jump straight to the top of this maturity curve. Start with assistive AI, where AI makes human tasks easier, to gradually build user trust and adoption and, ultimately, to engage users in actively helping develop and extend the AI to make it even more valuable.

Q: Some of the hottest topics in AI right now are around trust and explainability and breaking down black-box models. Do you have any thoughts on explainable AI in the federal sector?

A: In many conversations with our customers and our developers, I often hear people use explainability as a proxy, when I believe the real goal is to build trust between humans and machines. Certainly, there are times and places where it is critical to understand the inner workings of AI models, such as when we're debugging them or before we make a critical decision that requires transparency and traceability. But consider web search: none of us could find information without web search tools, yet few of us understand exactly how they rank results. Do we trust them or not?

A great way to measure how our trust in search tools has grown over time is to look at how far down the ranked results we actually read. Many of you likely used to scan the top ten or fifteen search results and may now only look at the top two or three, because you trust the quality of the results the AI returns. We need to understand that we're building models in the context of delivering trusted AI systems grounded in transparency.

Q: What requirements and restrictions must government agencies meet to stay compliant?

A: Compliance concerns keep many government agencies from consistently deploying AI systems. Many federal agencies have specific regulatory requirements on fairness and bias, so building AI systems that meet defined fairness criteria is very important. It's easy to state fairness as a goal, but in practice it can be quite complicated.
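
As one illustration of why it's complicated (a minimal sketch, assuming pandas; the column names and data are hypothetical), a single metric such as the demographic parity gap, the spread in positive-prediction rates across groups, captures only one narrow notion of fairness:

```python
import pandas as pd

def demographic_parity_gap(df, group_col, pred_col):
    """Spread between the highest and lowest positive-prediction rates
    across groups; 0.0 means every group is flagged at the same rate."""
    rates = df.groupby(group_col)[pred_col].mean()
    return rates.max() - rates.min()

# Hypothetical 0/1 model decisions, scored per applicant group.
scores = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B"],
    "approved": [1,   0,   1,   1,   1],
})
print(demographic_parity_gap(scores, "group", "approved"))  # 0.5
```

A model can look fine on this metric and still fail others (error rates that differ across groups, for instance), which is exactly why fairness resists being reduced to a single number.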

Assurance is also critical: government is a higher-stakes environment, and the wrong call can put people in harm's way. Security and cyber defense matter too, because malicious actors are actively working to undermine your systems, so we need systems that can withstand threats like spoofing and adversarial AI. These constraints make it difficult to draw directly on commercial models or rely entirely on commercial and open datasets, and building these capabilities from scratch in-house raises costs.
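
To make "adversarial AI" concrete (a minimal sketch, assuming PyTorch; the model, inputs, and epsilon are placeholders), the well-known fast gradient sign method perturbs an input just enough to push a model toward the wrong answer:

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.05):
    """Craft an adversarial input with the fast gradient sign method:
    nudge x in the direction that most increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # assumes a classifier returning logits
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()
```

Hardening systems against perturbations like this is part of why government deployments cost more than simply adopting a commercial model off the shelf.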

Q: What is the prevailing attitude towards cloud in the defense space?

A: Based on what we've observed, attitudes vary widely. Some federal agencies are very comfortable with cloud and understand why it's desirable (e.g., for collaboration), while others don't yet see the value it can provide their organization or are unsure how to use it, particularly with regard to security.

Q: How can we make sure our AI applications remain relevant over the long term?

A: Truly harnessing AI requires a sustainable approach. Dataiku DSS, for example, promotes the reusability of data assets and models in a flexible platform that is resilient to the rapidly changing technology ecosystem in the defense sector. By reusing common components and services across data projects, teams can avoid building from the ground up each time.

Looking Ahead

For federal agencies, it is critical to remember that successfully executed AI positively impacts mission outcomes, solves problems, and enriches human experiences. On the journey to organizational AI maturity, federal entities must keep breaking down the barriers of siloed data (disparate datasets and systems that need to be combined), siloed people (lack of collaboration between data-centric and business-centric roles), and siloed processes (gaps between IT, data, operations, and more) to generate lasting business value.
