AI and Fire: They Have More in Common Than You Think

By Theresa Kushner

This blog is a guest post from our friends at NTT Data, a digital business and IT services leader. NTT Data leverages deep industry expertise and leading-edge technologies powered by AI, automation and cloud to create practical and scalable solutions that contribute to society and help clients worldwide. 

In a recent MIT Technology Review article, Karen Hao details the work Facebook has been doing on AI ethics with Joaquin Quiñonero Candela, a director of AI at Facebook. It’s a fascinating story, and one that speaks to where we are today in our attempt to tame this new technology: the AI algorithm.

As humans, we once discovered another technology that had to be tamed: fire. Millions of years ago, our ancestors adopted fire, and it changed not just their environment but their behavior as well. Besides providing protection from predators and insects, fire gave them the ability to cook their food, which in turn gave them more protein, and that added protein helped them develop larger brains. But fire also terrified our ancestors because of its destructiveness.


Look at our society today and how we have learned to manage fire. We have firefighters, fire inspectors, and fire investigators. We have a whole training and education program around managing fire. We have learned the hard way about its destructiveness, and we have put in place systems and processes to control it.  

If we were to apply this approach to AI, we would have people today who understand AI and can manage it for the good of all that use it (fighters), people who govern or inspect the use of algorithms (inspectors), and people who are trained to evaluate the outcomes created by AI algorithms (investigators). In short, we would have AI literacy programs designed to help us manage this new technology. 

Developing Literacy Around AI

Where are you in developing literacy around AI in your company? Here are a few questions to ask yourself about how you are handling this new technology:

1. Do you have a strategy for AI that produces value for your company? If yes, what is the value of that strategy?  

AI doesn’t begin in the IT organization although the tools for managing it are there in all their technical glory. AI begins with the data that you collect, buy, or acquire and your desire to answer a question or solve a business problem where that data can be used to enlighten or direct your decisions. Having a strategy — and specifically a data strategy — is the first consideration when thinking about an AI project.

AI strategy begins with aligning AI initiatives with the company’s strategic objectives and creating synergies across business operations through collaborative planning and portfolio management. The strategy should also assess the risks associated with deploying AI models and set standards for how those models are evaluated and managed.

2. What are the risks associated with the AI strategy you have adopted? 

Whatever mix of AI algorithms is in place, businesses need to ensure that they have accounted for biases and are closely monitoring applications that learn from constant inflows of new data. AI/ML can add risk to business operations if not carefully handled. Your AI strategy should account for these risks and set boundaries for how much risk the company will assume. These boundaries are usually set by an oversight committee that watches the progression and use of AI/ML.
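As a small illustration of the kind of monitoring an oversight committee might put in place, the sketch below computes a demographic-parity gap over a batch of scored decisions and flags the model for review when the gap exceeds a tolerance. The group labels, the 0.1 tolerance, and the sample data are hypothetical assumptions for illustration, not part of any standard.

```python
# Hedged sketch: monitor a simple fairness KPI over incoming decisions.
# Group labels, the 0.1 tolerance, and the sample data are illustrative
# assumptions; a real oversight team would choose its own metrics.

def parity_gap(outcomes):
    """Largest difference in positive-outcome rate between any two groups.

    outcomes maps a group label to a list of 0/1 decisions.
    """
    rates = {g: sum(d) / len(d) for g, d in outcomes.items()}
    return max(rates.values()) - min(rates.values())

def needs_review(outcomes, tolerance=0.1):
    """Flag the model for the oversight committee if the gap is too wide."""
    return parity_gap(outcomes) > tolerance

# Illustrative weekly batch of approval decisions per (hypothetical) group
week = {"group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 6 of 8 approved (75%)
        "group_b": [1, 0, 0, 1, 0, 0, 1, 0]}   # 3 of 8 approved (37.5%)

assert needs_review(week)  # gap of 0.375 exceeds the 0.1 tolerance
```

The point is not the specific metric: it is that the boundary the committee sets becomes an explicit, testable number rather than a judgment made after the fact.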

3. Do you have a way to guarantee accountability for your AI results? 

Most organizations today have some form of oversight or governance team chartered to deal with audit and compliance issues. Your AI/ML projects need the same oversight and accountability. Forming an AI/ML governance team, or adding AI experts to your existing governance structures, can help provide structure and accountability for your AI projects.

4. Do you have the technical knowledge to manage AI algorithms and their outcomes?

Nearly every company today employs someone who has the title of data scientist. These individuals usually have the technical knowledge to develop, implement, manage, and monitor an AI algorithm. Ensuring that this skill is available to the AI governance or oversight teams is an important part of delivering Responsible AI.

5. Are your AI applications delivering the expected value?  

A well-articulated strategy will project ROI for the AI effort. Otherwise, why would you put an AI program in place? The value associated with the AI effort should be well understood by business constituents and IT alike. We talk about making AI ubiquitous throughout an organization, but ensuring that the organization understands where AI is being used, what value it brings to those who use it, and how it can be managed for greater value should be the goal of the AI governance body and the team working with AI/ML capabilities.

6. Are you controlling the risks of your AI initiatives?

Controlling the risks associated with AI means that your teams understand what should be happening and have the right tools and processes to ensure the required outcome. Managing and monitoring AI/ML efforts requires a documented, well-understood set of processes; tools that are accessible and easy to use; and articulated KPIs for traceability through the AI/ML lifecycle. AI/ML projects should have continuous improvement goals to ensure that the risks associated with them are managed and controlled.
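To make "articulated KPIs" concrete, here is a minimal sketch of one widely used drift KPI, the population stability index (PSI), which compares the distribution of a model input or score at training time against live inflows. The bucket count, the zero-count smoothing, and the rule-of-thumb 0.2 alert threshold are assumptions for illustration, not a prescribed standard.

```python
# Hedged sketch of a drift KPI: population stability index (PSI).
# Bucket count, smoothing, and the 0.2 threshold are illustrative
# assumptions; teams should calibrate these for their own models.
from math import log

def psi(expected, actual, buckets=10):
    """Population stability index between two numeric samples."""
    lo, hi = min(expected), max(expected)
    step = (hi - lo) / buckets
    edges = [lo + i * step for i in range(1, buckets)]

    def proportions(sample):
        counts = [0] * buckets
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # bucket index
        # Smooth empty buckets so the log term below stays defined
        return [(c or 0.5) / len(sample) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * log(pi / qi) for pi, qi in zip(p, q))

# Uniform training scores vs. a live batch that has drifted upward
training = [i / 100 for i in range(100)]
live_ok = [i / 100 for i in range(100)]            # same distribution
live_shifted = [0.5 + i / 200 for i in range(100)]  # scores drifted high

assert psi(training, live_ok) < 0.1       # stable: no alert
assert psi(training, live_shifted) > 0.2  # common rule of thumb: drift
```

A KPI like this only delivers traceability when it is computed on a schedule, logged, and tied to a documented escalation path, which is exactly the process discipline described above.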

So, if your answer to any question above was “No,” you may need to stop and consider what you want from an AI/ML project.

If history has taught us anything, it is that ignorance or misuse of technology can have dire consequences. AI is the latest such technology, but we have seen others. Fire has leveled entire cities, caused billions of dollars in damage, reset dwelling patterns, and displaced entire populations. And we have already seen that ignorance or misuse of AI algorithms can sway elections, justify genocide, threaten institutions like democracy, and affect the entire fabric of society.

Using this powerful new technology called AI is important to business today. But understanding its impact is equally important, and too often we leave that job to the technologists alone. In our rush to implement technologies faster within our businesses, we may be ignoring the impact those technologies have and overlooking the risks and potential pitfalls. Just as we’ve learned to live with fire, control it, and manage it, we should start learning how to do the same with AI. Here are three steps to get you started.

1. Get literate on AI: what it is, what it isn’t, and what you can and cannot do with it.

2. Test it out. Identify a project that will give you a feel for whether you have the right data, the right business problem, and the right people to accomplish an AI/ML project.

3. Talk to those who are further along than you are on their AI journey: vendors, experts in your industry, or analysts.

Finally, just try it. AI projects can be illuminating, energizing, and, more importantly, valuable. Like fire, AI is a disruptive technology — and from disruption comes greatness. It can help you create new sources of value from your data, generate new opportunities for growth and revenue, and open up avenues to transform your business.
