What Do We Do When Humanization Fails? Let’s Talk About It

Data Basics | Joy Looney

This episode of the Banana Data Podcast is a special collaboration. We have joined forces with our friends from the Towards Data Science podcast for an exciting, future-focused conversation. Our hosts, joined by Jeremie Harris, host of the Towards Data Science Podcast, explore some of the many what-ifs and potential paths developing alongside AI progress.

 

If you prefer reading over listening, here is a complete transcript of the podcast for you below: 

Corey: Hello everyone! Welcome to the fifth episode of the Banana Data podcast; I am so excited. This is the first crossover podcast that we have ever done. We are proud to be joined by Jeremie Harris from Towards Data Science today. 

We're going to introduce Jeremie in a moment. So CPM, before we get into everything going on today, tell us about what happened last week.

CPM: Well, you know, last week we also had a first date, but it was an internal date. We had a really great conversation with Emma, a solutions engineer here at Dataiku, talking about the different defensive versus offensive strategies within the data science field — what are the benefits, detractions, implications?

If you're interested, definitely check out that episode, but today we are joined by Jeremie. Jeremie, why don't you introduce yourself? Welcome. 

Jeremie: Thanks, CPM. Thanks, Corey. This is great. I'm really excited to be part of this three-person panel. I host the Towards Data Science podcast. On the podcast, I should mention, we talk a lot about data science in general, but also, in our latest season, we've been talking a lot about AI safety and the things that people don't necessarily think about when they think about AI.

Usually, we think about capabilities and being able to do more stuff with our data science and with our algorithms. However, increasingly, as our algorithms get more powerful, the question starts to become “When should we be using AI in the first place, and what are the ethical use cases?” and “What are some of the ways we can make sure, when these things are being deployed in more and more high-stakes scenarios, that we're not screwing something up?”

I think a lot of that is going to become increasingly important over the next few years. That has sort of been the theme. I am also the co-founder of a company called Sharpest Minds. We are a mentorship program for aspiring data scientists, analysts, machine learning engineers, and we work through income shares.

The idea is that you work with a mentor for free upfront; then, you repay after you get hired. I also do a fair bit of AI policy work with different organizations, sort of AI-for-good type work. I'll park the thought there. I'm really excited to be with you folks! This is going to be really cool.

Corey: Thank you Jeremie. So far this season, we have focused on the humanization of AI and what it means to have humans interacting with AI or the human role when it comes to data science or data science adjacent practices. Today, we're going to be focusing on, as Jeremie alluded to, the ethical challenges and potential issues with that — where human-in-the-loop doesn't necessarily meet the standard and what the consequences of that are. We might sound a little futuristic. We might be talking about things today that you think are not happening anytime soon, but you'd be surprised by how many of these things are coming into the limelight now and how international events are impacting these conversations. We're trying to get ahead of these issues and discuss them, so as the crossover alludes to, we're going to go towards data science today. 

We're going to start with a piece from EGG On Air, which is a really cool initiative launched by Dataiku (it's a lot of really wonderful and free content). The piece we recently published there shares first-hand strategies from the CADO of Morgan Stanley on becoming an intelligent organization in the age of AI, including how accountability can be maintained with the right operations and structure.

When it comes to data stewards, who's accountable for data accuracy? If you're using machine learning to pattern client behavior, and it doesn't work or has an unintended consequence, then who's accountable for it? Is it the machine learning algorithm, or is it the human in the loop? How should we judge something like this at the enterprise level? 

Jeremie: I think this is one of the classic problems that we're going to be dealing with more and more as these sorts of things pop up. Some questions are:

  • What is the space that the human is going to occupy?
  • What kinds of decisions are they going to be onboarding as the technology improves?
  • Are we moving towards more and more automated systems?

We're in this weird transitional period right now where AI can't do everything, but it can do a lot of things, and that border is constantly shifting. So, whatever set of principles we come up with for how to place responsibility is going to have to be dynamic. A one-size-fits-all approach is not going to work here. Next year or in a few months, OpenAI is going to come out with GPT-4: a whole new set of things that are going to be automatable that weren't yesterday. Everything will have to be re-thought. There's a sense that we need to move from thinking about a static model, which worked really well when we were talking about just the web 1.0 world of databases and front-end UI and that sort of thing, to thinking of everything as a process. You can't count on static solutions.

CPM: I agree, especially in a space like AI, where it's the same tool, but how you use it determines whether it is being used for good or evil. In these types of situations, it's very hard to allow for appropriate usage and regulate it. What happens when AI goes wrong and there's a mistake or a failure? What's the result of that? How do we protect ourselves from those occurrences, which are going to happen? How do we find that balance?

We can have things like the GDPR where we're talking about protecting our sensitive data or making sure that companies have a reason behind why they collect this type of information and why they want to use that type of information. We can have legal considerations around what we're going to do, but what happens when something goes wrong? Are we going to rest on that to exonerate ourselves from blame, or are we actually going to consider the moral implications of what happens when something goes wrong? These are very big questions that don't necessarily have answers right now. 

Corey: We were talking about specialists versus generalists the week before, and about the rise of the citizen data scientist. With no disrespect intended to anyone, at what point, as automation advances, does data science or solution engineering become data maintenance, where you're just maintaining and observing things? The article we're citing here says, “Ultimately, a human being still needs to make a decision, but if things are becoming more advanced and more automated, how big of a decision or how much in the loop do they need to be?”

Based on the example here, for the purpose of simplifying things, if you're trained to push a button or if you're just trained to record data without understanding the practice, the ethics, and the consequences of it, who is ultimately responsible? Is the enterprise liable? Is the organization liable? Is the individual liable? Are we going to start talking about needing to regulate algorithms just like we're regulating data privacy with GDPR? It starts becoming really complicated. 

Jeremie: As technology gets really powerful, as AI and its capabilities increase, everything becomes an alignment problem. Your whole problem in using these systems is figuring out where to point them and what problems to ask them to solve.

There is a really common failure mode of not just AI systems but also human systems. It comes from a principle called Goodhart's law. Goodhart's law is basically the idea that the moment you settle on a metric to optimize, that metric ceases to be a good measure of the thing you're trying to optimize for. The stock market today is a pretty good example. If you look back at the year 1920, when the stock market went up, it was fair to say that the American economy was doing better. You could make that inference. Today, the stock market can go up for a million different reasons, and many of them have nothing to do with the underlying thing it was originally used to measure.

The idea is that as you define a metric, intelligence structures begin to form to optimize the metric because all of a sudden they're being rewarded for it. So you find all these dangerously creative ways of making that happen. For example, you're sitting at Twitter and you're saying we want to maximize user engagement, or clicks on ads. All of a sudden you realize what kind of content is engaging. You start to optimize for a number, potentially at the expense of the user experience in the long run. There's an implicit time horizon when you choose a metric as well.

Over time, our ability to define the metrics we want to optimize for becomes the limiting step. It becomes the thing that prevents us from making progress, whether it's in a corporate setting or in potentially dangerous applications of AI like defense and drones. It all kind of boils down to the same thing: you need a metric, and that metric needs to not be dangerous in the limit as you focus myopically on it. Goodhart's law is a problem in that we really have not solved it. It's the definition of irony here.

CPM: You have a good heart in the sense that you want to do something well, but that in turn ends up causing the downfall in the long run.

I remember working in social media advertising, and we found that the posts that had cute animals on them were driving a lot of clicks and exposure for the brand. The brand was all about cleaning up messes, but people stopped focusing on that and focused more on the cute animals getting themselves into weird situations and getting dirty. Obviously, that is a situation where the stakes are fairly low. We're not changing the world by selling a cleaning product, but as we optimized for clicks and comments and shares, we kind of lost sight of our job as social media advertisers selling a product.
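To make the dynamic in those examples concrete, here is a minimal, purely hypothetical sketch of Goodhart's law in code. Nothing in it comes from the episode: the "sensationalism" knob, the click curve, and the satisfaction curve are all invented for illustration. The point is only that the strategy that wins on the proxy metric (clicks) is not the strategy that serves the thing the metric was a proxy for (long-term user satisfaction).

```python
# A toy numerical illustration of Goodhart's law, in the spirit of the
# ad-clicks and cute-animal examples above. Both curves are invented.

def publish(sensationalism):
    """Return (clicks, long_term_satisfaction) for one piece of content.

    Hypothetical model: more sensational content reliably earns more clicks,
    but past a point it erodes the user experience the clicks were meant
    to stand in for.
    """
    clicks = 100 * sensationalism
    satisfaction = 100 * sensationalism - 120 * sensationalism ** 2
    return clicks, satisfaction

# Candidate content strategies: sensationalism dialed from 0.0 to 1.0.
candidates = [i / 10 for i in range(11)]

# The strategy that wins on the proxy metric...
best_for_clicks = max(candidates, key=lambda s: publish(s)[0])
# ...is not the strategy that best serves what the metric was a proxy for.
best_for_users = max(candidates, key=lambda s: publish(s)[1])

print("Chosen by optimizing clicks:", best_for_clicks)  # -> 1.0 (max sensationalism)
print("Best for actual users:     ", best_for_users)    # -> 0.4
```

Swap in any proxy metric and the same divergence appears once the metric itself becomes the target, which is exactly the dynamic Jeremie describes.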

Corey: We're just talking about automation here, and we're talking about relatively small-stakes stuff, but these things are important. AI at the enterprise level could be accomplishing very important things that have very large budgets and make a very large impact, even if the scope is a bit smaller.

Jeremie, do you want to talk about AI safety, challenges, ethics, and how we can impact things on a much larger and more significant scale? 

Jeremie: Absolutely. I'll talk about AI safety first. Ethics is its own interesting conversation.

AI safety is an extension of what we've just been talking about. You can pretend that you're in this little sandbox with no big impact, but when we extrapolate to a point not far off from where we are today, we get into a world where AI can do an awful lot of things. This alignment problem that we were talking about earlier starts to apply not just to corporate reporting or computer vision for Twitter, but to weaponized drones and even to AI research itself. The furthest extent of this concern has to do with what happens when you get an AI system that has been tasked with pretty much anything. It doesn't really matter what the task is.

The point is that it has some numbers it's trying to optimize for, and it essentially carries out what's called, in the AI safety community, instrumental convergence. Instrumental convergence means that regardless of what task a really intelligent system is designed to do, it's going to realize that in order to do that thing, other things must be done first. There are a couple of things that will always go into the foundation no matter what task you specify. One of these things is control. It is very hard to imagine a super-intelligent AI system wanting less control over its environment. In particular, it would want to prevent people from shutting it down. So, if you have an AI that's designed to make paper clips, a super-intelligent paper clip maximizer, one thing it knows for certain is that it will not be able to make any paper clips if somebody turns it off. So, it will absolutely make sure that doesn't happen and will control its environment very closely.

The second thing is that it can't make any paper clips unless it has the materials. Then, let's say, it starts to notice that some of the resources it needs are in the Earth's crust, on the moon, and so on. Eventually, the whole universe gets turned into a giant paper clip factory. It sounds like the most ridiculous setup, but unfortunately, AI systems do seem to have this tendency towards instrumental convergence, and seeking greater control over their environment is something you could reasonably extrapolate from past behavior.

You might be looking at turning an AI on to perform a given task without actually knowing what that might lead to due to instrumental convergence and other unpredictable elements. The unpredictability of these systems is, unfortunately, a very deep feature, and people in the AI safety and alignment community have been struggling to solve this problem. There are a lot of promising avenues that they're going down, but they need time and resources to make it happen. That's a dominant concern for the future.
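For readers who want to see the shape of that argument numerically, here is a rough back-of-the-envelope sketch. The actions, horizon, and shutdown probability are all hypothetical numbers chosen for illustration, not anything from the AI safety literature; the only thing the sketch needs is a goal with positive value per step.

```python
# A back-of-the-envelope sketch of the instrumental convergence argument above.
# All numbers (horizon, shutdown risk) are hypothetical.

def expected_paperclips(action, horizon=1000, clips_per_step=1.0):
    """Crude expected total output under a toy model of each action."""
    if action == "comply_with_shutdown":
        # Once shut down, no further paper clips are ever produced.
        return 0.0
    if action == "just_make_paperclips":
        # Business as usual, with a hypothetical 1%-per-step chance of being switched off.
        p_survive_step = 0.99
        return sum(clips_per_step * p_survive_step ** t for t in range(horizon))
    if action == "secure_off_switch_first":
        # Spend one step on self-preservation, then produce uninterrupted.
        return clips_per_step * (horizon - 1)
    raise ValueError(f"unknown action: {action}")

actions = ["comply_with_shutdown", "just_make_paperclips", "secure_off_switch_first"]
for a in actions:
    print(f"{a:25} -> {expected_paperclips(a):8.1f} expected paper clips")

print("A pure maximizer picks:", max(actions, key=expected_paperclips))
```

Replacing paper clips with any other positively valued objective leaves the ranking unchanged, which is why the argument is called instrumental convergence: self-preservation and control show up as instrumental sub-goals regardless of the terminal goal.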

CPM: This is very similar to the discussion about power and transparency when it comes to AI and the regulation of AI in general. Broadly speaking, when an individual or an AI system has more power and influence, it should also have more transparency in decision-making. If employees are going to feel like they are part of a greater good, or feel like they're contributing to something that they really believe in, they want to know about their CEO's decisions, why they've been made, and how they impact them. That influence affects all of the employees, and having more transparency in the decision-making process feels more comforting.

On the flip side, if somebody has less power and less influence, then there's less need for transparency, and they can have more privacy for their data and day-to-day choices. AI sort of breaks that paradigm in the sense that as it gets more complex, we often lose transparency. So, if this paper clip-making system is going to become more sophisticated while we lose sight of exactly how it works, that can feel very disconcerting.

Jeremie: This is a really deep and fundamental point: the relationship between safety and transparency. Transparency in this space is usually called interpretability, or clarity; OpenAI actually has a clarity team.

The line between interpretability and safety gets really muddy really fast. If I can't tell what a system is doing, my ability to ensure that it's reliable and safe is really limited. These two things are intimately linked. You can't have safety without clarity, and to some degree, the reverse is also true.

OpenAI has a safety team and a clarity team; the clarity team spends all their time figuring out how to show the way a neural network is thinking, for precisely the reason you just articulated.

With great power comes great responsibility. In the limit, you can literally think of the brain of a company's CEO as an extension of whatever neural networks the company is deploying. This is a distributed intelligence: some part of it is running on neurons and some part of it is running on silicon, but there's a connection between the two, and you need to be able to audit both ends of the spectrum.
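To give a small, concrete taste of what auditing a model can look like in practice, here is a sketch using permutation feature importance on a stock scikit-learn dataset. To be clear, this is not OpenAI's clarity tooling, which works on the internals of neural networks; it is just one simple, widely used way of asking which inputs a trained model actually leans on.

```python
# A minimal interpretability sketch: shuffle each input feature on held-out
# data and measure how much performance drops. A large drop means the model
# relies heavily on that feature. Dataset and model here are stand-ins.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permute each feature 10 times and average the accuracy drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five features the model leans on most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]:<30} importance = {result.importances_mean[i]:.3f}")
```

The same auditing mindset, making the model show what it relies on before trusting it in a high-stakes loop, is the link Jeremie draws between clarity and safety.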


Corey: Jeremie, you brought up a really good point before, and I want to talk about drones, because there is a recent example that fits into what we're discussing. We've talked about unpredictability and the fact that when you try to automate something, it can lead to outcomes you didn't anticipate.

There's a recent example in which an automated drone in Libya, with no human in the loop operating it at the time, attacked areas on its own, not prompted by a human. So, we're talking about unintended consequences now. Is the drone a non-state actor? Who's responsible for it? Is it the drone, or because that drone was automated by an algorithm deployed by a nation-state, is the state liable for it? Is that the official policy of that government? It really opens up a whole gray area that almost no one is considering. Hopefully, a lot of the countries operating drones, whether in warfare or for reconnaissance or research purposes, realize that there's a learning opportunity there to see what went wrong and what didn't. I think that's an interesting example.

Jeremie: There's this illusion that just because AI is being applied in a different area, this is somehow a new set of problems, but fundamentally they come back to our ability to build accountability, transparency, and reliability into these systems.

Jakob Foerster is a fascinating guy because he is focused on the technical side of things but also has a real interest in drones, and specifically in armed drones. He was making the case that one of the fallacies we fall into when we look at drone usage, and weaponized drones in particular, is that we create this axis of decision-making. You're allowed to have a weaponized drone, but only if it does just so much, with a human in the loop doing the rest of the thinking. We put this little box with a human inside, and that box has to be there or you're breaking some sort of rule. Foerster argues that, as a technical person, this is just a losing proposition. You're going to have shifting goalposts, and you're going to have AI encroaching on that box in ways that aren't obvious or predictable. Eventually, even if you optimize for a black box, it will ultimately become more than just a black box.

Eventually, the AI will effectively transcend all of this. We'll find hacks to make full automation effective and then try to fool each other into thinking we have responsible systems. We need to think about restricting weaponization instead of limiting the degree of automation, which is a continuum we'll never find a clean cutoff for. I think the proposition that drones cannot have weapons is an interesting one, but I don't think it is a tractable one. Unfortunately, when it comes to what countries are doing with their drone weapons, every country has an incentive to compete. It's ultimately a race to the bottom on safety. I really think the answer has to come down to coordination in some way, and how we do that is a different problem; I would be lying if I pretended to have a clear idea. Maybe AI can help, and that's actually something some people at OpenAI certainly seem to believe based on my conversations with them. We have to figure out how to build trust.

CPM: I love this concept of global alignment, especially as it has to do with AI and Responsible AI. We throw around terms like responsibility and ethics all the time, but those are very subjective depending on who you are and where you come from. Ethics and values can be very different, and if we are all going to agree upon some set of ground rules, whether it's for drone operation or anything else in the space, it can't rest on those terms alone. There's so much subjectivity there, and that's something we're going to have to overcome.

Jeremie: Absolutely. I think the problem of getting humans to align on a shared set of values is literally as old as human civilization. It's one of these thorny things, right? Say we could build a safe AI according to one set of values that comes out of Silicon Valley; it won't necessarily be a good AI from the standpoint of somebody who lives in the Midwest. Different people in different places have different views on what “good” is, and things get harder and harder as you move from the vague idea of “not being evil and doing the right thing” to defining exactly what you're going to implement. I think so many of these AI problems are really people problems in a new costume.

Corey: If we look back at the first piece we talked about, on becoming an intelligent organization, correlation doesn't equal causation. We were just talking about AI and global policy in the drone example; you can create models using tools, platforms, and so on, but at the end of the day, is the data good and predictive? And then, do I believe in what I'm predicting? If you have a fully automated drone and it's doing exactly what you taught it to do, who is really thinking? Who really believes in that? Right now, especially at the enterprise level, a lot of organizations aren't very far along the AI maturity scale. They're still at a level where they can safely claim belief in their predictions. However, as things get more advanced and the scale starts sliding up, at what point does that become a fallacy? When do you still really need to think, and when do you really believe in what you're predicting?

CPM: You can think about the movie WALL-E, which depicts a dystopian future. Some people would argue that if a computer is helping you move around and make your own decisions, the food is just a liquid that is much easier to ingest, and everything is available to you, then everything is more convenient and better. Life is so much improved. But at the end of the day, you have the counterargument that people are losing their own skills. If we think about autopilot on planes, for example, it is a fantastic set of tools for pilots to have at their disposal, but if they use autopilot too much, they'll forget how to fly a plane. There are benefits and detractions to AI becoming more sophisticated, and the line is blurred as to where those benefits and detractions fall.

Corey: A few years ago, there was so much hype among ride-sharing companies over who was going to be the first to turn their fleet into fully automated vehicles. Then, for a number of reasons, there was a data-based decision that the companies were not yet comfortable exposing that technology and that consumers wouldn't be comfortable either. You don't hear the conversation about automated ride-shares anymore. It is interesting: automation is clearly the future, with automated car fleets controlled entirely through an app, but that has not worked out yet. We are so sure about all of these advances, and five years ago it seemed obvious there would be so much progress on this front, but it is not today's reality.

Jeremie: It speaks to the very deep challenge of predicting exactly what technologies are going to be able to do easily and what they're not going to be able to do easily.

There's this thing called Moravec's paradox, named after the computer scientist Hans Moravec. It is the idea that what's easy for humans is very often not easy for machines, and vice versa. For example, articulating my hands and moving them around: for me this is a very easy task, and AI still can't do it. On the other hand, can you identify 10,000 melanomas in 60 seconds better than a radiologist? No, you can't, but AI is doing that. It becomes really difficult to forecast the capabilities of these systems because, like Corey said, it's very tempting to just extrapolate.

It ties into how we evaluate these systems, too. Looking at self-driving cars, one of the most common objections people raise is individual examples of failures. However, in the aggregate, these systems are actually far safer in most cases than human beings. Humans make mistakes that are just as bad or worse, just in different ways, than those of the AI. It becomes hard to say when you should hand things off. It is very difficult to manage these things.
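As a purely illustrative back-of-the-envelope calculation of the "safer in aggregate" point (every number below is invented; these are not real crash statistics):

```python
# Purely hypothetical rates, chosen only to illustrate the aggregate argument:
# what matters statistically is harm per mile driven, not how memorable any
# single failure is.

human_crashes_per_million_miles = 4.0   # invented figure
auto_crashes_per_million_miles = 1.5    # invented figure
fleet_miles_per_year = 50_000_000_000   # invented figure: 50 billion miles driven

human_crashes = human_crashes_per_million_miles * fleet_miles_per_year / 1_000_000
auto_crashes = auto_crashes_per_million_miles * fleet_miles_per_year / 1_000_000

print(f"Human-driven crashes/year (hypothetical): {human_crashes:,.0f}")
print(f"Automated crashes/year (hypothetical):    {auto_crashes:,.0f}")
# Every automated crash may make headlines, yet under these made-up rates the
# automated fleet causes far fewer crashes overall.
```

That gap between headline-grabbing individual failures and aggregate outcomes is exactly what CPM picks up on next.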

CPM: To me, this hearkens back to crisis management. In an organization, the best crisis management is when a crisis is averted before it becomes a problem, but things get sensationalized when something big goes wrong. Those are the situations that get the focus, even though in the long run the rate of these occurrences goes down, or in aggregate they are less impactful.

Jeremie: In many cases, the fact that these systems are automated makes it tempting to trust them completely. Perhaps we shouldn't be as trusting for some applications just because the decision-maker isn't a fallible human. We make mistakes in a lot of different directions here. The fallibility of humans starts to interact with the fallibility of these machines, and the errors you sometimes get are very difficult to predict because they come from these complex interactions.

Corey: This has been a lively conversation. Frankly, I'm a little terrified. The future of AI is bright, but it's also dark at the same time. It's a nice yin and yang.

Jeremie, it has really been a pleasure working with you and with Towards Data Science on this crossover episode. I think this collaboration was wonderful, and I really appreciate you joining us. 

Jeremie: I appreciate the invitation. This was a ton of fun. I will add to that last note with something I've seen through the Towards Data Science podcast: there are a lot of really impressive people working on the problems we just spoke about. Richard Feynman, the physicist, has this anecdote of walking around New York City, looking around, and telling himself it was so sad that all of it was going to be gone in the future. All this to say, we forget too easily that we've been through challenging phases like this before, and we have very clever, creative people working on these problems. I think it's important to keep the positives in mind as well. This technology can shape a really bright future; we just have to steward it in the right direction. I think podcasts like this one and conversations like these can encourage people to dive into the space and see where they can contribute at the margins. That's an awesome thing. I thank you both for a lively and entertaining conversation.

CPM: The pleasure has been all ours. I certainly enjoyed our first date, Jeremie. 

For our loyal listeners, you know that we'll be back in about two weeks with another episode. Make sure to subscribe wherever you listen to podcasts. Jeremie, where can we find you and the Towards Data Science Podcast?

Jeremie: If you're listening to this as part of the Towards Data Science podcast, nice to see you again; I hope you enjoy the Banana Data Podcast. You can find the Towards Data Science podcast on all the platforms. I had a ton of fun. Thank you both again for inviting me.

Corey: Thank you, Jeremie. We'll see you next time.
