Following OpenAI’s update to the latest staged release of GPT-2, it’s a good time to reflect on what the project means for the future of openness in AI, and more broadly (and probably more importantly) for the future of regulation in the field.
Taking a step back for those who haven’t been following along: OpenAI is a nonprofit AI research organization co-founded by Elon Musk. It developed an AI called GPT-2, with some impressive results. Actually, impressive might be an understatement: GPT-2 is so good at text generation that the team decided to self-regulate its release out of fear of the technology’s negative implications (see New AI fake text generator may be too dangerous to release, say creators).
The question of whether an AI that generates text actually could or would be dangerous or misused, along with the more philosophical if-you-don’t-release-it-someone-else-will debate, is interesting, but one for another time. Instead, I’d like to look at the global differences in philosophies surrounding AI that might affect its future development, whether by spurring or hindering its progress.
To see how different countries and regions treat emerging technology, it helps to examine each from two angles:
- Purpose — how the country or region deals with the technology’s end goal (in this case AI); in other words, how it’s being used day to day.
- Technology — how the country or region approaches advances from a purely technological standpoint (separate from practical applications).
Europe: Regulation-Driven

From GMO crops to the EU General Data Protection Regulation (GDPR), Europe has a history of being driven by regulation. That doesn’t mean it is restrictive when it comes to technology’s purpose or the technology itself; it tends to let things grow freely until there is a need to regulate.
If you think of a garden, Europe provides the fertilizer and the right conditions for growth, sees what blooms, and then prunes when necessary. So when it comes to AI, it’s probably a fair assumption that the first wide-scale regulations will come out of Europe; GDPR was just the first step on this journey. As we know, the European Commission has since announced its intention to take on AI ethics.
But the question is whether their reputation will precede them, and whether the growth of AI applications will eventually be hindered there. Maybe, but that might also be a good thing when it comes to the extremely important and (until recently) under-addressed issues of AI interpretability and ethics.
China: Speed- and Control-Driven
In China, speed matters — a lot. Both in the sense of technology itself and in the arms race sense; that is, they are racing to be the first to X. On top of that, they are also working with a relatively free sense of purpose with AI technology — nothing seems to be off the table when it comes to what AI can do.
Yet the elephant in the room is political control. Returning to the garden analogy, this garden is walled. China has ensured that the weather is perfect, that there is ample fertilizer for quick growth, and that the seeds are thriving. Better than thriving: they are overgrown and taking over. But with so little sharing (both from the outside in and from the inside out), the impact of this overgrowth is unclear.
Some express concern that “unchecked expansion” in China could have negative consequences both globally and domestically, for citizens whose concerns about ethics go unheard. But what happens in the worst-case scenario, where some of China’s AI bets pay off big time and, ethical or not, the rest of the world wants in?
Imagine, for example, that using a facial recognition database, extensive network of cameras, and image detection, China is able to drastically reduce crime. Would other countries be willing to forgo ethical concerns and follow suit?
United States: Intent- and Balance-Driven
The United States provides an interesting environment: race-like encouragement of technological development combined with an extremely litigious society that inherently limits purpose. I can’t quite work out how to fit this into the garden analogy, but perhaps the soil isn’t the right type to encourage unbridled growth for all types of plants, though certainly some thrive.
What’s interesting here is the OpenAI case: the idea that not just individuals but entire companies want to step up and take a stand for the future of AI, in a way that we haven’t (yet) seen elsewhere in the world.
But as is inevitable with massively competitive technological growth, a company without the same ethical priorities, one that isn’t as noble or responsible, will eventually come along. So the question here is: can (or will) regulation fill the gaps? The United States has not been quick to regulate in the past for fear of stunting growth, but what will it take for that to change?