Artificial Power Comes with Real Responsibility: Is AI Going To Be Less Open?

Scaling AI | Florian Douetteau

Today, the world enjoys a relatively free flow of information when it comes to AI. That is, the technological advancements being worked on today can be readily leveraged by anyone with enough knowledge. It's the combination of "platform thinking" (there's a known way to build the thing) and the broad availability of the technology itself.


KDD 2018, for instance, showcased the latest theoretical and practical advances in AI: computer vision, fleet automation, self-driving cars, and more. Obviously, some companies try to keep an edge in the domain, but a lot of the work is out in the open.

In fact, there's a general freedom surrounding AI that, thus far, has been affected by very few regulations. But this is on the cusp of change, especially following increasing regulation around data privacy (hello, GDPR and the Facebook saga).

While it's tempting, in the wake of negative press and new regulations, to shy away from using data out of fear, that would be a mistake. Organizations should still aim to be at the cutting edge of using data in creative ways; however, given the lack of strict laws governing AI (for now), it's up to businesses to ensure they move forward in a conscious way that leaves room for both innovation and ethics.

The real danger would come if companies face regulation born of regulators' lack of understanding of what AI is and what it actually does on a day-to-day basis. Obviously, there are some concerns about the danger ahead if applications of AI went uncontrolled (Westworld jokes aside, see the 100-page report The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, if you're curious). The report highlights the dangers of AI technology in the wild, with concrete scenarios ranging from political propaganda to cyber attacks to automated drones.

From the perspective of researchers or, in fact, any practitioner, the main fear is dual use: the idea that a very elaborate skin-image detector developed during a postdoc for medical purposes could be repurposed by an evil drone, or that a super smart text-generation algorithm could become the backbone of an authoritarian regime's propaganda.

Most of these fears lack substance, but their existence might raise demons that have been bottled up for decades. What if research becomes controlled? What if conference proceedings are locked down? Or what if, eventually, open source for AI is not really allowed?

The only way to overcome those fears is a far greater global understanding of how AI works: when can a piece of work be replicated, with or without the data? What is difficult or specific vs. what is simple and generic? So perhaps one of the best things organizations can do now to prepare is educate, both employees and the public, so that people understand AI and how it works.
