Executing Data Projects in the Age of Data Privacy

Data Basics, Scaling AI | Lynn Heidmann

With some industry experts naming 2019 the year of increased data regulation, it's certainly true that at this point there is no looking back: the use of data across roles and industries will only become more restricted. But that doesn't have to mean a pause, or paralysis, in data use.

The Data Privacy Landscape (a Story of Consumer Trust)

Of course, the first regulation that really brought the idea of data privacy to the forefront was the European Union's General Data Protection Regulation (GDPR), adopted in 2016. But by the time the enforcement deadline arrived in May 2018, many other governments were (and still are) considering their own data privacy regulations, including several states in the U.S.

In mid-2018, the White House said it would work with Congress to draft data privacy legislation and that it "began holding stakeholder meetings to identify common ground and formulate core, high-level principles on data privacy."

Is GDPR-like regulation on the horizon for the U.S. as well?

All of this brewing data privacy action ties back to one of 2019's AI hot topics: trust. It means companies and their data teams are left to navigate tricky waters: continuing to develop insights that move the organization forward and get value out of data, without compromising individuals' privacy. That privacy matters not only to the individuals themselves, but also to the businesses whose livelihood depends on their users' trust.

Moving Forward

Companies that are organized for these changes will be able to continue moving forward in their machine learning and AI efforts with minimal disruption by:

  1. Centralizing data efforts into one place or tool for simple governance of larger projects as well as individual data sets and sources.
  2. Clearly tagging data sets and projects that contain personal data and therefore need to be handled differently (see the sketch after this list).
  3. Developing straightforward processes (and, of course, properly training staff on those policies and procedures) for working with personal data.
  4. Ensuring the monitoring and enforcement of personal data processes.
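
To make the tagging and enforcement points a bit more concrete, here is a minimal, hypothetical sketch of what steps 2 and 4 could look like in code. The `DatasetRecord` class, the `contains_personal_data` flag, and the `can_process` check are illustrative names invented for this example, not part of any specific platform's API.

```python
from dataclasses import dataclass, field

# Hypothetical metadata record for a dataset in a centralized catalog (step 1 and 2).
@dataclass
class DatasetRecord:
    name: str
    source: str
    contains_personal_data: bool = False  # explicit personal-data tag
    tags: set = field(default_factory=set)  # e.g., approved handling processes

# Hypothetical enforcement check (step 4): personal-data datasets may only be
# processed if they carry at least one approved handling process tag.
def can_process(record: DatasetRecord, approved_processes: set) -> bool:
    if record.contains_personal_data:
        return bool(record.tags & approved_processes)
    return True

catalog = [
    DatasetRecord("web_clickstream", "site_logs"),
    DatasetRecord(
        "customer_profiles",
        "crm",
        contains_personal_data=True,
        tags={"gdpr_minimization"},
    ),
]

approved = {"gdpr_minimization", "anonymized_export"}
for record in catalog:
    status = "ok to process" if can_process(record, approved) else "needs review"
    print(f"{record.name}: {status}")
```

In practice this logic would live in a data science platform or data catalog rather than in ad hoc scripts, but the idea is the same: personal data is flagged at the dataset level, and processing is blocked unless an approved procedure is attached.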

A recent study by MarketsandMarkets predicts the market for data science platforms will climb to $101.4 billion by 2021. Data teams adopt a data science platform for a variety of reasons, including pure efficiency; but critically, one of the biggest advantages is compliance with data regulations.
