Is it really possible to anonymise data?

Tamara Dull
Did you know that most Americans (87%) can be uniquely identified from just three pieces of personal data: a birth date, five-digit zip code and gender? Sort of disconcerting, right?

This oft-quoted statistic was first reported almost 15 years ago, in a 2001 AT&T research paper on Personally Identifiable Information (PII). The dates have changed and data volumes have grown exponentially since then, but the challenge remains the same: protecting one's personal identity in the name of privacy.
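As a back-of-the-envelope illustration (not the paper's methodology), here is a short Python sketch that counts how many records in a small, made-up sample are unique on those three quasi-identifiers:

```python
# Toy illustration of why a handful of quasi-identifiers can single
# people out: count how many records share each
# (zip, birth_date, gender) combination. All rows are invented.
from collections import Counter

records = [
    ("10001", "1980-04-12", "F"),
    ("10001", "1980-04-12", "M"),
    ("60614", "1975-11-02", "F"),
    ("94103", "1990-07-30", "M"),
    ("94103", "1990-07-30", "M"),  # two people share this combination
]

counts = Counter(records)
unique_records = [r for r in records if counts[r] == 1]
print(f"{len(unique_records)} of {len(records)} records are unique "
      "on (zip, birth date, gender)")
```

Even in this tiny sample, three of the five records are pinned down exactly; at national scale, the 2001 paper found the same effect for 87% of Americans.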

PII is at the heart of the anonymized data debate. In my last post, I debunked a belief about anonymized data, namely: Anonymized data keeps my personal identity private. The more accurate statement I proposed was: Individuals may be re-identified from anonymized data. Let’s explore this one a bit further.

About PII and anonymized data. What does it mean to anonymize (or de-identify) data? In simple terms, it means removing any information from a data set that could personally identify a specific individual; for example, the person’s name, a credit card number, a social security number, home address, etc. Companies that sell consumer data, such as data brokers, typically only sell anonymized, and often aggregated, data. So if PII is stripped from these data sets (as depicted in the figure below), what’s the big deal?

[Figure: a data set with PII fields, such as name and social security number, stripped out]
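To make the stripping step concrete, here is a minimal Python sketch of naive de-identification; the field names and values are invented for illustration, not any real broker's schema:

```python
# Naive de-identification: drop the direct identifiers and keep the rest.
DIRECT_IDENTIFIERS = {"name", "ssn", "credit_card", "home_address"}

def deidentify(record):
    """Return a copy of the record with direct identifiers dropped."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

row = {
    "name": "Jane Doe",          # hypothetical person
    "ssn": "000-00-0000",
    "zip": "10001",
    "birth_date": "1980-04-12",
    "gender": "F",
    "purchase": "bread",
}

clean = deidentify(row)
# Note what survives the scrub: zip, birth_date and gender -- exactly
# the quasi-identifiers that make re-identification possible.
print(clean)
```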

If we’re talking about a single data set (like the example above), then it’s probably not that big of a deal. Where it gets interesting, though, is when multiple data sets are combined. The figure below is a simplistic view of what a data aggregator (or broker) does with data sets:

[Figure: a data aggregator combining two anonymized data sets]

These two data sets could be totally harmless, but when brought together and analyzed over time, they could introduce new privacy concerns. Microsoft’s Cynthia Dwork illustrates it this way: “What’s the harm in learning that I buy bread? There is no harm in learning that, but if you notice that, over time, I’m no longer buying bread, you may conclude that maybe I have diabetes … What’s going on here is a failure of privacy mechanisms; they’re not composing effectively.”
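Dwork's bread example can be sketched in a few lines, with invented purchase data; each monthly snapshot is harmless on its own, but composed over time they support an inference:

```python
# Toy version of the bread example: invented monthly shopping baskets.
monthly_baskets = {
    "2015-01": {"bread", "milk"},
    "2015-02": {"bread", "eggs"},
    "2015-03": {"milk"},           # bread disappears...
    "2015-04": {"milk", "eggs"},   # ...and stays gone
}

months_without_bread = [month
                        for month, basket in sorted(monthly_baskets.items())
                        if "bread" not in basket]
# A sustained change in a mundane habit is the kind of longitudinal
# pattern an analyst might read (rightly or wrongly) as a health signal.
print(months_without_bread)
```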

Why this matters. Let’s go back to my earlier question: If my personally identifiable information (PII) is being stripped out and aggregated before it’s sold or gets passed on, what’s the big deal?

The big deal is this: With today's big data technologies, it's becoming easier to re-identify individuals from this anonymized data. Programming techniques continue to be developed to pull these anonymized pieces back together from one or more data sets. So if a company says it anonymizes your data before passing it on to others, be aware that your identity could still be revealed through advanced re-identification techniques.
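At its simplest, the re-identification step is just a join on quasi-identifiers. A hedged Python sketch, with every row invented for illustration:

```python
# Linkage attack sketch: an "anonymized" release that still carries
# quasi-identifiers is joined against a public record that has names.
anonymized_release = [
    {"zip": "10001", "birth_date": "1980-04-12", "gender": "F",
     "purchase_history": "stopped buying bread in March"},
    {"zip": "73301", "birth_date": "1999-01-01", "gender": "M",
     "purchase_history": "weekly bread"},
]
public_roll = [
    {"name": "Jane Doe", "zip": "10001", "birth_date": "1980-04-12",
     "gender": "F"},
    {"name": "John Roe", "zip": "60614", "birth_date": "1975-11-02",
     "gender": "M"},
]

def key(r):
    # The quasi-identifier triple both data sets happen to share.
    return (r["zip"], r["birth_date"], r["gender"])

names_by_key = {key(p): p["name"] for p in public_roll}
reidentified = [dict(r, name=names_by_key[key(r)])
                for r in anonymized_release if key(r) in names_by_key]
print(reidentified)  # the first "anonymized" row gets its name back
```

Real attacks are messier (fuzzy matching, partial overlaps, noise), but the core mechanic is this join.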

[Figure: individuals re-identified by combining anonymized data sets]

There is actually a heated debate going on about this. One camp stands firmly behind the techniques and algorithms being used to anonymize data, confident that individuals can't be re-identified because the technology just isn't there yet. The other camp doesn't buy it, arguing that re-identification algorithms do, in fact, work and are only getting better. They also point out that some of the anonymization techniques currently in use simply don't work.

I tend to agree with the latter camp. Even if it's not happening now, it's just a matter of time before the technologies and algorithms rise to a level of sophistication that re-identifies individuals both faster and more cheaply.

One final thought. Re-identification algorithms aren’t good or bad; it just depends on how they’re used. So when a well-meaning company or data broker tells you that your personal information is protected and is not shared with or sold to others, this is not an invitation to let your guard down. You know how it works now—so take heed and be vigilant.

This article is published in collaboration with Smart Data Collective. Publication does not imply endorsement of views by the World Economic Forum.


Author: Tamara Dull is the Director of Emerging Technologies on the SAS Best Practices team, a thought leadership organization at SAS Institute.

Image: A magnifying glass is held in front of a computer screen. REUTERS/Pawel Kopczynski.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
